KRUKENBERG TUMORS FROM BREAST CANCER – LITERATURE REVIEW
ABSTRACT
Krukenberg tumors are ovarian metastases of various malignant tumors, most often originating from the gastrointestinal tract. In rare cases, however, other primary tumors, even those with an extra-abdominal location, can lead to the development of such lesions. Neither the mechanism of development, nor the prognostic significance, nor the therapeutic strategy has been clearly defined to date. These lesions appear to be encountered more frequently in patients carrying BRCA1/2 mutations. Most often, Krukenberg tumors are bilateral lesions associated with ascites and with firm areas on the ovarian surface. The aim of this article is to review the existing data on Krukenberg tumors from breast cancer.
INTRODUCTION
Defined by the presence of tumoral cells at the level of the ovaries, Krukenberg tumors usually originate from gastrointestinal metastases and represent one of the most relevant arguments in favour of the "seed and soil" theory proposed by Paget in 1889 [1]. According to this theory, specific malignancies have a predilection for specific sites, independent of anatomical or vascular factors.
When it comes to ovarian pathology, it has been widely demonstrated that a significant number of cases diagnosed with intra-abdominal malignancies, especially gastrointestinal malignancies, will develop ovarian metastases [2]. Since ovarian metastases most often occur in patients with gastric cancer, certain authors have argued that the term should be reserved for this particular situation, while others consider that any metastatic involvement of the ovaries should be regarded as a Krukenberg tumor, irrespective of the origin of the primary tumor [3-6]. In rare cases, extra-abdominal malignancies, such as breast cancer, might lead to the appearance of Krukenberg tumors. However, neither the mechanism of development, nor the prognostic significance, nor the therapeutic strategy has been clearly defined so far. Most often, Krukenberg tumors are bilateral lesions associated with ascites and with firm areas on the ovarian surface [7-9].
INCIDENCE OF KRUKENBERG TUMORS
It is estimated that Krukenberg tumors represent up to 28% of all ovarian tumors, primary breast cancer being responsible for up to 31% of cases; the most commonly involved histopathological subtypes are invasive ductal carcinoma and lobular carcinoma [8,10]. Interestingly, in a study conducted by Lumb and Mackenzie, 29.4% of breast cancer patients submitted to prophylactic oophorectomy had histopathological evidence of ovarian metastases, while Webb et al demonstrated that up to 31% of cases might be associated with this pathology [11,12].
Most often, these tumors are bilateral and occur in women younger than 45 years of age [10]. The young age at diagnosis has a double explanation: first, breast cancer itself is relatively common in younger patients; second, the ovaries display a higher tropism for metastatic cells during the reproductive years [13].
As for the interval between the primary diagnosis and the development of Krukenberg tumors, Le Bouedec et al reported that it ranged between 1.5 and 12 years, while Gagnon et al reported a median interval of 11.5 months and a median overall survival of 16 months after Krukenberg tumor diagnosis [14,15].
An interesting situation arises when the ovarian tumor is diagnosed first and the primary malignancy is identified only later. Ho et al presented the case of a 62-year-old patient who was initially diagnosed with a large ovarian mass; at that moment, a total hysterectomy with bilateral adnexectomy was performed, and histopathological and immunohistochemical studies confirmed the metastatic origin of the lesion, the final diagnosis being metastatic lobular carcinoma of the breast. Although no tumor was palpable at the level of the breasts on clinical examination, mammography demonstrated a suspect lesion measuring 6/4 mm, which was biopsied, confirming the presence of a lobular carcinoma [16].
ROUTE OF DISSEMINATION FOR MALIGNANT CELLS
Since the possibility of ovarian metastases of various origins has been demonstrated, attention has focused on identifying the routes of spread responsible for the development of such metastases [17,18]. One of the most widely accepted theories holds that malignant emboli from the primary tumor block the peritumoral lymph nodes, forcing the lymphatic flow to take a descending direction. This theory is supported by the fact that tumoral involvement of the ovary is most often seen at the level of the cortex and the hilum and only in rare cases at the level of the ovarian surface [9,19].
Another theory, which attempts to explain the development of Krukenberg tumors in breast cancer patients, is related to the presence of BRCA1/2 mutations [20].
HISTOPATHOLOGICAL TYPES OF BREAST CANCER ASSOCIATED WITH KRUKENBERG TUMORS
As for the histopathological types of breast carcinoma most frequently associated with Krukenberg tumors, invasive ductal carcinoma seems to be the most commonly encountered [15,21].
One of the largest studies conducted on this issue included 14 patients diagnosed with Krukenberg tumors from breast cancer; among these, there were 4 cases of invasive ductal carcinoma, 4 cases of invasive lobular carcinoma, 4 cases of adenocarcinoma not otherwise specified, 2 cases of ductal and lobular carcinoma, and 2 further cases of unspecified carcinoma; in 87% of cases the Krukenberg tumors were bilateral, with a mean diameter of 8 cm. The diagnosis was established by histopathological and immunohistochemical studies [22].
When it comes to the differential diagnosis of Krukenberg tumors of breast cancer origin, a Brenner tumor most frequently needs to be excluded; the bilaterality of the lesions, in association with the presence of vascular emboli and the absence of both omental deposits and a transition from benign to malignant epithelium, might orient the diagnosis [23-25].
ROLE OF SURGERY FOR KRUKENBERG TUMORS ORIGINATING FROM BREAST CANCER
The role of surgery and the influence of Krukenberg tumors on the long-term outcomes of breast cancer patients have usually been investigated in larger studies on Krukenberg tumors of various origins, due to the relatively low number of such cases.
For example, in a study conducted by Jiang et al on 54 cases diagnosed with Krukenberg tumors, there were 3 cases initially diagnosed with breast cancer. All 3 cases were submitted to adnexectomy; two of them died at 31.7 months and 48.2 months respectively, while the third was still alive at the 48-month follow-up. Moreover, the authors underlined that at a median follow-up of 30 months, 79.6% of all cases had died of disease progression, with a median survival of 17.8 months. These data suggest that, although they can be considered a sign of systemic disease, Krukenberg tumors from breast cancer seem to be associated with a better outcome when compared to those from other primary tumors [26].
Another study on the clinical and prognostic factors in patients with Krukenberg tumors was published by Wu et al in 2015; the authors studied 128 patients diagnosed with Krukenberg tumors between 1990 and 2010, eight of them presenting primary breast cancer [13]. The authors reported a median overall survival of 16 months, which was significantly influenced by the origin of the primary tumor; patients with Krukenberg tumors originating from breast cancer had a significantly better overall survival than cases initially diagnosed with gastric cancer (31 months versus 11 months, p<0.0001). At the same time, the authors underlined that breast cancer patients had the best outcomes after metastasectomy of all subtypes: breast cancer patients reported a median survival of 31 months, followed by cases with colorectal cancer, with an overall survival of 21.5 months, and gastric cancer, with an overall survival of 11 months. The authors explained this through the observation that breast cancer usually carries a better long-term outcome than gastrointestinal cancers, especially gastric cancer, which is usually diagnosed at more advanced stages of the disease; gastric cancer is also associated with poorer performance status and anaemia, inducing a poorer long-term outcome. Other prognostic factors were related to the time of diagnosis (synchronous lesions being associated with poorer outcomes than metachronous lesions) and to the presence of extra-ovarian metastatic disease (extra-ovarian lesions being associated with significantly poorer outcomes than ovarian-limited disease). Moreover, multivariate analysis demonstrated that synchronous lesions, the presence of pelvic invasion, ascites, and the absence of surgical treatment were associated with poorer outcomes [13].
CONCLUSIONS
Though not as common as those originating from gastrointestinal cancers, Krukenberg tumors from breast cancer might be encountered. Although multiple mechanisms have been proposed so far, the development of these lesions is poorly understood; however, it seems that they are more frequently encountered in patients with BRCA1/2 mutations. When it comes to the therapeutic strategy and the prognostic factors in such cases, it seems that the absence of extra-ovarian disease, as well as the association of radical surgery, might improve the long-term outcomes.
"year": 2020,
"sha1": "2e31313b6027e5756b2c674ab7d2e86c843c1323",
"oa_license": "CCBYNC",
"oa_url": "https://umbalk.org/wp-content/uploads/2020/05/16.KRUKENBERG-TUMORS-FROM-BREAST-CANCER.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "64d6252d7f07933be16cf537272ed26610fbb3cf",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Medicine"
]
} |
Relating the cosmological constant and slow roll to conformal symmetry breaking
We show that a theory with conformal invariance, which is explicitly broken by small terms, provides a solution to the fine tuning problem of the cosmological constant. In the absence of the symmetry breaking terms, the cosmological constant is zero. Its value in the full theory is controlled by the symmetry breaking terms. The symmetry breaking terms also provide the slow roll conditions, which may be useful in constructing a model of inflation.
Introduction
It has been argued that if conformal invariance is broken by a soft mechanism then it might be possible to preserve its consequences even in the full quantum theory [1-6]. This allows the possibility of constructing a theory in which the cosmological constant can be set identically to zero. Let us consider the mechanism proposed in [2]. We consider a toy model with two real scalar fields. The Lagrangian density in four dimensions may be written as $\mathcal{L} = \mathcal{L}_G + \mathcal{L}_M + \mathcal{L}_{SB}$, where $\mathcal{L}_G$ and $\mathcal{L}_M$ are the gravity and matter Lagrangians. The term $\mathcal{L}_{SB}$ breaks conformal invariance explicitly and was not included in [2]; we shall specify it below. We notice that $\mathcal{L}_M$ does not include all the terms allowed by conformal invariance: it is possible to write down one more term quartic in the fields, which has been set to zero. As explained in [2], this is necessary in order to break scale invariance spontaneously. In the full quantum theory this is needed in order to have a well defined perturbative expansion.
The model as specified above has a conformal anomaly and hence breaks scale invariance. Within the framework of dimensional regularization this is traced to the fact that the couplings, $\lambda$ and $\lambda_1$, are not dimensionless when $d \neq 4$. However, it is possible to generalize the action such that it maintains conformal invariance in $d$ dimensions. Let us define the field $\omega$ as in [2]. In $d = 4-\epsilon$ dimensions, we can make all terms in the action conformally invariant by multiplying them with a suitable power of $\omega$. In particular, the potential term acquires such a compensating factor. The scalar field kinetic energy terms, as well as the term proportional to $R$, remain unchanged. In $d = 4$, the potential terms will involve fractional powers of the fields. These terms are handled by expanding the fields around their classical values. For example, let $\chi_0$ and $\phi_0$ represent the classical values of the fields $\chi$ and $\phi$ respectively, and $\hat\chi$ and $\hat\phi$ the corresponding quantum fluctuations around the classical solution. Hence we can express $\chi = \chi_0 + \hat\chi$ and $\phi = \phi_0 + \hat\phi$. As long as $\chi_0 \neq 0$ and $\phi_0 \neq 0$, the quantum expansion is well defined. Actually, we only require the classical value, $\omega_0$, of the field $\omega$, defined in Eq. 4, to be non-zero. Hence a necessary condition for a consistent perturbative expansion in this theory is that $\omega_0 \neq 0$. This procedure is called the GR-SI prescription in [2].
The classical values of the scalar fields, $\chi_0$ and $\phi_0$, generate all the dimensional parameters in the theory, such as the gravitational constant, the electroweak scale, the Higgs mass, etc. As we shall see later, $\phi_0 = \lambda_1 \chi_0$. Making a quantum expansion, we find the mass terms of the scalar fields, from which the combination $(\hat\phi - \lambda_1\hat\chi)$ becomes massive. We shall choose the parameter range such that $\lambda_1 \ll 1$; hence the massive field is dominantly $\hat\phi$. The orthogonal combination, proportional to $(\hat\chi + \lambda_1\hat\phi)$, remains massless. This field is dominantly $\hat\chi$.
Following the procedure described above, [2] show that the standard predictions of conformal invariance are preserved by the theory. In particular, the theory predicts a massless dilaton at all orders in perturbation theory. Despite the presence of conformal invariance, the theory does predict running coupling constants. At one loop, [2,7] also argue that the Higgs mass is stable under quantum corrections. In the toy model under consideration, the Higgs field is identified with the field $\phi$. However, it has been argued that this problem does not really get solved, since, in the presence of the Planck scale and the electroweak scale, the theory requires some very small parameters, which have to be fine tuned at each order [8].
One has to impose some constraints on the parameter values in order that the perturbation theory remains well defined. At all orders in perturbation theory, one has to impose a constraint on the parameters such that conformal invariance is spontaneously broken. If this is not preserved, then the perturbation theory does not make sense. Once this condition is imposed, the theory predicts a massless dilaton.
Another important point is that the removal of all divergences might require terms of the kind $\phi^6/\chi^2$. Such terms are allowed by scale invariance. Hence the perturbation theory may be more complicated in these theories, requiring a large number of parameters [9,10]. Due to the presence of such terms, the theory is not renormalizable. Hence it loses predictive power at mass scales above the Planck mass. This is not a very serious problem, since the additional terms are suppressed by the Planck mass. Furthermore, it may be related to the non-renormalizability of gravity, because the scalar fields which might lead to such terms are intrinsically tied to gravity. For example, the classical values of the fields $\chi$ and $\phi$ generate the gravitational constant, $G$. In any case, even in the presence of such terms, the consequences of conformal invariance remain valid. The theory also predicts a zero cosmological constant. The reason is that it has no dimensional parameter. Hence the effective potential, at any order in perturbation theory, contains terms which are quartic in the fields, multiplied by a function of the ratio of the fields, $r = \phi/\chi$. We may express the effective potential as $V_{\rm eff} = \chi^4\, U(r)$ [2]. We shall assume that $V_{\rm eff}$ is such that, in the absence of symmetry breaking terms, it is minimized for $\chi_0$ and $\phi_0$ not equal to $0$ or $\pm\infty$. The minimization conditions then imply $U(r_0) = 0$ and $U'(r_0) = 0$ (Eq. 8), where $r_0 = \phi_0/\chi_0$. The one loop effective potential has been explicitly constructed in [2]. After imposing the conditions, Eq. 8, it is found that conformal symmetry is spontaneously broken at this order also. By conformal invariance, the potential displays degenerate minima, such that $\chi_0$ and $\phi_0$ take a continuous range of values while $r_0 = \phi_0/\chi_0$ remains fixed. Eq. 8 also implies that $V_{\rm eff}(\chi_0,\phi_0) = 0$, and hence leads to a zero cosmological constant.
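The two conditions labelled Eq. 8 here follow directly from the factorised form of the effective potential; this short derivation uses only the definitions given above (the labelling as Eq. 8 follows the text's numbering):

$$V_{\rm eff}(\chi,\phi) = \chi^4\, U(r), \qquad r = \phi/\chi,$$
$$\frac{\partial V_{\rm eff}}{\partial \phi} = \chi^3\, U'(r) = 0, \qquad \frac{\partial V_{\rm eff}}{\partial \chi} = 4\chi^3\, U(r) - \chi^2 \phi\, U'(r) = 0,$$

so that, for $\chi_0 \neq 0$, the minimization conditions reduce to $U'(r_0) = 0$ and $U(r_0) = 0$ with $r_0 = \phi_0/\chi_0$, and therefore $V_{\rm eff}(\chi_0,\phi_0) = \chi_0^4\, U(r_0) = 0$.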
One may be concerned that the constraints, Eq. 8, might themselves require fine tuning [8] of parameters. However, this does not arise, in the following sense. Consider the potential of the model, including all the possible terms that can arise consistent with conformal invariance (a plausible explicit form is sketched below). The minimization conditions, Eq. 8, can be satisfied only if the coefficient of the additional quartic term, $\lambda_2$, is set to zero (Eq. 11). This is not fine tuning in the sense that we do not need to maintain a very small value of $\lambda_2$. We may compare this with the standard problem of fine tuning of the cosmological constant [11,12].
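One plausible form of the potential, consistent with the statements in the text ($\phi_0 = \lambda_1\chi_0$ at the minimum, one extra quartic term multiplying $\chi^4$, and the requirement that its coefficient $\lambda_2$ be set to zero), is

$$V(\chi,\phi) = \frac{\lambda}{4}\left(\phi^2 - \lambda_1^2\,\chi^2\right)^2 + \frac{\lambda_2}{4}\,\chi^4 .$$

With this form, $U(r) = \frac{\lambda}{4}(r^2-\lambda_1^2)^2 + \frac{\lambda_2}{4}$, and the conditions $U(r_0) = U'(r_0) = 0$ can hold at the finite value $r_0 = \lambda_1$ only if $\lambda_2 = 0$; the overall coupling $\lambda$ itself need not be tuned. This parametrization is a reconstruction rather than the paper's exact expression.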
The problem is most severe if we have to fine tune the cosmological constant at each order to a very small value. If we can set the cosmological constant identically to zero, even if there is no symmetry demanding this, then the problem is not as severe. Of course, ideally it would be elegant if a symmetry or some other mechanism demanded a vanishing cosmological constant. However, in the absence of such a mechanism, it would still represent progress if at each order in perturbation theory we did not need to fine tune the cosmological constant to a very small value and could simply set it to zero. In the present case also, no symmetry requires Eq. 11. However, we do need to impose this constraint in order that perturbation theory is well defined. Furthermore, it is satisfying that we do not need to fine tune $\lambda_2$ to a very small value. We point out that there is currently considerable effort to study the potential implications of conformal invariance in cosmology and high energy physics. Several models being studied are based on local conformal invariance [13-22]. The implications of global scale invariance have also been investigated [23-28].
Explicit conformal symmetry breaking
We next add a small conformal symmetry breaking term to the action. This term contains mass terms $m_1$ and $m_2$ for the two fields and a cosmological constant $\Lambda$ (a plausible explicit form is sketched after this paragraph). For aesthetic reasons, we may choose to set $\Lambda = 0$, but the theory does not require it. As long as these terms are zero, they cannot be generated, at any order in perturbation theory, by the action which is symmetric under conformal transformations. Hence we can choose $\Lambda$, $m_1$ and $m_2$ to be arbitrarily small without any fine tuning. Let us first set $\Lambda = 0$; its effect will be discussed later. The basic point is that the mass terms lift the degeneracy in the potential. The location of the global minimum depends on the choice of symmetry breaking terms. We point out that, by a suitable choice of such terms, the global minimum might arise at non-zero values of $\chi$ and $\phi$. At any particular time the fields may take values such that the potential is not at its minimum. Hence it will produce an effective cosmological constant. The fields will also evolve slowly, as assumed in several models of inflation [29] or dark energy [30]. The slow roll is now controlled by the small symmetry breaking part of the action.
We display this mechanism by choosing a simple model. We set $m_2 \approx 0$. We choose the parameters $\beta_1$ and $\beta_2$ to be small compared to unity. These parameters need not be very small and hence do not require acute fine tuning. In the absence of symmetry breaking terms, the potential is minimized for $\phi_0 = \lambda_1 \chi_0$ (Eq. 13). We shall assume that $\lambda_1 \ll 1$ and hence $\phi_0 \ll \chi_0$. As we shall see, we require $\chi_0 \gg M_{Pl}$, such that $\beta_1 \chi_0 \approx M_{Pl}$. The value of $\lambda_1$ need not be very small, since $\phi_0$ may be of the order of the Planck or GUT scale. Alternatively, $\phi_0$ might be of the order of the electroweak scale; in this case $\lambda_1$ does require acute fine tuning, which is related to the standard problem of maintaining a low Higgs mass in the presence of a very large mass scale in the scalar potential. This problem is not solved in this model [2,8]. In the full theory, including symmetry breaking terms, Eq. 13 will not yield the true minimum of the potential.
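A minimal explicit form for the symmetry breaking term, consistent with the description above (mass terms for the two fields plus a constant term), would be

$$\mathcal{L}_{SB} = -\frac{1}{2} m_1^2\, \chi^2 - \frac{1}{2} m_2^2\, \phi^2 - \Lambda ,$$

where the overall normalisation of $\Lambda$ (in particular any accompanying factor of $M_{Pl}^2$) is an assumption rather than something fixed by the surrounding text.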
Let us first work directly in the Jordan frame and, for simplicity, ignore the term proportional to $R$. As we shall see, for our choice of parameters we get the same result in the Einstein frame. We are interested in solving the scalar field equations of motion in order to determine the effective cosmological constant. Ignoring space derivatives, the fields obey the standard equations of motion with Hubble friction, where $H$ is the Hubble parameter (see the sketch below). Assuming an approximate solution of the form of Eq. 13, we find that $\dot\phi \approx 0$. Here we set the second derivatives of the fields equal to zero, since they are likely to be more suppressed in comparison to the first derivatives. We also find that $\dot\chi$ is proportional to the symmetry breaking mass term (Eq. 15). For slow roll conditions to be satisfied, we require $|\dot\chi| \ll H\chi_0$, which implies that $m_1 \ll H$. Hence for slow roll, we require that the symmetry breaking terms are much smaller than the Hubble parameter. Such small terms would normally require acute fine tuning. However, in the present case these are protected by conformal symmetry.
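The following sketch reconstructs the slow-roll estimate in a form consistent with the surrounding statements; the exact prefactors of the original equations are assumptions:

$$3H\dot{\chi} + m_1^2\,\chi_0 \approx 0 \;\;\Rightarrow\;\; \dot{\chi} \approx -\frac{m_1^2\,\chi_0}{3H},$$
$$|\dot{\chi}| \ll H\chi_0 \;\;\Rightarrow\;\; m_1^2 \ll 3H^2 ,$$

which is precisely the statement that the symmetry breaking mass must lie well below the Hubble scale.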
The solution leads to a vacuum energy equal to $m_1^2 \chi_0^2/2$. Hence, in order that it generates a sufficiently large value of the effective cosmological constant, we require a corresponding condition on $m_1$ and $\chi_0$ (Eq. 18). In the Jordan frame, the gravitational constant undergoes a slow evolution, which has been ignored in the above equation. This evolution can be consistently ignored as long as the slow roll conditions are satisfied. It is useful to perform the entire calculation in the Einstein frame which, as we shall see below, leads to the same result. Eq. 18, along with the slow roll condition, leads to a constraint on the field $\chi$ (Eq. 19). We point out that the model contains some small parameters, such as $\beta_1$ and $\lambda_1$, which are not protected by conformal invariance. However, these parameters need not be very small.
Their precise values depend on the model under consideration. For our purposes these may be of the order of $10^{-3}$. The possibility that $\lambda_1$ may be very small, and its associated fine tuning, has already been discussed above. The important point is that the mass parameters, such as $m_1$, are extremely tiny in comparison to other mass scales, such as $M_{Pl}$, $\phi_0$, etc. Their small value, however, is protected against quantum corrections by conformal invariance.
We next perform the calculation in the Einstein frame. We make a conformal transformation of the metric, with the conformal factor set by the field $\omega$ (a natural identification for $\omega$ is sketched after this paragraph). The Lagrangian density in terms of the transformed variables can then be written with a potential $V$, where, as before, we shall assume $m_2 \approx 0$. We next obtain the equations of motion, keeping only the time derivatives. Using the slow roll approximation, we drop the second derivatives and obtain the slow-roll equations for $\phi$ and $\chi$. In the absence of symmetry breaking terms we again find the same result as Eq. 13, with $\dot\phi = 0$ and $\dot\chi = 0$. Solving the full equations, assuming the relationship Eq. 13 between the classical values of $\phi$ and $\chi$, we find that both $\dot\phi$ and $\dot\chi$ are related to the symmetry breaking terms. The second terms on the left hand side of both equations are negligible, since $\beta_1 \ll 1$ and $\beta_2 \ll 1$. Given that, at leading order, $\omega \sim \beta_1 \chi_0 \sim M_{Pl}$, we again find that $\dot\chi$ is given by Eq. 15. In the present case $\dot\phi \neq 0$; however, it is clear that $\dot\phi \ll \dot\chi$, being suppressed by the factor $\beta_2^2 \phi_0/(\beta_1^2 \chi_0)$. Hence we again get exactly the same condition, Eq. 19, as obtained in the Jordan frame.
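In models of this type, the conformal factor is typically built from the non-minimal couplings of the scalars to gravity; a natural identification consistent with the statements above ($\omega \sim \beta_1\chi_0 \sim M_{Pl}$ at leading order) is

$$\omega^2 = \beta_1^2\,\chi^2 + \beta_2^2\,\phi^2, \qquad \tilde{g}_{\mu\nu} = \frac{\omega^2}{M_{Pl}^2}\, g_{\mu\nu},$$

which would also explain the role of the parameters $\beta_1$ and $\beta_2$. This identification is an inference from the surrounding text, not something stated explicitly in it.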
Higher Orders
The above analysis may be performed at any order in perturbation theory using the effective potential. While computing the quantum contributions to the effective potential, we ignore the symmetry breaking terms. The symmetry breaking terms are assumed to be extremely small and hence are expected to give negligible contributions at higher orders to the effective potential. Hence the effective potential at any order can be expressed as the sum of the symmetry preserving piece $\chi^4 U(r)$ and the tree-level symmetry breaking terms, where the term $\chi^4 U(r)$ is obtained entirely from the symmetry preserving part of the action. We require that, in the absence of symmetry breaking terms, at each order the effective potential displays a minimum at which the fields are non-zero and finite. Hence we have to impose some conditions on the counter terms so that this holds [2]. Due to conformal invariance, the minimum value of the potential can only be zero. We now replace $V$ in Eq. 24 by $V_{\rm eff}$. We are interested in a solution subject to the conditions specified by Eq. 8. Imposing these conditions in Eq. 24, we find that $\dot\phi$ and $\dot\chi$ are both proportional to the symmetry breaking terms. The value of $\dot\chi$ is again given by Eq. 15, and $\dot\phi \ll \dot\chi$. Hence we can maintain their small values without any fine tuning.
Non-zero cosmological constant
We next discuss the case where the symmetry breaking terms contain a non-zero cosmological constant, $\Lambda$. In this case we set the masses, $m_1$ and $m_2$, equal to zero. This case is very simple: the degeneracy of the minimum does not get lifted, i.e. the minimum remains exactly degenerate even when we include the symmetry breaking terms. Hence the equations of motion satisfy Eq. 13 exactly. The theory now has a non-zero cosmological constant. However, it does not receive large corrections from the symmetry preserving terms. At higher orders also, Eq. 13 is maintained by a suitable choice of counter terms [2]. Hence we still have degenerate minima, with the minimum value approximately equal to zero, up to the corrections due to the symmetry breaking term, $\Lambda$.
Applications to inflation and dark energy
The mechanism that we have discussed above may be applied either to inflation or to dark energy. Let us first discuss the case of inflation. Here it is simplest to choose symmetry breaking terms such that $\Lambda = 0$. We can choose the mass terms, $m_1$ and $m_2$, sufficiently small to satisfy the slow roll conditions. Inflation ends when the fields reach the true minima of the potential. The phenomenon acts like standard large field inflation [29]. The symmetry breaking terms have to be of the order of the inflationary scale. Hence this theory will have conformal breaking of the order of the inflationary scale and will not solve the fine tuning problem of dark energy. However, the inflationary slow roll condition can be met without any fine tuning.
Alternatively, we may accommodate inflation by fine tuning the symmetry preserving terms, with the symmetry breaking terms only of the order of dark energy. In this case we may either introduce an explicit cosmological constant or the masses $m_1$ and $m_2$. In the case of a cosmological constant, the constraint on the field $\chi$, Eq. 19, is not applicable. However, this constraint is applicable if dark energy is generated by the masses, which lead to a slow evolution of the fields.
Conclusions
In this paper we have shown that conformal symmetry provides a mechanism which partially alleviates the problem of fine tuning of the cosmological constant. We use the GR-SI prescription, in which conformal invariance can be maintained in the full quantum theory [2]. However, the perturbation theory gets more complicated and the renormalizability of the theory is not maintained [9,10]. Hence the theory loses predictability beyond a certain mass scale, which in the present model is taken to be the Planck scale; this absence of renormalizability is therefore not a very serious issue at low energies. Conformal invariance in the theory is spontaneously broken for a certain range of parameters, and the perturbation theory makes sense only if this can be accomplished. Hence we have to impose an additional constraint on the theory, not required by conformal invariance. We have argued that this constraint does not amount to fine tuning of a parameter, since it does not involve maintaining a small value of a parameter at each order in perturbation theory; it simply requires setting some parameter value identically to zero. Given this constraint, the perturbation theory can be well defined.
If we impose exact conformal invariance on the theory, then it predicts a zero cosmological constant. We introduce small conformal symmetry breaking terms. These involve mass terms of the scalar fields and/or an explicit cosmological constant. Since the symmetry preserving part does not generate such terms at any order in perturbation theory, we can maintain their small values without any fine tuning. We may identify the cosmological constant with dark energy. Alternatively, the scalar mass terms lead to a slowly rolling scalar field and hence can also generate dark energy. Another possibility is that the model may be applied to generate inflation. Detailed application of the model to dark energy or inflation is not pursued in this paper.
"year": 2014,
"sha1": "f336989bcc8f3fe2cf94be320be48eab09207881",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1405.7775",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "f336989bcc8f3fe2cf94be320be48eab09207881",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Doxorubicin-loaded human serum albumin nanoparticles overcome transporter-mediated drug resistance in drug-adapted cancer cells
Resistance to systemic drug therapy is a major reason for the failure of anticancer therapies. Here, we tested doxorubicin-loaded human serum albumin (HSA) nanoparticles in the neuroblastoma cell line UKF-NB-3 and its ABCB1-expressing sublines adapted to vincristine (UKF-NB-3rVCR1) and doxorubicin (UKF-NB-3rDOX20). Doxorubicin-loaded nanoparticles displayed increased anticancer activity in UKF-NB-3rVCR1 and UKF-NB-3rDOX20 cells relative to doxorubicin solution, but not in UKF-NB-3 cells. UKF-NB-3rVCR1 cells were re-sensitised by nanoparticle-encapsulated doxorubicin to the level of UKF-NB-3 cells. UKF-NB-3rDOX20 cells displayed a more pronounced resistance phenotype than UKF-NB-3rVCR1 cells and were not re-sensitised by doxorubicin-loaded nanoparticles to the level of parental cells. ABCB1 inhibition using zosuquidar resulted in effects similar to nanoparticle incorporation, indicating that doxorubicin-loaded nanoparticles successfully circumvent ABCB1-mediated drug efflux. The limited re-sensitisation of UKF-NB-3rDOX20 cells to doxorubicin by circumvention of ABCB1-mediated efflux is probably due to the presence of multiple doxorubicin resistance mechanisms. So far, ABCB1 inhibitors have failed in clinical trials, probably because systemic ABCB1 inhibition results in a modified body distribution of its many substrates, including drugs, xenobiotics, and other molecules. HSA nanoparticles may provide an alternative, more specific way to overcome transporter-mediated resistance.
Despite substantial improvements over recent decades, the prognosis for many cancer patients remains unacceptably poor. The outlook is particularly grim for patients diagnosed with disseminated (metastatic) disease, who cannot be successfully treated by local treatment (surgery, radiotherapy) and depend on systemic drug therapy, because the success of systemic therapies is typically limited by therapy resistance [2-4].
Drug efflux mediated by transporters including adenosine triphosphate (ATP)-binding cassette (ABC) transporters has been shown to play a crucial role in cancer cell drug resistance [2,5]. ABCB1 (also known as P-glycoprotein or MDR1) seems to play a particularly important role in cancer cell drug resistance as a highly promiscuous transporter that mediates the cellular efflux of a wide range of structurally different substrates including many anticancer drugs. Different studies have reported that nanometer-sized drug carrier systems can bypass efflux-mediated drug resistance [6]. This includes various nanoparticle and liposome formulations of the ABCB1 substrate doxorubicin [7][8][9][10][11][12].
Here, we investigated the effects of doxorubicin-loaded human serum albumin (HSA) nanoparticles in ABCB1-expressing neuroblastoma cells. HSA nanoparticles are easy to produce [13-17], and HSA is a well-tolerated material. It is the most abundant protein in human blood plasma and is used in many pharmaceutical formulations, in particular as part of critical care treatment [18].
Results
Nanoparticle size, polydispersity and drug load
HSA nanoparticles were prepared by desolvation as previously described [13-17]. The nanoparticles were stabilised by cross-linking of the free amino groups present in albumin. Three different nanoparticle preparations were produced using glutaraldehyde at amounts that corresponded to a theoretical cross-linking of 40% (HSA 40% nanoparticles), 100% (HSA 100% nanoparticles), or 200% (HSA 200% nanoparticles) of the amino groups available in the HSA molecules. A non-stabilised (0% cross-linking) formulation was used as a control. The resulting particle sizes and polydispersity indices are shown in Table 1. HSA (0%) nanoparticles displayed a large particle size of almost 1 µm and a high polydispersity of 0.5, confirming that no stable nanoparticles had formed (Table 1). The three HSA nanoparticle preparations stabilised by the different glutaraldehyde concentrations displayed similar diameters between 460 and 500 nm and polydispersity indices between 0.153 and 0.213, indicating a narrow but not monodisperse size distribution (Table 1).
The spherical shape and narrow size distribution of the HSA nanoparticles were confirmed by scanning electron microscopy (SEM), as depicted for nanoparticles stabilised by a 100% cross-linking degree (Figure 1). For these nanoparticles, a zeta potential of −12.5 ± 1.8 mV (n = 6) was detected, indicating only moderate stabilisation by electrostatic repulsion.
While HSA (40%), HSA (100%), and HSA (200%) nanoparticles displayed similar drug loads between 152 and 191 µg doxorubicin/mg nanoparticles, HSA (0%) nanoparticles had bound 371 µg doxorubicin/mg HSA (Table 1). This probably reflects the higher accessibility of the doxorubicin binding sites known to be present on HSA [19] in dissolved HSA molecules, compared to the binding sites accessible in HSA nanoparticles.
Effects of doxorubicin-loaded nanoparticles on neuroblastoma cells
The effects of doxorubicin applied in solution or incorporated into HSA (0%), HSA (40%), HSA (100%), or HSA (200%) nanoparticles on neuroblastoma cell viability are shown in Figure 3. The numerical values are presented in Supporting Information File 1, Table S1. Empty control nanoparticles did not affect cell viability at the investigated concentrations.
In UKF-NB-3rDOX20 cells, however, the differences between doxorubicin solution and doxorubicin nanoparticles only reached statistical significance for doxorubicin-loaded HSA (200%) nanoparticles (Figure 3). The reasons for this may include that nanoparticle-incorporated doxorubicin does not completely avoid ABCB1-mediated efflux from UKF-NB-3rDOX20 cells and/or that doxorubicin resistance is caused by multiple resistance mechanisms, so that avoidance of ABCB1-mediated transport is not sufficient to re-sensitise UKF-NB-3rDOX20 cells to doxorubicin to the level of UKF-NB-3 cells.
To further study the role of ABCB1 as a doxorubicin resistance mechanism in UKF-NB-3rDOX20 cells, we performed additional experiments in which we combined the ABCB1 inhibitor zosuquidar with doxorubicin applied as a solution or as nanoparticle preparations in UKF-NB-3rDOX20 and UKF-NB-3 cells.
Zosuquidar (1 µM) did not affect the efficacy of doxorubicin solution or nanoparticle-bound doxorubicin in parental UKF-NB-3 cells (Figure 5), which do not display noticeable ABCB1 activity [20,22,23]. These experiments also confirmed that there is no significant difference in anticancer activity between doxorubicin solution and doxorubicin nanoparticles in UKF-NB-3 cells, despite an apparent trend in the first set of experiments (Figure 3).
In UKF-NB-3rDOX20 cells, the addition of zosuquidar resulted in an increased sensitivity to free doxorubicin (Figure 5). The doxorubicin IC50 decreased 2.5-fold, from 91 ng/mL in the absence of zosuquidar to 37 ng/mL in its presence, but not to the level of UKF-NB-3 cells (4.6 ng/mL) (Supporting Information File 1, Table S2). This confirmed that ABCB1 is one among multiple resistance mechanisms that contribute to the doxorubicin resistance phenotype observed in UKF-NB-3rDOX20 cells.
In this set of experiments, doxorubicin-loaded nanoparticles displayed significantly increased activity compared to doxorubicin solution in UKF-NB-3rDOX20 cells (Figure 5). This finding, together with the non-significant trend observed in the first set of experiments (Figure 3), suggests that doxorubicin-loaded nanoparticles do indeed exert stronger effects against UKF-NB-3rDOX20 cells than doxorubicin solution. Zosuquidar only moderately increased the efficacy of doxorubicin nanoparticles further (1.1-1.8-fold) in UKF-NB-3rDOX20 cells (Figure 5, Supporting Information File 1, Table S2). In particular, doxorubicin-loaded HSA (200%) nanoparticles, the most active nanoparticle preparation in UKF-NB-3rDOX20 cells, displayed a doxorubicin IC50 of 20 ng/mL, which was not further reduced by the addition of zosuquidar (doxorubicin IC50: 18 ng/mL) (Figure 5, Table S2). Hence, the increased anticancer activity of doxorubicin incorporated into HSA nanoparticles appears to be primarily caused by circumventing the ABCB1-mediated doxorubicin efflux in UKF-NB-3rDOX20 cells.
Discussion
The occurrence of drug resistance is the major reason for the failure of systemic anticancer therapies [2]. Here, we investigated the effects of doxorubicin-loaded HSA nanoparticles on the viability of the neuroblastoma cell line UKF-NB-3 and its sublines adapted to doxorubicin (UKF-NB-3rDOX20) and vincristine (UKF-NB-3rVCR1), which both display ABCB1 activity and resistance to doxorubicin. The HSA nanoparticles were prepared by desolvation and stabilised with glutaraldehyde, which cross-links amino groups present in the albumin molecules [13-17]. Glutaraldehyde was used at molar concentrations that corresponded to 40% (Dox HSA (40%) nanoparticles), 100% (Dox HSA (100%) nanoparticles), or 200% (Dox HSA (200%) nanoparticles) theoretical cross-linking of the 59 amino groups available per HSA molecule [24]. The resulting nanoparticles ranged from 463 to 486 nm in diameter and had a low polydispersity index of around 0.2.
Doxorubicin-loaded nanoparticles displayed similar activity to doxorubicin solution in the parental UKF-NB-3 cell line, but exerted stronger effects than doxorubicin solution in the ABCB1-expressing UKF-NB-3 sublines. UKF-NB-3rVCR1 cells were as sensitive to doxorubicin-loaded nanoparticles as parental UKF-NB-3 cells were to doxorubicin solution (and doxorubicin-loaded nanoparticles). This suggests that the doxorubicin resistance of UKF-NB-3rVCR1 cells depends exclusively on ABCB1 expression. In concordance, the ABCB1 inhibitor zosuquidar re-sensitised UKF-NB-3rVCR1 cells to the level of parental UKF-NB-3 cells.
UKF-NB-3rDOX20 cells displayed a more pronounced doxorubicin resistance phenotype than UKF-NB-3rVCR1 cells and were re-sensitised neither by nanoparticle-encapsulated doxorubicin nor by zosuquidar to the level of UKF-NB-3 cells. This suggests that UKF-NB-3rDOX20 cells have developed multiple doxorubicin resistance mechanisms. In contrast, adaptation of UKF-NB-3rVCR1 cells to vincristine, a tubulin-binding agent with an anticancer mechanism of action unrelated to that of the topoisomerase II inhibitor doxorubicin [2,20,25,26], did not result in the acquisition of changes that confer doxorubicin resistance beyond ABCB1 expression.
Furthermore, zosuquidar did not increase the efficacy of doxorubicin-loaded HSA (100%) and HSA (200%) nanoparticles and only modestly enhanced the efficacy of doxorubicin-loaded HSA (40%) nanoparticles. Together, these data confirm that administration of doxorubicin as HSA nanoparticles circumvents ABCB1-mediated drug efflux. The difference between HSA (40%) nanoparticles and the other two preparations may be explained by elevated drug release due to the lower degree of cross-linking.
Interestingly, high concentrations of the cross-linker glutaraldehyde did not affect the efficacy of the resulting doxorubicin-loaded nanoparticles, although high glutaraldehyde concentrations might have been expected to affect drug release and/or to bind covalently to doxorubicin via its amino group.
Notably, these results differ from a recent similar study, in which nanoparticles prepared from poly(lactic-co-glycolic acid) (PLGA) or polylactic acid (PLA), two other biodegradable materials approved by the FDA and EMA for human use [27,28], did not bypass ABCB1-mediated drug efflux [29]. Differences in the mode of uptake and the cellular distribution of nanoparticles made from different materials may be responsible for these discrepancies. HSA nanoparticles may be internalised upon interaction with cellular albumin receptors [30,31]. Notably, nab-paclitaxel, an HSA nanoparticle-based preparation of paclitaxel (another ABCB1 substrate [21]) that is approved for the treatment of different forms of cancer [32], had previously been shown not to avoid ABCB1-mediated drug efflux [33]. However, nab-paclitaxel is not produced using cross-linkers, and the interaction of paclitaxel with albumin may differ from that of doxorubicin. Hence, variations in drug binding and drug release kinetics may be responsible for this difference.
Despite the prominent role of ABCB1 as a drug resistance mechanism, attempts to exploit it as a drug target have failed so far, despite the development of highly specific allosteric ABCB1 inhibitors (of which zosuquidar is one) [5,21]. One reason for this is that ABCB1 is expressed at various physiological borders and is involved in the control of the body distribution of its many endogenous and exogenous substrates. Systemic ABCB1 inhibition can therefore result in toxicity as a consequence of a modified body distribution of anticancer drugs (and other drugs co-administered for conditions other than cancer), xenobiotics, and other molecules. Hence, the use of drug carrier systems to bypass ABC transporter-mediated drug efflux is conceptually very attractive, because it can (in contrast to inhibitors of ABCB1 or other transporters) overcome resistance mediated by multiple transporters and does not result in systemic inhibition of transporter function at physiological barriers. However, cancer cells may be characterised by multiple further resistance mechanisms, and merely bypassing transporter-mediated efflux may not be sufficient to achieve a therapeutic response (as illustrated by our current finding that UKF-NB-3rDOX20 cells cannot be fully re-sensitised to doxorubicin by zosuquidar) [2,5,21]. Hence, our results demonstrate that more sophisticated, personalised therapies will need to be developed. Such therapies will depend on an improved understanding of the resistance status of cancer cells to a certain drug beyond its transporter status. If biomarkers become available that predict cancer cell response to a certain drug more reliably, nanoparticles can be used to transport drugs into cancer cells that are likely to respond to them, under circumvention of transporter-mediated efflux.
In conclusion, doxorubicin-loaded HSA nanoparticles produced by desolvation and cross-linking using glutaraldehyde overcome (in contrast to other nanoparticle systems) transporter-mediated drug resistance in drug-adapted neuroblastoma cells. However, our data also show that bypassing of transporter-mediated drug efflux may not be sufficient to sensitise cancer cells, which have developed multiple resistance mechanisms, to the level of sensitive parental cells.
Experimental
Reagents and chemicals
HSA and glutaraldehyde were obtained from Sigma-Aldrich Chemie GmbH (Karlsruhe, Germany). Dulbecco's phosphate buffered saline (PBS) was purchased from Biochrom GmbH (Berlin, Germany). Doxorubicin was obtained from LGC Standards GmbH (Wesel, Germany). All chemicals were of analytical grade and used as received.
Human serum albumin (HSA) nanoparticle preparation by desolvation
HSA nanoparticles were prepared by desolvation as previously described [13-17]. 100 µL of a 1% (w/v) aqueous doxorubicin solution was added to 500 µL of a 40 mg/mL (w/v) HSA solution and incubated for 2 h at room temperature under stirring (550 rpm, Cimarec i Multipoint Stirrer, ThermoFisher Scientific, Langenselbold, Germany). 4 mL of ethanol 96% was added at room temperature under stirring using a peristaltic pump (Ismatec ecoline, Ismatec, Wertheim-Mondfeld, Germany) at a flow rate of 1 mL/min. After the desolvation process, the resulting nanoparticles were stabilised/cross-linked using different amounts of glutaraldehyde, corresponding to different percentages of the theoretical amount necessary for the quantitative cross-linking of the 60 primary amino groups present in the HSA molecules of the particle matrix. The addition of 4.7 µL of 8% (w/v) aqueous glutaraldehyde solution resulted in a theoretical cross-linking of 40% of the HSA amino groups, the addition of 11.8 µL in 100% cross-linking, and the addition of 23.6 µL in 200% cross-linking. The suspension was then stirred for 12 h at 550 rpm. The particles were purified by three cycles of centrifugation (16,000 g for 12 min) and resuspension in purified water. During particle purification, the supernatants were collected and their drug content was measured by high-performance liquid chromatography (HPLC) as described below. The loading efficiency of doxorubicin in the nanoparticles was calculated from the difference between the doxorubicin amount used for nanoparticle preparation and the unbound amount detected in the collected supernatants (see the sketch below).
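As a rough sanity check of these amounts, the stoichiometry and the loading efficiency can be reproduced in a few lines of Python. The HSA molar mass and the assumption that one glutaraldehyde molecule bridges two amino groups are assumptions supplied here, not values stated in the text, and the function names are illustrative rather than taken from any published analysis script:

```python
# Back-of-the-envelope check of the glutaraldehyde amounts and drug loading.
HSA_MW = 66_500            # g/mol, approximate molar mass of HSA (assumption)
GLU_MW = 100.12            # g/mol, glutaraldehyde
AMINO_GROUPS_PER_HSA = 60  # primary amino groups per HSA molecule (see text)

def glutaraldehyde_volume_ul(hsa_mg, crosslinking_pct, glu_w_v=0.08):
    """Volume (µL) of an 8% (w/v) glutaraldehyde solution for a given
    theoretical cross-linking degree, assuming each glutaraldehyde
    molecule bridges two amino groups."""
    mol_amino = hsa_mg / 1000 / HSA_MW * AMINO_GROUPS_PER_HSA
    mol_glu = mol_amino / 2 * crosslinking_pct / 100
    grams_glu = mol_glu * GLU_MW
    return grams_glu / glu_w_v * 1_000  # g / (g/mL) = mL, then mL -> µL

def loading(dox_added_ug, dox_in_supernatants_ug, nanoparticle_mg):
    """Drug load (µg doxorubicin per mg nanoparticles) from the mass balance."""
    return (dox_added_ug - dox_in_supernatants_ug) / nanoparticle_mg

# 500 µL of 40 mg/mL HSA = 20 mg HSA per batch
for pct in (40, 100, 200):
    print(f"{pct:>3}% cross-linking: {glutaraldehyde_volume_ul(20, pct):.1f} µL")
# -> roughly 4.5, 11.3, 22.6 µL, close to the 4.7/11.8/23.6 µL stated above
```

The small residual difference from the stated volumes is consistent with a slightly different assumed HSA molar mass.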
Determination of particle size distribution
The average particle size and the polydispersity were measured by photon correlation spectroscopy (PCS) using a Malvern Zetasizer Nano instrument (Malvern Instruments, Herrenberg, Germany). The particle suspensions were diluted 1:100 with purified water and measured at a temperature of 22 °C using a backscattering angle of 173°.
The zeta potential was measured in the same instrument by laser Doppler microelectrophoresis to provide information about the surface charge of the nanoparticles. Thus, the nanoparticle dilutions described above were transferred into a folded capillary cell and the experiment was conducted at 22 °C.
Morphological analysis of nanoparticles by scanning electron microscopy (SEM)
3 µL of diluted HSA nanoparticle suspension (0.25 mg/mL) was applied to a 0.1 µm membrane filter (Isopore membrane filter, Merck Millipore, Darmstadt, Germany) and dried overnight in a desiccator. Afterwards, the membrane filter was sputtered with gold (Sputter SCD 040, BALTEC, Liechtenstein) under an argon atmosphere. SEM was performed on a CamScan CS4 microscope (Cambridge Scanning Company, Cambridge, United Kingdom) and the sample was visualised with an accelerating voltage of 10 kV, a working distance of 10 mm, and 10,000-fold magnification.
Doxorubicin quantification via HPLC-UV
The amount of doxorubicin incorporated into the nanoparticles was determined by HPLC-UV (HPLC 1200 series, Agilent Technologies GmbH, Böblingen, Germany) using a LiChroCART 250 × 4 mm LiChrospher 100 RP-18 column (Merck KGaA, Darmstadt, Germany). The mobile phase was a mixture of water (eluent A) and acetonitrile (70:30) containing 0.1% trifluoroacetic acid [16]. In order to obtain symmetric peaks, a gradient was used: in the first 6 min, the percentage of A was reduced from 70% to 50%; subsequently, within 2 min, it was further decreased to 20% and then, within another 2 min, increased again to 70%. These conditions were held for a final 5 min, resulting in a total runtime of 15 min. At a flow rate of 0.8 mL/min, an elution time for doxorubicin of t = 7.5 min was achieved. The detection of doxorubicin was performed at a wavelength of 485 nm [34].
Cell culture
The neuroblastoma cell line UKF-NB-3, which harbours a MYCN amplification (a major indicator of high-risk disease and poor prognosis [35]), was established from a stage 4 neuroblastoma patient [20]. The UKF-NB-3 sublines adapted to growth in the presence of 20 ng/mL doxorubicin (UKF-NB-3rDOX20) [20] or 1 ng/mL vincristine (UKF-NB-3rVCR1) were established by continuous exposure to step-wise increasing drug concentrations as previously described [20,36] and were derived from the Resistant Cancer Cell Line (RCCL) collection [37].
All cells were propagated in Iscove's modified Dulbecco's medium (IMDM) supplemented with 10% foetal calf serum, 100 IU/mL penicillin and 100 µg/mL streptomycin at 37 °C. The drug-adapted sub-lines were continuously cultured in the presence of the indicated drug concentrations. The cells were routinely tested for mycoplasma contamination and authenticated by short tandem repeat profiling.
Cell viability assay
Cell viability was determined by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay modified after Mosmann [38], as previously described [39]. 2 × 10⁴ cells suspended in 100 µL of cell culture medium were plated per well in 96-well plates and incubated in the presence of various doxorubicin concentrations (free or nanoparticle-encapsulated) for 120 h. Where indicated, free or nanoparticle-encapsulated doxorubicin was combined with a fixed concentration of 1 µM of the ABCB1 inhibitor zosuquidar. Then, 25 µL of MTT solution (2 mg/mL (w/v) in PBS) was added per well, and the plates were incubated at 37 °C for an additional 4 h. After this, the cells were lysed using 200 µL of a buffer containing 20% (w/v) sodium dodecylsulfate and 50% (v/v) N,N-dimethylformamide, with the pH adjusted to 4.7, at 37 °C for 4 h. The absorbance was determined at 570 nm for each well using a 96-well multiscanner. After subtraction of the background absorption, the results are expressed as percentage viability relative to control cultures which received no drug. The drug concentrations that inhibited cell viability by 50% (IC50) were determined using CalcuSyn (Biosoft, Cambridge, UK).
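CalcuSyn is proprietary; a comparable IC50 estimate can be obtained by fitting a four-parameter logistic curve to the viability data. The following is an illustrative stand-in with invented data points, not the procedure used by the authors:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic (Hill) model of % viability vs. concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical data: doxorubicin concentrations (ng/mL) and % viability
conc = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)
viability = np.array([98, 95, 80, 45, 20, 8, 5], dtype=float)

params, _ = curve_fit(four_pl, conc, viability,
                      p0=[5, 100, 30, 1], maxfev=10_000)
print(f"estimated IC50: {params[2]:.1f} ng/mL")
```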
Statistical testing
The results are expressed as the mean ± standard deviation of at least three experiments. Student's t-test was used for comparing two groups. Three or more groups were compared by ANOVA followed by the Student-Newman-Keuls test. P-values lower than 0.05 were considered significant.
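For orientation, the same comparisons map onto standard open-source routines: an unpaired t-test for two groups and one-way ANOVA for three or more. SciPy has no Student-Newman-Keuls test, so Tukey's HSD is shown below as a common substitute for the post-hoc step; the sample values are invented:

```python
import numpy as np
from scipy import stats

a = np.array([52.1, 48.3, 55.7])  # group A values (invented)
b = np.array([31.2, 28.8, 35.0])  # group B values (invented)
c = np.array([44.5, 41.9, 47.2])  # group C values (invented)

t, p_two_groups = stats.ttest_ind(a, b)   # two groups: Student's t-test
f, p_anova = stats.f_oneway(a, b, c)      # three groups: one-way ANOVA
posthoc = stats.tukey_hsd(a, b, c)        # post-hoc pairwise comparisons
print(p_two_groups, p_anova, posthoc.pvalue)
```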
Supporting Information
Doxorubicin IC50 values in neuroblastoma cells in the absence or presence of the ABCB1 inhibitor zosuquidar. Effects of doxorubicin applied as solution or incorporated into HSA nanoparticles on neuroblastoma cell viability. Effects of doxorubicin solution or doxorubicin HSA nanoparticles on neuroblastoma cells with or without zosuquidar.
Supporting Information File 1
Additional experimental details.
"year": 2019,
"sha1": "5ea0fee1798722771eef6ac74ba924da9598fc9e",
"oa_license": "CCBY",
"oa_url": "https://www.beilstein-journals.org/bjnano/content/pdf/2190-4286-10-166.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5ea0fee1798722771eef6ac74ba924da9598fc9e",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
Longitudinal Neurostimulation in Older Adults Improves Working Memory
An increasing concern affecting a growing aging population is working memory (WM) decline. Consequently, there is great interest in improving or stabilizing WM, which drives expanded use of brain training exercises. Such regimens generally result in temporary WM benefits to the trained tasks but minimal transfer of benefit to untrained tasks. Pairing training with neurostimulation may stabilize or improve WM performance by enhancing plasticity and strengthening WM-related cortical networks. We tested this possibility in healthy older adults. Participants received 10 sessions of sham (control) or active (anodal, 1.5 mA) tDCS to the right prefrontal, parietal, or prefrontal/parietal (alternating) cortices. After ten minutes of sham or active tDCS, participants performed verbal and visual WM training tasks. On the first, tenth, and follow-up sessions, participants performed transfer WM tasks including the spatial 2-back, Stroop, and digit span tasks. The results demonstrated that all groups benefited from WM training, as expected. However, at follow-up 1-month after training ended, only the participants in the active tDCS groups maintained significant improvement. Importantly, this pattern was observed for both trained and transfer tasks. These results demonstrate that tDCS-linked WM training can provide long-term benefits in maintaining cognitive training benefits and extending them to untrained tasks.
Introduction
Working memory (WM) serves as the mental workspace permitting the maintenance and manipulation of information over short delays. Unfortunately, aging impairs WM, a worrisome and frustrating development beginning in our mid-20s [1]. This decline is likely caused by age-related cortical volume loss, particularly in frontoparietal regions engaged in WM [2,3]. Furthermore, with age, these regions change their functional activation patterns during working memory tasks, showing greater bilateral recruitment at lower task demands (reviewed in [4-6]). This may reflect recruitment of additional frontal resources to maintain performance [7].
To date, tDCS has only rarely been paired with WM training in healthy older adults [79]. This large and growing population will certainly increase the demand for interventions that can improve WM. Given the research showing that transfer effects are modest or non-existent following behavioral WM training, neuromodulatory techniques may provide the added neural 'boost' needed to enhance and prolong transfer effects. The current study tested whether longitudinal right frontoparietal tDCS-linked WM training would improve WM and show significant transfer to untrained tasks. We predicted that active frontoparietal tDCS would improve WM performance on trained and transfer tasks relative to sham stimulation. The pattern of enhanced frontal activation in healthy aging suggested that prefrontal stimulation might provide optimal benefits compared to parietal stimulation.
Materials and Methods
To investigate the longitudinal effects of tDCS-linked WM training in a healthy aging population, participants completed 10 training sessions, each involving 10 minutes of sham (control) or active anodal tDCS. Participants returned for follow-up testing 1 month after training ended. The WM training tasks included verbal and visuospatial tasks and the Operation Span (OSpan) task [80]. To assess transfer, participants completed a set of transfer tasks during the first, tenth, and follow-up sessions.
Exclusion criteria included pacemakers, a history of neurological or psychiatric disease, or medications modulating brain excitability (e.g. neuroleptic, hypnotic, antidepressant). We randomly assigned participants to one of 4 groups (control (sham), PFC, PPC, PFC/PPC alternating) so that each group had 18 participants. Groups showed no significant differences as a function of age (p = .98; control 64.33 (5.24)
Behavioral Measures and Training Sequence
Participants completed 10 consecutive weekday sessions (2 weeks: Monday-Friday) and a follow-up session 1 month after the 10th session. During the first session, prior to stimulation, participants completed the MMSE [81], forward and backward digit span [82], color-word Stroop task [83], and spatial 2-back task [84]. The digit span, Stroop and 2-back tasks were considered untrained transfer tasks because they were only completed on the 1st, 10th and follow-up sessions. During sessions 1-10, participants received tDCS (parameters below) during which they practiced the visuospatial WM task. After stimulation, the participants completed the visuospatial WM tasks and the Automated Operation Span [80]. The 1st, 10th and follow-up sessions lasted 75-90 minutes; sessions 2-9 lasted ~60 minutes. Participants sat 57 cm from the stimulus monitor during computerized tasks.
We picked a series of difficult WM tasks for training. These tasks were selected because they tap into core WM capabilities; improving performance on core WM tasks should theoretically strengthen cognitive skills and lead to near and far transfer of performance gains. Because performance gains were compared between subjects, we purposely chose not to use adaptive tasks, as is done in many cognitive training studies [24,85], and we made efforts to ensure that all tasks were equally difficult. Furthermore, no participants neared ceiling on any trained task, indicating that participants were sufficiently challenged by each training task.
Transfer tasks
Digit Span (near transfer). This task measures short-term and WM capacity. Participants repeated a string of spoken numbers aloud as heard (forward) or in reverse order (backward). The number of digits incremented by one until the participant failed two trials of the same length.
Stroop Task (far transfer). This task measures selective attention. Color names are printed in congruent or incongruent ink colors, and participants press a button (1-7) that corresponds with the color of the ink, rather than the printed word. Participants initially completed practice trials to familiarize themselves with the correct response buttons. We instructed participants to answer as quickly and accurately as possible. Participants had unlimited response times. There were a total of 100 trials equally divided among congruent and incongruent trial types.

Spatial 2-back (near transfer). This task measures WM performance. We instructed participants to remember the location of stimuli (green circles: 3° visual angle) appearing sequentially in one of nine locations (500 ms), followed by a blank delay (2000 ms). Participants pressed 'j' when the stimulus matched the location presented two trials previously; they pressed 'f' if the presented circle did not match. All participants completed at least 45 practice trials until they were comfortable with the timed task. The experimental task consisted of 138 trials (66% non-match) and lasted ~6 minutes.
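To make the 2-back decision rule concrete, the sketch below shows how such a trial sequence might be generated and scored. This is an illustrative Python reconstruction, not the authors' experiment code; the match rate and scoring conventions are assumptions.

import random

def make_2back_sequence(n_trials=138, n_locations=9, p_match=0.34):
    """Generate stimulus locations with roughly 34% 2-back matches."""
    seq = [random.randrange(n_locations), random.randrange(n_locations)]
    for _ in range(n_trials - 2):
        if random.random() < p_match:
            seq.append(seq[-2])  # force a 2-back match
        else:
            choices = [loc for loc in range(n_locations) if loc != seq[-2]]
            seq.append(random.choice(choices))  # guarantee a non-match
    return seq

def score_2back(seq, responses):
    """responses[i] is True if 'match' ('j') was pressed on trial i.
    Trials 0 and 1 have no 2-back reference and are skipped."""
    hits = false_alarms = n_match = n_nonmatch = 0
    for i in range(2, len(seq)):
        is_match = seq[i] == seq[i - 2]
        n_match += is_match
        n_nonmatch += not is_match
        if responses[i]:
            hits += is_match
            false_alarms += not is_match
    return {"hit_rate": hits / n_match, "fa_rate": false_alarms / n_nonmatch}

seq = make_2back_sequence()
perfect = [i >= 2 and seq[i] == seq[i - 2] for i in range(len(seq))]
print(score_2back(seq, perfect))  # -> hit_rate 1.0, fa_rate 0.0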
Trained Tasks
Visuospatial WM. These WM paradigms varied task demands (visual, spatial) and retrieval demands (recognition, recall); see Fig 1. The visual stimuli consisted of 20 grayscale drawings of common objects (e.g. cat, fence) [86]. Stimuli appeared in a 4x4 grid containing five items, followed by a delay interval filled with a checkerboard and a memory probe (unspeeded). The timing of individual tasks varied to try to equate performance across tasks based on pilot data. The four paradigms were presented separately in two pseudo-randomly ordered 25-trial blocks.
In the visual recognition task, five items were presented (500 ms), followed by a delay (750 ms), then one probe item returned, and participants made a new/old judgment, indicating whether or not the item was previously seen. In the spatial recognition task, five items were presented (200 ms) followed by a delay period filled with a checkerboard (4000 ms). Participants then decided if the returning item was in a new or old spatial location compared to the first five that were presented. In both the visual and spatial recognition trials, participants pressed the keys 'o' or 'n' to indicate whether the item or location was old or new, respectively.
In the visual recall task, five items were presented (2000 ms) followed by a delay period filled with a checkerboard (500 ms). Sixteen items then returned, filling each of the possible squares, and participants decided which 1 of the 16 items was present in the first 5. In the spatial recall task, five items were presented (200 ms) followed by a delay period filled with a checkerboard (4000 ms). Twelve images then returned, each filled with a new picture, and only 1 of those 12 filled spatial locations was previously filled with the initial 5 items. In both the visual and spatial recall trials, participants selected the correct item or location by selecting a letter (A-P) that corresponded with each of the 16 cells; see Fig 1.

Fig 1. Left (Operation Span): Participants remember consonants (1000 ms), then solve arithmetic problems before reporting the letter sequence. Right (Visuospatial WM paradigm): A) Visual recognition trials start with the presentation of the stimulus array (500 ms) followed by a delay period (750 ms) and the appearance of a probe item; participants reported whether the probe item was 'old' or 'new'. B) Location recognition trials began with the stimulus array (200 ms) followed by a delay period (4000 ms); participants reported whether the probe location was 'old' or 'new'. C) Visual recall trials begin with stimulus presentation (2000 ms) followed by a delay (500 ms); the probe array contained 15 new and 1 old item, which participants were asked to identify. D) Location recall trials begin with stimulus presentation (200 ms) followed by a delay period (4000 ms); at probe, an array of filled locations appeared and participants reported which filled location had been occupied at encoding.
The Automated Operation Span (OSpan). This is a task of divided attention in which participants must solve arithmetic problems while simultaneously encoding and maintaining a list of letters [80]. Participants must recall the letters after they complete the arithmetic problems. The task lasted ~10 minutes and consisted of nine sets of letters, which ranged from 3 to 7 total letters. We measured performance by letter recall and math accuracy (scores range from 0 to 50).
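A minimal sketch of one common OSpan letter-scoring convention (one point per letter recalled in its correct serial position) is shown below; the study does not specify its exact convention, so this Python snippet is illustrative only.

def ospan_score(presented_sets, recalled_sets):
    """Partial-credit letter score: one point per letter recalled in its
    correct serial position (a common OSpan convention; assumed here)."""
    score = 0
    for shown, recalled in zip(presented_sets, recalled_sets):
        score += sum(s == r for s, r in zip(shown, recalled))
    return score

print(ospan_score([list("FKQ"), list("JHNS")],
                  [list("FKQ"), list("JHSN")]))  # -> 3 + 2 = 5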
TDCS Protocol
There were 4 tDCS groups: anodal PFC (F4 International 10-20 EEG System [87]), anodal PPC (P4), alternating anodal PFC and PPC, or sham stimulation (control). Participants were randomly assigned to a group and were blinded as to the tDCS protocol they received. The experimenters were aware of the stimulation protocol the participants received each session. The first site for the alternating PFC/PPC group was counterbalanced (i.e. PFC first or PPC first) across participants. Sham stimulation location was counterbalanced between PFC and PPC locations. There was no cathodal stimulation group, as we were only interested in improving performance and cathodal stimulation is generally linked to interruption of function. We also did not include a no-contact group that received only tDCS with no WM training, as previous research suggests that tDCS alone during rest exerts no effect on behavioral outcomes [61,88].
Stimulation consisted of a single continuous direct current delivered by a battery-driven continuous stimulator (Eldith MagStim, GmbH, Ilmenau, Germany). Current (1.5 mA, 10 minutes) was delivered through two 5 × 7 cm electrodes housed in saline-soaked sponges. Sham stimulation included 20 seconds of ramping up and down at the beginning and end of stimulation to give the participant a physical sense of stimulation associated with current change. This effectively blinds participants to their stimulation condition [89]. Furthermore, no participants indicated that they believed that they were receiving sham stimulation, as tDCS was a novel research technique for all 72 participants. In all conditions, one electrode was placed over the target location at either F4 or P4 (International 10-20 EEG system) and the reference electrode was placed on the contralateral cheek. This reference location has previously been used effectively in tDCS studies of cognitive abilities [62,63,68,69,73,[90][91][92].
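For reference, these parameters correspond to a nominal current density at the pad of roughly 0.043 mA/cm², a figure derived here rather than reported in the text:

current_ma = 1.5
pad_area_cm2 = 5 * 7  # 5 x 7 cm sponge electrode = 35 cm^2
print(f"{current_ma / pad_area_cm2:.3f} mA/cm^2")  # -> 0.043 mA/cm^2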
During the 10-minute stimulation/sham period, participants received task instructions and practiced all four visuospatial WM paradigms. Previous research shows that participating in cognitive tasks during tDCS benefits later performance [61]. After stimulation/sham, the electrodes were removed and the experimental trials began. TDCS effects last ~1 hour [40,72,93]; this study was designed to last less than an hour so that tDCS effects were present throughout testing, although some studies report shorter durations of tDCS effects [94,95].
Current Flow Modeling
To determine whether the tDCS stimulation was stimulating the frontoparietal networks central to WM performance, we modeled current flow. High-resolution models were derived from previous MRI data (1 mm T2-weighted scan), not individually for participants in the current study. The MRI scans were segmented into several tissues: skin, fat, bone, CSF, gray matter, white matter, air, and deep brain structures. Segmentation was carried out using Simpleware ScanIP (Simpleware Ltd., Exeter, UK). The electrodes were created in SolidWorks (Dassault Systèmes Corp., Waltham, MA) and oriented on the head using ScanCAD (Simpleware Ltd., Exeter, UK). The head, now with the electrodes placed, was imported back to ScanIP to generate a volumetric mesh.
The meshes were then imported to a finite element solver, COMSOL Multiphysics 3.5 (COMSOL Inc., Burlington, MA). A model based on the data was created in the software's AC/DC module. Typical electrical conductivities (S/m) were assigned to each of the tissues and electrodes: skin 0.465, fat 0.025, bone 0.01, CSF 1.65, gray matter 0.276, white matter 0.126, air 1e-15, electrodes 5.99e7, gel 0.3 [96]. Deep structures were treated as either white matter or gray matter. To simulate direct current stimulation, the following boundary conditions were applied: the surface of the anode was assigned a normal current density (−n · J = 1), the surface of the cathode was grounded (V = 0), internal boundaries were assigned continuity (n · (J₁ − J₂) = 0), and the remaining surfaces were considered insulated (n · J = 0). The Laplace equation ∇ · (σ∇V) = 0 (V: potential, σ: conductivity) was then solved [96]. After the simulation was run, the electric field magnitude was plotted on the surface of the gray matter.
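These boundary conditions fully determine the potential once the conductivities are fixed. As a hedged, one-dimensional illustration of the same Laplace problem (not the COMSOL model itself), the Python sketch below propagates the potential through a simplified layered head using J = −σ dV/dx with the conductivities listed above; the layer thicknesses are invented for the example.

# In 1-D, div(sigma * grad V) = 0 means the current density J is constant,
# so each layer contributes a linear potential drop dV = J * thickness / sigma.
layers = [  # (name, thickness in m, conductivity in S/m); thicknesses illustrative
    ("skin", 0.005, 0.465),
    ("bone", 0.007, 0.010),
    ("csf", 0.003, 1.650),
    ("gray matter", 0.004, 0.276),
]

J = 1.5e-3 / 35e-4  # 1.5 mA through a 5 x 7 cm (35 cm^2) pad -> ~0.43 A/m^2

V = 0.0  # deep boundary grounded (V = 0), mirroring the cathode condition
for name, thickness, sigma in reversed(layers):
    V += J * thickness / sigma  # potential rises toward the anode
    print(f"potential at outer edge of {name}: {V * 1000:.1f} mV")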
Current Modeling
We modeled current flow to more precisely identify the spatial extent of brain stimulation after anodal tDCS to PFC and PPC sites; see Fig 2. This analysis confirmed that tDCS to the PFC supplied current to PFC regions, but current also reached orbitofrontal and ventral temporal regions. Similarly, the PPC site stimulated PPC as well as more posterior occipital and ventral temporal regions. To our surprise, there was considerable overlap of current flow, suggesting that regardless of stimulation site, current reached frontoparietal networks strongly activated during WM performance.
TDCS Effects
Based on the current modeling data showing overlapping current flow regardless of tDCS site, we first tested whether active tDCS predicted significant WM training and/or transfer benefits when compared to control (sham tDCS). To do this we created composite normalized difference scores, termed benefit indices, calculated as follows: [(session 10 performance − session 1 performance)/(session 10 performance + session 1 performance)] for each participant and task. This normalization reduced variability across individuals' performances and facilitated comparison across tasks with different scoring conventions. Furthermore, this measure, which we have previously employed in tDCS studies [62,68,97,98], allows for analysis of improvement across all tasks at each time point, which we then followed with individual analyses by task (see below). Composite indices were computed for each of the 5 trained tasks and summed to form the trained task benefit index, and separately for the 3 transfer tasks, which were summed to form the transfer task benefit index. This provided us with a composite score for trained task improvement and a composite score for transfer task improvement. For the first analysis, performance in the control condition was compared to performance in the active groups, collapsing across the three active tDCS groups to form the 'combined active' group. We note that the same patterns emerged when raw measures of performance were used (for raw data see Table 1). Furthermore, performance on each trained and transfer task during session 1 was equivalent across all groups (Table 1).
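A minimal Python sketch of the benefit-index computation described above (column names and example values are hypothetical):

import pandas as pd

def benefit_index(first, later):
    """Normalized difference score: (later - first) / (later + first)."""
    return (later - first) / (later + first)

# Hypothetical wide-format data: one row per participant x task.
df = pd.DataFrame({
    "subject": [1, 1, 2, 2],
    "group":   ["active", "active", "sham", "sham"],
    "task":    ["visual_recall", "spatial_recall"] * 2,
    "s1":      [0.52, 0.48, 0.55, 0.50],  # session 1 score
    "s10":     [0.71, 0.66, 0.63, 0.55],  # session 10 score
})

df["benefit"] = benefit_index(df["s1"], df["s10"])

# Composite benefit index: sum the per-task indices within each participant
# (done separately for the 5 trained tasks and the 3 transfer tasks).
composite = df.groupby(["subject", "group"], as_index=False)["benefit"].sum()
print(composite)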
First, we tested the hypothesis that active tDCS promoted greater training and transfer gains as compared to control after 10 sessions of training. A repeated-measures ANOVA compared the two cumulative benefit indices (trained, transfer) with the between-subjects factor of tDCS condition (combined active, control). After 10 sessions of training, there was a significant main effect of benefit index (trained, transfer) such that all participants had greater improvement on trained tasks (F1, 70 = 62.06, MSE = 3.45, p < .001, partial η² = .47); this was expected, because there were more trained tasks (five) than transfer tasks (three). Importantly, there was no significant effect of tDCS condition (F1, 70 = .83, MSE = .02, p = .37, partial η² = .01), and no interaction of tDCS group x benefit index (F1, 70 = .89, MSE = .05, p = .35, partial η² = .01). To summarize, after ten sessions of WM training, both tDCS groups (active, control) showed equivalent improvement on trained and transfer tasks. Thus, WM training was effective and there were no differences as a function of group. In other words, by the end of the 10 sessions, active tDCS did not lead to significantly greater training gains or rate of training.
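For readers wanting to reproduce this style of analysis, a sketch of the 2 (benefit index: trained, transfer) × 2 (group: active, control) mixed ANOVA using the pingouin package is shown below; the data frame and values are hypothetical stand-ins for the composite indices described above.

import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one composite benefit index per
# participant per index type (trained vs. transfer).
long_df = pd.DataFrame({
    "subject":    [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "group":      ["active"] * 6 + ["control"] * 6,
    "index_type": ["trained", "transfer"] * 6,
    "composite":  [0.42, 0.11, 0.38, 0.09, 0.35, 0.10,
                   0.20, 0.03, 0.22, 0.05, 0.18, 0.04],
})

# Mixed ANOVA: index type (within subjects) x tDCS group (between subjects),
# mirroring the design reported in the text.
aov = pg.mixed_anova(data=long_df, dv="composite", within="index_type",
                     between="group", subject="subject")
print(aov[["Source", "F", "p-unc", "np2"]])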
However, at follow-up, after a month of no contact, a different pattern emerged. We repeated the analysis described above using benefit indices computed from session 1 to follow-up performance. There was again a main effect of benefit index (F1, 70 = 34.54, MSE = 2.06, p < .001, partial η² = .33), such that the trained benefit index was significantly greater than the transfer benefit index across both groups. Importantly, there was also a significant main effect of tDCS group (F1, 70 = 7.32, MSE = .25, p < .01, partial η² = 0.10) such that the active tDCS group showed significantly greater performance across trained and transfer tasks; see Fig 3. This was driven largely by improvements on the more difficult spatial 2-back and OSpan tasks (see below). There was no group x benefit index interaction (F1, 70 = .10, MSE = .01, p = .75, partial η² < .01). These findings demonstrate that active tDCS to frontoparietal sites sustained practice gains for trained WM tasks and enhanced transfer task performance. In other words, all groups showed practice-related improvement, but only active tDCS sustained these gains.
A subsequent question is how the different active stimulation groups performed compared to each other, as we grouped them together in the above analysis. To answer this, we tested the hypothesis that the different stimulation sites resulted in different training or transfer gains. We compared the two cumulative benefit indices (trained, transfer) across the three active tDCS groups (PFC, PPC, PFC/PPC) and found no main effect of site after session 10 (F2, 51 = .04, MSE < .01, p = .96, partial η² < .01, all pairwise comparisons p > .80) or at follow-up testing (F2, 51 = .30, MSE = .02, p = .74, partial η² = .01, all pairwise comparisons p > .46). In other words, all active tDCS groups showed equivalent benefits regardless of stimulation site.
Analysis by Task
Lastly, we tested the hypothesis that the training and transfer gains for the active tDCS groups were disproportionately driven by individual tasks. A criticism would be that by grouping the tasks' benefit indices together, we hide the individual effects tDCS has on each task. To investigate this possibility, we conducted a repeated-measures ANOVA for the 5 trained task benefit indices from follow-up for the combined active tDCS group. There was a significant effect of task (F4, 204 = 11.51, MSE = .19, p < .001, partial η² = 0.18), such that the two recall tasks and the OSpan task provided significantly greater gains than the two recognition tasks (recognition verbal compared to all other trained tasks: all p's < .02; recognition spatial compared to all other trained tasks: all p's < .04); see Fig 4. The only other significant difference was that verbal recall provided significantly greater training gains than spatial recall (p = .03). There was no difference between the OSpan and either recall task (both p's > .19). The task x group interaction was not significant (F8, 204 = 1.34, MSE = .02, p = .22, partial η² = 0.05). For the transfer tasks, the significant benefit was driven by the spatial 2-back task; see Fig 4. As above, there was a main effect of transfer task (F2, 102 = 14.95, MSE = .30, p < .001, partial η² = 0.23), showing significantly greater improvement on the spatial 2-back task compared to the digit span and Stroop tasks (both p's < .001). The pairwise comparison between the digit span and Stroop tasks was not significant (p = .45). The task x group interaction was not significant (F4, 102 = .28, MSE = .01, p = .89, partial η² = 0.01). In sum, across trained and transfer tasks, the more challenging tasks showed greater gains, and transfer was observed for the near transfer task alone.
Discussion
For many, maintaining cognitive performance is a priority in the aging process. Here, we confirmed the effectiveness of WM training paradigms and demonstrated that tDCS combined with WM training leads to longer-lasting benefits in older adults. After ten sessions, all participants significantly improved across tasks. This was true regardless of whether the participant received active or sham tDCS. This finding is encouraging as it supports previous findings showing the importance of WM training in enhancing or recovering cognitive skills in healthy older adults [13,15,[23][24][25][26][27][28][29][30][31][32][33][34][35][36][37]. At follow-up, after one month of no contact, participants who received active tDCS performed significantly better on trained and transfer tasks than the sham control group. The magnitude of this effect did not vary as a function of stimulation site, PFC or PPC. In other words, tDCS-linked WM benefits emerged after training ended, showing that tDCS helped maintain practice gains over time and enhanced transfer task performance. Thus, WM training combined with tDCS offers promise in maintaining WM gains over longer periods of time. We offer that tDCS extends WM training benefits. This is especially relevant in a population concerned about its cognition: healthy older adults.
As noted, the current study provides convergent support for previous WM training studies reporting improved task performance after training [13,15,[23][24][25][26][27][28][29][30][31][32][33][34][35][36][37]. In these studies, training benefits are assessed with measures that conflate practice effects with strengthened WM skills. Importantly, a recent meta-analysis of WM training studies found no difference in the training benefits associated with adaptive and nonadaptive training paradigms [99]. This report addresses a possible criticism of the current work, in which nonadaptive training tasks were employed. We did observe the largest transfer effect in the most difficult transfer task, the spatial 2-back WM task. The two other transfer measures, the Stroop task and the digit span, showed no transfer effects. This is consistent with previous training studies, which report transfer for challenging WM tasks that require rapid updating (e.g., the n-back task; [29,85,100]).
The general benefit of tDCS holds promise in several domains. These data join four other studies showing that tDCS-linked cognitive training enhances performance across various domains. For example, six training sessions pairing the Stroop task with bilateral oppositional tDCS (left PFC anodal, right PFC cathodal) improved young adults' performance [101]. Yet participants who received bilateral stimulation to the PPC (left anodal, right cathodal) showed improved numerical learning but impaired Stroop performance [101]; however, no follow-up measure was reported. A second study paired five training sessions with a related technique, transcranial random noise stimulation, to the PFC and found significantly enhanced arithmetic learning [102]. Next, one study paired tDCS with computer-assisted cognitive training in older adults [79]. They found that bilateral anodal tDCS to the PFC paired with verbal WM training improved trained task performance (verbal WM, digit span). Digit span improvements lasted seven days, but they did not test transfer. Importantly, one other study found near transfer WM gains following 10 sessions of tDCS to the PFC in college-aged students [103]. Our data replicate and extend these findings, showing that longitudinal tDCS benefits WM over a potentially therapeutic time frame and is appropriate for use in healthy older adults. Furthermore, these benefits were found following two different stimulation sites and persisted after one month of no contact.
Although tDCS shows promise for cognitive maintenance, the mechanism underlying long-term changes remains unclear. Previous evidence demonstrated temporary modulation of motor cortex [93], although long-term effects are reported to follow PFC stimulation [78,[104][105][106]. We targeted frontoparietal networks implicated in WM performance, and at least two, not mutually exclusive, substrates could be contributing to the observed behavioral changes. One possibility is that tDCS strengthened the frontoparietal connections engaged during WM tasks. This may explain why there was no significant performance difference as a function of stimulation site. A second explanation for tDCS-linked WM benefits is that tDCS-linked WM training strengthened frontostriatal connections and enhanced striatal dopaminergic activity. This interpretation is based on work showing that frontostriatal activity is important for learning and WM updating (reviewed in [107]) and findings that WM training enhances striatal dopaminergic activity during WM updating in older [108] and younger adults [109], particularly in challenging WM tasks [110].
Although these findings demonstrate the feasibility and durability of tDCS-linked WM training, there are several limitations to address in future investigations. A first question relates to the lack of spatial specificity afforded by the tDCS technique itself. As we found a benefit of tDCS at two different stimulation sites, we cannot rule out that tDCS to any portion of the cortex could lead to improved WM training gains. The PFC and PPC sites are both associated with WM performance, and they were selected to enhance our likelihood of observing WM benefits. Future studies should include control locations expected to show no effect on WM, to clarify whether general stimulation promotes training gains. Given the exploratory nature of this first longitudinal study, we elected to focus on training groups that seemed most likely to reveal improved performance rather than no change. Additionally, we cannot rule out that participants in the active tDCS groups showed placebo-related benefits from the sensation of 1.5 mA tDCS. However, we believe the tDCS-naïve participants in the current study were unaware of the possibility of a sham condition, as no participants indicated that they believed they were in a control group. Previous research finds that the tDCS sensation at 1.0 mA is not discernible from sham stimulation for naïve and experienced participants [111], whereas at 2.0 mA participants are able to detect the difference [112]. It is important to note, however, that in those two studies participants received both active and sham tDCS in two different sessions, whereas in the current study participants received only active or sham tDCS.
A second parameter to optimize is the length of training. In the present study, we used two weeks of WM training. However, one recent meta-analysis found that the type of training overshadowed the impact of training duration [99], as indicated by the lack of a dose-response relationship between training length and near-transfer outcomes. Thus, the duration of training may be a secondary factor; clearly, the shorter the training needed to achieve maximal benefits, the better. Thirdly, this initial foray into tDCS paired with WM training revealed that performance benefits transferred. This is the most encouraging finding with regard to real-world application, and future work is now needed to clarify several aspects of transfer effects. New studies will need to include a stimulation control group that receives tDCS without WM training. This would clarify whether tDCS alone provides long-term benefits rather than the combination of tDCS + WM training. We think this is unlikely, because not every WM task benefits from tDCS (e.g., easy WM tasks [68,113]), and tDCS alone during rest exerts no effect on behavioral outcomes [61,88], making a general, tDCS-induced long-lasting WM improvement unlikely. Additional work will be needed to ascertain the extent of transfer effects and to refine protocols to enable far transfer to other cognitive domains. Finally, with regard to transfer benefits, our transfer tasks were completed at three different time points in this study, making them 'trained' to a certain degree. However, if the benefits merely reflected training, then we should have observed the same improvements in the sham group, which was not the case.
A final limitation is that the training tasks were computer-based. Future work should include tasks with greater ecological validity to clarify the translational power of tDCS in healthy aging and special aging populations. Furthermore, measures of far transfer will be needed to assess changes in remote cognitive domains such as fluid intelligence. Ideally, training will improve skills, like sustained attention, that show transfer to daily activities like driving [114]. Improvement in cognitive functioning is important for maintaining autonomy and quality of life. Future work is needed to test longitudinal effects of tDCS in diverse tasks and in heterogeneous populations to predict who will benefit and for how long. However, the era in which neuroscience plays an important translational role has arrived. We offer this early work as encouragement to those of us engaged in the aging process and interested in maintaining cognitive function.
"year": 2015,
"sha1": "a706f53b32eaed8596d05f62b61dbbd4c1e3f268",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0121904&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d0c1d6059783daf45980f8f990fa775038bdfc19",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
Identification of distinct symptom profiles in prostate cancer patients with cancer-related cognitive impairment undergoing androgen deprivation therapy: A latent class analysis
Objective: To identify latent classes of cognitive impairment and co-occurring symptoms (fatigue, pain, sleep disturbance, depression) as clusters in patients with prostate cancer undergoing androgen deprivation therapy, and to explore the predictors of the distinct latent classes. Methods: A total of 228 patients with prostate cancer were recruited in this cross-sectional study. The assessment instruments included the Perceived Cognitive Impairment Scale, the Fatigue Severity Scale, the Athens Insomnia Scale, the Brief Pain Inventory, the Patient Health Questionnaire-9, the UCLA Loneliness Scale, the International Physical Activity Questionnaire - Short Form, the Charlson comorbidity index, and a general information questionnaire. Latent class analysis was used to identify distinct patient subgroups. Results: The study identified three distinct latent classes: all low symptoms (class 1, 32%), high depression symptoms (class 2, 37.7%), and high physical symptoms (fatigue, sleep disturbance, and pain) with high cognitive impairment (class 3, 30.3%). Patients who had higher Charlson comorbidity index scores (P = 0.003) were more likely to be classified in class 3. Patients with higher loneliness scores (P < 0.001; P < 0.001) were significantly more likely to fall into class 2 or 3 than into class 1, whereas a higher level of physical activity (P = 0.014; P < 0.001) increased the likelihood of being in class 1. Conclusions: This study demonstrated the inter-individual variability of symptom experience in prostate cancer patients with cognitive impairment undergoing androgen deprivation therapy. The results suggest that more emphasis should be placed on screening for fatigue, sleep disturbance, and pain, and that future interventions should focus on loneliness and physical activity.
Introduction
Prostate cancer (PC) is a prevalent malignant tumor that poses a significant threat to men's health worldwide. Its incidence rate is among the highest of all malignant tumors in men aged over 60 years globally, with 1,216,139 estimated new cases in 2020. 1 Androgen deprivation therapy (ADT) is an effective treatment for PC, and about half of patients will eventually receive ADT. 2 However, ADT can cause various systemic symptoms and metabolic alterations. 3 Unfortunately, these treatment-related side effects can have a significant impact on the quality of life of PC patients and worsen the treatment burden associated with ADT. 4 The literature on the correlation between ADT and cognitive impairment remains mixed, suggesting that other factors may contribute to cognitive impairment. 5,6 Cognitive impairment commonly co-occurs with symptoms such as fatigue, sleep disturbance, pain, and depression. 7-9 These symptoms are also prevalently reported among PC patients with cognitive impairment, especially following ADT. 6,10 However, the symptom clusters formed by cognitive impairment and common co-occurring symptoms remain poorly understood in PC patients undergoing ADT. 11 Potential influencing factors include comorbidity burden, loneliness, and physical activity. 12-14 The Symptom Interactional Framework suggests that shared or interactive mechanisms in physiological, psychological, behavioral, or sociocultural factors may cause symptoms to occur individually or in clusters. 15 However, research regarding comorbidity burden, loneliness, physical activity, and psychoneurological symptoms in PC patients remains scarce. PC patients who were physically inactive and who had a higher comorbidity burden experienced higher levels of fatigue, pain, and depression, with a poorer quality of life. 12,13 Additionally, chronic loneliness may indirectly contribute to the progression of depression in PC patients. 16 Thus, identifying latent classes of symptom clusters with cognitive impairment as the primary symptom may provide a better comprehension of the symptom profile of PC patients treated with ADT. Furthermore, expanding cognitive impairment research beyond prior ADT exposure and focusing on modifiable factors has the potential to provide a new perspective for managing symptom clusters in PC patients.
Previous studies of ADT symptomatology have predominantly examined individual symptoms rather than symptom clusters, which may not fully capture the symptom presentation in ADT recipients. In addition, oncology patients' symptom experiences may differ based on inter-individual variability. 17 Using a variable-centered approach to categorize symptom clusters and then placing patients into the identified categories may not reveal which survivors need more intensive symptom management. 11 A person-centered approach, such as latent class analysis (LCA), presents an opportunity to identify subgroups of oncology patients with similarities in symptom phenotypes that distinguish them from other subgroups. 8 The present study aims to identify latent classes of cognitive impairment and co-occurring symptoms as clusters using LCA in PC patients treated with ADT and to explore the predictors of the distinct latent classes.
Participants
Between June 2023 and January 2024, we recruited outpatients with PC who received ADT through convenience sampling at a tertiary general hospital in Zhejiang Province, China. The inclusion criteria were as follows: (a) pathologically diagnosed with prostate cancer; (b) receiving ADT for > 8 weeks 18 ; (c) aged ≥ 18 years; and (d) aware of their condition and having given consent to this study. Participants who were critically ill or had a pre-existing history of cognitive impairment were excluded.
The sample size followed the requirements of logistic regression analysis and was required to be at least 10 times the number of independent variables, of which there were 18 in this study. Thus, allowing for a possible 10% of invalid questionnaires, the minimum required sample size was 198 (18 × 10 × 1.1). 9
Sociodemographic and clinical characteristics
We collected age, body mass index (BMI), marital status, occupation, household monthly income per capita (HMIPC), and education level via self-report. The latest prostate-specific antigen (PSA) level, Gleason score (prostate cancer severity score), and time since ADT were collected from medical records. Occupation was classified according to the definitions of Nucci. 19
Cognitive impairment and common co-occurring symptoms
The Perceived Cognitive Impairment scale (CogPCI): Participants' cognitive impairment was assessed using the CogPCI, an 18-item subscale of the Functional Assessment of Cancer Therapy Cognitive Function (FACT-Cog, version 3). 20 In this study, the 18-item CogPCI subscale (Chinese version) was employed to measure participants' cognitive function over the past week, each item scored 0-4. The total score ranges from 0 to 72 points, with higher scores indicating less cognitive impairment. A CogPCI score < 54 indicates cognitive impairment. 9 The reliability of the CogPCI was 0.874 in this study.
The Fatigue Severity Scale (FSS): The severity of participants' fatigue over the past 7 days was measured by the FSS, which consists of nine items. 21 Each item is rated on a seven-point Likert scale (1-7), giving a total score of 9-63, with higher scores indicating more severe fatigue. A clinically significant level of fatigue was defined as an FSS score ≥ 36. The reliability of the FSS was 0.829 in this study.
The Brief Pain Inventory (BPI): The 3-item intensity subscale of the BPI evaluated participants' pain intensity, rated from 0 ("no pain") to 10 ("pain as bad as the patient can imagine"). 22 The total score is the average of the three items. A BPI score ≥ 4 indicates that a patient has a clinically significant level of pain. 23 The reliability of the BPI was 0.923 in this study.
The Athens Insomnia Scale (AIS): The AIS was applied to measure participants' sleep disturbance. 24 Each item is rated on a four-point Likert scale (0-3), giving a total score of 0-24, with higher scores indicating more severe sleep disturbance. An AIS score > 6 indicates that a patient has a clinically significant level of sleep disturbance. The reliability of the AIS was 0.826 in this study.
The Patient Health Questionnaire (PHQ-9): Participants' depression was measured using the PHQ-9, each item scored 0-3. 25 The total score ranges from 0 to 27 points, with higher scores indicating more severe depression. A clinically significant level of depression was defined as a PHQ-9 score ≥ 10. 26 The reliability of the PHQ-9 was 0.779 in this study.
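Before model fitting, each symptom score was dichotomized at the clinical cutoff stated above. A minimal Python sketch (column names and example values are hypothetical):

import pandas as pd

# Clinical cutoffs reported above; each maps a raw score to 1 (clinically
# significant) or 0. Note CogPCI is reverse-keyed: lower = more impairment.
cutoffs = {
    "cogpci": lambda s: (s < 54).astype(int),   # cognitive impairment
    "fss":    lambda s: (s >= 36).astype(int),  # fatigue
    "bpi":    lambda s: (s >= 4).astype(int),   # pain
    "ais":    lambda s: (s > 6).astype(int),    # sleep disturbance
    "phq9":   lambda s: (s >= 10).astype(int),  # depression
}

raw = pd.DataFrame({"cogpci": [60, 40], "fss": [20, 50],
                    "bpi": [1, 6], "ais": [3, 9], "phq9": [4, 12]})

binary = pd.DataFrame({col: fn(raw[col]) for col, fn in cutoffs.items()})
print(binary)  # 0/1 indicators fed into the latent class model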
Comorbidity conditions
Comorbidities were collected through a questionnaire and checked against medical records. The Charlson comorbidity index (CCI) covers 19 different medical conditions, each weighted according to its impact on mortality. 27 The CCI was calculated as the sum of the comorbidity weights, with a higher score indicating a more severe comorbidity burden. 13
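As an illustration of how a CCI score is assembled, the sketch below scores a patient from a subset of the classic Charlson weights; the full index covers 19 conditions, and the weights shown follow the original 1987 scheme rather than this study's exact implementation.

# Illustrative subset of the classic Charlson weights (the full index
# covers 19 conditions; weights follow the original 1987 scheme).
CHARLSON_WEIGHTS = {
    "myocardial_infarction": 1, "congestive_heart_failure": 1,
    "copd": 1, "diabetes": 1, "mild_liver_disease": 1,
    "hemiplegia": 2, "moderate_severe_renal_disease": 2, "any_tumor": 2,
    "moderate_severe_liver_disease": 3,
    "metastatic_solid_tumor": 6, "aids": 6,
}

def cci_score(conditions):
    """Sum the weights of a patient's documented comorbid conditions."""
    return sum(CHARLSON_WEIGHTS.get(c, 0) for c in conditions)

print(cci_score(["diabetes", "copd"]))                # -> 2
print(cci_score(["metastatic_solid_tumor", "aids"]))  # -> 12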
Loneliness
Loneliness was assessed by the six-item UCLA Loneliness Scale (ULS-6). 28 The ULS-6 was formed by Chinese scholars from the eight-item UCLA Loneliness Scale by deleting two non-lonely, reverse-scored items. 29 Each item is assigned a score of 1-4, for a total score of 6-24. A higher score indicates a stronger sense of loneliness. In our research, the reliability of the ULS-6 was 0.80.
Physical activity
The seven-item International Physical Activity Questionnaire - Short Form (IPAQ-SF) was used to report the amount of time spent on physical activity of three intensities (vigorous, moderate, and walking) in the past week. 30 The frequency (days/week) and duration (minutes/day) of each intensity of physical activity were multiplied by the corresponding metabolic equivalent (MET) value (walking = 3.3, moderate intensity = 4, vigorous intensity = 8). The total energy spent in physical activity was calculated by summing the MET min/week across walking, moderate, and vigorous activities. The American Cancer Society (ACS) recommends that cancer survivors complete at least 150 min of moderate-intensity or 75 min of vigorous-intensity physical activity per week, equivalent to at least 600 MET minutes of physical activity every week. 31 In our research, the reliability of the IPAQ ranged from 0.72 to 0.81.
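The MET computation described above reduces to a weighted sum; a minimal Python sketch (the activity profile is a made-up example):

# MET-minutes/week for each intensity = days/week * minutes/day * MET value,
# then summed; >= 600 MET-min/week meets the ACS-recommended level.
MET_VALUES = {"walking": 3.3, "moderate": 4.0, "vigorous": 8.0}

def ipaq_met_minutes(activity):
    """activity: {intensity: (days_per_week, minutes_per_day)}."""
    return sum(MET_VALUES[kind] * days * minutes
               for kind, (days, minutes) in activity.items())

# Example: 150 min/week of moderate activity split over 5 days.
total = ipaq_met_minutes({"walking": (0, 0),
                          "moderate": (5, 30),
                          "vigorous": (0, 0)})
print(total, "MET-min/week;", "adequate" if total >= 600 else "inadequate")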
Data collection
Questionnaire data were collected face-to-face and one-to-one by three clinical nurses who had received rigorous training in advance, using the translated Chinese versions of the instruments. To ensure privacy and avoid interruptions, the survey was conducted in the meeting room of the urology clinic. The survey comprised eight questionnaires and took approximately 20 minutes on average. For participants who had difficulty comprehending, the investigator read the questionnaire aloud and explained items, and all questionnaires were then checked for completeness. Finally, all questionnaires collected were kept confidential and used solely for research purposes.
Data analysis
IBM SPSS Statistics version 26.0 and Mplus version 8.3 were used for statistical analysis. Descriptive statistics were calculated for sociodemographic characteristics, clinical characteristics, physical activity, CCI, and loneliness variables. Continuous variables were reported as means and standard deviations (SDs), and categorical variables were presented as frequencies and percentages.
LCA was used to classify participants into different classes based on the dichotomized scores of cognitive impairment, fatigue, sleep disturbance, pain, and depression. Each symptom was dichotomized into a binary variable using the cutoff for clinical utility. The best-fitting model was determined based on model selection criteria, including the Akaike information criterion (AIC), Bayesian information criterion (BIC), adjusted Bayesian information criterion (aBIC), entropy, the Vuong-Lo-Mendell-Rubin (LMR) test, and the bootstrapped likelihood ratio test (BLRT). Relatively low AIC, BIC, and aBIC values, significant P values for the LMR and BLRT, and an entropy greater than 0.8 indicated the optimal number of latent classes. 32,33 Furthermore, the clinical significance of the final number of classes was considered. 9 Following the identification of the optimal latent classes, multinomial logistic regression was used to examine the predictors distinguishing the latent classes. The factors significant in the χ² and Kruskal-Wallis tests were entered into the logistic regression model to identify the final predictors. Statistical significance for all tests was set at P < 0.05.
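For concreteness, the fit indices used for class enumeration can be computed from a fitted model's log-likelihood, parameter count, and posterior class-membership probabilities. The Python sketch below uses the standard formulas (AIC = -2lnL + 2k; BIC = -2lnL + k ln n; aBIC replaces n with (n + 2)/24; relative entropy normalizes the posterior's Shannon entropy); the inputs are hypothetical, not this study's estimates.

import numpy as np

def lca_fit_indices(log_lik, n_params, n_obs, posterior):
    """Fit indices used to select the number of latent classes.
    posterior: (n_obs x K) matrix of class-membership probabilities."""
    aic = -2 * log_lik + 2 * n_params
    bic = -2 * log_lik + n_params * np.log(n_obs)
    abic = -2 * log_lik + n_params * np.log((n_obs + 2) / 24)
    k = posterior.shape[1]
    p = np.clip(posterior, 1e-12, 1.0)
    entropy = 1 - (-(p * np.log(p)).sum()) / (n_obs * np.log(k))
    return {"AIC": aic, "BIC": bic, "aBIC": abic, "entropy": entropy}

# Hypothetical 3-class solution for N = 228 with fairly crisp assignment
# (a 3-class LCA on 5 binary indicators has 3*5 + 2 = 17 free parameters).
post = np.tile([0.94, 0.04, 0.02], (228, 1))
print(lca_fit_indices(log_lik=-520.3, n_params=17, n_obs=228, posterior=post))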
Ethical considerations
This study was approved by the Ethical Committee of the First Affiliated Hospital of Wenzhou Medical University (IRB No. KY2023-245). Informed consent was obtained from all study participants.
Participant characteristics
A total of 242 PC patients were recruited; 14 were excluded due to incomplete outpatient personal data or invalid questionnaires, leaving 228 patients in the analysis. Table 1 presents the sociodemographic and clinical characteristics of the total population. Most patients were married (84.2%), with a mean age of 74.46 years (SD = 6.93). The largest proportion of participants (44.7%) had a Gleason score of nine or higher, and most patients (71.9%) had been treated with ADT for more than 1 year. The average CCI was 1.88 (SD = 1.49).
Results of latent class analysis
We selected the 3-class model based on the best fit. Table 2 presents the goodness-of-fit indicators for the latent class models. The LCA results suggest that the 3-class model was preferable, with a lower BIC value than the 2-, 4-, and 5-class models and an entropy higher than 0.8. Furthermore, the LMR and BLRT were not significant for the 4-class model (P > 0.05), indicating that the model with one fewer class was preferable. As a result, the 3-class model was selected for further analysis.
Fig. 1 shows the conditional probabilities of the five symptom indicators for each latent class. Class 1 (n = 73, 32.0%) had the lowest frequency of occurrence for all symptom indicators and was labeled "all low." Class 2 (n = 86, 37.7%) was designated "high depression symptoms" and demonstrated the highest frequency of occurrence of depressive symptoms among the three latent classes. Class 3 (n = 69, 30.3%), labeled "high physical symptoms with high cognitive impairment," comprised patients with a higher frequency of occurrence of cognitive impairment and physical symptoms such as fatigue, sleep disturbance, and pain.
Differences in patient characteristics between latent classes
Table 3 summarizes the differences in sociodemographic characteristics, clinical characteristics, physical activity, and loneliness between the latent classes. Occupation, physical activity, CCI, and loneliness scores revealed statistically significant differences between the three latent classes (P < 0.05). Patients in class 1 had a higher level of physical activity and the lowest CCI and loneliness scores.
Discussion
To our knowledge, this study represents the first attempt to identify latent classes of cognitive impairment and co-occurring symptoms using LCA in PC patients undergoing ADT. Moreover, we explored the sociodemographic characteristics, clinical characteristics, CCI scores, loneliness, and physical activity that differentiate the symptom subgroups in ADT recipients.
Profiles of cognitive impairment and co-occurring symptoms
The study identified three distinct latent classes: an all-low-symptoms group (32.0%), a high depression symptoms group (37.7%), and a high physical symptoms with high cognitive impairment group (30.3%). The results suggest that ADT recipients with cognitive impairment experience distinct symptom profiles. Patients with high cognitive impairment tended to have clinically significant fatigue, pain, and insomnia, with probabilities of 83.4%, 57.7%, and 100%, respectively. This finding is in line with previous research on cancer that found that higher levels of fatigue, pain, and sleep disturbance were correlated with worse cognitive function. 7,9 Moreover, previous reports have confirmed the existence of subgroups of multiple psychoneurological symptoms and suggest inter-individual variability of symptom experience in cancer populations. 8 Notably, cognitive impairment and common co-occurring symptoms may share common biological mechanisms. 34 For example, a systematic review of cancer treatment-related psychoneurological symptoms showed that proinflammatory immune markers and the gut microbiome were associated with cognitive impairment, fatigue, sleep disturbance, pain, and depression. 35
Predictors of different identified latent classes
This study also explored the critical predictors of the identified latent classes of symptom clusters with cognitive impairment. We found that the three latent classes revealed significant differences in CCI scores, loneliness, and physical activity level. Patients with a higher comorbidity burden were significantly more likely to belong to the high physical symptoms with high cognitive impairment group. This finding is in line with prior research showing that the comorbidity burden of cancer patients can further aggravate the effect of treatment on cancer-related symptoms. 13 Moreover, comorbidities such as diabetes and kidney disease can affect cognitive function by changing the cerebrovascular structure and facilitating neurodegenerative changes. 36,37 Therefore, health care providers should give due consideration to the impact of comorbidity on the physical symptoms and cognitive functioning of PC patients before the onset of ADT and design interventions to prevent the development and worsening of comorbidity after ADT, improving patient-centered care and treatment outcomes.
Notably, patients in the high depression symptoms group and those in the high physical symptoms with high cognitive impairment group had a higher level of loneliness than the all-low group. Our finding that PC patients on ADT with higher levels of loneliness had higher rates of cognitive impairment and comorbid symptoms is consistent with previous research showing a positive correlation between loneliness and all of these symptoms. 14 Loneliness may cause a state of chronic psychosocial stress characterized by pathological activation of the hypothalamic-pituitary-adrenocortical axis and impairments in immune function. 38 Chronic overactivation of this stress response, as might occur when patients perceive life as persistently and profoundly lacking meaningful connections, may cause or exacerbate both somatic and psychological symptoms. 39,40 Importantly, the negative impact of loneliness highlights gaps in current care and the need for interventions to reduce patients' loneliness and ease the symptom burden of this population.
It is well known that PC patients can benefit from adequate physical activity, which may play an active role in preventing or delaying relapse, easing overall disease burden, and improving survival. 41 PC patients who achieved the ACS-recommended level of physical activity (≥ 600 MET minutes) were less likely to fall into the high physical symptoms with high cognitive impairment group or the high depression symptoms group and more likely to fall into the all-low-symptoms group. Importantly, this study found that adequate physical activity may be correlated with lower rates of cognitive impairment and physical symptoms. Adam et al. reported similar findings, namely that patients in the all-low-symptom subgroup might be more active. 12 Further, the potential role of the gut microbiota, linked to inflammatory pathways, in psychoneurological symptom outcomes following cancer treatment may be influenced by different levels of physical activity. 35,42 Interestingly, the relation between physical activity and subgroup membership was strongest between the high physical symptoms with high cognitive impairment group and the all-low-symptoms group (Table 4), highlighting the potential role of physical activity in cognitive impairment and physical symptoms. Furthermore, the association of adequate physical activity with low-risk cognitive impairment highlights the importance of understanding the facilitators of physical activity and of further developing interventions to enhance physical activity levels in ADT recipients.
Implications for practice
Our findings demonstrate the need for health care providers to be aware that ADT recipients may experience cognitive impairment and common co-occurring symptoms. It is crucial to proactively provide patients with pre-emptive education programs to help them adapt to the side effects of ADT. 2 Moreover, our study suggests that ADT recipients with cognitive impairment experience distinct symptom phenotypes, suggesting that clinicians can identify ADT recipients who are more likely to develop cognitive impairment through early assessment of fatigue, sleep disturbance, pain, and depression symptoms. Early and effective management of cognitive impairment in ADT recipients can also be achieved through symptom cluster interventions (e.g., acupressure and orthostatic decompression) tailored to individual differences and patient needs. 43 Furthermore, clinical nurses and researchers working with ADT recipients should design additional interventions to inform patients of the risk of adverse outcomes from comorbidity and provide evidence-based self-management strategies (e.g., adequate exercise, nutritional supplementation, stress management) to reduce the comorbidity burden. Finally, future studies should do more to understand whether reducing loneliness and promoting physical activity can prevent and mitigate symptom clusters involving cognitive impairment.
Limitations
This study has several limitations. Our study's small sample size limited the statistical power to detect group differences, and larger, multicenter studies should be conducted in the future to enhance the applicability of the findings. Additionally, due to the cross-sectional design, it is currently unknown how the identified latent classes may change over time. Future research could categorize patients according to their evolving symptom trajectories, providing opportunities for early or preventative interventions. Our dichotomization of each symptom may cause a loss of useful information and efficacy, as the underlying symptoms are likely continuous. Furthermore, some comorbidities may have developed after the diagnosis of PC and receipt of ADT, which may bias the findings. Finally, we only measured cognitive impairment using self-reported instruments; future studies should also include objective measures of cognitive function.
Conclusions
Our findings suggest that ADT recipients with cognitive impairment experience substantially distinct symptom phenotypes and multiple co-occurring symptoms, with a prevalence of high physical symptoms. Higher CCI and loneliness scores and a low level of physical activity are critical predictors of membership in the high physical symptoms with high cognitive impairment group. The discovery of symptom phenotypes and influencing factors is useful in identifying PC patients undergoing ADT with high-risk cognitive impairment. In addition, health care providers should focus on cognitive function in ADT recipients who have multiple comorbidities. Future research would benefit from interventions targeting loneliness and physical activity, which may indirectly prevent and treat cognitive impairment and co-occurring symptoms in PC patients receiving ADT.
Fig. 1. Latent class profiles based on occurrence of symptoms.
Table 4 presents the multinomial logistic regression of predictors for each latent class, with class 1 as the reference category. Compared to class 1, patients who had higher CCI scores (odds ratio [OR] = 1.543, P = 0.003) were more likely to be classified as class 3. Patients with higher loneliness scores were significantly more likely to fall into class 2 or class 3 than into class 1 (both P < 0.001).
Table 2. Fit indices for the LCA model.
Table 4. Multinomial logistic regression results: identifying the predictors of each class (N = 228). The model included occupation (professional or highly intellectual work = reference), physical activity (≥ 600 MET min = reference), CCI (unit = 1 point), and ULS-6 (unit = 1 point). CCI, Charlson comorbidity index; CI, confidence interval; MET, metabolic equivalent; ULS-6, the six-item UCLA Loneliness Scale.
"year": 2024,
"sha1": "bcacab049396771f2f8e0300d8ac8b523d96ceb8",
"oa_license": "CCBYNCND",
"oa_url": "http://apjon.org/article/S2347562524001197/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6fd20950a198edf310f99efd702bd9f15267b43f",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The Nomadic Bug: A Case Report of Salmonella Septic Arthritis of Sternoclavicular Joint in a Healthy Patient
In an otherwise healthy adult, septic arthritis of the sternoclavicular joint is very uncommon. Usually, individuals with a history of intravenous drug use or those with impaired immune systems are affected. The usual modes of spread are hematogenous spread or direct spread from neighbouring sources of infection. We report a rare case of mediastinitis and lung empyema preceded by sternoclavicular septic arthritis due to Salmonella sp. in an otherwise healthy 49-year-old woman. Radiological imaging showed a left sternoclavicular joint collection with bone destruction. The literature contains only two prior reports of sternoclavicular joint septic arthritis caused by Salmonella. If diagnosed early, patients usually respond to medical treatment such as aspiration and antibiotics, as was the case with our patient.
Introduction
In an adult who is otherwise healthy, septic arthritis of the sternoclavicular joint (SCJ) is very uncommon. Less than 0.5% of all bone and joint infections are said to affect this specific region [1,2]. Patients with a history of intravenous drug misuse [3] or those with impaired immune systems [4] are typically affected; however, a minority of patients have no risk factors. Patients with septic arthritis of the SCJ usually present with fever, pain, and local swelling. To avoid morbidity and mortality, sternoclavicular joint septic arthritis must be treated promptly. Osteomyelitis, chest wall abscess, and mediastinitis are some of the condition's serious complications [5]. Depending on the severity and extent of the disease, the current treatment of choice is intravenous antibiotics, incision and drainage, surgical debridement, or en-bloc resection.
This article was previously presented as a meeting poster at the 2021 Virtual Medical Research Symposium on December 14, 2021.
Case Presentation
A 49-year-old woman with no prior medical conditions came to our medical centre with a two-week history of abruptly developing left shoulder pain that radiated to the left neck and upper chest. She had a fever for a single day. There was no past history of trauma or neck interventions. Any movement of the left upper limb exacerbated the pain. She otherwise did not experience dyspnea. There were no noticeable skin changes, and no swelling was palpable in the area of the left chest wall. She could not sleep at all because of the pain. She also experienced fullness in the left upper sternal area, but no visible swelling was apparent.
Upon examination, the left sternal notch region showed some tenderness. No cervical lymph nodes could be felt. An X-ray of the cervical spine showed no abnormalities. The initial chest X-ray revealed otherwise clear lung fields, with opacities in the left upper zone. A collection inferior to the left SCJ extending into the left thoracic region with capsular distension was found during an ultrasound examination of the neck (Figure 1a-1c). The collection inferior to the left SCJ observed on ultrasound was further verified by contrast-enhanced computed tomography (CECT) of the neck and thorax. Additional radiological findings of a left apical pleural collection and mediastinitis were also discovered (Figure 2a-2b). Figure 2b-2c also showed many lytic areas at the sternal end of the left first rib, indicating bone destruction. Correlating with the brief clinical history, the CT appearances were suggestive of an inflammatory or infective condition. A provisional diagnosis of septic arthritis of the left sternoclavicular joint associated with left lung empyema was made.
FIGURE 2: (a & b) CECT thorax showed a collection (yellow dotted circle) inferior to the left sternoclavicular joint and posterior to the left anterior first rib associated with left apical pleural abscess (red block arrows). (c & d) CECT thorax in the bone window revealed multiple lytic areas at the sternal end of left first rib associated with probable intraosseous air pockets/vacuum phenomenon in the 1st rib at costosternal junction (yellow block arrows).
As soon as the patient was admitted, intravenous amoxicillin-clavulanate was administered as an empirical antibiotic. The pleural collection was aspirated the following day under CT guidance (Figure 3). Eight to ten millilitres of aspirated pus mixed with blood were sent for microscopy, culture, and sensitivity testing. The cytologic analysis showed organisms belonging to the Salmonella group without any indication of malignant cells. Bacteraemia from the Salmonella group was also detected in blood cultures. Tuberculosis test results were negative. The patient's condition improved, and she was discharged on oral antibiotics for a duration of six weeks, with a second CECT thorax scheduled for re-evaluation. About seven weeks later, the follow-up CECT thorax (Figure 4) showed that the left apical pleural collection had resolved and that there was minimal residual left SCJ collection. At the subsequent follow-up visit, she no longer complained of chest and shoulder pain, and her C-reactive protein (CRP) and erythrocyte sedimentation rate (ESR) had decreased. The patient responded well to nine weeks of antibiotics in total and was scheduled for another follow-up four weeks later.
Discussion
The SCJ is a rare location for septic arthritis, accounting for less than 0.5% of bone and joint infections in healthy individuals [1,2]. Risk factors for SCJ septic arthritis include trauma, diabetes, intravenous drug use, infection at distant sites, and an infected central venous line [1,2]. A previous case by Sharif et al. (2018) reported malignancy-related immunosuppression and late infection of a previous tracheostomy tract as possible culprits [6], whereas Womack (2012) reported two cases of urinary tract infection causing haematogenous spread of infection to the SCJ [7]. However, a minority of patients, about 23%, present with no risk factors. The usual causative pathogen is Staphylococcus aureus, accounting for about 49% of cases, followed by Pseudomonas aeruginosa (10%), Brucella melitensis (7%), and Escherichia coli (5%) [2]. The usual mode of spread is haematogenous or by direct extension from adjacent sources of infection. In our patient, given the absence of any preceding risk factor, the infection was likely haematogenous, as she had a positive blood culture for a Salmonella group organism.
Non-typhoid Salmonella is the most common causative pathogen of gastrointestinal infection worldwide. Most patients with non-typhoid Salmonella infection have self-limiting symptoms which usually do not require antibiotics. Common symptoms are fever, abdominal discomfort, nausea and vomiting, as well as diarrhoea. Salmonella bacteraemia occurs in about 5-10% of infected patients, and some may develop focal infections such as meningitis or bone and joint infections. Immunocompromised patients may have a higher rate of complications and prolonged or recurrent Salmonella infection [8]. Extra-intestinal Salmonella infections may present as pneumonia, meningitis, mycotic aneurysm, osteomyelitis, septic arthritis, or cholangitis. This is especially true for patients who have chronic medical conditions including diabetes, hypertension, connective tissue disease, chronic lung disease, or cancer [9]. For an extra-intestinal localised infection, the choice of antibiotic depends on the patient's general condition and the strain's susceptibility pattern; third-generation cephalosporins, trimethoprim-sulfamethoxazole, ampicillin, and fluoroquinolones are among the options [9]. Individuals who have SCJ septic arthritis typically exhibit pain, localised swelling, and fever. Neck pain is an infrequent additional presentation, accounting for only 2% of cases [2]. About 60% of cases are unilateral, usually involving the right sternoclavicular joint [10]. From an anatomical perspective, several vital structures such as the subclavian vessels and the phrenic nerve lie in close proximity to the SCJ. Hence, infections affecting the SCJ should be treated urgently to avoid spread to, and harm of, these important neighbouring structures [11].
The preferred methods for assessing the degree of SCJ septic arthritis and any local complications, as well as for directing the surgical approach, are computed tomography (CT) and magnetic resonance imaging (MRI) [2]. Mediastinitis, joint effusion, joint destruction, and additional complications like empyema or chest wall abscess are readily detected with CT or MRI. In this patient, the diagnosis was confirmed by culture and sensitivity of the aspirated pleural fluid, though this can also be achieved by open or Tru-Cut biopsy, culture of aspirated joint fluid, or culture of an associated abscess [10]. In our patient, the CT scan helped to confirm the ultrasound findings, to assess the extent of the collection and associated complications, and to guide the aspiration of the pleural abscess. The significant joint and capsule distension extending to the sternum and clavicle was strongly suggestive of infection rather than a degenerative process [12].
The current therapy choices include intravenous antibiotics, incision and drainage, surgical debridement, or en-bloc resection, depending on the severity and extent of the disease. In the event of osteomyelitis and abscess, sternoclavicular joint excision and pectoralis flap closure are the usual surgical interventions [13]. Intravenous antibiotics may be all that is necessary for some patients, while more invasive measures may be required for others. As was the case with our patient, patients typically respond to medical care, including aspiration and antibiotics, when the condition is discovered early.
Conclusions
Salmonella septic arthritis of the SCJ is extremely rare in an otherwise healthy adult. Given the potential complications of this condition, early detection with prompt antibiotic administration is crucial to halt disease progression and minimise further grave complications. Treating physicians should have a high index of suspicion for this condition and request the necessary radiological imaging to confirm the diagnosis. The associated lung empyema, which complicated the condition further, could be detrimental if not detected early and treated with appropriate surgical and antibiotic therapy.
Disclosures
In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
FIGURE 1: (a) Ultrasound of the sternoclavicular joint showed a distended capsule on the left side (large red asterisk) compared to the right (small yellow asterisk). (b & c) Ultrasound revealed a collection (red 'X') inferior to the left sternoclavicular joint which extended into the left thoracic region.
FIGURE 3: CT-guided aspiration of the left apical lung collection (red block arrows). The aspirate was sent for microscopy, culture and sensitivity and showed Salmonella group organisms.
FIGURE 4: (a & b) Repeat CECT thorax seven weeks later revealed resolution of the left apical pleural collection with minimal residual left sternoclavicular joint collection and bone destruction (yellow dotted circles).
Coffee consumption and all-cause and cardiovascular mortality in older adults: should we consider cognitive function?
Background: The association between coffee and mortality risk has been found in most previous studies, and recent studies have found an association between coffee consumption and cognition. However, there is still a lack of research exploring whether the association between coffee and mortality is influenced by cognitive function.
Objective: The purpose of this study was to explore the association of coffee, caffeine intake from coffee, and decaffeinated coffee with all-cause mortality and cardiovascular disease (CVD) mortality in older adults with different cognitive performances.
Methods: The study was based on data from the National Health and Nutrition Examination Survey (NHANES) 2011-2014. Coffee and caffeine consumption data were obtained from two 24-h dietary recalls. Individual cognitive functions were assessed by the CERAD word learning test (CERAD-WLT), animal fluency test (AFT), and digit symbol substitution test (DSST). In addition, principal component analysis (PCA) was performed with the above test scores to create a global cognitive score. The lowest quartile of scores was used to classify cognitive performance. Cox regression and restricted cubic splines (RCS) were applied to assess the relationship between coffee and caffeine consumption and mortality.
Results: In the joint effects analysis, we found that those with cognitive impairment who reported not drinking coffee had the highest risk of all-cause and cardiovascular mortality compared with others. In the analysis of the population with cognitive impairment, for all-cause mortality, those who showed cognitive impairment in the AFT displayed a significant negative association between total coffee consumption and mortality {T3 (HR [95% CI]), 0.495 [0.291-0.840], p = 0.021 (trend analysis)}. For the DSST and global cognition, similar results were observed, whereas for the CERAD-WLT, restricted cubic splines (RCS) showed a "U-shaped" association between coffee consumption and mortality. For CVD mortality, a significant negative trend between coffee consumption and death was observed only in people with cognitive impairment in the AFT or DSST. In addition, we observed that decaffeinated coffee was associated with reduced mortality in people with cognitive impairment.
Conclusion: Our study suggested that the association between coffee consumption and mortality is influenced by cognition and varies with cognitive impairment in different cognitive domains.
Introduction
Coffee's unique flavor and refreshing properties have made it one of the most popular beverages worldwide (1). Studies have shown that coffee may have an ameliorative effect on depression, cardiovascular disease and type 2 diabetes (2-4), and some studies have also revealed that it can reduce the risk of all-cause mortality and cardiovascular mortality (5, 6). Moreover, several studies have previously reported that coffee consumption might reduce the risk of developing cognitive impairment (7, 8).
Cognitive impairment is one of the major health issues facing older adults (9). According to studies, people with cognitive impairment tend to have a higher risk of developing Alzheimer's dementia, one of the leading causes of death among older adults in the United States (10, 11). The WHO estimates that the prevalence of dementia will increase exponentially every 20 years and may exceed 115 million by 2050 (12). Besides, a growing number of studies show that older adults with cognitive impairment often face a higher risk of death (13, 14). Therefore, the discovery of early preventive measures to reduce the risk of mortality in the cognitively impaired population would be of value to older adults. In previous meta-analyses and multiple cohort studies, higher coffee and caffeine consumption has been found to be associated with a lower risk of all-cause mortality and cardiovascular mortality (5, 6). However, there is still a lack of research confirming whether this association between coffee and death exists in people with cognitive impairment. To explore this association, we conducted a cohort study using data from the National Health and Nutrition Examination Survey (NHANES) from 2011 to 2014, applying three tests [the Consortium to Establish a Registry for Alzheimer's Disease (CERAD) test, the animal fluency test, and the digit symbol substitution test (DSST)] and principal component analysis (PCA) to define cognitive impairment, and obtained dates of all-cause and cardiovascular mortality by linking to the National Center for Health Statistics (NCHS) death files.
In this study, through analysis of a representative sample of adults aged 60 years or older in the United States, the associations of total coffee consumption, caffeine intake from coffee, and decaffeinated coffee consumption with all-cause mortality and cardiovascular mortality were explored separately for those who demonstrated cognitive impairment on the three cognitive tests or the global cognitive score.
Study design and population
The data analyzed in this study were obtained from two cycles of the National Health and Nutrition Examination Survey (NHANES) (2011-2012 and 2013-2014) and linked to public-use mortality files up to December 2019 (15). NHANES uses a complex, multistage, probability sampling design (16). Dietary data were collected by conducting home interviews and inviting participants to the mobile examination center (MEC) to complete questionnaires. The study was approved by the NCHS Research Ethics Review Board, and informed consent was obtained from participants prior to data collection.
Cognitive performance assessment
Cognitive function was assessed by the Consortium to Establish a Registry for Alzheimer's Disease (CERAD) word learning subtest, the animal fluency test (AFT), and the digit symbol substitution test (DSST) (17). These tests were carried out in the mobile examination center (MEC). The CERAD word learning test (CERAD-WLT) assessed immediate and delayed learning of new verbal information and consisted of three consecutive learning trials, in which participants were asked to read aloud 10 unrelated words that appeared on a computer monitor, one at a time, and recall as many words as possible immediately after all words were presented. The order of the 10 words differed between the three trials. After participants completed the AFT and DSST, the delayed recall test was administered (18). In the AFT, which was used to assess categorical verbal fluency, participants were asked to name as many animals as possible within 1 min, with a point given for each animal named (19). The DSST, a module of the Wechsler Adult Intelligence Scale, was used to assess processing speed, sustained attention, and working memory (20). This test was conducted via a paper chart with a key at its top containing nine numbers and symbols. Participants were asked to fill in the 133 boxes adjacent to the numbers in the chart with the matching symbols within 2 min, with the final total number of correct matches being the DSST score. For the assessment of global cognition, the three cognitive measures were entered into principal component analysis (PCA), and scores on the first unrotated principal component were saved, with higher scores representing better cognitive abilities. Consistent with previous related studies (21-23), we used the 25th percentile (lowest quartile) of the scores in the three tests (CERAD-WLT, AFT, and DSST) as the cut-off point for determining cognitive impairment; participants who scored within this quartile were categorized as having cognitive impairment, while those who scored above this threshold were categorized as having normal cognitive function. The cut-off points of the CERAD test, AFT, DSST, and global cognition were 21, 13, 34, and -0.98, respectively. The specific analysis results for the PCA are shown in Supplementary Table 1 and Supplementary Figure 1.
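The global-score construction can be illustrated with a short sketch. This is a minimal illustration only, assuming the three test scores sit in a pandas DataFrame `df`; the column names (`cerad_wlt`, `aft`, `dsst`) are hypothetical stand-ins, not actual NHANES variable names.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

# Hypothetical columns holding the three cognitive test scores.
scores = df[["cerad_wlt", "aft", "dsst"]].dropna()

# Standardize so each test contributes on a comparable scale, then take
# the first unrotated principal component as the global cognitive score.
z = (scores - scores.mean()) / scores.std()
global_score = PCA(n_components=1).fit_transform(z)[:, 0]

# PCA component sign is arbitrary; orient it so higher = better cognition.
if np.corrcoef(global_score, scores["dsst"])[0, 1] < 0:
    global_score = -global_score

# Lowest quartile (25th percentile) defines cognitive impairment.
cutoff = np.percentile(global_score, 25)
impaired = global_score <= cutoff
```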
Dietary intake assessment
Data on coffee consumption, caffeine intake from coffee, and total energy intake were obtained from two 24-h dietary recall interviews conducted by NHANES. The first interview was conducted in person in the MEC, while the second was conducted by phone 3-10 days after the first. During the MEC interview, the interviewers used a set of measurement guides (various glasses, bowls, mugs, bottles, household spoons, measuring cups and spoons, rulers, thick and thin sticks, bean bags, circles, etc.) for participants to use in reporting food consumption, and after completing the MEC interview, participants were given a food model booklet for reporting food amounts at the phone follow-up. The NHANES individual food files (24, 25) contain a list of the foods and beverages consumed by each participant, identified by an eight-digit "USDA (US Department of Agriculture) food code". A food description file corresponding to each code is provided in the USDA's Food and Nutrient Database for Dietary Studies (FNDDS). According to the FNDDS, USDA food codes starting with "921" correspond to coffee beverages in the 2011-2012 and 2013-2014 cycles. Based on the USDA food codes, we calculated the total coffee beverage consumption and caffeine intake from coffee of each participant. In addition, based on the food descriptions provided by the FNDDS, we further calculated each participant's consumption of decaffeinated coffee. The total energy intake for each participant was calculated from the NHANES total nutrient intake files (26, 27). For participants who provided complete data from both dietary recalls, we averaged the data from the two recalls, while only the data from the first recall were used for participants who completed only the first dietary recall.
To explore the associations between coffee consumption and mortality more thoroughly, we categorized each type of coffee consumption. For total coffee consumption, we placed those with no coffee intake into a separate group and then divided consumers into three groups depending on the amount of coffee consumed, resulting in four total groups: (1) no coffee consumption; (2) 0.1-262.5 g/day (T1); (3) 262.6-496 g/day (T2); and (4) > 496 g/day (T3). Because the sample population that did not consume decaffeinated coffee (n = 2,156) was large, decaffeinated coffee consumption was instead divided into three groups: (1) no coffee consumption; (2) only decaffeinated coffee consumption; and (3) others. Caffeine intake from coffee was grouped using the same criteria as total coffee consumption: (1) no caffeine intake, (2) 0.1-76 mg/day (T1), (3) 76.1-171.5 mg/day (T2), and (4) > 171.5 mg/day (T3).
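As a sketch, the grouping above could be implemented as follows; the cut-offs come from the text, while the function names and use of pandas are illustrative assumptions.

```python
import numpy as np
import pandas as pd

def categorize_coffee(grams_per_day: pd.Series) -> pd.Series:
    """Group total coffee consumption using the paper's cut-offs (g/day)."""
    bins = [-np.inf, 0, 262.5, 496, np.inf]          # right-inclusive bins
    labels = ["none", "T1 (0.1-262.5)", "T2 (262.6-496)", "T3 (>496)"]
    return pd.cut(grams_per_day, bins=bins, labels=labels)

def categorize_caffeine(mg_per_day: pd.Series) -> pd.Series:
    """Group caffeine intake from coffee using the analogous cut-offs (mg/day)."""
    bins = [-np.inf, 0, 76, 171.5, np.inf]
    labels = ["none", "T1 (0.1-76)", "T2 (76.1-171.5)", "T3 (>171.5)"]
    return pd.cut(mg_per_day, bins=bins, labels=labels)
```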
Mortality
In this study, the primary outcome was all-cause mortality, while the secondary outcome was cardiovascular disease (CVD) mortality. The 2019 public-use mortality file provided by the National Center for Health Statistics (NCHS) supplied mortality follow-up data for the population included in this study (15), with the follow-up time defined as the number of months from the date of examination at the MEC to either the date of death or the end of the mortality follow-up period (31 December 2019), whichever occurred first. The specific causes of death in the mortality file were coded according to the International Classification of Diseases, 10th Revision (ICD-10), in which heart disease was identified as I00-I09, I11, I13, and I20-I51, while I60-I69 represented death from cerebrovascular disease. We used the codes corresponding to heart disease and cerebrovascular disease deaths to identify CVD mortality.
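A rough sketch of these outcome definitions is given below. The ICD-10 ranges come from the text; the date columns are hypothetical (the actual public-use linked mortality files supply a precomputed person-months follow-up variable, so the date arithmetic here is only for illustration).

```python
import pandas as pd

def is_cvd_death(icd10: str) -> bool:
    """Classify an underlying-cause ICD-10 code as a CVD death
    (I00-I09, I11, I13, I20-I51, I60-I69)."""
    if not isinstance(icd10, str) or len(icd10) < 3 or icd10[0] != "I":
        return False
    try:
        num = int(icd10[1:3])
    except ValueError:
        return False
    return num <= 9 or num in (11, 13) or 20 <= num <= 51 or 60 <= num <= 69

# Hypothetical columns: 'ucod_icd10', 'death_date', 'exam_date'.
df["cvd_death"] = df["ucod_icd10"].map(is_cvd_death)

# Follow-up in months: MEC exam date to death or to 31 December 2019,
# whichever comes first (30.44 = average days per month).
end = pd.Timestamp("2019-12-31")
df["followup_months"] = (
    (df["death_date"].fillna(end).clip(upper=end) - df["exam_date"]).dt.days / 30.44
)
```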
Covariates
Based on previous studies, a series of covariates was selected for the subsequent analyses (28-30). The sociodemographic factors included: age, gender, race (Mexican American, other Hispanic, non-Hispanic White, non-Hispanic Black, and other races), poverty income ratio (PIR) (≤ 0.99 or > 0.99), and educational level (less than 9th grade, 9-11th grade, high school graduate/GED or equivalent, some college or AA degree, and college graduate or above). The health-related lifestyle factors included smoking status (never: smoked fewer than 100 cigarettes in life; former: smoked more than 100 cigarettes in life but does not smoke now; current: smoked more than 100 cigarettes in life and smokes some days or every day) and drinking status (whether or not the participant drank at least 12 alcoholic beverages per year).
Clinical variables included body mass index (BMI) (< 25 kg/m², 25-30 kg/m², and ≥ 30 kg/m²), diabetes, hypertension, chronic kidney disease (CKD), cardiovascular disease (CVD), and cancer. Diabetes was defined as meeting at least one of the following conditions: (1) self-reported physician diagnosis of diabetes; (2) taking insulin or other prescription diabetes medication; (3) glycosylated hemoglobin > 6.5%. Hypertension was noted if participants met at least one of the following three criteria: (1) self-reported history of hypertension; (2) current use of antihypertensive medications; (3) average systolic blood pressure ≥ 140 mm Hg and/or average diastolic blood pressure ≥ 90 mm Hg. CKD was noted if eGFR < 60 mL/min/1.73 m² or ACR (albumin:creatinine ratio) ≥ 30 mg/g; the eGFR was calculated using the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation (31). CVD was considered present if participants self-reported at least one of the following five conditions: coronary heart disease, congestive heart failure, angina, heart attack, or stroke (32). In addition to the above variables, we also included the NHANES survey cycle and total dietary energy intake as covariates.
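The clinical flags could be derived as below; every column name is a hypothetical stand-in for the corresponding NHANES variables, and the thresholds are taken from the criteria stated above.

```python
# Diabetes: self-report, medication use, or HbA1c > 6.5%.
df["diabetes"] = (
    (df["self_report_diabetes"] == 1)
    | (df["takes_diabetes_medication"] == 1)
    | (df["hba1c_percent"] > 6.5)
)

# Hypertension: self-report, medication use, or elevated mean BP.
df["hypertension"] = (
    (df["self_report_hypertension"] == 1)
    | (df["takes_bp_medication"] == 1)
    | (df["mean_systolic"] >= 140)
    | (df["mean_diastolic"] >= 90)
)

# CKD: eGFR below 60 mL/min/1.73 m^2 (CKD-EPI) or ACR >= 30 mg/g.
df["ckd"] = (df["egfr_ckd_epi"] < 60) | (df["acr_mg_g"] >= 30)
```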
Statistical analysis
According to the NHANES analysis guidelines, we created a new sample weight by dividing the original sample weights in the two cycles by 2; this new sample weight was used for all subsequent analyses. Normally distributed continuous variables are represented as the mean ± SD, non-normally distributed continuous variables as the median (IQR: interquartile range), and categorical variables as weighted percentages.
To elucidate the relationships between coffee consumption and all-cause and cardiovascular disease (CVD) mortality in people with cognitive impairment, we first explored the joint effect of cognitive function and coffee consumption on mortality. We created four mutually exclusive risk groups: Group 1, normal cognition with coffee consumption; Group 2, normal cognition without coffee consumption; Group 3, cognitive impairment with coffee consumption; and Group 4, cognitive impairment without coffee consumption. Cox regression was used to estimate the hazard ratios of the other three groups using Group 1 as the reference. The results are represented by hazard ratios (HRs) and 95% confidence intervals (CIs). No covariates were adjusted for in the crude model. For the multivariate adjustment model, we adjusted for age, sex, race, PIR, educational level, smoking status, drinking status, NHANES survey cycle, BMI, total energy intake, hypertension, diabetes, CKD, CVD, and cancer. Considering that the effect of specific coffee consumption on mortality may vary among people with different types of cognitive impairment, we further explored the association between specific coffee (total/decaffeinated coffee) consumption and mortality in people with cognitive impairment using multivariate Cox regression. A trend test (P-trend) was used to analyze the linear relationship between coffee consumption and mortality, and restricted cubic splines (RCS) were used to analyze non-linear relationships. In addition, a series of analyses of caffeine intake from coffee was conducted to make the association between coffee consumption and mortality more reliable. We also conducted a series of sensitivity analyses to test the stability of the results. First, we conducted a replicate analysis using the unweighted sample. Second, considering that the joint effect of consumption of coffee with higher caffeine content and cognitive function may differ from that of regular coffee, an additional analysis was performed. High-caffeine coffee was judged by the amount of caffeine per unit of coffee [caffeine content (mg) divided by total coffee mass (g)], using the third quartile of this value across all single coffees reported by NHANES participants from 2011 to 2014 as the cut-off point. Coffee above this cut-off point was defined as "higher caffeine content coffee." Third, we classified coffee into two types, instant and non-instant coffee, and the joint effect of their consumption and cognitive function on mortality was explored separately.
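A condensed sketch of the joint-effect model is shown below, assuming hypothetical column names and using the lifelines library. Note that lifelines applies the case weights but does not reproduce NHANES's full complex-design variance estimation (strata/PSU), which dedicated survey software would handle.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Combined-cycle weight: original 2-year weights divided by 2
# ('wtdrd1' is a hypothetical stand-in for the NHANES weight variable).
df["weight"] = df["wtdrd1"] / 2

# Four mutually exclusive joint-effect groups (Group 1 is the reference:
# normal cognition with coffee consumption).
df["group"] = 1
df.loc[(df["impaired"] == 0) & (df["coffee_g"] == 0), "group"] = 2
df.loc[(df["impaired"] == 1) & (df["coffee_g"] > 0), "group"] = 3
df.loc[(df["impaired"] == 1) & (df["coffee_g"] == 0), "group"] = 4

covariates = ["age", "sex", "bmi", "total_energy"]   # abbreviated covariate set
cols = ["followup_months", "died", "weight"] + covariates + ["group"]
model_df = pd.get_dummies(df[cols], columns=["group"], drop_first=True, dtype=int)

cph = CoxPHFitter()
cph.fit(model_df, duration_col="followup_months", event_col="died",
        weights_col="weight", robust=True)   # robust SEs for non-integer weights
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```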
Characteristics of the study population at baseline
In the population screening, we first included participants aged 60 or above with complete cognitive test data (n = 2,934) and then excluded those with missing dietary recall data and survival follow-up data (n = 225). We also excluded individuals with incomplete covariate data (covariates with missing values exceeding 10%) (n = 125). Ultimately, a total of 2,584 participants were included in this study, with a median follow-up time of 78 months. The specific screening process and the number of people showing cognitive impairment on individual tests are represented in Figure 1. Characteristics such as lifestyle habits and disease history of all included participants and of the four cognitive impairment groups obtained from the different cognitive test scores are shown in detail in Table 1.
Joint effect on all-cause mortality and CVD mortality
Using a weighted Cox proportional hazards model adjusted for various confounding variables to investigate the joint effect (Table 2), we found that the groups with cognitive impairment (Groups 3 and 4) had significantly higher all-cause and CVD mortality compared to Group 1 (P < 0.05). Moreover, we found that the group with cognitive impairment and without coffee consumption (Group 4) had the highest all-cause and CVD mortality among the four groups. However, among the two groups that both presented normal cognition (Groups 1 and 2), Group 2, the group without coffee consumption, did not display a significant increase in mortality compared to Group 1, for both all-cause and cardiovascular mortality.
The association of coffee consumption with all-cause and CVD mortality among people with cognitive impairment
The association between specific coffee consumption and all-cause mortality among older adults with cognitive impairment is presented in Figure 2. This regression adjusted for age, sex, race, PIR, educational level, smoking status, drinking status, NHANES survey cycle, BMI, total energy intake, hypertension, diabetes, CKD, CVD, and cancer. For older adults with cognitive impairment in the CERAD-WLT, the regression model found a tendency for all-cause mortality to decrease only at T1 (HR = 0.67, 95% CI: 0.45-1.01, P = 0.054), and we further found a "U"-shaped relationship between coffee consumption and all-cause mortality through the weighted RCS curve (P for non-linear = 0.016) (Figure 3). For those presenting with cognitive impairment in the AFT or DSST, trend tests showed a linear trend between increased coffee consumption and decreased risk of all-cause and cardiovascular mortality (P for trend < 0.05). However, for those who showed cognitive impairment in global cognition, the risk of all-cause mortality decreased significantly only when coffee consumption reached T3 (HR = 0.61, 95% CI: 0.37-0.99, P = 0.046).
The association of caffeine intake from coffee with all-cause and CVD mortality among people with cognitive impairment
For populations that presented with cognitive impairment in the CERAD-WLT, the association between caffeine intake and all-cause mortality was observed to be similar to that of total coffee consumption, with a significantly lower risk of mortality for older adults whose caffeine intake was at T1 (HR = 0.59, 95% CI: 0.43-0.81, P < 0.001) (Figure 4).
For the AFT, no significant linear trend was observed between caffeine intake and all-cause or cardiovascular mortality, and for all-cause mortality, the risk of death was slightly elevated when caffeine intake was at T3 (HR = 0.58) compared to T2 (HR = 0.53). In addition, the risk of cardiovascular death showed a significant decrease only in the T2 intake interval (HR = 0.51, 95% CI: 0.28-0.92, P = 0.026).
For the DSST, associations between caffeine intake and all-cause or cardiovascular mortality were similar to those for coffee consumption, both indicating linear trends. For global cognition, similarly, caffeine intake revealed a significant association with reduced risk of all-cause mortality only when reaching T3.
The association of decaffeinated coffee consumption with all-cause and CVD mortality among people with cognitive impairment
For all-cause mortality, we found that among people with cognitive impairment, those who consumed only decaffeinated coffee showed a significantly lower risk of death compared to those who did not consume coffee (Table 3). For cardiovascular mortality, a significant reduction in mortality was found for those who consumed only decaffeinated coffee under all definitions of cognitive impairment except the CERAD-WLT.
Sensitivity analysis
In the sensitivity analysis, an unweighted analysis was performed on the same samples to validate the results obtained with the weighting procedure. The unweighted analysis results are presented in Supplementary Tables 2-5. For all-cause mortality, results similar to the weighted analysis were observed in the unweighted analysis. However, for cardiovascular mortality, the significance of the association between decaffeinated coffee and death was reduced.
When analyzing coffee with high caffeine content (Supplementary Table 6), we found that among participants who consumed this type of coffee, those with global cognitive impairment did not exhibit an increased risk of all-cause mortality or cardiovascular death compared to those without global cognitive impairment (P > 0.05).
FIGURE 2: Association of total coffee consumption with (A) all-cause mortality, (B) cardiovascular mortality in older adults presenting with cognitive impairment in CERAD-WLT, AFT, DSST, or global cognition. The blue circles represent P-value > 0.05, the yellow triangles represent P < 0.05. T1: 0.1-262.5 g/day; T2: 262.6-496 g/day; T3: > 496 g/day. Models were adjusted for age, gender, race, PIR, educational levels, smoking status, drinking status, NHANES survey cycle, BMI, total energy intake, hypertension, diabetes, CKD, CVD, and cancer.
For the joint effect of instant or non-instant coffee consumption and cognitive function on mortality (Supplementary Tables 7, 8), we found results similar to those for total coffee consumption. Specifically, those who did not consume instant or non-instant coffee and suffered from cognitive impairment showed the highest risk of all-cause and cardiovascular mortality.
Discussion
In this study, we investigated the associations between coffee consumption and mortality in the older US population with cognitive impairment. Our analysis found a significantly increased risk of all-cause and CVD mortality in older adults with cognitive impairment and no coffee consumption compared to those with coffee consumption and normal cognitive function. In addition, when separately analyzing older adults with cognitive impairment, we found that the association between coffee consumption and mortality was not consistent across the different cognitive tests. For all-cause mortality, coffee consumption for older adults who showed cognitive impairment in the DSST or AFT showed a significant negative association with mortality, and a similar association was observed in the analysis of caffeine intake from coffee. However, for older adults exhibiting cognitive impairment in the CERAD-WLT, both Cox regression and RCS showed a significant association with reduced risk of all-cause mortality only when coffee consumption was in the lower concentration interval, and likewise for caffeine intake. As for global cognitive impairment, both higher coffee consumption and higher caffeine intake revealed a significant association with reduced risk of all-cause mortality. It was also found that all-cause mortality was not significantly increased in the global cognitive impairment group compared to the normal cognition group among those who consumed high-caffeine coffee. In addition, older adults with cognitive impairment who consumed only decaffeinated coffee displayed a significantly lower risk of all-cause mortality compared to those who did not consume coffee, a finding consistent across the cognitive tests. For CVD mortality, total coffee consumption revealed a significant association with mortality risk in older adults showing cognitive impairment in the DSST or AFT, with a significant decreasing trend in CVD mortality risk as coffee consumption increased, whereas coffee consumption was not significantly associated with CVD mortality in older adults who exhibited cognitive impairment in the CERAD-WLT or overall cognition. In addition, for caffeine intake, we found that for people with AFT cognitive impairment, the risk of CVD mortality was significantly lower relative to those without caffeine intake only when intake was at T2 (76.1-171.5 mg/day). For decaffeinated coffee, we found that older adults who showed cognitive impairment in the CERAD-WLT did not display a difference in the risk of CVD mortality when consuming only decaffeinated coffee compared to those who did not consume coffee.
When elucidating the associations between coffee consumption and mortality, most cohort studies and meta-analyses have revealed that coffee was beneficial in reducing mortality (5, 34, 35), while some studies also suggested that coffee consumption was not correlated with mortality or even increased the risk of mortality (28, 36). Additionally, cognitive impairment as a risk factor for mortality was extensively confirmed in a number of studies (14, 37, 38), which was consistent with the results observed in this study. In the joint effects analysis, we observed that the reduction in risk of all-cause and cardiovascular mortality by coffee was most pronounced in those with cognitive impairment. This suggested that cognitive function might be an important factor influencing the uncertain relationship between coffee consumption and mortality. Although there is still a lack of cohort studies directly investigating the association between cognitive impairment, coffee consumption, and mortality, some studies investigating the effects of coffee consumption on the risk of mortality in people with neurodegenerative diseases support this result to some extent. One previous cohort study that investigated the associations between lifestyle factors and mortality in Parkinson's patients found that moderate coffee consumption was associated with lower mortality in Parkinson's patients, whereas this association was not significant in people who did not suffer from Parkinson's disease (39). Another study of coffee consumption and mortality in Parkinson's patients, conducted through the Cancer Prevention Study II Cohort, also determined that coffee reduced the risk of mortality (40). One animal study conducted in PGRP-LB-deficient fruit flies (D. melanogaster) to explore the pharmacological effects of caffeine on neurodegenerative diseases also found that caffeine reduced mortality of the fruit flies (41). Numerous studies have identified a possible association between coffee and neurological health. In an umbrella review including multiple observational and randomized controlled studies, higher coffee consumption was found to be associated with a reduced risk of Alzheimer's disease (35). Caffeine, as one of the most important components of coffee, has been found to have neuroprotective effects in previous clinical and animal studies (42-44), and a series of cross-sectional and cohort studies found that caffeine might be beneficial in improving global cognitive function (7, 8, 45). Furthermore, in addition to caffeine, coffee contains a number of other chemicals that have been found to have potential neuroprotective effects. One example is chlorogenic acid, a polyphenol abundant in both caffeinated and decaffeinated coffee, whose neuroprotective effect may be related to its ability to reduce oxidative stress; as previously found, chlorogenic acid protects some neurons from H2O2-induced oxidative damage (46, 47). Another example is trigonelline, an alkaloid found naturally in coffee beans, which has also been found to be capable of ameliorating lipopolysaccharide (LPS)-mediated neuroinflammation and memory deficits in adult mice (48). Besides, it has also been found that dietary intake of niacin, one of the compounds contained in coffee, can prevent AD and age-related cognitive decline (49). Based on the associations between caffeine and a range of other compounds contained in coffee and cognition, we speculated that coffee might prevent the onset and progression of neurodegenerative diseases, such as Alzheimer's and Parkinson's disease, by improving cognitive function, leading to a reduction in mortality in people with cognitive impairment.
FIGURE 3: Restricted cubic spline revealed the association between coffee consumption and all-cause mortality among participants with cognitive impairment in the CERAD-WLT. The solid lines and shadows represent the estimated HRs and their 95% confidence intervals (HR, hazard ratio). Models were adjusted for age, gender, race, PIR, educational levels, smoking status, drinking status, NHANES survey cycle, BMI, total energy intake, hypertension, diabetes, CKD, CVD, and cancer.
FIGURE 4: Association of caffeine intake from coffee with (A) all-cause mortality, (B) cardiovascular mortality in older adults presenting with cognitive impairment in CERAD-WLT, AFT, DSST, or global cognition. The blue circles represent P-value > 0.05, the yellow triangles represent P < 0.05 (T1: 0.1-76 mg/day; T2: 76.1-171.5 mg/day; T3: > 171.5 mg/day). Models were adjusted for age, gender, race, PIR, educational levels, smoking status, drinking status, NHANES survey cycle, BMI, total energy intake, hypertension, diabetes, CKD, CVD, and cancer.
In addition, in both the AFT and DSST, coffee consumption and caffeine intake from coffee were associated with decreased all-cause mortality in people with cognitive impairment, whereas a decreased risk of all-cause mortality was observed only at lower concentrations of caffeine intake in the CERAD-WLT. We presume that these variations were due to differences in the effects of caffeine on different cognitive domains, since differences in the associations between caffeine and the results of different cognitive tests have been found in several previous studies (7, 50, 51).
For mortality caused by cardiovascular disease (CVD), the joint effect analyses of the association between cognitive impairment, coffee consumption, and mortality afforded results similar to all-cause mortality. Previous studies found that a higher cardiovascular risk was associated with lower cognitive function (52, 53), and other studies revealed the beneficial effects of coffee on reducing the risk of CVD (54, 55). We hypothesized that, since people with cognitive impairment tend to have a higher risk of CVD and coffee had a modifying effect on both cognitive impairment and CVD, coffee consumption tended to be more beneficial in people with cognitive impairment. Moreover, in the analyses of specific coffee and caffeine intake and CVD mortality conducted for cognitively impaired populations, a statistically significant association between coffee or caffeine and mortality was found only in those who showed cognitive impairment as measured by the AFT and DSST. We speculate that this discrepancy may reflect differences in the dimensions of cardiovascular disease progression in people with cognitive impairment in different cognitive domains; for example, decreases in executive function and memory may impair clinical communication and adherence and influence clinical decision making (56). Besides, given the strong and complex correlation between cardiovascular disease and cognitive function (57), this could also be due to the fact that people with a higher risk of developing cardiovascular disease are more likely to have impairments at the functional level as assessed by the AFT and DSST.
For the analysis of decaffeinated coffee, we found that older adults who consumed only decaffeinated coffee also showed a significantly lower risk of death compared to those who did not consume coffee, suggesting that chemicals other than caffeine may be involved in the mortality-ameliorating effect of coffee on older adults with cognitive impairment. Examples include the neuroprotective effects of chemicals such as niacin, trigonelline, and chlorogenic acid, as well as the antioxidant, blood sugar-lowering, and lipid-lowering functions of the phenolics and alkaloids contained in coffee (58). In some large cohort studies, decaffeinated coffee has been found to be associated with a reduced risk of all-cause or cardiovascular mortality (6, 59). In a meta-analysis, higher decaffeinated coffee consumers showed a significant reduction in all-cause mortality compared to lower decaffeinated coffee consumers (35). Our study further validated that this association persists in cognitively impaired older adults. However, for the population with cognitive impairment in the CERAD-WLT, CVD mortality was not significantly altered in the decaffeinated coffee-consuming population compared with the non-coffee-consuming population; we hypothesize that this is because people with cognitive impairment in the CERAD-WLT are more likely to have a poor prognosis for CVD, leading to the apparent ineffectiveness of decaffeinated coffee, although other unconsidered confounding factors may also have contributed to this result. As for the attenuated association between decaffeinated coffee and CVD death observed in the sensitivity analysis, this may be due to sampling bias present in the unweighted sample. The weighting procedure is a method of analysis suited to the complex sampling design of NHANES, which allows the analysis to more closely match the real US population.
Our study prospectively examined the associations between coffee consumption, cognitive performance, and mortality. Although previous studies observed that moderate coffee and caffeine intake reduced all-cause and cause-specific mortality (5), this study provided a more detailed classification of the population and focused on cognitive function. Our study featured several strengths. First, we conducted the study using the NHANES database, a large sample database based on the national population, with objective evaluation criteria, strict data entry, authentic death records, and comprehensive potential confounding factors. In addition, this was the first study to explore the association between coffee intake and mortality in a cognitively impaired group. Finally, coffee is a staple beverage of daily life, and while it is still controversial whether it can be considered a healthy food, our study may provide some implications for determining health guidelines for coffee intake.
Our study also had some limitations: (1) This study specifically targeted older adults (over the age of 60) in the United States and is not representative of the global population. (2) The dietary data were derived from two 24-h dietary recalls, which may not be sufficient to reflect long-term dietary habits; however, some studies suggest that two dietary recalls may be sufficient to assess daily dietary intake (60). (3) The cognitive tools we applied assessed only some cognitive functions and might not be representative of overall cognitive ability. (4) Considering the short follow-up period, the incidence of specific causes of mortality was relatively low, and associations with specific causes of death might not be observed reliably. (5) We only considered coffee consumption, whereas participants may have consumed other coffee-based products (e.g., candies or cakes filled with coffee) and caffeine-rich beverages other than coffee, which may have led us to underestimate the intake of caffeine and other coffee compounds for some participants.
Conclusion
In conclusion, this study found that older adults with cognitive impairment and without coffee-drinking habits had a higher risk of all-cause and CVD mortality compared to others. Furthermore, the association between coffee consumption and mortality differed for people with cognitive impairment in different cognitive domains. We hope that this study will provide some advice on coffee consumption patterns, some information for designing future clinical studies, and some direction for future research into the mechanisms by which coffee affects humans.
FIGURE 1: Flow chart of the selection process for selecting eligible participants.
CERAD-WLT, Consortium to Establish a Registry for Alzheimer's Disease Word Learning subtest; AFT, animal fluency test; DSST, digit symbol substitution test; CVD, cardiovascular disease; CKD, chronic kidney disease. 1 Data are represented as median (IQR). 2 Data are represented as number of subjects (weighted percentages).
TABLE 3: Association of decaffeinated coffee consumption with all-cause mortality and cardiovascular mortality among older adults with cognitive impairment. Models were adjusted for age, gender, race, PIR, educational levels, smoking status, drinking status, NHANES survey cycle, BMI, total energy intake, hypertension, diabetes, CKD, CVD, and cancer. None: no coffee consumption. Bold represents P-value < 0.05.
Breaking up the Wall: Metal-Enrichment in Ovipositors, but Not in Mandibles, Co-Varies with Substrate Hardness in Gall-Wasps and Their Associates
The cuticle of certain insect body parts can be hardened by the addition of metals, and because niche separation may require morphological adaptations, inclusion of such metals may be linked to life history traits. Here, we analysed the distribution and enrichment of metals in the mandibles and ovipositors of a large family of gall-inducing wasps (Cynipidae, or Gall-Wasps) (plus one gall-inducing Chalcidoidea), and their associated wasps (gall-parasitoids and gall-inquilines) (Cynipidae, Chalcidoidea and Ichneumonoidea). Both the plant types/organs where galls are induced and the galls themselves vary considerably in hardness, making this group of wasps an ideal model to test whether substrate hardness can predict metal enrichment. Non-galler, parasitic Cynipoidea attacking unconcealed hosts were used as an ecological "outgroup". With varying occurrence and concentration, Zn, Mn and Cu were detected in mandibles and ovipositors of the studied species. Zn tends to be exclusively concentrated at the distal parts of the organs, while Mn and Cu showed a linear increase from the proximal to the distal parts of the organs. In general, we found that most species having metal-enriched ovipositors (independently of metal type and concentration) were gall-invaders. Among gall-inducers, metals in the ovipositors were more likely to be found in species inducing galls in woody plants. Overall, a clear positive effect of substrate hardness on metal concentration was detected for all three metals. Phylogenetic relationships among species, as suggested by the most recent estimates, seemed to have a weak role in explaining metal variation. On the other hand, no relationships were found between substrate hardness or gall-association type and the concentration of metals in mandibles. We suggest that ecological pressures related to oviposition were sufficiently strong to drive changes in ovipositor elemental structure in these gall-associated Hymenoptera.
Introduction
The stiffness, hardness and thickness of arthropod cuticle are extremely variable [1], [2], and in certain species and body parts the cuticle can be reinforced by the addition of Zn, Mn or other elements [3], [4]. Metals and halogens have been found in the mandibles, chelicerae, stings, pedipalps, forcipules, leg claws and ovipositors, typically at the prone-to-wear cutting edges of these organs [3], [5], [6], [7], [8], [9]. The inclusion of such elements can greatly improve cuticle hardness. For example, removal of Zn from worm jaws decreases hardness by over 65% [10], and in ants [4] and termites [11] the hardness of the mandibular teeth correlates with Zn content. For other metals, such as Mn, which is quite common but found at minor concentrations, the effect on cuticle mechanical properties is still not clear [11]. Cuticle enriched by Zn and cuticle enriched by Mn are believed to differ in their mechanical properties [5], and so their differential use may indicate different functional roles.
Because resource partitioning and niche separation may require special adaptations, which often include morphological changes [12], [13], from an evolutionary point of view the inclusion of metals in the cuticle may be linked to life history traits. This hypothesis seems to hold in some studied cases. For example, the presence of harder mandibles in drywood termites seems to be related to a lack of access to free water with which to moisten wood, with Zn being rare or absent in termites able to moisten wood [14]; high concentrations of Zn and Mn were found in the mandibles of insect larvae that bore into seeds, but not in the mandibles of insect larvae that attack previously damaged seeds [15].
On the other side, there is some indication that the presence of metals in selected organs can be related to phylogenetic relationships, as in the case of mandibles of herbivorous insects: Mn is not found in the Orthoptera, Phasmatidae and Lepidoptera, while in Formicidae both Zn and Mn are present [16]. Within parasitic Hymenoptera (Parasitica), the cuticle of the ovipositor is sometimes reinforced by either Zn (Siricidae, Stephanidae) or Mn (Cynipoidea, Ichneumonoidea) or both (Megalyridae) [17], [6]. Quicke et al. [18] even reported Ca in the ovipositor tip of a few ichneumonoids. Thus, the strongest predictor of whether an organism contains metals may not be its behaviour or habitat, but whether or not other members of its family also use such elements [16], [19], [3], [7], [20].
Here, we analysed the distribution and concentration/enrichment of metals in the mandibles and ovipositors primarily of a large family of gall-inducing wasps (Hymenoptera: Cynipidae, or Gall-Wasps), and of some of their associated wasps (gall-parasitoids and gall-inquilines, together named here gall-invaders) (Hymenoptera: Cynipidae, Chalcidoidea and Ichneumonoidea). Cynipidae, a species-rich family of gall-inducing and gall-inquiline wasps with roughly 1400 described species, represents the second largest radiation of gall-inducing insects after gall midges (Diptera: Cecidomyiidae) [21], [22]. The gall-inducing cynipids form galls, morphological structures formed by plants in response to gall-inducing organisms, inside which the larvae develop [23], [22]. Galls induced by Gall-Wasps are morphologically complex and provide shelter and nutrition for the larvae, as well as protection from predators and parasitoids [24], [25]. Notably, species in the tribe Cynipini and a few species in the tribe Pediaspidini have complex cyclically parthenogenetic (heterogonic) life cycles (i.e. alternation of sexual and asexual generations), which in some cases also involve host plant alternation (heteroecy) [26]. The cynipid inquilines also have phytophagous larvae but cannot initiate gall formation on their own. Instead, their larvae develop inside the galls induced by other Gall-Wasps [27]. On the other hand, the Chalcidoidea and Ichneumonoidea associated with galls are mainly parasitoids of larvae of Cynipidae [28], [29], [30] and other insects (e.g. [31]). The genus Aditrochus, here studied, belongs to Chalcidoidea but induces galls.
Selected species of parasitic groups of Cynipoidea (Figitidae) phylogenetically related to Cynipidae, but not associated with galls, were also analysed. These selected figitid species are endoparasitoids of other insects not concealed in a substrate [32], and served here as a sort of ''ecological outgroup''. The genus Parnips, also here studied, is unique among Figitidae, being a parasitoid in cynipid galls (it is thus a gall-invader).
Patterns of metal incorporation in Gall-Wasps and their associated wasps have scarcely been investigated to date [6].
Here we tested the hypothesis that metal incorporation is more likely to occur (and at higher concentrations) in wasp species ovipositing in harder substrates. This hypothesis includes two predictions. First, species primarily associated with galls as inquilines or parasitoids (gall-invaders), which oviposit in galls, typically thick and often hard structures, may require greater incorporation of metals in the ovipositors compared with gall-inducing species, which oviposit in plant tissues, and with non-gall parasitoids. Second, because gall-inducers oviposit in plant types and organs differing in hardness (from herbs to trees, and from leaves and flowers to buds and roots), we expected a positive relationship between the hardness of the plant substrate and ovipositor metal enrichment. In addition, we also tested the hypothesis that the hardness of the emergence substrate (i.e. the gall for most taxa in our sample, plus larval cuticle for non-gall parasitoids) positively correlates with mandible metal enrichment.
Selected Taxa for Study
Females of 43 species of Gall-Wasps covering the eight described cynipid tribes and the main genera of Cynipidae, seven species of Chalcidoidea and Ichneumonidae (all but one acting as gall-parasitoids and one as a gall-inducer), and six species of Figitidae (five not associated with galls and one acting as a gall-parasitoid) were investigated (Table 1). For heterogonic species, either sexual or asexual forms (both forms for two species) were used. The studied gall-associated taxa were selected to represent, on one side, all the main lineages of gall-inducers (Cynipidae) spanning a wide range of biologies (e.g. plant type, gall structure) and, on the other side, the taxonomic and biological diversity of gall-invaders (inquilines and parasitoids) (Table 1). Mandibles and ovipositors were not both available for all 56 species: in particular, mandibles were not studied for 3 species, and the ovipositor was not studied for 6 species and the asexual form of one species. As in [6], we preferred to examine one or a few individuals of closely related taxa with similar biology rather than many representatives of the same species. Overall, a total of 86 females (1.46 ± 0.6 per species on average) were studied. Voucher specimens are deposited at the Museo Nacional de Ciencias Naturales (CSIC) (Madrid, Spain).
For all species except two collected in Chile, no specific permissions were required for the locations/activities, since collections were done in non-protected areas. The two species from Chile were collected in the Reserva Nacional Los Queules, and the permit for such collection was issued by the Corporación Nacional Forestal (CONAF). The field studies did not involve endangered or protected species.
Scanning Electron Microscopy (SEM) and Energy Dispersive X-ray Spectroscopy (EDS)
The micromorphology, topography, distribution and detection of metals were determined using a Philips FEI INSPECT scanning electron microscope (SEM; Hillsboro, Oregon, USA) at the Museo Nacional de Ciencias Naturales (CSIC). To obtain comparable results, we always worked in a high-vacuum mode with a backscattered electron detector (BSED) under vacuum conditions of 30 Pa, a high voltage of 20 kV, a beam spot diameter suitable for the particular magnification and for achieving good focus and astigmatism correction, and a working distance of approximately 10 mm to the detector. The X-ray energy microanalysis (EDS) of the samples and the line-scan analysis were conducted with an energy-dispersive X-ray spectrometer (INCA Energy 200 energy dispersive system, Oxford Instruments), as previously done in similar works [15], [33].
Females were dissected under light microscopy, and the excised ovipositors and mandibles were gold-coated after mounting on adhesive carbon pads attached to aluminum stubs. The hymenopteran ovipositor proper consists of three valves, with one upper valve and a pair of lower valves; together these form the egg canal [34]. The ovipositor sheaths (i.e. the third valvae, primarily serving to protect the ovipositor proper [34]) were not considered in this study since they do not enter the substrate during oviposition. For the few specimens coming from the Museum collection, we introduced the whole, non-gold-coated individuals into the SEM.
The semi-quantitative analysis allowed us to establish not only which elements were present but also the concentration of each element, which required an accurate intensity measurement for each peak in the spectrum [35]. We used the maximum peak intensities obtained by a least-squares fitting routine that used standard peaks correlated to a spectrum of known compounds [36]. After these intensities were determined, matrix corrections were applied [37] to determine the concentration of each element. This correction method uses approximated exponential curves and the φ(ρz) model to describe the shape of the curves. In this way, improved measurements can be obtained for light elements in a heavy-element-rich matrix and for samples tilted in the direction of the incident electron beam [37].
The correction factors are dependent on the sample composition (which is the object of our analysis), so the actual concentrations must be derived using an iterative procedure [38]. Apparent concentrations are then used to calculate correction factors and make more "accurate" estimates of the concentrations. After successive iterations, concentrations that are accurate to approximately 0.01% can be achieved [38].

Table 1. List of the studied species (with number of individuals in brackets), together with biological information and ranks for metal concentrations in mandibles and in ovipositors.
To calculate the statistical error in the concentration, the weight percentage of the sigma value should be used to determine whether the element is below the detection limits of the sample analysis. We were conservative in this study and used a stricter condition that requires an element's weight percentage to be greater than three times the weight percentage of the sigma value resulting from the analysis [39].
The Smart Map application was used to collect and store X-ray data for the production of line-scans and quantification. The analytical conditions for the 0-20 keV spectral range were optimized for the best average spectra: process time of 5, resolution of 128 × 112 pixels, dwell time of 6000 ms and live acquisition time of 300 s.
For each specimen, we first performed a point analysis, in which the metal concentration was obtained at one point on the distal part of the organ (mandible tooth point and ovipositor distal point) and at one point on the inner part of the organ (i.e. a more basal position) (Fig. S1). Then, we performed a line-scan analysis to study the metal concentration along a line starting from the distal point of the point analysis and ending at about 50-800 µm (depending on the size of the organ and its position) on the inner side (Fig. S1). In the line-scan analysis, the concentrations are calculated by averaging the values recorded across the line, thus giving an overall metal enrichment in the cuticle. In addition, from the raw line-scan data it was possible to analyse the dispersion pattern across the line (distance). In this way, we could estimate whether the metal concentration changes along the line (e.g. whether it increases from the inner point to the outer point, the distal cutting/drilling part of the organ) and with which shape.
Since the analytical method of metal detection is semi-quantitative, we ranked all the obtained values [6]. In the mandibles, Zn, which can be very abundant (see Results), was ranked as 1 (<5 wt%), 2 (5.1-10 wt%), 3 (10.1-20 wt%) or 4 (>20.1 wt%); in the ovipositor, Zn was much less abundant and was ranked as 1 (<0.3 wt%), 2 (0.31-0.6 wt%), 3 (0.61-2 wt%) or 4 (>2 wt%). In both mandibles and ovipositors, Mn and Cu were ranked using the same thresholds as Zn in the ovipositor. We ranked both the concentrations recorded with the distal point analysis and those recorded with the line-scan. If a significant pattern of increase in the line-scan was detected and resulted in a higher rank at the distal point than the rank of the average concentration observed with the line-scan, we used the former rank (distal point) in the statistical analysis.
Statistical Analysis
Although the few individuals studied per species limit conclusions about intra-specific variability, an exploration of the ranked metal values in the studied specimens strongly suggests that this variability is small. In fact, the recorded metal concentrations always fall in the same ranks for individuals belonging to the same species. Thus, species were treated as single points in the statistical analysis and individuals were not considered. Separate analyses were performed for mandibles and ovipositors.
To study whether metal concentrations increase from the inner to the distal part of the organs, the curves obtained by the line-scan method were fitted to either a linear model or a sigmoid model, with model significance tested with linear or non-linear regressions. We report all the significant regressions (P < 0.05), but because the large sample size in these analyses (i.e. the points along the line-scan, n > 100) tended to give significant regressions at low R² [40], we also highlight in particular those in which the distance from the tip of the organ explains at least half of the variance in metal concentration (R² ≥ 0.50) (Table S2).
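As an illustration of this model-fitting step, a minimal R sketch with simulated line-scan data is given below; the data, the starting values, and the four-parameter logistic form of the sigmoid are assumptions chosen only for illustration.

```r
# Minimal sketch: fit linear and sigmoid models to one line-scan profile.
# 'dist' is distance from the organ tip and 'conc' the metal concentration
# (wt%); both are simulated stand-ins for a real EDS line-scan.
set.seed(1)
linscan <- data.frame(dist = seq(0, 400, by = 2))
linscan$conc <- 5 / (1 + exp((linscan$dist - 150) / 40)) +
  rnorm(nrow(linscan), 0, 0.2)

fit_lin <- lm(conc ~ dist, data = linscan)  # linear model

# Four-parameter logistic (sigmoid) fitted by non-linear least squares;
# the starting values are rough guesses and would need tuning per profile.
fit_sig <- nls(conc ~ lo + (hi - lo) / (1 + exp((dist - mid) / scal)),
               data = linscan,
               start = list(lo = 0, hi = 4, mid = 120, scal = 30))

summary(fit_lin)  # regressions reported when P < 0.05; R^2 >= 0.50 highlighted
summary(fit_sig)
```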
To explore the dissimilarity among species based on the concentrations of the different metals, we performed a hierarchical cluster analysis, which finds relatively homogeneous clusters of cases based on measured characteristics (in this study, the rank values of the metal concentrations) [41]. The cluster analysis was performed with Ward's method based on the Euclidean distance (dissimilarity) between pairs of objects. This analysis also reports the dissimilarity value (truncation) that suggests how many clusters best suit the data.
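A minimal R sketch of this clustering step, using a simulated species-by-metal rank matrix as a stand-in for the real data:

```r
# Sketch: Ward's hierarchical clustering of species on their metal rank
# profiles; 'ranks' is a simulated species-by-metal matrix (0 = absent,
# 1-4 = concentration rank) standing in for the real data.
set.seed(2)
ranks <- matrix(sample(0:4, 55 * 3, replace = TRUE), nrow = 55,
                dimnames = list(paste0("sp", 1:55), c("Zn", "Mn", "Cu")))

d  <- dist(ranks, method = "euclidean")   # pairwise dissimilarities
hc <- hclust(d, method = "ward.D2")       # Ward's minimum-variance method
plot(hc)                                   # dendrogram; pick truncation level
groups <- cutree(hc, k = 3)                # e.g. the three groups reported here
table(groups)
```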
The ordinal nature of our independent variables (metal concentration ranks) did not allow us to apply classic binary logistic regressions and standard linear regressions to test for an association between substrate hardness and metal concentration. Thus, we used statistics appropriate for ordinal and binary variables.
Spearman's rank correlation test was used to look for associations between the concentrations (ranks) of the different metals across species.
To study whether metal enrichment is linked to species ecology (substrate hardness), we performed ordinal regression analysis (probit model), commonly used for predicting an ordinal variable [42]. As with classical logistic regression, these models estimate a chi-square (i.e., likelihood ratio) statistic which compares the deviance of the full model to the deviance of the baseline or null model. For both organs, one model was fitted per metal.
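A hedged R sketch of one such model, using MASS::polr with a probit link on simulated stand-in data; the likelihood-ratio chi-square is computed against the null model as described above.

```r
# Sketch: ordinal (probit) regression of metal rank on substrate hardness,
# with a likelihood-ratio chi-square against the null model; data simulated.
library(MASS)

set.seed(3)
dat <- data.frame(
  metal_rank = factor(sample(0:4, 50, replace = TRUE), ordered = TRUE),
  hardness   = sample(1:4, 50, replace = TRUE)
)

fit  <- polr(metal_rank ~ hardness, data = dat, method = "probit", Hess = TRUE)
null <- polr(metal_rank ~ 1, data = dat, method = "probit", Hess = TRUE)

lr_chisq <- null$deviance - fit$deviance      # likelihood-ratio statistic
pchisq(lr_chisq, df = 1, lower.tail = FALSE)  # P-value for the hardness term
```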
Substrate hardness refers to different concepts depending on whether emergence or oviposition is considered. For the mandibles, the substrate to be dug through during adult emergence is the gall or, in the case of non-gall parasitoids, the host body. This emerging substrate was ranked as 1 (host larval cuticle), 2 (soft, juicy galls), 3 (dry, hard galls without a woody external layer) and 4 (very hard galls with a woody external layer). For the ovipositor, the substrate to be drilled during egg-laying is the plant tissue (for gall-inducers), the gall (for gall-invaders), or the larval host body (for non-gall parasitoids). This oviposition substrate was ranked as 1 (host larval cuticle), 2 (mostly herbaceous plants), 3 (mostly woody plants and relatively soft/immature galls) and 4 (hard mature galls). This ranking of substrate hardness was based on previously published information on the biology of the studied species (mainly [43] and references cited therein) (see Table S1).
Trait Mapping on Phylogeny
To map our results on a phylogeny of the studied species, we drew an intuitive "handmade" phylogenetic tree based on the combined molecular and morphological phylogenetic analyses available in recent works [44], [45], [46], [47], [48], [49], [50] and on more recent unpublished results obtained in an on-going study in which one of the authors of the present paper (JLN-A) is involved. Relationships among the studied species of Gall-Wasps (Cynipidae) were mostly based on [44], which performed an analysis based on three molecular markers; that study included all the tribes except the small Qwaqwaiini and Paraulacini, represented in our study by one species each. The phylogenetic position of Cecinothofagus gallaelenga (Paraulacini: basal to Cynipini+Synergini+Aylacini+Qwaqwaiini) was inferred after [48] and unpublished results. For the phylogenetic position of Qwaqwaia scolopiae (Qwaqwaiini: basal to Cynipini) we referred to unpublished evidence. Additional information for some genera and species of Cynipini and Synergini (Cynipidae) not included in [44] was retrieved from later published molecular analyses [46], [47]. The phylogenetic position of the parasitic groups of Cynipoidea (Figitidae) was derived from recent combined molecular and morphological studies [44], [45], and the position on the tree of the Chalcidoidea and Ichneumonidae was also based on recently published results [49], [50]. It should be noted that, although most of the depicted relationships are based on congruent results obtained with morphology and genetics (all the relationships among superfamilies and within Figitidae and Chalcidoidea), some relationships within Cynipidae look different when using morphological evidence only (discussed in [44]). In particular, Synergini appear to be monophyletic in the morphological analysis and polyphyletic in the molecular analysis (as we draw here). In addition, the morphological evidence suggests a basal position of herb-gallers (Aylacini) over wood-gallers, rather than placing non-Quercus wood-gallers (Diplolepidini, Eschatocerini and Pediaspidini) as the more ancestral tribes (as we draw here following the molecular analysis). On the other hand, the Aylacini appear polyphyletic or paraphyletic in both analyses. Thus, the tree we built could not be used to correct our results directly for common ancestry, but it was useful to roughly appreciate the relationships between phylogeny, metal occurrences and life-history traits, also taking into account the possible alternative tree topologies.
To quantitatively reinforce the visually suggested relationships among metal concentrations, substrate hardness and phylogeny, for the ovipositor only (since no significant models were detected for mandibles, see Results), we first performed the following comparisons of metal ranks (Mann-Whitney test): gall-invaders vs. gall-inducers (within Cynipidae), gall-invaders (within Cynipidae) vs. gall-invaders (outside Cynipidae), and non-gall parasitoids (thus outside Cynipidae) vs. gall-inducers (all groups). If the effect of phylogeny is weak, we expect larger differences in the first comparison than in all the other comparisons. Second, we performed the following comparisons of metal ranks: gall-inducers in substrate ranked 2 vs. gall-inducers in substrate ranked 3 (within Cynipidae), and gall-invaders in substrate ranked 3 vs. gall-inducers in substrate ranked 3 (all groups) (we used these hardness ranks because of the greater sample size). If the effect of phylogeny is weak, we expect larger differences in the first comparison than in the second one.
Mandibles
We found Zn, Mn and Cu in mandibles (Table 1). Zn was present in all species (Fig. 1) with generally high concentrations: only 12 species out of 55 showed Zn falling in the lowest rank (<5 wt%), and more than half of the species (31) showed Zn >10.1 wt% (ranks 3-4) (Table 1). The abundant Zn is even visible in SEM images of mandibles, in which a clearly whiter area is recognizable at the outer margins of the teeth (Fig. 2). This pattern is confirmed by the elemental analysis, which invariably showed null values at the inner point and a sigmoid increase from the inner to the outer part of the teeth (increasing the rank) (Fig. S2A and Table S2). Mn was found in about half of the species (28) (Fig. 1), with concentrations mainly (18 spp.) falling in the lowest rank 1 (<0.3 wt%) (Table 1). Mn was found to increase in concentration from the inner to the outer part of the mandible teeth in about half of the cases; when this occurred, it increased following a linear trend (Fig. S2B and Table S2). Cu occurred in 26 species (Fig. 1), with concentrations in most cases (21 spp.) below 0.6 wt% (ranks 1-2) (Table 1), and increased (linearly) in concentration towards the tip of the teeth in all these species (Fig. S2C and Table S2). For both Mn and Cu, the concentration at the inner point was null or extremely low and not higher than the error (see Methods), while their concentrations measured at the distal point and from the line-scan fell in the same rank.
Across species, higher concentrations of Zn were associated with higher concentrations of Cu (Spearman's r = 0.35, n = 55, P = 0.009), and higher concentrations of Cu were associated with higher concentrations of Mn (Spearman's r = 0.36, n = 55, P = 0.007).
The cluster analysis depicted a dendrogram in which three main groups can be separated (Fig. S3A). The group most dissimilar to the others (group 1) included mostly (10 out of 15) species of gall-inducing wasps. The other two groups, much more similar to each other, each included about half of the gall-invading species, mixed with gall-inducing and non-gall species. Thus, it seems that the gall-association type does not account for species grouping. This was confirmed by the probit model analysis, since no regressions of the metal ranks against substrate hardness were significant (Table 2).
Ovipositors
We found Zn, Mn and Cu in the ovipositor, with different occurrence and concentration (Fig. 3, Table 1). Metals were only found in the lower valvae (Fig. 4). Zn was present in only 7 species, all but two gall-invaders (Fig. 3), mainly at low concentrations (<0.3 wt%, 4 species) (Table 1). In Chalcidoidea only (four species), Zn was concentrated at the tip of the ovipositor compared to its inner part, following a sigmoid pattern of increase (increasing the rank) (Fig. S2E and Table S2). Mn was found in 20 species (Fig. 3), with concentration mainly (14 spp.) >0.6 wt% (ranks 2-4) (Table 1). Mn was found to increase in concentration from the inner to the outer part of the ovipositor, in a linear fashion but not increasing the rank, in 14 of these species (11 of them Cynipidae) (Fig. S2F and Table S2). Cu occurred in 14 species (Fig. 3), with concentration in most cases (9 spp.) below 0.3 wt% (rank 1) (Table 1), and increased linearly towards the tip of the organ in five species, four of them gall-invaders (Fig. S2G and Table S2).
The cluster analysis depicted a dendrogram in which three main groups can be separated (Fig. S3B). The more distant group (group 1) included most of the gall-invading species (12 out of 16). Group 2 included many gall-inducers and all the non-gall parasitoids, while group 3 was composed of two species, one gall-invader and one gall-inducer both in the Chalcidoidea.
In the probit model analysis, the Zn, Mn and Cu ranks all increased with oviposition substrate hardness (Table 2). The effect of common ancestry on the observed variation seems to be weak. First, the differences in metal ranks between gall-invaders and gall-inducers within Cynipidae were significant, but they were not significant in the other contrasts (Table S3). Second, within Cynipidae, gall-inducers in substrate ranked 2 had less Zn and Mn than gall-inducers in substrate ranked 3, while gall-invaders and gall-inducers in substrate ranked 3 (all groups) did not differ in Zn, Mn and Cu concentrations (Table S3). Several observations on the map of metal occurrence across the phylogenetic tree also suggest an overall weak role of common ancestry in the observed variability (see Discussion).
Discussion
The present study is the first to deal in detail with the occurrence and concentration of metals in the mandibles and ovipositors across a large sample of a species-rich family of Hymenoptera (Cynipidae) whose members vary markedly in important life-history traits (from wood-gallers, to herb-gallers, to gall-invading inquilines) [23], [22], [51]. In addition, the inclusion in the analysis, on one side, of species of non-gall cynipoid parasitoids and, on the other side, of gall-invading wasps belonging to more distant hymenopteran groups (Chalcidoidea and Ichneumonoidea) made it possible to test the hypothesis that life-history traits selected for the evolution of metal incorporation. We showed that greater oviposition substrate hardness significantly accounts for variability in metal inclusion in the ovipositor of gall-associated Hymenoptera, and that, overall, phylogenetic relationships probably have a weak effect on such variability, even taking into account the few alternative tree topologies (see below). This would partially contrast with the view that the occurrence of metals is mainly dependent on shared ancestry [20], in particular when looking at large (across families or orders) scales [6], [20]. The very few studies carried out within families, however, showed contrasting results. For example, Zn and Mn enrichment have been found in the mandibles of species of Formicidae which range widely in both habitat and feeding behaviour [16], [3]; on the other hand, the ability to moisten wood predicts the presence of Zn in termite lineages [14]. In Hymenoptera other than ants, the effect of phylogeny on metal enrichment within families was unclear to date, because the large survey of Quicke et al. [6] included many families but very few species per family.
We have, interestingly, shown that within-family variation in metal enrichment patterns occurs in Hymenoptera and relates to ecological traits. However, this seems to be true for the ovipositor only, since no links between life-history traits and variability of metal inclusion were observed in the mandibles of our sample. For example, Zn was present in the mandibles of all of our studied species, as well as in all the other hymenopterans analysed to date, with the exception of two aculeate wasps, two symphytan wasps, and only one member of Parasitica (Proctotrupoidea) [6]. Looking at the most recent superfamily-level phylogeny of Hymenoptera [48], it seems that Zn-enrichment appeared once in Symphyta, when the so-called Unicalcarida separated from the rest of the group, and was then conserved, in Apocrita, across all groups with only two apparent exceptions: one in the Pelecinidae (Parasitica: Proctotrupoidea) and one in Vespidae (Aculeata: Vespoidea).
Mn, on the other hand, is more rarely found in mandibles and it seems to have appeared twice within the so-called Proctotrupomorpha (within Parasitica): once in Cynipoidea and once in Chalcidoidea ( [6], [48], this study). Within these groups, however, Mn may have been lost in some lineages, as in the genus Andricus.
Cu was detected in the mandibles of about half of the species, though generally at low concentrations. This result is interesting because Cu has very rarely been reported in insects and, until now, in no hymenopterans. Adult mandibles of two termite species also contain Cu [14], [52]. Cu was found in internal organs of fruit flies, but not in the cuticle [53].
An association between metal inclusion and life-history traits seemed clearer for the ovipositor. Other aspects of hymenopteran ovipositor structure and morphology have already been shown to be under selection imposed by ecological pressures [34], [54], [55], [56]. For example, the extent of sclerotisation of the ovipositor tip in fig wasps matched the force required to penetrate the syconium at the time of oviposition of each species [57]. Within Cynipoidea, all Figitinae and Eucoilinae that attack semi-concealed dipterous hosts were found to possess the so-called ovipositor clip (an adaptation for gripping host larvae during oviposition), while figitids that attack fully concealed hosts all lacked it [58]. Concerning metals, Quicke et al. [6] suggest at least seven independent acquisitions within the order, and within at least some superfamilies (e.g. Chalcidoidea, Ichneumonoidea) metal occurrence may correlate broadly with the way the ovipositor is used and the nature of the oviposition substrate. Thus, Zn is present in the ovipositor of species drilling deep into wood and not in related non-drilling species (e.g. siricids vs. xiphidriids within Symphyta). Similarly, no metals were found in the ovipositors of taxa that do not use their ovipositors to drill through any substrate [6]. However, some exceptions appeared: for example, the pimpline wasp Dolichomitus attacks hosts that are concealed in hard wood, but no metal was found in its ovipositor [6], though this may be because such wasps insert the ovipositor through preexisting tunnels in the wood [59].
Here, we showed that drilling in galls, rather than in "normal" vegetation tissues or in unconcealed host larvae, also favoured metal inclusion in the ovipositor of Hymenoptera. Because, at a large scale, the distribution of gall-invading behaviour does not vary greatly across the alternative phylogenetic trees, the overall association between metal inclusion and this life-history trait seems robust. For example, groups as distant as parasitoid Chalcidoidea and inquiline Synergini (Cynipoidea) have more metals, while in the non-gall Cynipoidea metals are basically absent. Considering the three studied metals separately, the picture still seems valid, though with some exceptions which could suggest certain effects of phylogeny within families. For example, most of the species having Zn and/or Cu in the ovipositor are gall-invaders (spanning three superfamilies), but it is also true that both gall-invading and gall-inducing pteromalids have these metals. On the other hand, among gall-inducers, Zn and Cu were almost exclusively associated with harder substrates, supporting our hypothesis.
Mn was rarely found in association with Zn (two cases here, and one species of Megalyridae observed by [6]) and in several cases it was found in association with Cu. Though the role of Mn in hardening the insect cuticle is still debated [11], this metal could help in drilling or cutting the substrate, in particular in cases in which the organ follows preexisting gaps and cracks [6]. In the study of Quicke et al. [6], all the hymenopteran taxa having Mn-enriched ovipositor tips attack hosts concealed within a hard substrate, supporting this hypothesis. However, only some of the taxa analysed by Quicke et al. [6] attacking concealed hosts have metal-hardened ovipositors; in addition, in our study Mn was also present in some gall-inducers, in particular within Cynipini, which, however, have to drill through woody (thus harder) substrates compared with herbs (only one herb-galler had Mn). In support of the hypothesis that Mn inclusion co-evolved with substrate hardness, our results suggest that the observed variability in Mn depends only weakly on phylogeny, even within families. This is supported in particular by two observations, and possibly an additional one (depending on the phylogenetic tree taken into account). First, Parnips nigripes, the only gall-invading figitid, is also the only figitid in our sample to have Mn. Second, a look at the tree built here [48] shows that Aylacini (herb-gallers) is composed of two different lineages in practice lacking Mn, one of them more closely related to the wood-galler cynipids (which have Mn in various species). The morphology-based relationships, in which the Aylacini would be basal to wood-gallers within Cynipidae, would not alter the conclusion that an effect of substrate hardness exists, since in both phylogenetic scenarios herb-gallers would have lost metals previously acquired by either gall-invaders or wood-gallers. The third observation, which may support a link between substrate hardness and metal inclusion, concerns the inquiline Gall-Wasps (Synergini), but only if the relationships depicted by the three-gene molecular analysis are considered the most probable. In fact, following this phylogenetic hypothesis, the Synergini are polyphyletic (including two lineages, respectively closer to the two groups of herb-gallers) and would have acquired metals twice independently (with Mn present in all except one case). However, following the morphological evidence alone, Synergini is a monophyletic group, which does not allow hypotheses about independent associations between inquilinism and metal inclusion in the ovipositor.
Conclusions
Overall, the mandibles and ovipositors of Gall-Wasps and gall-associated Hymenoptera are variably characterized by metal inclusion. While for the mandibles this trait does not seem to have evolved under particular ecological pressures (emerging substrate hardness), for the ovipositor such a relationship was clearer. First, the presence of Zn, Mn, Cu or their variable combination is more likely to occur in species which penetrate galls; second, the hardness of the substrate may have affected the evolution of metal allocation patterns so as to optimize such an "arm" directly linked with reproductive success. In particular, higher Mn levels would be basically associated with both woody (harder) vegetation substrates and gall substrates (since Mn was most common among Cynipini and gall-invaders). Further studies devoted to investigating patterns of metal occurrence in the still unexplored groups of Hymenoptera (e.g. most Aculeata), together with a robust and large phylogeny covering all major subfamilies/tribes within the order, would greatly help to understand the evolutionary history and the adaptive significance of metal enrichment.

Figure 3. Occurrence of metals in the ovipositor of the studied species, mapped on a phylogenetic tree derived from recent literature and unpublished data (see Methods). Species for which the ovipositor was not studied have their names in grey (for A. quercusradicis, data were available only for the sexual form). * Mn is present in the sexual form, while no metals were detected in the asexual form. doi:10.1371/journal.pone.0070529.g003

Supporting Information

Table S1. Details of gall/substrate for each of the studied species, its rank, and references for hardness ranking. Species are listed in alphabetic order. Gall descriptions in the "Emerging site" column refer to the mature gall. "-" indicates that for that species the organ used to emerge (mandibles) or to oviposit (ovipositor) was not analysed. (DOC)
"year": 2013,
"sha1": "99b4aaf541dd9b32241584d3464509458ebe30bd",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0070529&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "99b4aaf541dd9b32241584d3464509458ebe30bd",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Bayesian additive regression trees and the General BART model
Bayesian additive regression trees (BART) is a flexible prediction model/machine learning approach that has gained widespread popularity in recent years. As BART becomes more mainstream, there is an increased need for a paper that walks readers through the details of BART, from what it is to why it works. This tutorial is aimed at providing such a resource. In addition to explaining the different components of BART using simple examples, we also discuss a framework, the General BART model, that unifies some of the recent BART extensions, including semiparametric models, correlated outcomes, statistical matching problems in surveys, and models with weaker distributional assumptions. By showing how these models fit into a single framework, we hope to demonstrate a simple way of applying BART to research problems that go beyond the original independent continuous or binary outcomes framework.
BART has also been extended to survival outcomes (Bonato et al., 2011; Sparapani et al., 2016), multinomial outcomes (Agarwal et al., 2013), and semi-continuous outcomes. In the causal inference literature, notable papers that promote the use of BART include Hill (2011) and Green and Kern (2012). BART has also been consistently among the best performing methods in the Atlantic causal inference data analysis challenge (Hill, 2016; Hahn et al., 2017; Dorie et al., 2017). In addition, BART has been making inroads in the missing data literature. For the imputation of missing covariates, Xu et al. (2016) proposed a way to utilize BART for the sequential imputation of missing covariates, while Kapelner and Bleich (2015) proposed to treat missingness in covariates as a category and set up the splitting criteria so that the eventual likelihood in the Metropolis-Hastings (MH) step of BART is maximized. For the imputation of missing outcomes, Tan et al. (2018a) examined how BART can improve the robustness of existing doubly robust methods in situations where it is likely that both the mean and propensity models could be misspecified. Other recent attempts to utilize or extend BART include applying BART to quantile regression, extending BART to count responses (Murray, 2017), using BART in functional data (Starling et al., 2018), applying BART to recurrent events, identifying subgroups using BART (Sivaganesan et al., 2017; Schnell et al., 2016, 2018), and using BART as a robust model to impute missing principal strata to account for selection bias due to death (Tan et al., 2018).
The widespread use of BART has resulted in many researchers starting to use BART as the reference model for comparison when proposing new statistical or prediction methods that are flexible and/or robust to model misspecification. A few recent examples include Liang et al. (2018), Nalenz and Villani (2018), and Lu et al. (2018). This growing interest in BART raises the need for an in-depth tutorial paper on the topic to help researchers who are interested in using BART better understand the method they are using and diagnose likely problems when unexpected results occur. The first portion of this paper is aimed at addressing this.
The second portion of our work revolves around an interesting observation about four works extending BART beyond the original independent continuous or binary outcomes setup. These papers extend BART to semiparametric situations (Zeldow et al., 2018), correlated outcomes (Tan et al., 2018b), statistical matching in surveys (Zhang et al., 2007), and robust error assumptions (George et al., 2018). Although these papers were written separately, they surprisingly share a common feature in their framework. In brief, when estimating the posterior distribution, they subtract a latent variable from the outcome and then model this residual as BART. This idea, although simple, is powerful because it can allow researchers to easily extend BART to problems they may face in their dataset without having to rewrite or re-derive the Markov chain Monte Carlo (MCMC) procedure for drawing the regression trees in BART. We summarize this idea in a framework unifying these models that we call the General BART model. We suggest how the priors could be set and how the posterior distribution could be estimated. We then show how General BART is related to these four models. We believe that presenting our General BART model framework and linking it with the models in these four papers as examples will aid researchers who are trying to incorporate and extend BART to solve their research problems.
Our in-depth review of BART in Section 2 focuses on three commonly asked questions regarding BART: What gives BART flexibility? Why is it called a sum of regression trees?
What are the mechanics of the BART algorithm? In Section 3, we demonstrate the superior performance of BART compared to Bayesian linear regression (BLR) when data are generated from a complicated model. We then describe the application of BART to two real-life datasets, one with continuous outcomes and the other with binary outcomes. Section 4 lays out the framework for our General BART model, which allows BART to be extended to semiparametric models, correlated outcomes, surveys, and situations where a more robust assumption for the error term is needed. We then show how our General BART model is related to these four BART extension models. In each of these descriptions, we explain how the prior distributions are set and how the posterior distribution is obtained.
We conclude with a discussion in Section 5.
Bayesian additive regression trees
We begin our discussion with the independent continuous outcomes BART because this is the most natural way to explain BART. We argue that BART is flexible because it is able to handle non-linear main effects and multi-way interactions without much input from researchers. To demonstrate how BART handles these model features, we explain using a visual example of a regression tree. We then illustrate the concept of a sum of regression trees using a simple example with two regression trees. We next show how a sum of regression trees links with non-linearity. To show how BART determines these non-linear main and multi-way interaction effects automatically, we discuss two perspectives. First, we provide a visual and detailed breakdown of the BART algorithm at work using a simple example, providing intuition for each step along the way. Then, we provide a more rigorous explanation of the BART MCMC algorithm by discussing the prior distributions used for BART and how the posterior distribution is calculated. Finally, we show how these ideas can be extended to independent binary outcomes.
Formal definition
We begin with the formal definition and notation of BART. Suppose we have a continuous outcome Y and p covariates X for n subjects. The goal is a model that can capture complex relationships between X and Y, with the aim of using it for prediction. BART attempts to estimate f(X) from models of the form Y_i = f(X_i) + ε_i, where, for now, ε_i ∼ N(0, σ²), i = 1, ..., n. To estimate f(X), a sum of regression trees is specified as

f(X) = Σ_{j=1}^m g(X; T_j, M_j).    (1)

In Equation (1), T_j is the j-th binary tree structure and M_j = {μ_1j, ..., μ_{b_j j}} is the vector of terminal node parameters associated with T_j. Note that T_j contains the information on which covariate to split on, the cutoff value at an internal node, and where the internal node is located in the binary tree. The constant m is usually fixed at a large number, e.g., 200.
We will next make clearer what is meant by T_j and M_j, and also how this leads to extremely flexible models. We begin with the simple case of a single regression tree.
Single regression tree
To understand BART, consider first a single regression tree, rather than a sum of trees.
For now we will assume that the tree is known and just focus on how to interpret it and obtain predictions from it. Later, we will describe the priors on these trees since the true tree structure is usually unknown.
Consider the regression tree g(X; T_j, M_j) given in Figure 1. Imagine that we have covariates X_i = (X_i1, ..., X_i5) and we would like to know E(Y_i|X_i) for subject i. Each place where the tree splits is called a node. At the top node (root), there is a condition X_i2 < 100. If X_i2 < 100 is true, then we follow the path to the left, otherwise to the right. Assuming that X_i2 < 100 is true, we arrive at a node which is not split upon. This is called a terminal node, and the parameter μ_1j = 1.19 would be used as the predicted value of Y_i. Suppose instead that X_i2 < 100 is not true. Then, moving along the right side, another internal node with condition X_i4 < 200 is encountered. This condition would be checked and, if it is true (false), we would follow the path to the left (right). This process continues until we reach a terminal node, and the parameter μ_kj in that terminal node is assigned as the predicted value of Y_i. Note that μ_kj is the mean parameter of the k-th node of the j-th regression tree. So, for example, a subject i with X_i1 = 30, X_i2 = 120, X_i3 = 115, X_i4 = 191, and X_i5 = 56 would be assigned a predicted outcome of μ_2j = 2.37. The prediction would be exactly the same for another subject i' who instead had covariates X_i'1 = 130, X_i'2 = 135, X_i'3 = 92, X_i'4 = 183, X_i'5 = 10.

Figure 1: Example of a regression tree g(X; T_j, M_j), where μ_kj is the mean parameter of the k-th node of the j-th regression tree.
In summary, we can view a regression tree as a function that assigns the parameter μ_kj as the conditional mean of Y_i, i.e. E(Y_i|X_i) = g(X_i; T_j, M_j) = μ_kj. Note that we have not yet discussed how a tree is created and how uncertainty about what to split on and where to split is quantified. We will address that when we introduce the priors and algorithms.
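To make the routing logic concrete, the tree of Figure 1 can be written as a small R function; this is a minimal sketch, and the two terminal nodes not named in the text (labelled μ_3j and μ_4j below) and their values are assumptions for illustration.

```r
# Minimal sketch of the Figure 1 tree as an R prediction function; the
# values of mu_3j and mu_4j are not given in the text and are assumed here.
g_tree <- function(x) {
  if (x[2] < 100) return(1.19)      # mu_1j
  if (x[4] < 200) {
    if (x[3] < 150) return(2.37)    # mu_2j
    return(0.80)                     # mu_3j (assumed value)
  }
  return(-1.10)                      # mu_4j (assumed value)
}

g_tree(c(30, 120, 115, 191, 56))   # returns 2.37, as in the example above
g_tree(c(130, 135, 92, 183, 10))   # also returns 2.37
```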
Regression tree as an analysis of variance (ANOVA) model. Another way to think of the regression tree in Figure 1 is to view it as the following analysis of variance (ANOVA) model:

Y_i = μ_1j I{X_i2 < 100} + μ_2j I{X_i2 ≥ 100} I{X_i4 < 200} I{X_i3 < 150} + ... + ε_i,

where I{.} is the indicator function and ε_i ∼ N(0, σ²). We can see that the term μ_1j I{X_i2 < 100} corresponds to the terminal node on the top left corner of Figure 1, the term μ_2j I{X_i2 ≥ 100} I{X_i4 < 200} I{X_i3 < 150} corresponds to the terminal node on the middle right of Figure 1, and so on. We can think of μ_1j I{X_i2 < 100} as a main effect, because it only involves the second variable X_i2, while μ_2j I{X_i2 ≥ 100} I{X_i4 < 200} I{X_i3 < 150} is a three-way interaction effect involving the second (X_i2), fourth (X_i4), and third variable (X_i3). By viewing a regression tree as an ANOVA model, we can easily see why a regression tree, and hence BART, is able to handle main and multi-way interaction effects.
Sum of regression trees
We next consider a sum of regression trees. To illustrate the main idea, we focus on an example with m = 2 trees and p = 3 covariates. Suppose we were given the two trees in Figure 2 (left panel: regression tree j = 1, with root split X_i1 < 100; right panel: regression tree j = 2). The resulting conditional mean of Y given X is Σ_{j=1}^2 g(X; T_j, M_j). Consider the hypothetical data from n = 10 subjects given in Table 1. We can see that the quantity that is being 'summed' and eventually allocated to E(Y_i|X_i) is not the regression tree or tree structure but the value that each j-th tree structure assigns to subject i. This is one way to think of a sum of regression trees: it allocates a sum of parameters μ_kj to subject i.
Note that, contrary to initial intuition, it is the sum of the μ_kj's that is allocated to E(Y_i|X_i), rather than the mean of the μ_kj's. This is mainly because BART calculates each posterior draw of the regression tree function g(X; T_j, M_j) using a leave-one-out concept, which we shall elaborate on shortly.

Table 1: The values of Σ_{j=1}^2 g(X; T_j, M_j) from the regression trees in Figure 2, with columns i, Y, X_1, X_2, X_3, g(X; T_1, M_1), g(X; T_2, M_2), and Σ_{j=1}^2 g(X; T_j, M_j).
Another way to view the concept of a sum of regression trees is to think of the regression trees in Figure 2 as ANOVA models. Then, the sum of trees is the ANOVA model obtained by adding together the indicator-function terms of the two trees.

Non-linearity of BART. From this simple example, we can see how BART handles non-linearity. Each single regression tree is a simple step-wise function or ANOVA model.
When we sum regression trees together, we are actually summing these ANOVA models or step-wise functions, and, as a result, we eventually obtain a more complicated step-wise function which can approximate the non-linearities in the main and multi-way interaction effects. It is this ability to handle non-linear main and multi-way interaction effects that makes BART a flexible model. But unlike many flexible models, BART does not require the researcher to specify the main and multi-way interaction effects.
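The following toy sketch illustrates this point: adding even two one-split trees already produces a multi-level step function. The two trees below are invented for illustration and are not the trees of Figure 2.

```r
# Toy sketch: the sum of two one-split trees is already a multi-level step
# function; with many trees the steps can trace out smooth non-linearities.
tree_a <- function(x) ifelse(x < 100, 1.0, 3.0)
tree_b <- function(x) ifelse(x < 150, 0.5, 2.0)

x    <- seq(50, 200, by = 1)
fhat <- tree_a(x) + tree_b(x)   # three distinct levels from two single splits

plot(x, fhat, type = "s", ylab = "f(x)", main = "Sum of two regression trees")
```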
Prior distributions.
In the examples above, we have taken the trees as given, including which variables to split on, the splitting values, and the mean parameters at each terminal node. In practice, each g(X; T_j, M_j) is unknown. We therefore need prior distributions for these functions. Thus, we can also think of BART as a Bayesian model where the mean function itself is unknown. A major advantage of this approach is that uncertainty about both the functional form and the parameters will be accounted for in the posterior predictive distribution of Y.
Before getting into the details of the prior distributions and MCMC algorithm, we will first walk through a simple example to build the intuition.
BART machinery: a visual perspective
In our simple example, we have three covariates X = (X_1, X_2, X_3) and a continuous outcome Y. We run the BART MCMC with four regression trees for 5 iterations on this dataset and, at each iteration, we present the regression tree structures to illustrate how the BART machinery works as it goes through each MCMC step. When Y and X are provided to BART, BART first initializes the four regression trees to single root nodes (see "Initiation" in Figure 3). Since all four regression trees are single root nodes, the parameters initialized for these nodes would be μ_1j = Ȳ/m = Ȳ/4. (Figures 3 and 4 display Trees 1-4 at initiation and at each of iterations 1-5, with the updated terminal node parameters, e.g. μ̂^(2)_11 and μ̂^(2)_12 at iteration 2.)

With the initializations in place, BART starts to draw the tree structure for each regression tree in the first MCMC iteration. Without loss of generality, let us start with determining (T_1, M_1), the first regression tree. This is possible because the ordering of the regression tree calculations does not matter. We first calculate the partial residual R_1 = Y − Σ_{j=2}^4 g(X; T_j, M_j). Then a MH algorithm is used to determine the posterior draw of the tree structure T_1 for this iteration. The basic idea of MH is to propose a new tree structure from T_1, call this T*_1, and then calculate the probability of whether T*_1 should be accepted, taking into consideration: R_1|T*_1 (the likelihood of the residual given the new tree structure), R_1|T_1 (the likelihood of the residual given the previous tree structure), the probability of observing T*_1, the probability of observing T_1, the probability of moving from T*_1 to T_1, and the probability of moving from T_1 to T*_1. We describe the different types of moves from T_1 to T*_1 in detail in the next subsection. If T*_1 is accepted, T_1 is updated to become T*_1, i.e. T_1 = T*_1. Otherwise, nothing changes for T_1. From Figure 3, we can see that T*_1 was not accepted in the first iteration, so the tree structure remains a single root node. The algorithm then updates M_1 based on the newly updated tree structure T_1 and moves on to determine (T_2, M_2).
To determine (T_2, M_2), the algorithm again calculates a partial residual, R_2 = Y − g(X; T_1, M_1) − g(X; T_3, M_3) − g(X; T_4, M_4), where g(X; T_1, M_1) is now based on μ̂^(1)_11, the updated parameter for regression tree 1. Similarly, MH is used to propose a new T*_2, and R_2 is used to calculate the acceptance probability to decide whether T*_2 should be accepted.
Again, we see from Figure 3 that T*_2 was not accepted, and hence a single parameter μ̂^(1)_12, drawn from M_2|T_2, R_2, σ, is used for g(X; T_2, M_2). For (T_3, M_3), the MH result is more interesting, because the newly proposed T*_3 was accepted, and we can see from Figure 3 that a new tree structure was used for T_3 in Iteration 1. As a result, this new tree structure for T_3 was used when calculating R_4. T*_4 was not accepted, and a single node T_4 was retained as the tree structure for (T_4, M_4). Once the regression tree draws are complete, BART proceeds to draw the rest of the parameters in the BART model (more details in the next subsection).
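The following pseudocode (in R syntax) summarizes one full Gibbs iteration of the machinery just described; predict_tree, draw_tree_MH, draw_mu, and draw_sigma are placeholder names for the draws detailed in the next subsection, not real functions.

```r
# Pseudocode (R syntax) for one full BART Gibbs iteration; predict_tree,
# draw_tree_MH, draw_mu, and draw_sigma are placeholders for the draws
# described in the next subsection, not real functions.
one_iteration <- function(Y, X, trees, sigma, m) {
  fits <- sapply(1:m, function(j) predict_tree(trees[[j]], X))  # n x m fits
  for (j in 1:m) {
    R_j <- Y - rowSums(fits[, -j, drop = FALSE])   # leave-one-out residual
    trees[[j]] <- draw_tree_MH(trees[[j]], R_j, sigma)  # MH step for T_j
    trees[[j]] <- draw_mu(trees[[j]], R_j, sigma)       # M_j | T_j, R_j, sigma
    fits[, j]  <- predict_tree(trees[[j]], X)           # refresh j-th fit
  }
  sigma <- draw_sigma(Y - rowSums(fits))  # inverse-gamma draw for sigma
  list(trees = trees, sigma = sigma)
}
```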
Figures 3 and 4 give the full iterations from initiation to iteration 5. From these figures we can see how the four regression trees grow and change from one iteration of the MCMC to another. This iterative process runs for a burn-in period (typically 100 to 1000 iterations), whose draws are discarded, and then runs for as long as needed to obtain a sufficient number of draws from the posterior distribution of Σ_{j=1}^m g(X; T_j, M_j). After any full iteration of the MCMC algorithm, we have a full set of trees. We can therefore obtain a predicted value of Y for any X of interest (simply by summing the terminal node μ's).
By obtaining predictions across many iterations, we can also easily obtain a 95% prediction interval. Another point to note is how shallow the regression trees in Figures 3 and 4 are, with a maximum depth of 3. This is because the regression trees are heavily penalized (via the prior) to reduce the likelihood of a single tree growing very deep. This concept borrows from the idea that many weak models combined together perform much better than a single very strong model, which would require careful tweaking in order to perform well.
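As a concrete illustration, a hedged sketch of this workflow with the BayesTree package in R is given below; the data-generating model is an assumption chosen only for illustration, and the intervals are formed directly from the retained posterior draws.

```r
# Hedged sketch: fitting BART with the BayesTree package and forming 95%
# posterior intervals for test points from the retained MCMC draws.
library(BayesTree)

set.seed(7)
n <- 1000
x <- matrix(runif(n * 3), n, 3)
y <- sin(pi * x[, 1]) + x[, 2]^2 + rnorm(n, 0, 0.3)  # assumed toy model

fit <- bart(x.train = x[1:980, ], y.train = y[1:980],
            x.test  = x[981:1000, ],
            ntree  = 200,    # m, the number of trees
            nskip  = 100,    # burn-in draws to discard
            ndpost = 1000)   # posterior draws to keep

# fit$yhat.test: (ndpost x 20) matrix of posterior draws at the test points
pred  <- colMeans(fit$yhat.test)
lower <- apply(fit$yhat.test, 2, quantile, 0.025)
upper <- apply(fit$yhat.test, 2, quantile, 0.975)
```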
A rigorous perspective on the BART algorithm
Now that we have a visual understanding of how the BART algorithm works, we shall give a more rigorous explanation of BART. First, we start with the prior distributions for BART.
The prior distribution for Equation (1) is P((T_1, M_1), ..., (T_m, M_m), σ). It is commonly assumed that the (T_j, M_j)'s and σ are independent of each other, and that the (T_j, M_j)'s are independent of one another. Then the prior distribution can be written as

P((T_1, M_1), ..., (T_m, M_m), σ) = P(σ) Π_{j=1}^m P(M_j | T_j) P(T_j) = P(σ) Π_{j=1}^m [ Π_{k=1}^{b_j} P(μ_kj | T_j) ] P(T_j).    (2)

For the last equality in Equation (2), recall that M_j = {μ_1j, ..., μ_{b_j j}} is the vector of terminal node parameters associated with T_j, and each node parameter μ_kj is usually assumed to be independent of the others. Equation (2) implies that we need to set distributions for the priors μ_kj | T_j, σ, and T_j. The priors for μ_kj | T_j and σ are usually given as μ_kj | T_j ∼ N(μ_μ, σ_μ²) and σ² ∼ IG(ν/2, νλ/2), respectively, where IG(α, β) is the inverse gamma distribution with shape parameter α and rate parameter β.
The prior for T_j is more interesting and can be specified using three aspects:

1. The probability that a node at depth d = 0, 1, ... splits, which is given by α/(1 + d)^β. The parameter α ∈ (0, 1) controls how likely a node is to split, with larger values increasing the likelihood of a split. The number of terminal nodes is controlled by the parameter β > 0, with larger values of β reducing the number of terminal nodes. This aspect is important, as it is the penalizing feature of BART which prevents BART from overfitting and allows convergence of BART to the target function f(X) (Ročková and Saha, 2018). As mentioned in the previous subsection, this also allows many shallow (weak) regression trees to be fit and eventually summed together to obtain a stronger model; a short numerical sketch of this split probability is given after this list.
2. The distribution used to select the covariate to split upon at an internal node. The default suggested distribution is the uniform distribution. Recent work (Ročková and van der Pas, 2017; Linero, 2018) has argued that the uniform distribution does not promote variable selection and should be replaced if variable selection is desired.
3. The distribution used to select the cutoff point in an internal node once the covariate is selected. The default suggested distribution is the uniform distribution.
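As a small numerical sketch of the first aspect, the split probability α/(1 + d)^β can be computed directly; α = 0.95 and β = 2 are commonly used default values (an assumption here; see the Appendix for hyper-parameter settings).

```r
# Sketch: prior probability that a node at depth d splits, alpha/(1+d)^beta.
# alpha = 0.95 and beta = 2 are commonly used defaults (an assumption here).
split_prob <- function(d, alpha = 0.95, beta = 2) alpha / (1 + d)^beta

round(split_prob(0:4), 3)
# depths 0-4: 0.950 0.238 0.106 0.059 0.038 -- deep splits are heavily penalized
```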
The setting of the hyper-parameters for the BART priors is rather technical so we refer interested readers to our Appendix for how this can be done.
The prior distribution induces the posterior distribution P((T_1, M_1), ..., (T_m, M_m), σ | Y), which can be simplified into two major posterior draws using Gibbs sampling. First, draw m successive

P(T_j, M_j | T_(j), M_(j), σ, Y)    (3)

for j = 1, ..., m, where T_(j) and M_(j) consist of all the tree structures and terminal nodes except for the j-th tree structure and terminal node; then, draw σ² from

P(σ² | T_1, M_1, ..., T_m, M_m, Y) ∼ IG( (ν + n)/2, [νλ + Σ_{i=1}^n (Y_i − Σ_{j=1}^m g(X_i; T_j, M_j))²] / 2 ).    (4)

To obtain a draw from (3), note that this distribution depends on (T_(j), M_(j), σ, Y) only through

R_j = Y − Σ_{k≠j} g(X; T_k, M_k),    (5)

the residuals of the sum-of-trees fit of the m − 1 regression trees excluding the j-th tree (recall our visual example in the previous subsection). Thus, (3) is equivalent to the posterior draw from a single regression tree,

P(T_j, M_j | R_j, σ).    (6)

We can obtain a draw from (6) by first integrating out M_j to obtain P(T_j | R_j, σ). This is possible since a conjugate Normal prior on μ_kj was employed. We draw from P(T_j | R_j, σ) using a MH algorithm: first, we generate a candidate tree T*_j for the j-th tree with probability distribution q(T_j, T*_j), and then we accept T*_j with probability

α(T_j, T*_j) = min{ 1, [q(T*_j, T_j) / q(T_j, T*_j)] × [P(R_j | T*_j, σ) / P(R_j | T_j, σ)] × [P(T*_j) / P(T_j)] },    (7)

where q(T*_j, T_j) / q(T_j, T*_j) is the ratio of the probability of moving from the new tree back to the previous tree against the probability of moving from the previous tree to the new tree, P(R_j | T*_j, σ) / P(R_j | T_j, σ) is the likelihood ratio of the new tree against the previous tree, and P(T*_j) / P(T_j) is the ratio of the prior probability of the new tree against the previous tree.
A new tree T*_j can be proposed from the previous tree T_j using four local steps: (i) grow, where a terminal node is split into two new child nodes; (ii) prune, where two terminal child nodes immediately under the same non-terminal node are combined so that their parent non-terminal node becomes a terminal node; (iii) swap, where the splitting criteria of two non-terminal nodes are swapped; and (iv) change, where the splitting criterion of a single non-terminal node is changed. Once we have the draw of P(T_j | R_j, σ), we then draw

μ_kj | T_j, R_j, σ ∼ N( σ_μ² ΣR_kj / (n_k σ_μ² + σ²), σ² σ_μ² / (n_k σ_μ² + σ²) ),

where R_kj is the subset of elements of R_j allocated to the terminal node parameter μ_kj, ΣR_kj is the sum of those elements, and n_k is the number of elements of R_j allocated to μ_kj. We derive P(μ_kj | T_j, R_j, σ), Equation (4), and Equation (7) for the grow and prune steps as an example in our Appendix.
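The terminal-node draw above is a standard conjugate normal update and can be written in a few lines of R; the zero prior mean is an assumption (as after centering the outcome).

```r
# Sketch of the conjugate draw for a terminal-node mean mu_kj, given its n_k
# residuals R_kj, sigma, and the prior mu_kj ~ N(0, sigma_mu^2).
draw_mu_kj <- function(R_kj, sigma, sigma_mu) {
  n_k       <- length(R_kj)
  post_mean <- (sigma_mu^2 * sum(R_kj)) / (n_k * sigma_mu^2 + sigma^2)
  post_var  <- (sigma^2 * sigma_mu^2) / (n_k * sigma_mu^2 + sigma^2)
  rnorm(1, post_mean, sqrt(post_var))
}

set.seed(9)
draw_mu_kj(R_kj = c(0.4, 0.7, 0.5), sigma = 1, sigma_mu = 0.5)
```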
Binary outcomes
For binary outcomes, BART can be extended using a probit model. Specifically, the prior in Equation (2) without σ can be employed, and similar prior specifications for μ_kj | T_j and T_j can be used. The setup of the hyper-parameters is slightly different from that for continuous outcomes, and we describe it in the Appendix.
To estimate the posterior distribution, data augmentation (Albert and Chib, 1993) can be used. In essence, we first draw a latent variable Z = {Z_1, ..., Z_n} as follows:

Z_i ∼ N_(0,∞)( Σ_{j=1}^m g(X_i; T_j, M_j), 1 ) if Y_i = 1, and
Z_i ∼ N_(−∞,0)( Σ_{j=1}^m g(X_i; T_j, M_j), 1 ) if Y_i = 0,

where N_(a,b)(μ, σ²) is a truncated normal distribution with mean μ and variance σ² truncated at (a, b). Next, we can treat Z as the continuous outcome for a BART model,

Z = Σ_{j=1}^m g(X; T_j, M_j) + ε,    (8)

where ε ∼ N(0, 1) because we employed a probit link. The usual posterior estimation for a continuous outcome BART with σ ≡ 1 can now be employed on Equation (8) for one iteration of the MCMC. The updated Σ_{j=1}^m g(X; T_j, M_j) can then be used to draw a new Z, and this new Z can be used to draw another iteration of Σ_{j=1}^m g(X; T_j, M_j). The process can then be repeated until convergence.
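A minimal sketch of the latent draw in R, using inverse-CDF sampling so no extra packages are needed; f denotes the current fit Σ_{j=1}^m g(X_i; T_j, M_j).

```r
# Minimal sketch of the Albert-Chib latent draw for probit BART using
# inverse-CDF sampling; f is the current fit sum_j g(X_i; T_j, M_j).
draw_Z <- function(y, f) {
  u  <- runif(length(y))
  p0 <- pnorm(0, mean = f)              # P(Z < 0) under N(f, 1)
  ifelse(y == 1,
         f + qnorm(p0 + u * (1 - p0)),  # truncated to (0, Inf)
         f + qnorm(u * p0))             # truncated to (-Inf, 0)
}

set.seed(10)
draw_Z(y = c(1, 0, 1), f = c(0.3, -0.2, 1.5))
```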
3 Illustrating the performance of BART
Posterior performance via synthetic data
We generated a synthetic data set with p = 3 and n = 1,000, where the true model for Y_i is a complicated non-linear function of the covariates. The goal is to demonstrate that BART can predict Y's effectively even in complex, non-linear models and also properly accounts for prediction uncertainty, compared to a parametric BLR model. To this end, we randomly selected 980 samples as the training set and then used the remaining 20 samples as the testing set. We also varied the number of trees used by BART to illustrate how varying m affects the performance of BART. We plotted the point estimate and 95% credible interval of the 20 randomly selected testing data points and compared them with their true values in Figure 5. The code to implement this simulation will be made available on https://github.com/yaoyuanvincent.
For BART with a single tree, although the true values were mostly covered by the 95% credible interval, the point estimates were far from their true values. When we increased the number of trees to 50 in BART, we see a significant improvement in terms of bias (closeness to the true values) compared to both BLR and BART with m = 1. In addition, we see a narrowing of the 95% intervals. We see that as we increase the number of trees, the point estimate and 95% intervals stabilize. In other words, we might see a big difference between m = 1 and m = 50, and virtually no difference between m = 200 and m = 20, 000. In practice, the idea is to choose a large enough value for m (default is often 200) so that it very well approximates the results that would have been obtained if more trees were used.
One way to determine an m that is sufficiently large is with cross-validation.

Predicting the standardized hospitalization ratio (SHR) of dialysis facilities

Table 2 shows some descriptive statistics for this dataset. SHR was adjusted for a patient's age, sex, duration of ESRD, comorbidities, and body mass index at ESRD incidence.
We removed 463 facilities (7%) with missing SHR values because of small patient numbers.
We also removed peritoneal dialysis (PD) removal greater than 1.7 Kt/V because of the high proportion of missingness (80%). We combined pediatric hemodialysis (HD) removal greater than 1.2 Kt/V with adult HD removal greater than 1.2 Kt/V because most facilities (92%) do not provide pediatric HD. We re-categorized the chain names to "Davita," "Fresenius Medical Care (FMC)," "Independent," "Medium," and "Small." "Medium" consists of chains with 100-500 facilities while "Small" are chains with less than 100 facilities. To estimate patient volume, we used the maximum of the number of patients reported by each quality measure group: Urea Reduction Ratio (URR), HD, PD, Hemoglobin (HGB), Vascular Access, SHR, SMR, STR, Hypercalcemia (HCAL), and Serum phosphorus (SP). We also logarithm-transformed (log) SHR, SMR, and STR so that the theoretical range for these log standardized measures will be −∞ to ∞.
For our analysis, we used the log-transformed SHR as the outcome and the variables in Table 2 as the predictors. We used the root mean squared error (RMSE) of a 10-fold crossvalidation to compare the prediction performance from multiple linear regression (MLR), Random Forest (RF), and BART. For RF and BART, we used the default settings from the R packages randomForest and BayesTree respectively. The 10 RMSEs produced by each method from the 10-fold cross validation is provided in Figure 6. It is clear from this figure that BART and RF produce very similar prediction performances and is better compared to MLR. The mean of these 10 values also suggested a similar picture with MLR producing a mean of 0.24 while RF and BART produced a mean of 0.23.
Predicting left turn stops at an intersection
We next present another example, where BART improved the prediction performance for a binary outcome. In Tan et al.'s setup, vehicle speeds were arranged with the distances to the intersection as the columns and each turn as a row, and principal components analysis (PCA) was then performed on these vehicle speeds using moving windows of 6 meters, from 94 meters away to 1 meter away from the center of an intersection. This implies that at each meter a PCA was conducted using 6 meters of vehicle speeds, i.e., at 94 meters, speeds from 94 to 100 meters away were used; at 93 meters, speeds from 93 to 99 meters away were used; and so on until 1 meter away. A 6 meter moving window was used because longer windows did not improve prediction performance, while the 6 meter window provided the best prediction performance. At each meter, the first three principal components (PCs) from the corresponding 6 meter moving-window PCA were used to build the prediction model, with the outcome being whether the vehicle stopped (vehicle speed < 1 m/s) in the future, coded as 1 for stopped and 0 for not stopped. Only the first three PCs were used because these three PCs explained nearly 99% of the variance in the 6 meter moving-window distance series of vehicle speeds and provided the best prediction performance. This setup resulted in 94 models corresponding to 94 datasets, one per meter. To keep our presentation concise, we focus on the dataset halfway through the turn maneuver (50 meters from the center of an intersection), which is made up of the first 3 PCs of the PCA on vehicle speeds from 50 to 56 meters and the outcome of whether the vehicle stopped between 49 meters and the center of the intersection. We ran a 10-fold cross-validation on this dataset and compared the binary prediction results of logistic regression, RF, and BART. Since the outcome of interest for this dataset was binary, we used the area under the receiver operating characteristic curve (AUC) to determine the prediction performance instead of the RMSE, which is better suited for continuous outcomes.
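The sketch below mirrors this analysis for a single train/test split. The objects speeds (a turns-by-distance matrix of vehicle speeds) and stopped (the 0/1 outcome) are placeholders for the study's data, which are not public; the rank-based AUC helper avoids an extra package. Passing a 0/1 outcome vector to BayesTree::bart is expected to invoke its probit model, in which case pnorm maps the posterior draws of f(x) to probabilities.

```r
## Sketch: PCA on the 50-56 m speed window, then logistic regression versus
## probit BART, compared by AUC. `speeds` and `stopped` are assumed placeholders.
library(BayesTree)

auc <- function(score, label) {          # rank-based (Mann-Whitney) AUC
  r  <- rank(score)
  n1 <- sum(label == 1); n0 <- sum(label == 0)
  (sum(r[label == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}

pcs <- prcomp(speeds[, 50:56], scale. = TRUE)$x[, 1:3]  # first 3 PCs of the window
dat <- data.frame(pcs, stopped = stopped)

set.seed(1)
idx    <- sample(nrow(dat), floor(0.9 * nrow(dat)))
p_lr   <- predict(glm(stopped ~ ., binomial, dat[idx, ]), dat[-idx, ], type = "response")
fit    <- bart(x.train = pcs[idx, ], y.train = stopped[idx],  # 0/1 y -> probit BART
               x.test = pcs[-idx, ], verbose = FALSE)
p_bart <- colMeans(pnorm(fit$yhat.test))                      # probit draws -> probabilities
c(LR = auc(p_lr, stopped[-idx]), BART = auc(p_bart, stopped[-idx]))
```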
General BART model
Recently, researchers have extended or generalized BART to a wider variety of settings, including clustered data, spatial data, semi-parametric models, and situations where a more flexible distribution for the error term is needed. Here we describe a more general BART framework that includes all of these cases and more. An important feature of this general BART model is that it can be fitted without extensive re-derivation of the MCMC draws of the regression trees described in Section 2. That is, the MCMC algorithm we described previously needs only small adjustments to handle this more general setting.
To set up our General BART model, suppose once again that we have a continuous outcome Y and p covariates X = {X_1, . . . , X_p}. Suppose also that we have another set of q covariates W = {W_1, . . . , W_q} such that no two columns in X and W are the same. Then we can extend Equation (1) as follows:

Y = Σ_{j=1}^m g(X; T_j, M_j) + H(W, Θ) + ε,  ε ∼ N(0, Σ),    (9)

where H(W, Θ) is a function of W indexed by parameters Θ. Assuming that (T, M), Θ, and Σ are independent, the prior distribution for Equation (9) factors as P(T, M, Θ, Σ) = P(T, M)P(Θ)P(Σ). Under this framework, we only need priors for P(T, M) and P(Θ). P(T, M) can be decomposed in the usual way if we are willing to assume that the m trees are independent of one another, and data augmentation (Albert and Chib, 1996) can be used to obtain the posterior distribution for binary outcomes. We can draw the latent variable

Z_i ∼ N(Σ_{j=1}^m g(X_i; T_j, M_j) + H(W_i, Θ), 1), truncated below at 0 if Y_i = 1 and truncated above at 0 if Y_i = 0,

and then treat Z as the outcome for the model in Equation (9). This implies that ε_i ∼ N(0, 1) in Equation (9), and we can apply the Gibbs sampling procedure we described for continuous outcomes using Z instead of Y, with Σ = σ = 1. Iterating through the latent draws and Gibbs steps produces the posterior distribution that we require.
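This latent draw takes only a few lines in practice. The sketch below samples the truncated normals by inverse-CDF sampling; g stands for the current value of Σ_j g(X_i; T_j, M_j) + H(W_i, Θ).

```r
## One Albert-Chib data-augmentation step (sketch). y is 0/1; g is the current fit.
draw_z <- function(y, g) {
  u  <- runif(length(y))
  lo <- ifelse(y == 1, pnorm(-g), 0)   # Z > 0 when y = 1
  hi <- ifelse(y == 1, 1, pnorm(-g))   # Z <= 0 when y = 0
  g + qnorm(lo + u * (hi - lo))        # inverse-CDF draw from a truncated N(g, 1)
}
```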
With the general framework and model for BART in place, we are now equipped to consider how Zeldow et al. (2018), Tan et al. (2018b), Zhang et al. (2007), and George et al. (2018) extended BART to solve their research problems in the next four subsections.
Semiparametric BART
The semiparametric BART was first presented by Zeldow et al. (2018). Their idea was to have a model where the effects of interest are modeled linearly, with at most simple interactions, to keep the associated parameters interpretable, while the nuisance or confounder variables are modeled as flexibly as possible. In its simplest form, under the framework of Equation (9), we have H(W, Θ) = WΘ with Θ = {θ_1, . . . , θ_q}, and ε_i ∼ N(0, σ²) with Σ = σ. Prior distributions for µ_kj | T_j, T_j, and σ² follow the usual distributions we use for BART, while Θ ∼ MVN(β, Ω). Posterior estimation follows the Gibbs sampling procedure we described in Section 4. For Equations (10) and (12), since the authors suggested using the default BART priors, the usual BART mechanisms can be applied to obtain the posterior draws. For Equation (11), Θ ∼ MVN(β, Ω) implies that we can treat this as the usual BLR, and standard Bayesian methods can be used to obtain the posterior draw for Θ. The framework for binary outcomes follows easily using the data augmentation step we describe in Section 4.
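The draw for Θ in Equation (11) is a standard conjugate Bayesian linear regression update on the partial residual r = Y − Σ_j g(X; T_j, M_j); a minimal sketch:

```r
## Conjugate MVN draw for Theta given the partial residual r (sketch).
## Prior: Theta ~ MVN(beta, Omega); errors N(0, sigma^2).
draw_theta <- function(W, r, sigma, beta, Omega) {
  Omega_inv <- solve(Omega)
  V <- solve(crossprod(W) / sigma^2 + Omega_inv)              # posterior covariance
  m <- V %*% (crossprod(W, r) / sigma^2 + Omega_inv %*% beta)
  drop(m + t(chol(V)) %*% rnorm(ncol(W)))                     # one draw from MVN(m, V)
}
```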
Random intercept BART for correlated outcomes
Random intercept BART (riBART) was proposed by Tan et al. (2018b) as a method to handle correlated continuous or binary outcomes, with correlated binary outcomes as the main focus.
Under the framework of Equation (9), we have H(W, Θ) = Wa, where Θ = (a, τ) and W is an n × L matrix of subject indicators: the first column has 1 repeated n_1 times followed by 0s for the rest of the column; the second column has 0 repeated n_1 times, then 1 repeated n_2 times, then 0s for the rest of the column; and so on, until the last column has 0 repeated Σ_{l=1}^{L−1} n_l times followed by 1 repeated n_L times. Here a = {a_1, . . . , a_L} with a_l | τ² ∼ N(0, τ²), and l indexes the subject. Once again, ε ∼ N(0, σ²) with Σ = σ, and the usual BART priors for σ, µ_kj | T_j, and T_j can be employed. a_l and ε are assumed to be independent. A simple prior of τ² ∼ IG(1, 1) could be used, although more robust or complicated priors are possible. Posterior estimation and the extension to binary outcomes then follow the procedure described in Section 4.
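The design matrix W and the conditional updates for the random intercepts are simple to write down. The sketch below builds W from subject labels and gives the conjugate draws for a and τ² (assuming the IG(1, 1) prior mentioned above); r denotes the residual after removing the sum-of-trees fit.

```r
## riBART building blocks (sketch). id is a length-n vector of subject labels.
make_W <- function(id) model.matrix(~ factor(id) - 1)  # one indicator column per subject

draw_a <- function(r, id, sigma2, tau2) {   # r = y - sum-of-trees fit
  nl <- tapply(r, id, length)               # n_l observations per subject
  sl <- tapply(r, id, sum)
  v  <- 1 / (nl / sigma2 + 1 / tau2)        # conjugate posterior variances
  rnorm(length(nl), mean = v * sl / sigma2, sd = sqrt(v))
}

draw_tau2 <- function(a)                    # IG(1, 1) prior on tau^2
  1 / rgamma(1, shape = 1 + length(a) / 2, rate = 1 + sum(a^2) / 2)
```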
Spatial BART for a statistical matched problem
The Spatial BART approach of Zhang et al. (2007) was proposed to handle statistical matched problems (Rässler, 2002) that occur in surveys. In a statistical matched problem, inference is desired for the relationship between two different variables collected by two different datasets on the same subjects. For example, survey A may collect information on income while survey B collects information on blood pressure, and both surveys contain subjects that overlap. The relationship between income and blood pressure is then desired.
To solve this problem, Spatial BART essentially uses a framework similar to that of riBART described in Example 4.2, with a more complicated prior distribution for Θ. The specification of W and a is the same, but the distribution placed on a is instead the conditionally autoregressive (CAR) prior, which can be specified as

a ∼ N(0, δ²(H − ρC)^{−1}),    (13)

where C = (c_il) is an I × I adjacency matrix, i, l = 1, . . . , I, with c_il = 1 if group i and group l are (spatial) neighbors for i ≠ l, c_il = 0 otherwise, and c_il = 0 if i = l. H is a diagonal I × I matrix with diagonal entries h_i = Σ_{l=1}^I c_il, ρ is a parameter with range (−1, 1), and δ² is the variance component of Equation (13). ρ and δ² are hyperparameters that are prespecified.
Finally, ε ∼ N(0, σ²), and Equation (9) is completed by placing the usual BART priors on σ, µ_kj | T_j, and T_j. Posterior draws again follow the procedures we outlined in Section 4.
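For completeness, the CAR prior covariance can be assembled from the adjacency matrix in a few lines; the sketch below assumes ρ and δ² are prespecified as in the text.

```r
## CAR prior covariance delta^2 (H - rho C)^{-1} (sketch). C is an I x I 0/1
## adjacency matrix with zero diagonal; rho lies in (-1, 1); delta2 > 0.
car_cov <- function(C, rho, delta2) {
  H <- diag(rowSums(C))            # h_i = number of neighbors of group i
  delta2 * solve(H - rho * C)
}
## a can then be drawn as, e.g.,
## MASS::mvrnorm(1, mu = rep(0, nrow(C)), Sigma = car_cov(C, rho, delta2))
```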
Dirichlet Process Mixture BART
The Dirichlet Process Mixture (DPM) BART was proposed by George et al. (2018) to enhance the robustness of the distributional assumption on ε in Equation (1). To do this, they focused on a different specification for ε, assuming that

ε_i ∼ N(a_i, σ_i²),  (a_i, σ_i²) ∼ D,  D ∼ DP(D_0, α),

where D denotes a random discrete distribution and DP denotes the Dirichlet process with parameters D_0 and α > 0. The atoms of D can be seen as iid draws from D_0. α, on the other hand, determines the weight allocated to each atom of the discrete D: higher values of α imply that the weights are spread out among the atoms, while lower values imply that the weights are concentrated on only a few atoms. Although the assumption ε_i ∼ N(a_i, σ_i²) suggests that each subject has its own mean and variance for the error term, the placement of a Dirichlet process on D restricts the number of unique components of (a_i, σ_i²) to K < n, which ensures that this model is still identifiable. Viewing DPMBART as a form of Equation (9), we have H(W, Θ) = Wa, where W and a have the same structure as in riBART and P(Θ, Σ) = P(a_i, σ_i²). Note that here we no longer assume that a_i and ε_i are independent, unlike in some of our previous examples.
The priors for DPMBART concern D_0 and α. For D_0, the commonly employed form is P(µ, σ | ν, λ, µ_0, k_0) = P(σ | ν, λ)P(µ | σ, µ_0, k_0). George et al. (2018) specified their priors as follows. ν is set to 10 to make the spread of the error for a single component k tighter. λ is chosen using the same idea by which λ is determined in BART, with the quantile set at 0.95 instead of 0.9 (see Appendix A for how λ is determined in BART). For µ_0, because DPMBART subtracts Ȳ from Y, µ_0 = 0. For k_0, the residuals r of a multiple linear regression fit are used to place µ into the range of these residuals. The marginal distribution of µ is µ ∼ (√λ/√k_0) t_ν, where t_ν is a t distribution with ν degrees of freedom. Let k_s be the scaling for µ; given k_s = 10, k_0 can be chosen by solving the equation that matches the spread of this t distribution to the range of r. For α, the prior used by DPMBART is the same as in Section 2.5 of Rossi (2014), where the idea is to relate α to the number of unique components of (a_i, σ_i²).
The posterior draws for DPMBART follow most of the ideas discussed for General BART. First, the idea of Equation (10) is used to draw (T, M) | {a_i, σ_i²}; the slight difference is to view this as a weighted BART draw with ε_i ∼ N(0, w_i σ²). The second draw, {a_i, σ_i²} | (T, M), follows Equation (11), which can be solved using draws (a) and (b) of the algorithm in Section 1.3.3 of Escobar and West in Dey et al. (1998). The final draw is α | {a_i, σ_i²}. This is obtained by putting α on a grid and then using Bayes' theorem with P(α | {a_i, σ_i²}) = P(α | K) ∝ P(K | α)P(α), where K is the number of unique values of (a_i, σ_i²).
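The grid draw for α can be coded directly from the identity P(K | α) ∝ α^K Γ(α) / Γ(α + n), where the omitted factor does not involve α; the flat prior below is a placeholder for the prior of Rossi (2014).

```r
## Grid-based draw of alpha given K unique components (sketch; flat prior
## used as a placeholder for Rossi's prior).
draw_alpha <- function(K, n, grid = seq(0.1, 20, length.out = 200)) {
  logp <- K * log(grid) + lgamma(grid) - lgamma(grid + n)  # log P(K | alpha) + const
  w    <- exp(logp - max(logp))                            # stabilize before normalizing
  sample(grid, 1, prob = w)                                # sample() normalizes prob
}
```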
Discussion
In this tutorial, we walked through the BART model and algorithm in detail and presented a generalized model based on recent extensions. We believe this is important because of the growing use of BART in research applications, as well as its use as a competitor model for new modeling or prediction methods. By clarifying the various components of BART, we hope that researchers will be more comfortable using BART in practice.
Despite the success of BART, a growing number of papers point out limitations of BART and propose modifications. One issue is the inability of BART to do variable selection, due to the use of the uniform prior to select the covariate to be split upon at the internal nodes. One simple solution is to allow researchers to place different prior probabilities on each covariate (Kapelner and Bleich, 2016). Other solutions include using a Dirichlet process prior for selecting covariates (Linero, 2018) or using a spike-and-slab prior (Liu et al., 2018). Another commonly addressed issue is the computation speed of BART. Due to the many MH steps that BART requires, its computation can often be slow, especially when the sample size n and/or the number of covariates p is large.
One direction is to parallelize the computational steps in BART, as proposed by Pratola et al. (2014) and Kapelner and Bleich (2016). The other direction is to improve the efficiency of the MH steps, which leads to a reduction in the number of trees needed.
Notable examples include Lakshminarayanan et al. (2015), where particle Gibbs sampling was used to propose the tree structures T_j; Entezari et al. (2018), where likelihood-inflated sampling was used to calculate the MH steps; and, more recently, He et al. (2018), who proposed a different tree-growing algorithm that grows each tree from scratch (the root node) at each iteration. Other, less discussed issues with BART include the underestimation of uncertainty caused by inefficient mixing when the true variation is small (Pratola, 2016), the inability of BART to handle smooth functions (Linero and Yang, 2018), and the inclusion of many spurious interactions when the number of covariates is large (Du and Linero, 2018). Finally, the posterior concentration properties of BART have also been discussed recently by Ročková and van der Pas (2017), Ročková and Saha (2018), and Linero and Yang (2018). These works provide theoretical explanations for why BART has been successful in the many data applications we have seen thus far.
A second component we focused on was how BART can be extended using a very simple idea without having to re-write the whole MCMC algorithm that draws the regression trees. We term this framework General BART. This framework has already been used by various authors to extend BART to semiparametric situations where a portion of the model is desired to be linear and more interpretable, to correlated outcomes, to solve the statistical matching problem in surveys, and to improve the robustness of the error-term assumption in BART. We do note that the critical component of our General BART framework is re-writing the model in such a way that the MCMC draw of the regression trees can be done separately from the rest of the model. In situations where this is not possible, re-writing of the MCMC procedure for the regression trees may be needed. An example of this would occur if, rather than mapping the outcome to a parameter at the terminal node of a regression tree, it is mapped to a regression model. However, we feel that the general BART model is flexible enough to handle many of the extensions that might be of interest to researchers.

A Hyperparameter settings for BART

For continuous outcomes, the hyperparameters µ_µ and σ_µ are set by requiring the sum-of-trees prior to cover the observed range of Y with high probability. This can be achieved by defining v such that min(Y) = mµ_µ − v√m σ_µ and max(Y) = mµ_µ + v√m σ_µ. Transforming Y to Ỹ = (Y − min(Y))/(max(Y) − min(Y)) − 0.5 results in Ỹ ∈ (−0.5, 0.5), where min(Ỹ) = −0.5 and max(Ỹ) = 0.5. This has the effect of allowing the hyperparameter µ_µ to be set as 0 and σ_µ = 0.5/(v√m), where v is to be chosen. For v = 2, N(mµ_µ, mσ_µ²) assigns a prior probability of 0.95 to the interval (min(Ỹ), max(Ỹ)), and this is the default value. Finally, for ν and λ, the default value of ν is 3, and λ is the value such that P(σ² < s²; ν, λ) = 0.9, where s² is the estimated variance of the residuals from the multiple linear regression with Y as the outcome and X as the covariates.
For binary outcomes, the α and β hyperparameters are the same, but the µ_µ and σ_µ hyperparameters are specified differently from the continuous-outcome BART. We set µ_µ = 0 and σ_µ = 3/(v√m), where v = 2 results in an approximate 95% prior probability that draws of Σ_{j=1}^m g(X; T_j, M_j) will lie within (−3, 3). No transformation of the latent variable Z is needed.
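These defaults are straightforward to compute from the data; the sketch below returns them for a continuous outcome, along with the binary-outcome σ_µ, following the formulas above.

```r
## Default BART hyperparameters (sketch of the formulas in this appendix).
bart_hypers <- function(y, X, m = 200, v = 2, nu = 3, sigquant = 0.9) {
  y_tilde  <- (y - min(y)) / (max(y) - min(y)) - 0.5   # rescale to (-0.5, 0.5)
  sigma_mu <- 0.5 / (v * sqrt(m))                      # continuous-outcome prior sd
  s2 <- summary(lm(y_tilde ~ ., data = as.data.frame(X)))$sigma^2
  lambda <- s2 * qchisq(1 - sigquant, nu) / nu  # so P(sigma^2 < s2; nu, lambda) = sigquant
  list(y_tilde = y_tilde, mu_mu = 0, sigma_mu = sigma_mu,
       sigma_mu_binary = 3 / (v * sqrt(m)), nu = nu, lambda = lambda)
}
```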
B Posterior distributions for µ_kj and σ² in BART

B.1 P(µ_kj | T_j, σ, R_j)

Let R_kj = (R_kj1, . . . , R_kjn_k)^T be the subset of R_j allocated to the terminal node with parameter µ_kj, where n_k is the number of the R_kjh's allocated to that node and h indexes the subjects allocated to it. We note that R_kjh | g(X_kjh, T_j, M_j), σ ∼ N(µ_kj, σ²) and µ_kj | T_j ∼ N(µ_µ, σ_µ²). Then the posterior distribution of µ_kj is given by

P(µ_kj | T_j, σ, R_j) ∝ P(R_kj | T_j, µ_kj, σ)P(µ_kj | T_j) ∝ exp{−Σ_h (R_kjh − µ_kj)²/(2σ²)} exp{−(µ_kj − µ_µ)²/(2σ_µ²)},

where Σ_h (R_kjh − µ_kj)² is the summation of the squared differences between the parameter µ_kj and the R_kjh's allocated to its terminal node. Completing the square shows that this is a normal distribution with variance V = (n_k/σ² + 1/σ_µ²)^{−1} and mean V(Σ_h R_kjh/σ² + µ_µ/σ_µ²).

B.2 P(σ² | T, M, Y)

The posterior of σ² is P(σ² | T, M, Y) ∝ P(Y | g(X, T_j, M_j), σ)P(σ²), where Σ_j g(X_i, T_j, M_j) is the predicted value of BART assigned to the observed outcome Y_i. With the conjugate νλ/σ² ∼ χ²_ν prior, this yields σ² | T, M, Y ∼ (νλ + Σ_i (Y_i − Σ_j g(X_i; T_j, M_j))²)/χ²_{ν+n}.
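Both conditional draws translate directly into code; a minimal sketch:

```r
## Conjugate Gibbs draws behind Appendix B (sketch). R_kj holds the residuals
## allocated to one terminal node; resid is Y minus the full sum-of-trees fit.
draw_mu <- function(R_kj, sigma2, mu_mu = 0, sigma_mu2) {
  V <- 1 / (length(R_kj) / sigma2 + 1 / sigma_mu2)     # posterior variance
  rnorm(1, mean = V * (sum(R_kj) / sigma2 + mu_mu / sigma_mu2), sd = sqrt(V))
}

draw_sigma2 <- function(resid, nu = 3, lambda)         # scaled inverse-chi-square
  (nu * lambda + sum(resid^2)) / rchisq(1, df = nu + length(resid))
```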
C Metropolis-Hastings ratio for the grow and prune steps

This section is modified from Appendix A of Kapelner and Bleich (2016). Note that

α(T_j, T*_j) = min{1, [q(T*_j, T_j)/q(T_j, T*_j)] × [P(R_j | X, T*_j, M_j)/P(R_j | X, T_j, M_j)] × [P(T*_j)/P(T_j)]},

where q(T*_j, T_j)/q(T_j, T*_j) is the transition ratio, P(R_j | X, T*_j, M_j)/P(R_j | X, T_j, M_j) is the likelihood ratio, and P(T*_j)/P(T_j) is the tree structure ratio of Kapelner and Bleich, Appendix A. We now present the explicit formula for each ratio under the grow and prune proposals.
C.1 Grow proposal

C.1.1 Transition ratio

q(T*_j, T_j) indicates the probability of moving from T_j to T*_j, i.e., selecting a terminal node of T_j and growing two children from it. Hence,

P(T*_j | T_j) = P(grow) × P(selecting a terminal node to grow from) × P(selecting a covariate to split on) × P(selecting a value to split on) = P(grow) (1/b_j)(1/p)(1/η).
In the above equation, P(grow) can be set by the researcher, although the default provided is 0.25; b_j is the number of available terminal nodes to split on in T_j; p is the number of variables left in the partition of the chosen terminal node; and η is the number of unique values left in the chosen variable after adjusting for the parents' splits.
q(T_j, T*_j), on the other hand, indicates a pruning move, which involves the probability of selecting the correct internal node to prune such that T*_j becomes T_j. This is given as

P(T_j | T*_j) = P(prune) × P(selecting the correct internal node to prune) = P(prune) (1/w*_2),

where w*_2 denotes the number of internal nodes whose two children are both terminal nodes.
This gives a transition ratio of

P(T_j | T*_j) / P(T*_j | T_j) = [P(prune)/P(grow)] × (b_j p η / w*_2).

If there are no variables with two or more unique values, this transition ratio is set to 0.
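As a small worked example, the grow-step transition ratio can be computed as below; p_avail plays the role of p in the formula, and the default move probabilities are assumed equal.

```r
## Transition ratio for a grow proposal (sketch of the formula above).
grow_transition_ratio <- function(b_j, p_avail, eta, w2_star,
                                  p_grow = 0.25, p_prune = 0.25) {  # assumed defaults
  if (eta == 0) return(0)                  # no usable split values left
  (p_prune / w2_star) / (p_grow / (b_j * p_avail * eta))
}
## e.g., grow_transition_ratio(b_j = 4, p_avail = 3, eta = 10, w2_star = 2)
```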
C.1.3 Tree structure ratio
Because T_j can be specified using three aspects, we let P_SPLIT(θ) denote the probability that a selected node θ will split and P_RULE(θ) denote the probability of selecting the given variable and value to split on. Then, based on P_SPLIT(θ) = α(1 + d_θ)^{−β} and because T_j and T*_j differ only at the grown node and its children, the tree structure ratio reduces to

P(T*_j)/P(T_j) = [P_SPLIT(θ)(1 − P_SPLIT(θ_L))(1 − P_SPLIT(θ_R))P_RULE(θ)] / [1 − P_SPLIT(θ)],

where θ is the terminal node grown upon and θ_L and θ_R are its two new children at depth d_θ + 1.
C.2 Prune proposal
Since prune is the direct opposite of the grow proposal, the explicit formula for α(T_j, T*_j) is just the inverse of the corresponding grow ratios.
"year": 2019,
"sha1": "61fe0943027b9dc20cd51a4d215d2bf0dcced264",
"oa_license": null,
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6800811",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "61fe0943027b9dc20cd51a4d215d2bf0dcced264",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Mathematics",
"Medicine",
"Computer Science"
]
} |
Simulation of Single Row of Droplets Impact on a Flat Surface Under High-overload Condition
A two-dimensional model has been developed of a row of droplets impacting a flat surface heated with constant power. A simulation based on the VOF (volume of fluid) method is carried out to model the liquid film flow with no-phase-change heat transfer as droplets hit the flat surface. A series of cases with different droplet velocities and overload accelerations is calculated. The results obtained under high-overload and normal gravity conditions are compared to show the effect of acceleration on the heat transfer performance. KEYWORDS: high-overload environment; overload acceleration; VOF method; no-phase-change heat transfer
INTRODUCTION
Advanced thermal management technology is becoming a major factor restricting the development of electronic components nowadays. Spray cooling has received much attention in recent years because it can remove high heat fluxes.
The major heat transfer mechanisms of spray cooling are heat conduction as droplets hit the surface, forced convection between the liquid film and the heater surface, evaporation at the liquid film surface, nucleate boiling, and secondary nucleate boiling, among others. These all contribute to the high heat removal rate of spray cooling. It is reported that its critical heat flux (CHF) can be up to 1000 W/cm² when water is the coolant [1].
Experiments have been the major method of carrying out this research [2]. However, due to differences in experimental conditions, many conclusions do not agree. Spray cooling involves many processes, such as droplet ejection, droplet interaction, and impact into the liquid film, so it is very hard to model the whole process. Many researchers have attempted to set up models to describe it. Some researchers established a VOF method to simulate the progress of a single droplet impacting a constant-temperature flat surface [3]. Others established a 3D multiphase flow model to simulate the heat transfer in spray cooling [4].
In addition, other advantages, including compact structure, reduced fluid inventory, and low heater surface temperature and temperature gradient, make spray cooling one of the most promising heat transfer techniques for cooling high-power electronics in the aviation and aerospace field. However, aircraft suffer high overloads in maneuvering flight. Airborne thermal control equipment will be affected by overload inertial forces from different directions, and the heat transfer performance is likely to degrade badly [5].
In order to understand the influence of overload in maneuvering flight on the heat transfer performance of spray cooling, a two-dimensional model is developed to study the impact of a row of droplets on a flat surface heated with constant power. In this paper, a model based on the VOF method is set up to simulate the liquid film flow as well as no-phase-change heat transfer as droplets hit the wall surface. Numerous cases with different droplet velocities and overload accelerations are studied. The results obtained under high-overload and normal gravity conditions are compared to show the effect of acceleration. This work tries to provide a reference for the application of spray cooling in high-overload environments.
COMPUTATIONAL MODEL
The objective of this simulation is to understand the heat transfer performance of spray cooling in the single-phase region under high-overload environments. The simulation can analyze the effect of acceleration on the liquid film flow and the heat transfer performance.
Due to the complexity of the studied problem, the following assumptions are made: (1) in the single-phase region, heat convection between the liquid film and the heater surface is the major heat transfer mechanism; (2) the heater surface is stationary and the whole solution domain is in a horizontal or vertical inertial force field; (3) for a small heater surface unit, a single row of droplets impacts it vertically and continuously.
The width of the solution domain is 25 mm and the height is 20 mm in the computational model, with the coordinate origin at the bottom left. In order to simulate the atmospheric environment, the two vertical sides are assigned pressure outlet boundaries and the top side is assigned a pressure inlet boundary. The bottom side is assigned a no-slip boundary condition. The heater is marked with a red line; it has a width of 3 mm and a heat power of 15 W/cm². Air is the primary phase in the solution domain and liquid is the secondary phase. A pressure-based implicit solver applicable to incompressible flow is chosen, and an unsteady laminar model is used for the viscous model. The governing equations in this model include the momentum equation, the energy equation, and the volume fraction equation. Pressure Implicit with Splitting of Operators (PISO) is used for pressure-velocity coupling and the PRESTO! discretization scheme is used in the pressure calculation. The momentum and energy equations are solved using a first-order upwind scheme, and the volume fraction equation is solved using the Geo-Reconstruct discretization scheme. A first-order implicit method is used for time discretization. In addition, the physical parameters of the gas and liquid phases are all constant. The surface tension coefficient of liquid water is 0.073 N/m and the static contact angle between water and the wall surface is 90°. The initial temperature of the air and droplets is 27 °C, and other parameters are shown in Table 1.
DESIGN OF OPERATING CONDITIONS
The normal gravity environment is represented by "0g", while "10g" and "15g" are used to represent the overload acceleration magnitudes suffered by current and future high-maneuver aircraft, respectively.
Chen et al. [6] found that droplet velocity had the most dominant effect on CHF and the heat transfer coefficient. Given this conclusion, droplet velocity is the only parameter studied in this research. The droplets are spaced at 1.6 mm centers and have a diameter of 1 mm. The droplet velocities tested are 2.5 m/s, 4 m/s and 7 m/s, respectively.
In the simulation, droplets with the same initial parameters impact the heater surface continuously, and the calculation time is 19.7 ms. Whether the simulation reaches steady state can be judged from the changes of the liquid film flow and heater surface temperature. In our study, the surface temperature in steady state is obtained as the average heater surface temperature over 17.2-19.7 ms. The simulation conditions are shown in Table 2. The simulations are established under normal gravity and high-overload conditions, respectively. The average surface temperature is used as the criterion of heat transfer performance. From Fig. 1 and Fig. 2, it can be seen that the relationship between average surface temperature and droplet velocity differs when the value and direction of the overload acceleration differ. The details are discussed below. After impact, the inertial force of the droplets provides the momentum, and the kinetic energy is dissipated during spreading. Fig. 3 shows that when the droplet velocity is 2.5 m/s and the initial momentum of the droplets is low, the liquid film breaks easily and its distribution on the heater surface is not uniform. When the droplet velocity is 4 m/s, the liquid film does not break obviously and its distribution is relatively uniform. When the droplet velocity is increased to 7 m/s, the liquid film surface is serrated without breaking up and its distribution is uniform.
Heat Transfer Performance Under Normal Gravity Condition
[Figure 3: snapshots of the liquid film at t = 0.7, 5.7, 10.7, and 19.7 ms for droplet velocities of (a) 2.5 m/s, (b) 4 m/s, and (c) 7 m/s.]

The heat transfer results obtained at different droplet velocities are analyzed, and the effect of velocity on heat transfer performance can be summarized as follows.
(1) With the increase of droplet velocity, the impact force of the droplets on the liquid film is enhanced. As a result, the liquid film flow and convective heat transfer also increase.
(2) Droplets bring air into the liquid film during impact. The low thermal conductivity of air restricts heat transfer when air bubbles adhere to the heater surface. Once the liquid film flow is enhanced, the attachment time of air bubbles on the heater surface decreases and heat transfer is improved.
(3) A larger droplet velocity leads to a larger fluid flux. Increasing the droplet velocity is thus also expected to bring more working fluid into heat exchange with the heater surface per unit time.
(4) With the increase of droplet velocity, more droplets impact the liquid film per unit time, and the air carried into the liquid film increases as a result.
(5) The thickness of the liquid film does not change monotonically with the increase of droplet velocity. The thicker the film is, the harder it is for adhering bubbles to leave the heater surface, and the harder it is for cold working fluid to contact the heater surface and transfer heat.
From the above, the average heater surface temperature decreases and the heat transfer is enhanced as the droplet velocity increases in the normal gravity simulations. With further increase of the droplet velocity, the surface temperature becomes constant.
Heat Transfer Performance Under Horizontal High-overload Condition
It can be seen in Fig. 1 that the heater surface temperature is lower and the heat transfer performance better in the horizontal high-overload environment than in the normal gravity environment. It is also indicated that the surface temperature first decreases and then increases with increasing droplet velocity, and the lowest surface temperature is obtained at a droplet velocity of 4 m/s. When the droplet velocity is 4 m/s, the surface temperature obtained under the 15g condition is lower than that obtained under the 10g condition; at other droplet velocities, the 10g condition gives the lower surface temperature. Overload acceleration changes the liquid film flow behavior on the heater surface, including the liquid film velocity and thickness distribution, which affect the heat transfer performance. Fig. 4 shows that at a droplet velocity of 2.5 m/s, the distribution of the liquid film on the flat surface is very non-uniform, and higher overload acceleration leads to a more non-uniform distribution. Moreover, the liquid film on the left side breaks badly, while that on the right side piles up and the film thickness increases obviously. When the droplet velocity is 4 m/s, the distribution of the liquid film is relatively uniform at the center of the heater surface, but the liquid film still breaks or accumulates at the edges of the surface. When the droplet velocity is 7 m/s, the distribution of the liquid film is relatively uniform without breaking or accumulating. Thus it can be seen that overload acceleration changes the flow and distribution of the liquid film on the flat surface, and the uniformity of the liquid film distribution becomes worse as the droplet velocity decreases.
Heat Transfer Performance Under Vertical High-overload Conditions
Under the horizontal high-overload condition, the average heater surface temperature is lowest at a droplet velocity of 4 m/s. However, under the vertical high-overload condition, the heat transfer performance is quite different because of the acceleration direction. Fig. 2 shows that when the droplet velocity is 4 m/s, the average heater surface temperatures are 51.26 °C, 45.98 °C and 54.10 °C under normal gravity, overweight and weightlessness conditions, respectively. The temperature decreases in the overweight environment and increases in the weightlessness environment. It is also obtained from Fig. 4 that when the droplet velocity is 7 m/s, the acceleration shows almost no effect on heat transfer. Fig. 5 shows that when the droplet velocity is 4 m/s, the surface temperature decreases but the liquid film breaks obviously under the overweight condition, while under the weightlessness condition the liquid film continuity is relatively good but the surface temperature increases due to the increase of the liquid film thickness. Fig. 6 shows that when the droplet velocity is 7 m/s, the liquid film continuity is relatively good under both overweight and weightlessness conditions, and the surface temperature is almost constant. A higher droplet velocity means a larger initial momentum after the droplets impact the heater surface, and in this case overload acceleration shows little influence on the liquid film flow and hence on the heat transfer performance as well.
Dangers of Liquid Film Breaking and Accumulating
The computational model is modified slightly in this section to study the danger of liquid film distribution non-uniformity under high overload. The length of the heater surface is extended to 23 mm in the new model. The acceleration is 15g to the left in the horizontal direction, and droplets with a velocity of 2.5 m/s are spaced at 1.2 mm centers.
In this case we find that the average surface temperature fluctuates cyclically over time, with a period of 2 ms and a range of 60-80 °C. The heater surface temperature at the point (2, 0) shows a rapid increase. This point is in the area of liquid film breaking, and when there is no liquid film on the flat surface, the heater surface is in direct contact with the air, so the heat transfer coefficient is very small at that time. As a result, the local temperature increases sharply in a short time. When the surface temperature exceeds 100 °C, phase change processes are likely to play an important role in the total heat transfer and the computational model is no longer applicable. Even so, the simulation still shows the danger of film breaking for heat transfer. On the other hand, the temperature period is also 2 ms, which indicates that film breaking has a direct effect on the average heater surface temperature. The heater surface temperature and liquid film thickness at the point (23, 0) increase continuously as the liquid film accumulates at that point.
The results above illustrate that film breaking and accumulation will lead to high local temperatures, which pose a serious threat to component working performance.
"year": 2017,
"sha1": "15da1e5710aea33ed701d4b4078aa94deaa14d59",
"oa_license": null,
"oa_url": "http://dpi-proceedings.com/index.php/dtetr/article/download/10020/9573",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "81ab77c14eedaadfb53e95937070e67620b7f637",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
Single crystal flake parameters of MoS2 and MoSe2 exfoliated using anodic bonding technique and its potential in rapid prototyping
Rapid prototyping of devices using exfoliated molybdenum disulphide (MoS2) and molybdenum diselenide (MoSe2) requires an experimental protocol for maximizing the probability of realizing flakes with the desired physical dimensions and properties. In this work, we analyzed the size and thickness distribution of MoS2 and MoSe2 single crystalline flakes exfoliated using the anodic bonding technique and established a correlation between the physical dimensions of the flakes and the bonding parameters. Anodic bonding was carried out by applying a fixed voltage of 200 V at a set temperature of 150 °C for four different bonding time intervals. On analyzing the flake parameters from the four anodic bonded substrates using optical and atomic force microscopy, it is found that the probability of getting flakes with large lateral size (>200 μm) increases as the bonding time interval is increased. Most of these large flakes are more than one hundred monolayers thick, and a tiny fraction of them are only a few monolayers thick. A similar trend was also observed for MoSe2 single crystals. To demonstrate the feasibility of this technique in rapid prototyping, ultrathin MoS2 flakes were directly bridged between two ITO electrodes and their transport properties were investigated. Micro-Raman and photoluminescence studies were taken on selected regions of the thicker and thinner exfoliated flakes and their physical properties are compared.
Introduction
Two-dimensional (2D) transition metal dichalcogenides (TMDCs) are an emerging class of materials for fabricating nanoscale devices, as they have a non-zero tunable band gap, strong spin-orbit coupling, high electron mobility and a large optical absorption window [1]. The simplest method to realize highly crystalline, pure and atomically thin nanosheets of TMDCs is mechanical exfoliation [2]. Even though this technique is not scalable, unlike other growth techniques such as molecular beam epitaxy, chemical vapour deposition and chemical exfoliation, it offers the advantage of realizing high quality films for rapid prototyping [3-5]. Over the years, such mechanically exfoliated ultrathin films have greatly contributed towards understanding many of the fundamental properties and potential device applications of TMDCs and other layered materials [6,7].
The main bottleneck in the conventional exfoliation process is the small flake size and low yield of single layers. The typical size of exfoliated flakes is often less than 200 nm, which requires electron beam lithography for device fabrication. One possible route to increase the size of the flakes is to exfoliate on gold coated substrates [8]. Even though that technique produced larger flakes, the necessity to pre-deposit the substrate with gold requires post-exfoliation chemical treatment, which affects the quality of the flakes. Considering the fact that mechanical exfoliation is very cost effective, it would be a great advantage if one could use this method for rapid prototyping of devices without using expensive lithographic techniques. To realize this objective and overcome the size limitation of the conventional exfoliation process, the best alternative is the anodic bonding technique [9]. In this method, exfoliation is carried out on a sodium rich pyrex glass by applying a high voltage and temperature [10]. The applied voltage triggers the movement of ions inside the glass substrate to create a space charge region at the boundary. The resultant electrostatic force pulls the flakes towards the substrate, and an ionic type bonding is established between the flake and the substrate [11]. On separating the scotch tape with the precursor TMDCs from the substrate, high quality large single layer flakes with fewer grain boundaries are obtained [12]. Such exfoliated flakes are easily dispersible in solution by ultrasonication and can subsequently be transferred to any substrate or to pre-patterned electrodes for experimental investigations.
Even though most of the works on anodic bonding and other exfoliation techniques report the presence of large-area flakes, they seldom highlight the fact that such large flakes are found among the debris of hundreds of other exfoliated flakes of varying size and thickness. The primary objective of this work is to provide a realistic analysis of the size and thickness distribution of TMDC flakes exfoliated using the anodic bonding technique with different bonding parameters. The analysis is then used to correlate the likelihood of getting flakes of a particular size and thickness with the bonding parameters. Furthermore, by using the bonding parameter that favourably yielded flakes with sizes >200 μm, a TMDC flake was directly bridged between two ITO electrodes about 200 μm apart and its transport properties were investigated.

Experimental details

Figure 1 illustrates the process flow for obtaining very thin flakes of MoS2 and MoSe2 on a glass substrate using the anodic bonding technique. A detailed explanation of the physics of the anodic bonding technique can be found elsewhere [9]. Single crystalline flakes of MoS2 and MoSe2 (around 1-2 mm in size) were carefully cleaved from the bulk crystal using a scotch tape. The thickness of the freshly cleaved precursor flake was further reduced by exfoliating three to four times before using it for anodic bonding (figure 1(a)). For each bonding trial, freshly cleaved TMDC precursors were used and the same tape was never used more than once.
The scotch tape with the 2D precursor flakes was then stuck onto the glass substrate and placed on a heating platform. Two copper electrodes were attached, one on top of the scotch tape and the other at the bottom of the glass substrate, and the two contacts were connected to the Keithley source-meter (figure 1(b)). The anodic bonding procedure was carried out on four different substrates by applying a voltage of 200 V at 150 °C for time intervals of 15, 30, 45 and 60 min. A small mechanical pressure was exerted using a 100 g weight on top of the scotch tape to improve the adhesion between the glass substrate and the scotch tape. Once the bonding time was completed, the scotch tape was peeled off gently, leaving behind flakes of different geometry on the glass substrate (figures 1(c) and (d)). The exfoliated flakes were then investigated using optical and atomic force microscopy (AFM) to study their size and thickness distribution. Raman and photoluminescence (PL) studies were carried out to ascertain the active phonon modes and the optical band gap of the exfoliated flakes. A simple two-terminal device was fabricated without using any lithographic technique by directly exfoliating the flakes between two indium tin oxide (ITO) electrodes on the glass substrate. The transport properties were then investigated by measuring the resistance as a function of temperature in a variable range closed cycle cryostat.
Results and discussion
Optical microscope images of very thin single crystals of MoS2 and MoSe2 flakes exfoliated under the four different bonding parameters are shown in figures 2 and 3. The images were taken by illuminating white light from the back of the substrate.
Most of the flakes appear either blue or greenish in colour due to differences in their optical absorption characteristics. When the flakes are thicker, they absorb light of longer wavelengths and transmit mostly the shorter wavelength (blue) radiation. As the thickness of the flakes decreases, the intensity maximum of the transmitted light is red-shifted, giving rise to a greenish tinge. The thicker bluish flakes are predominantly larger, with lateral sizes greater than 60 μm, whereas the thin greenish ones have lateral sizes varying between 10 and 60 μm. Furthermore, it can also be observed that the number of flakes with larger lateral size increases with the bonding time. This is because, on initial contact, many tiny air gaps are formed between the precursor flakes and the glass substrate. As time progresses, the flakes are gradually pulled towards the substrate by the force exerted by the applied electrostatic field. With the area of physical contact increasing with the bonding time, the probability of getting flakes with large lateral size increases [13] (figure 5(a)). Table 1 shows that, as the bonding time interval is increased from 15 to 60 min, the average flake size increases. Therefore, to get a reasonable number of flakes with size greater than 200 μm, the ideal bonding time is 45 min or more.
To ascertain the thickness distribution of the flakes, we carried out AFM measurements on all four anodic bonded substrates. We focused our measurement only on flakes whose lateral size is 50 μm or more. This 50 μm limit was fixed for two reasons: first, it gives a rough estimate of the thickness distribution of larger flakes, which is the focus of this study; second, AFM measurements are very time consuming and it is not practically feasible to study the thickness distribution of all the exfoliated flakes. The results of our AFM measurements are shown in Table 1. From the thickness distribution data across all four bonding intervals, we found that in MoS2 the maximum flake thickness varied between 730 and 840 nm and the minimum thickness ranged between 80 and 300 nm. In comparison to MoS2, the flakes are significantly thicker in MoSe2 for all four bonding time intervals, with the maximum flake thickness varying between 0.6 and 1 μm. This was expected, as the energy cost of peeling off the layers is lower for MoSe2 [14]. Unlike the lateral size distribution, there is a lack of correlation between the bonding time interval and the flake thickness, owing to the small number of samples taken for the study. However, the thickness distribution data roughly indicate that the most expected thickness for flakes larger than 50 μm is around 250 nm. The AFM images taken across the surface of those flakes indicate that the flake surfaces are relatively smooth without any visible cracks (supplementary information, figure S1, available online at stacks.iop.org/JPCO/4/105015/mmedia). Large monolayer flakes (size > 50 μm) were obtained in the bonding trials, but the likelihood of getting them was highly random. To elaborate, the number of bonding trials (with the same bonding parameters) one should carry out to realize such large monolayer and bilayer flakes remains unpredictable. A monolayer flake of MoSe2 which was observed in one of the bonding trials is shown in figure 5(b) along with its Raman spectrum.
The optical bandgap and active phonon modes of the exfoliated MoS2 and MoSe2 flakes were investigated using PL and Raman spectroscopy, respectively. Both studies were performed on a large number of flakes for all the bonding parameters. However, only the spectra corresponding to the thickest and thinnest flakes are shown, as they give the lower and upper bounds for the optical band gap and phonon mode distribution (figures 6 and 7). For the sake of clarity, detailed analysis is limited to the 60 min bonding time, and the data for all the other bonding parameters are provided in the supplementary information (figures S2-S5).
The thinnest and thickest MoS2 and MoSe2 flakes used in this study have lateral sizes of around 7 and 200 μm, respectively. At room temperature, the exciton binding energy in MoS2 and MoSe2 is very weak and the excitons are therefore mostly free [15,16]. Unlike for 'bound' excitons, the PL peak due to the recombination of free excitons is more or less equal to the optical band gap. We use this fact to estimate the band gap of the flakes directly from the PL peak wavelength. The thinnest flake has an optical bandgap of 1.7 eV whereas the thicker one is around 1.3 eV (figure 6(a)) [17]. The higher bandgap for the thinner flakes is due to the strong coupling between the adjacent layers: as thickness decreases, the inter-layer coupling becomes stronger and the energy separation between the top of the valence band and the bottom of the conduction band increases [18]. In the case of MoSe2, the PL shift between the thickest and thinnest flakes is much smaller, around 0.12 eV. This is due to the fact that the weak inter-layer interaction in MoSe2 makes the bandgap vary very slowly with layer thickness. The optical bandgap of the thinnest MoSe2 flake is 1.45 eV, which is about 0.1 eV less than the monolayer bandgap [19]. The optical band gaps of the exfoliated flakes for all the other bonding parameters are tabulated in the supplementary information (table ST1).
Finally to demonstrate the feasibility of this bonding technique in rapid prototyping of devices, two and four terminal devices were fabricated by directly exfoliating n-MoS 2 flakes between the pre-patterned ITO electrodes (figures 8 and S8). The flakes were exfoliated using 60 min bonding time. The experimental procedure for realizing 200 μm gap pre-patterned ITO electrode was provided elsewhere [21]. Since the optical contrast between the etched and the unetched segments of the ITO portion was very poor, a solid line demarcating the boundary had been drawn as a guide to the eye. In figure 8, one can notice that the exfoliated MoS 2 flakes bridges the electrodes in two ways. One is by inter-connected flakes ( figure 8(a)) and other is through a large single MoS 2 flake directly bridging the electrodes ( figure 8(b)).
For the two terminal measurement, the bridged sample was loaded in a closed cycle cryostat and the currentvoltage (I-V) measurement was taken at different selected temperatures as the sample was cooled down to 10 K ( figure S9). The linearity of the I-V curves all the way upto 10 K indicates that the contacts are ohmic. On reaching 10 K, the sample was allowed to warm up naturally and the current was recorded as a function of temperature (T) by applying a constant bias of 3 V. To determine the effect of contact resistance, four terminal measurement was carried out by injecting a constant current of 100 nA and the voltage was measured as a function of temperature (figure S8). The resultant resistance (R) versus temperature (T) plot shows identical characteristics for both two and four terminal measurement which is a clear indication that the contact resistance plays no significant role in our measurement (figure 9). Two distinct conduction regimes are seen in the transport measurement [22]. A regime below 70 K dominated by phonons and the other above 70 K dominated by thermal activation of charge carriers. In the phonon dominated regime, the current increases with decrease in temperature as the number density of phonons decreases. In the thermally activated regime, the current increases with temperature as more and more charge carriers are activated from the donor sites to the conduction band. Given the fact that the Mo interstitials and S vacancy are deep level defects, we spectulate that carrier excitations are mostly from the shallow Hydrogen donor sites in MoS 2 [23].
Summary and conclusion
In summary, we investigated the flake parameters of MoS2 and MoSe2 exfoliated using the anodic bonding technique. The analysis revealed that the probability of getting flakes with lateral size >200 μm is high when the bonding time interval is 45 min or more. The larger flakes mostly have a thickness greater than 300 nm, and the smaller ones are only a few monolayers thick. The band gap distribution and phonon modes were studied using PL and Raman spectroscopy, respectively. The transport properties of the exfoliated flakes were then studied by directly bridging them between two ITO electrodes. The study revealed two transport regimes, one dominated by phonons and the other by thermally activated carriers.

Figure 9. Resistance versus temperature plot for n-MoS2 directly bridged between two ITO electrodes. The dotted line demarcates the boundary between two different resistance regimes. Inset: resistance versus temperature plot for n-MoS2 directly bridged between four ITO electrodes (four-terminal measurement).
"year": 2020,
"sha1": "f7093012b835f81746c7638fad86e2f821bc3f1c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1088/2399-6528/abc296",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "3033c78c3c2313e844bc84bb11a94654d6876fae",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
A Clinical Approach to Juvenile Parkinsonism During Pregnancy: Case Report
Juvenile parkinsonism is a rare disease affecting patients younger than 21 years of age. When superimposed on pregnancy, most physicians fear its health complications and question the role of treatment and its safety for the fetus. This case presents a 37-year-old woman diagnosed with juvenile parkinsonism who was blessed with her second child eight years after her first. Despite all the odds, concerns, and warnings from family and physicians, the patient was determined to conceive, avoiding all means of contraception. During pregnancy, the patient experienced multiple hypoglycemic attacks and was diagnosed with gestational diabetes mellitus, which was controlled accordingly. The patient also suffered from motor impairments that worsened with the progression of pregnancy; however, she regained her previous motor function upon delivery. The expectation that pregnancy may permanently worsen the symptoms of Parkinson's disease is not well established; some pregnancies are uncomplicated by parkinsonism yet complicated by pregnancy-induced medical conditions. As demonstrated in this case, family support and care, alongside continuous maternofetal monitoring, aid in the success of pregnancy in patients with juvenile Parkinson's disease regardless of their risks.
Introduction
Parkinson's disease (PD) is a common neurodegenerative disorder involving the degeneration of the dopaminergic neurons of the basal ganglia. Its symptoms are subdivided into motor symptoms, such as resting tremor, bradykinesia, and rigidity, and non-motor involvements, including autonomic, sensory, sleep, and neuropsychiatric impairments.
The prevalence of PD is age-dependent, with about 5% of cases occurring in patients younger than 40 years of age; hence, PD is commonly missed and under-diagnosed in this population. PD is more common in males than females, with a ratio of 3:2. However, the reasoning behind such gender predominance is still undetermined, although some studies have indicated a protective role of estrogen, which reduces the rates of the disease in females.
Since the prevalence of PD in individuals younger than 40 years is low, the prevalence of the disease during pregnancy is correspondingly low. Multiple concerns exist for patients with PD and pregnancy, mainly regarding the safety of both the mother and the fetus during conception, gestation, and delivery. Because of the low prevalence of PD in pregnancy, there are no established management plans or guidelines for such rare cases, since the data from the available publications have numerous limitations. However, the cases with a positive and successful outcome are preponderant. 1
Case Presentation
A 37-year-old woman, gravida 3, para 1, abortion 1, was admitted to the Bahrain Defense Force (BDF) hospital at 23 weeks of gestation. The patient had been diagnosed with juvenile parkinsonism at the age of 18 years in Saudi Arabia through clinical evaluation and radiological confirmation. Initially, she suffered from bilateral upper limb paresthesia with upper limb postural tremors, which progressed to bilateral upper limb stiffness; at the age of 28, the stiffness advanced to involve both upper and lower limbs. In 2011, the patient developed an unsteady shuffling gait and impaired short-term memory. As for the neuropsychiatric aspect of PD, the patient initially suffered from insomnia, then depression and generalized anxiety disorder. The patient also presented with postural hypotension and hyperhidrosis. The patient denied any history of dysphagia, urinary incontinence, or stool incontinence. The patient's symptoms were aggravated by stress and anxiety and were significantly relieved by levodopa. No history of any substance abuse was reported. The family history of PD and its sub-entities was negative. No genetic testing was conducted to confirm the genotypic mutation involved.
The patient was initially started on levodopa/carbidopa (a dopamine precursor combined with a peripheral decarboxylase inhibitor, which increases dopamine in the central nervous system (CNS)) and pramipexole (a dopamine agonist) at the age of 20, and subsequently developed peak-dose dyskinesia and wearing-off phenomena with episodes of sudden freezing while sitting; hence amantadine, an antiviral associated with increased dopamine in the CNS, was added. Escitalopram, a selective serotonin reuptake inhibitor, was used to alleviate the neuropsychiatric symptoms. Multiple neurosurgical interventions were attempted, with her first deep brain stimulator (DBS) inserted in April 2014; unfortunately, it was removed due to the complication of a hemorrhagic stroke, resulting in right-sided spastic hemiparesis. In 2016, another DBS was inserted successfully in Germany.
During the patient's first pregnancy in 2012, the stiffness was exaggerated, and she suffered from fatigue and severe body aches. The patient was then on the following medications: pramipexole 0.18 mg and levodopa/carbidopa. Eventually, in the 31st week of gestation, due to the severe exacerbation of symptoms, the patient delivered a baby girl by cesarean section with a birth weight of 1.7 kg. The baby was kept in the nursery for 2 months and is currently healthy. Following her first pregnancy, the patient desired to conceive again, although she had a first-trimester spontaneous miscarriage in 2019.
In 2020, the patient had a positive home pregnancy test, which was confirmed at the health center. She was not able to follow up with her neurologists in Saudi Arabia and Germany due to the coronavirus disease 2019 (COVID-19) pandemic, and hence tapered her medications down by herself, taking pramipexole 0.52 mg and Stalevo 100 mg. No special tests or investigations were ordered by neurologists. She had continuous follow-ups with her obstetrician throughout the pregnancy, which were all insignificant until the second trimester, when the patient reported feeling dizzy and collapsing on multiple occasions, along with progressive motor impairments and occasional loss of consciousness; she was therefore admitted for further investigations.
Upon admission at 23 weeks of gestation, the patient was medically evaluated and was oriented to time, place, and person. She had no auditory or visual deficits and showed good facial and constructive organization. Cognitive functioning was adequate on short-term cognitive screening: the mini-mental state score was 27/30, with minor deficits in her calculation skills. Symptoms were more predominant on the right side, and she suffered from mild postural instability. The sensory examination was intact. The patient had no dysarthria and no facial or gaze paresis. There was hyperkinesia of the left side and right-sided hemiparesis, with no leg dropping nor pronation of the right arm. A disturbance of fine motor skills corresponding to juvenile Parkinson's syndrome was reported. Due to the COVID-19 pandemic, the patient was not able to follow up on her DBS device in Germany, and this might have contributed to the worsening of her motor functions.
Investigations were carried out, and the patient was diagnosed with gestational diabetes and was discharged on metformin (a biguanide that increases insulin sensitivity) to obtain glycemic control. With regard to her motor impairments and loss of consciousness, the patient was dependent on her husband, who provided care for her. The patient was re-admitted at the 34th week of gestation for an elective cesarean section, delivering a healthy baby girl weighing 2.5 kg with an Apgar score of 9 at 5 minutes. Further neonatal examination revealed no anomalies. Following discharge, the patient was competent to care for her two daughters with the help of her husband and family, regaining her prior motor function.
Discussion
PD is mostly known to affect the elderly. However, in less than 5% of cases, the disease is diagnosed before 40 years of age and is known as early-onset Parkinson's disease (EOPD), which is subclassified into juvenile parkinsonism and young-onset Parkinson's disease (YOPD). Patients with juvenile parkinsonism have the onset of their symptoms before 21 years of age, while patients with YOPD have their symptoms appear between the ages of 21 and 40. 2 In this case, the patient's signs and symptoms first appeared at the age of 18 years.
A systematic analysis compared the prevalence and morbidity of PD from 1990 to 2016. This study showed that in 2016 there were 6.1 million patients with PD, whereas in 1990 the prevalence was only 2.5 million. This significant increase in the global burden of PD is attributed to a larger population, increased life expectancy, and environmental and demographic factors. 3

Even though no genetic confirmation was obtained in this case, as juvenile parkinsonism is a rare entity within PD, it is important to identify the mutations involved in its pathogenesis. Some of the genes linked with early-onset Parkinson's have been successfully identified, including impairments in PRKN, PINK1, and DJ-1; numerous further mutations have yet to be identified. Multiple autosomal recessive variants of juvenile Parkinson's have been identified, some of which are associated with mutations in the DNAJC6 gene, which encodes the protein auxilin. 4 Another rare autosomal recessive variant of juvenile Parkinson's is Kufor-Rakeb syndrome (KRS), which is linked to a mutation in the ATP13A2 gene 5 , and other cases of autosomal recessive juvenile Parkinson's have been documented to involve a parkin gene mutation (ARJP/PARK2), which is associated with a marked response to levodopa as well as with levodopa-induced dyskinesia. 6

The diagnosis of PD is made clinically with the aid of history, examination, and responsiveness to levodopa. Magnetic resonance imaging (MRI) is used to exclude differential diagnoses, while morphometric and functional MRI, along with transcranial Doppler ultrasound studies, are being used to differentiate idiopathic Parkinson's from other parkinsonian disorders. Radionuclide imaging such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT) can be used to assess dopamine metabolism and deficiency by using a dopamine transporter ligand, revealing an asymmetrical reduction in uptake, mainly in the dorsal striatum. No biomarkers have been identified that are distinctly associated with PD; however, increased alpha-synuclein in the cerebrospinal fluid indicates possible cognitive impairment. 7

The semiology of Parkinson's disease is not exclusive to it, coinciding with the clinical features of other diseases, including Wilson's disease, dopa-responsive dystonia, and drug-induced parkinsonism. The clinical manifestations of PD include motor and non-motor manifestations. The motor signs include resting tremor, cogwheel rigidity, and bradykinesia, along with impaired postural reflexes and an unstable, mainly shuffling, gait. There are similarities and differences when comparing early-onset and late-onset Parkinson's disease (LOPD); patients with EOPD have more prominent muscle stiffness and marked levodopa-induced side effects such as the "wearing-off" phenomenon, "on-off" dystonia, and peak-dose dyskinesia. Patients with juvenile Parkinson's, specifically, have more prominent dystonia and akinetic rigidity. Patients with LOPD more often present with gait disturbances and postural instability. The non-motor manifestations of PD include cognitive and psychological impairments, including psychosis, confusion, and hallucinations. In EOPD, depression was more prominent, which might be linked to a longer duration of the disease and its morbidity. Paresthesia, restlessness, and hyperhidrosis were profound in patients with EOPD. 8

This study supports the current case report findings, as the patient experienced significant levodopa-induced side effects and depression, in addition to the general signs and symptoms of Parkinson's disease.
The incidence of pregnancy in patients with Parkinson's is rare. One study postulated that the clinical manifestation of juvenile Parkinson's might be hormonally linked, since the authors noted that, in their patients, symptomatic exacerbations occurred at estrogenic surges, such as between ovulation and menstruation, antenatally, and in late pregnancy. Estrogen functions in the regulation of dopaminergic neurotransmission in the basal ganglia and hence alters the symptoms of Parkinson's.
A survey conducted in the United Kingdom showed that 65% of women had worsened symptoms despite continuous treatment, speculating that this was associated with altered serum drug concentrations due to the physiological increase in plasma volume and changes in gastrointestinal absorption and renal excretion during pregnancy. The study emphasized the continuation of treatment before and during pregnancy and the importance of a sufficient multidisciplinary team to monitor pregnant women with Parkinson's disease; however, there are no clear guidelines for an exact management plan. No data indicate increased fertility, maternofetal, or intrapartum complications in women with PD. This survey indicated that there is no contraindication to normal vaginal delivery in patients with PD and that a cesarean section should not be performed solely because of the disease. 9 However, in this case, the patient had two cesarean sections, which could be attributed to her motor impairments.
Postpartum maternal care might be affected by each individual's level of motor deterioration. Breastfeeding is an option despite the limited data on the possible transfer of medications to the infant via breastmilk and on the role of levodopa in suppressing lactation; in the cited study, two women breastfed their infants comfortably. 9 However, even though the data suggest that no harm comes to infants from lactation, some mothers, like our patient, remain reluctant to breastfeed.
Conclusion
Despite the limited data on juvenile Parkinson's during pregnancy, there are successful uncomplicated cases, and some complications are transient, resolving upon delivery. In such cases, it is important to monitor not only the symptoms of parkinsonism but also pregnancy-induced conditions, since these too are detrimental to maternofetal safety. As illustrated by previous studies, a multidisciplinary team plays an important role in managing pregnant women with juvenile parkinsonism. Despite the disease's high morbidity and its impact on patients' lives, patients can function normally in social and familial settings. | 2022-08-05T15:15:29.015Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "ff8fba1f4a16c816427e4ec4c40dd5d81c8b4f31",
"oa_license": "CCBYNC",
"oa_url": "https://www.bhmedsoc.com/jbms/media/Full_Text_PDF/JBMS253Full_Text_PDF.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "73412ed79974d5ada92896c8c4510273e90b5f10",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": []
} |
202918585 | pes2o/s2orc | v3-fos-license | Tree cover mapping based on Sentinel-2 images demonstrate high thematic accuracy in Europe
Highlights
• A tree map was created using unsupervised classification for six Sentinel-2 tiles.
• The combination with the highest agreement with the NFI was the bands 2, 3, 6 and 12.
• Good qualitative agreement between the present map and the NFI.
• The map shows between 8% and 79% greater tree cover compared to previous estimates.
• The overall accuracy of the present map was assessed to be up to 90%.
Introduction
Trees serve as a major carbon pool contributing to important feedback mechanisms to the earth's climate (Bonan, 2008). Likewise, trees are known to release gases, such as Biogenic Volatile Organic Compounds (BVOC) (Kesselmeier and Staudt, 1999), and Primary Biological Aerosols (PBA), such as pollen (Pauling et al., 2012) and fungal spores (Sadys et al., 2014), to the atmosphere. Repeatedly, it has been demonstrated that location and abundance of trees are important in relation to the release of VOC (Arneth et al., 2011;van Meeningen et al., 2016) and their contribution towards production of secondary organic aerosols (Oderbolz et al., 2013;Tchepel et al., 2014) or PBA (Hernandez-Ceballos et al., 2011;Pauling et al., 2012). Furthermore, the spatial and temporal distribution of trees is known to be important for commercial, recreational and social activities in society (FAO, 2015) as well as the ecological or biodiversity functionality of the landscape (e.g. Ren et al., 2013;Schindler et al., 2013). It is thus evident that the spatial distribution of trees and changes in the spatial distribution of trees over time has a large impact on human health and the environment.
A range of tree mapping methodologies has been presented in the literature. Focus in this section will be on mapping of trees in the United Kingdom (UK) due to the scarcity of map comparisons for other countries. Skjøth et al. (2015) assessed the accuracy of Corine Land Cover (Bossard et al., 1994) and Globcover against the National Forest Inventory (NFI) over the UK (Forestry Commission, 2001). Despite reported high thematic accuracy for Corine Land Cover (Büttner and Maucha, 2006; Caetano et al., 2006) and Globcover, large biases were found in these compared to the national dataset. Similar map comparison exercises have been carried out at European scale (Seebach et al., 2011a, b). In old cultural landscapes, like the UK, many trees are located in smaller patches, such as hedgerows, or in urban areas (McInnes et al., 2017). A remote sensing approach was used by Kempeneers et al. (2011), who mapped European scale tree cover as presence/absence, and Hansen et al. (2013), who mapped global tree cover as a percentage, both with a spatial resolution of 25-30 m. An estimate by the authors shows a relative difference of 26% in the area of the total UK tree cover between Hansen et al. (2013), Kempeneers et al. (2011) and Forestry Commission (2011), which indicates a considerable uncertainty relating to the total tree cover in the UK. Moreover, Hansen et al. (2013) does not distinguish between broadleaved and coniferous trees, a minimum requirement in a range of scientific applications such as air quality modelling (Oderbolz et al., 2013; Steinbrecher et al., 2009), dynamic vegetation modelling (Hickler et al., 2012), the generation of national forest inventories (Paivinen et al., 2009) and modelling of climate change including future woodland changes (e.g. Jones et al., 2009). As seen, there is a general lack of consensus on mapping methodologies (Hansen and Loveland, 2012), and comparisons of the thematic accuracy of the resulting tree maps are rather scarce.
The Copernicus Sentinel-2 satellites, which were launched in 2015 and 2017, have four bands with a spatial resolution of 10 m and a total of 13 bands, with spatial resolutions ranging from 10 m to 60 m, specifically designed for vegetation monitoring (Drusch et al., 2012). The high spatial, temporal and radiometric resolution of data from this satellite should enable the creation of tree cover maps with a higher thematic accuracy than previously achieved; recent examples include Grabska et al. (2019) and Korhonen et al. (2017). Given that this is a new satellite, a substantial amount of research on the development of tree mapping algorithms, as well as on the accuracy assessment of said algorithms, has to be done in the years to come. To contribute to this process, a tree map for six selected Sentinel-2 tiles was created, the optimal choice of spectral bands as input to the map was analysed, and the accuracy of this map was assessed.
Section 2.1 describes the creation of the tree map using unsupervised classification. The data included in the respective analyses are specified in Section 2.2, Section 2.3 describes the analysis of the optimal choice of spectral bands, and Section 2.4 details how the accuracy assessment was performed. The study area is described in Section 2.5 as a foundation for the discussion of the map accuracy. The results and discussion are presented in Section 3 and the conclusion in Section 4.
Tree cover mapping
In order to support monitoring processes, reduce the cost of development and increase the production speed, the mapping methodology should proceed without analyst interference (Hansen and Loveland, 2012). The tree map in the present study was therefore created using an unsupervised classification approach. The tree mapping algorithm consisted of a number of steps:
1. Removing pixels with cloud cover, defect pixels, no-data pixels, and saturated pixels using the accompanying masks for the individual Sentinel scene.
2. Resampling all bands to 10 m × 10 m using nearest neighbour interpolation.
3. Normalizing the bands using mean centring and division by the standard deviation to remove effects of the different scales of reflectance in the images obtained in the different bands, following the approach of e.g. Nguyen et al. (2018). Tests showed that the accuracy of the mapping procedure increased considerably through adding this step.
4. Classification of the satellite image using unsupervised k-means classification within R. The k-means algorithm in R was very time consuming on the 5 GB tiles from Sentinel-2. The approach was therefore improved numerically by using the Intel® Data Analytics Acceleration Library (DAAL) (https://software.intel.com/en-us/intel-daal) linked directly within R. The unsupervised classification was performed with 25 classes, based on the authors' experience with similar classification exercises, and with a maximum of 20 iterations to limit calculation time. This number is higher than the previously identified optimum number of 12 classes in specific Landsat scenes (Yıldırım, 2014) and therefore ensures a sufficient number of classes without compromising quality. Sensitivity tests showed that the mapping algorithm was not particularly sensitive to these choices.
5. The classified image was filtered to remove non-vegetation pixels by calculating the Normalized Difference Vegetation Index (NDVI) (Tucker, 1979) for the entire image using bands 4 (red) and 8 (near-infrared) and setting a lower threshold. The lower NDVI threshold for vegetation was found by analysing the distribution of NDVI values in the image that was mapped as forest in Corine Land Cover. The assumption was that pixels with an NDVI less than the median minus the distance from the median to the 95-percentile of the distribution were non-vegetation pixels (e.g. buildings, roads or lakes found in forests), which according to the definition can be expected to be present in forested areas identified by Corine Land Cover (a code sketch of this threshold is given after this list). The removal of these pixels also has the effect of removing clouds, shadows and other artefacts not included in the accompanying mask files.
6. The classes belonging to respectively coniferous and broadleaved trees were labelled using the forest classes from Corine Land Cover as training data but with error pixels and non-vegetation removed (step 1 and step 5). Broadleaved and coniferous forests in Corine Land Cover can contain up to 25% of other land cover types. Moreover, Corine Land Cover has a minimum mapping unit of 25 ha (Bossard et al., 1994). These two properties introduce noise in the training data. To circumvent this problem, an iterative procedure, with the aim of finding the dominating classes from the classification performed in step 4 for respectively broadleaved and coniferous forests, was developed:
a. Within each Sentinel-2 scene, the polygons for respectively coniferous forest and broadleaved forest from Corine Land Cover were sorted in descending order as a function of their area. The iterations proceeded from the largest polygons to the smallest, based on the assumption that the uncertainty was largest on the smallest polygons in Corine Land Cover, an assumption that was confirmed during the algorithm development phase.
b. The largest polygon for broadleaved and coniferous trees was then masked out from the classified image (step 4) after the filtering (step 5), and the proportion of pixels in the respective classes defined by the k-means algorithm was calculated separately for broadleaved trees and coniferous trees.
c. This procedure was then repeated for the second largest polygon for respectively broadleaved and coniferous trees, and the pixels from the new polygon added to the distribution created in step b).
d. Convergence was checked by comparing the percentage change in each class in the distribution between iterations, and convergence was reached when the largest change in a class was less than 1%. If convergence was not reached, step c) was repeated with the next largest polygon until convergence. As the polygons get smaller and smaller, convergence will eventually be achieved in this way.
e. All 25 classes from the k-means algorithm applied to the entire image and extracted within the Corine Land Cover forest areas, without non-vegetation classes, were then labelled as either mostly broadleaved or mostly coniferous trees based on which category had the largest proportion of the selected class.
f. Subsequently, for respectively the mostly broadleaved classes and the mostly coniferous classes, a k-means clustering was applied to divide the distribution into two classes: dominating and non-dominating. This resulted in a subset of the 25 classes where the forest type could be identified.
g. The dominating classes were then labelled as either broadleaved or coniferous forest. The remaining classes were labelled non-forest, and the separation of the 25 classes into three categories was applied to the entire image.
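As a rough illustration of the NDVI filtering in step 5, the following Python sketch assumes the red and near-infrared bands and a rasterised Corine forest mask are already available as arrays; the names red, nir and forest_mask are illustrative only, and the authors' actual implementation was in R and is not reproduced here.

```python
import numpy as np

def ndvi_vegetation_mask(red, nir, forest_mask):
    """Keep pixels whose NDVI exceeds the paper's lower threshold:
    median - (p95 - median), with both statistics taken over the
    NDVI distribution inside Corine Land Cover forest polygons."""
    ndvi = (nir - red) / np.maximum(nir + red, 1e-9)  # bands 8 and 4

    forest_ndvi = ndvi[forest_mask]          # NDVI inside CLC forests
    median = np.median(forest_ndvi)
    p95 = np.percentile(forest_ndvi, 95)
    threshold = median - (p95 - median)      # equivalently 2*median - p95

    # Pixels below the threshold are treated as non-vegetation
    # (buildings, roads, lakes, residual clouds and shadows).
    return ndvi > threshold
```

As noted in step 5, the same rule incidentally removes clouds and shadows missing from the accompanying mask files.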
In this way, a tree map was created without analyst interference. To test the sensitivity of the method to the use of Corine Land Cover as training data, the tile 30UWC from 19.07.2016 was classified using Globcover as training data and the results compared with the result obtained using Corine Land Cover. The details of replacing Corine Land Cover with Globcover, and the results, are described in Appendix A.
Data
Six Sentinel-2 single tile images were downloaded as L1C data from United States Geological Survey (USGS) earthexplorer (https://earthexplorer.usgs.gov/) for the creation of a tree map. The level 1C processing includes radiometric and geometric correction using ground control points and a digital elevation model to correct for parallax error (Drusch et al., 2012). L1C data provide top of atmosphere reflectances, and thus no further preprocessing was applied to the images. The tiles were selected to cover the summer period (June-August) 2016 and to have as small a cloud cover as possible. The tile 30UWC was selected since it covers Worcester, UK, an area familiar to the authors. Two images of this tile were downloaded to elucidate seasonal differences. The tile 30VUH was selected to cover an area of Scotland, which has a much larger fraction of coniferous trees compared with most of England and thus provides a different type of landscape to the analysis. The tile 32VNH covers an area in western Denmark and the tile 33VUC covers an area in eastern Denmark and southern Sweden. These were selected since high resolution tree cover maps (www.kortforsyningen.dk, www.lantmateriet.se) used by national forest inventories (Nord-Larsen et al., 2016) were available for these countries for the testing of band combinations (described in Section 2.3) and because the areas are familiar to the authors. These data are available as final classified data sets delivered in the form of shape files, where the central input data for the tree cover maps in all regions are based on a combination of high-resolution aerial photography and administrative records combined with site visits, all with a spatial accuracy much higher than the 10 m resolution provided by Sentinel-2.
The tile 30TWN was selected as a blind test of the forest mapping methodology in Southern Europe, since tile 30UWC was used during the development of the algorithm. The tile covers an area in Northern Spain selected to have both a large urban fraction and substantial tree cover, to allow the accuracy assessment using Google Earth. The algorithm was applied to one image at a time, to better analyse the performance of the algorithm, to keep the data and calculation requirements of the present study small, and to limit the study scope. Future work should aim at analysing the impact of the input data on the accuracy of this algorithm as well as related algorithms. Sentinel-2 provides a new opportunity for methods development in land cover analysis by providing a large number of images over the same area taken within a short time span. This enables new possibilities for land cover analysis and the associated error assessment by taking into account multiple images within the area of interest. Such improvements are likely to remove the occasional errors caused by outliers in the data set, thereby increasing the accuracy of the final map. Red, green, blue images of the Sentinel-2 scenes can be seen in Fig. 1 in the supplementary material; the location of the individual tiles can be seen in Fig. 2 in the supplementary material; and the properties of the individual tiles are summarized in Table 1.
Testing band combinations
To determine whether all 13 bands from the Sentinel-2 satellite were needed in the algorithm described in Section 2.1, or whether some bands made the classification noisier, the algorithm was run for all band combinations of 3 to 13 bands. To avoid subjective assessments of which bands to include and which to leave out, all 13 bands were included in this part of the analysis. This was done for the five Northern European images due to the availability of recent high resolution tree cover maps, as described in Section 2.2. This summed to a total of 8100 combinations. For each classification, the wall-to-wall kappa coefficient (Cohen, 1960; Congalton et al., 1983) between the national forest inventory and the tree map was calculated. The kappa coefficient is a popular approach to map comparison in remote sensing (Foody, 2006). It was preferred here because a visual comparison of the National Forest Inventories with the red, green, blue images of the corresponding satellite images showed that these also contained errors: the kappa coefficient should not be used for accuracy assessment (Pontius and Millones, 2011) (the details of this analysis are described in Section 2.4) but can be used to assess "interrater agreement" (Foody et al., 2013). This choice is also based on the fact that the present study only analyses the difference in kappa coefficient between the respective band combinations, which removes the risk associated with using one specific kappa coefficient.
The satellite-based tree map was filtered to remove small patches of trees before the calculation of the kappa coefficient, to make it comparable with the corresponding national forest inventory. This resulted in a minimum mapping unit of 0.5 ha for images 30UWC and 30VUH, 0.25 ha for 32VNH and 0.01 ha for 33VUC, since the Swedish data are made in a way that does not operate with a minimum mapping unit. The kappa coefficients were summed across the five images, different approaches to select an optimal (based on the images in the present analysis) band combination were explored, and an optimal band combination, conditional on the present algorithm and input data, was chosen to produce an automated tree cover map using Sentinel-2.
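A compact Python sketch of this exhaustive search is given below; the classify_tree_map function stands in for the full pipeline of Section 2.1 and is a hypothetical placeholder. Note that the number of combinations of 3 to 13 bands out of 13 is indeed 8100.

```python
from itertools import combinations
import numpy as np

BANDS = ["1", "2", "3", "4", "5", "6", "7", "8", "8a", "9", "10", "11", "12"]

def cohen_kappa(map_a, map_b):
    """Wall-to-wall Cohen's kappa between two co-registered class rasters."""
    a, b = map_a.ravel(), map_b.ravel()
    classes = np.union1d(a, b)
    n = a.size
    cm = np.zeros((classes.size, classes.size))
    for i, ca in enumerate(classes):          # confusion matrix between maps
        for j, cb in enumerate(classes):
            cm[i, j] = np.sum((a == ca) & (b == cb))
    p_o = np.trace(cm) / n                                 # observed agreement
    p_e = np.sum(cm.sum(axis=0) * cm.sum(axis=1)) / n**2   # chance agreement
    return (p_o - p_e) / (1 - p_e)

def best_combination(tiles, nfis, classify_tree_map):
    """Sum kappa over the five tiles for every combination of 3..13 bands
    (8100 combinations in total) and return the highest-scoring one."""
    scores = {}
    for k in range(3, len(BANDS) + 1):
        for combo in combinations(BANDS, k):
            scores[combo] = sum(
                cohen_kappa(classify_tree_map(tile, combo), nfi)
                for tile, nfi in zip(tiles, nfis))
    return max(scores, key=scores.get)
```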
Accuracy assessment of forest map
The accuracy of the map resulting from the analysis described in Section 2.1 was assessed at tiles 30UWC and 30TWN to cover both Northern and Southern Europe. No filtering was applied to the map in this part of the analysis, and the minimum mapping unit is therefore 0.01 ha. The accuracy assessment needed reference data which were derived from Google Earth as described in Section 2.4.2. High resolution images from Google Earth are available for the entire 30UWC tile and areas close to the larger cities for the 30TWN tile. Reference data points therefore cover the entire tile 30UWC and within 10 km of the four cities Bilbao, Vitoria, Logrono and Pamplona in tile 30TWN. The accuracy assessment for both images followed the sampling design, response design and analysis methodology of Stehman and Czaplewski (1998).
Sampling design
To test the thematic accuracy of the map, an accuracy assessment dataset was produced. To generate this dataset, 999 pixels were extracted from the image. The sampling was made using stratified (broadleaved trees, coniferous trees and no trees) random sampling (Stehman, 2009) with equal sample size for each stratum, since the area covered by the no trees category will naturally be much larger than the area covered by the two forest categories for both images. This ensured 333 pixels in each stratum, which exceeds the 100 pixel threshold that, according to Stehman (2001), is required to obtain a standard error of 0.05 on the overall accuracy almost regardless of the sample size. Stehman and Wickham (2011) discuss the use of pixels, blocks of pixels and polygons as the spatial unit for accuracy assessment based on the recommendations in Congalton and Green (2009). They show, through a numerical example, that the effect of moving from pixels to blocks of pixels to polygons has a small effect on the overall accuracy of the map. It was therefore decided to stick with 10 m × 10 m pixels as the spatial unit for the accuracy assessment, an approach also used by e.g. Feng et al. (2016) and Wickham et al. (2017).

[Table 1. Properties of the Sentinel-2 tiles used in the present study.]
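A minimal sketch of the stratified random draw follows, assuming the classified map is an integer array; the class codes and the random seed are illustrative assumptions, not values from the paper.

```python
import numpy as np

NO_TREES, BROADLEAVED, CONIFEROUS = 0, 1, 2   # assumed class codes

def stratified_sample(class_map, n_per_stratum=333, seed=1):
    """Equal-size simple random sample of pixel coordinates from each
    map stratum, giving 3 x 333 = 999 reference points in total."""
    rng = np.random.default_rng(seed)
    samples = {}
    for stratum in (NO_TREES, BROADLEAVED, CONIFEROUS):
        rows, cols = np.nonzero(class_map == stratum)
        picked = rng.choice(rows.size, size=n_per_stratum, replace=False)
        samples[stratum] = list(zip(rows[picked], cols[picked]))
    return samples
```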
Response design
Each 10 m × 10 m pixel was assigned a primary land cover class and, where present, a minor land cover class, following approaches similar to Benza et al. (2016); Shubho et al. (2015); Wickham et al. (2017); Yan and Roy (2016).
The collection of reference labels was done by three interpreters within the study group. To enhance consistency among interpreters, a written guide to the classification procedure was produced, and 99 points, selected using the sampling design described in Section 2.4.1, for both the tiles 30UWC and 30TWN, were classified by all interpreters. The interpreters did not have access to the forest map from the satellite during classification, to avoid biasing the manual classification (blind interpretation). Each interpreter was supplied with a Google Earth KML file containing the sample pixels for overlay on Google Earth imagery. The interpreter selected the Google Earth image with an image date as close as possible to the date of the satellite image and with good visibility and subsequently decided the most appropriate land cover category. The interpreter could select among the three categories from the tree map plus "unclassified trees" and "unclassified" for images and points where a distinct category could not be determined. Pixels in the last two categories were subsequently excluded from the analysis, and the initial number of 333 sampling points in each category thereby ensured that the total number of pixels remained substantially above the minimum number of 100 according to Stehman (2001). The number of remaining pixels can be found in the Results section (Section 3.3).
Analysis
The reference dataset based on Google Earth was used to produce a confusion matrix for the two classified Sentinel-2 images covering three classes (broadleaved trees, coniferous trees and non-trees) and two classes (trees and no trees), by merging the two tree classes into one. Following recommended "good practice" in accuracy assessment (Olofsson et al., 2014; Stehman and Foody, 2019), the error matrix was reported in terms of estimated area proportions p_ij = W_i n_ij / n_i+, where W_i is the proportion of area mapped as class i, n_ij is the sample count of pixels mapped as class i which belong to class j, and n_i+ is the sample size from stratum i. The user's accuracy, producer's accuracy, overall accuracy, plus the proportion of area in each class based on the reference classification, along with their corresponding standard errors, were calculated using the formulas from Olofsson et al. (2014); Stehman and Foody (2019). The confusion matrix for the two-class case was made using the indicator functions described in Stehman (2014).
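The area-weighted estimators from Olofsson et al. (2014) used here fit in a few lines; the sketch below is a generic rendering of those formulas (standard errors omitted for brevity), not the authors' code.

```python
import numpy as np

def area_weighted_accuracy(counts, W):
    """Accuracy measures from a stratified sample, following
    Olofsson et al. (2014).

    counts : q x q array; counts[i, j] = sample pixels mapped as class i
             whose reference label is class j.
    W      : length-q array; proportion of the map area in each class.
    """
    counts = np.asarray(counts, dtype=float)
    W = np.asarray(W, dtype=float)
    n_i = counts.sum(axis=1)                   # sample size per stratum
    p = W[:, None] * counts / n_i[:, None]     # p_ij = W_i * n_ij / n_i+

    overall = np.trace(p)                      # overall accuracy
    users = np.diag(p) / p.sum(axis=1)         # user's accuracy (map rows)
    producers = np.diag(p) / p.sum(axis=0)     # producer's accuracy (columns)
    ref_area = p.sum(axis=0)                   # estimated reference class areas
    return overall, users, producers, ref_area
```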
Study area
The study area in the North with reference data, tile 30UWC, is centred on the city of Gloucester (Fig. 1a), encompassing Gloucestershire and parts of 9 other counties located in the Midlands, England. The relief of the landscape is marked by the Severn Valley in the centre and associated tributaries, with a uniform low level terrain between Gloucester and Worcester and the Bristol Channel to the Southwest (e.g. Sadys et al., 2014). No large upland areas occur within the area, but the land rises towards the Birmingham plateau in the north and towards the massifs of mid-Wales in the west. Nevertheless, some prominent hills exist: the Malverns (peak height 425 m), Bredon Hill (293 m), the Cotswold range (up to 300 m) and the Black Mountain (550 m), as seen in Fig. 3 in the supplementary material. The area has one large woodland in the Forest of Dean (Forestry Commission, 2011) and numerous small woodlands and groups of trees (Forestry Commission, 2017; Skjøth et al., 2015), distributed approximately homogeneously across the area and located in both the rural and urban areas. According to Forestry Commission (2011), the area covered by forests amounts to 8.32%. The area is dominated by privately owned woodlands, where broadleaved trees are the most abundant tree type (Forestry Commission, 2002). The broadleaved part is typically dominated by Quercus sp, Fraxinus sp and Fagus sp, while the coniferous part often consists of a broad range of unclassified species complemented by Picea abies and Pinus sylvestris (Skjøth et al., 2008). The rest of the landscape covers urban areas and in particular agricultural areas used for annual crops within rotation systems and permanent pastures, but also significant areas for fruit production (e.g. Sadys et al., 2014). The climate of the region is relatively uniform and characterized as maritime and cold temperate (UK Met Office, n.d.-a), with mild winters and warm summers, an annual mean temperature around 10 degrees, and regular rainfall throughout the year ranging from about 600 mm/year to more than 800 mm/year (e.g. Sadys et al., 2014; UK Met Office, n.d.-b).
The study area in the South with reference data, tile 30TWN, is bordered by the cities of San Sebastian, Bilbao, Logrono and Pamplona (Fig. 1b). The region encompasses the three regions of Gipuzkoa, Vizcaya, and La Rioja and partly covers several other regions, located in the most Northern parts of Spain towards the Bay of Biscay. The central part of the region is covered by the Cantabrian Mountains with elevation up to 1500 m (as seen in Fig. 4 in the supplementary material), contrasted by the large Ebro Valley and the Ebro River in the southern part of the domain. The area has numerous larger woodlands, in particular in the mountainous part but also in lower areas to the North, while the valleys such as the Ebro Valley are mainly covered by agricultural land, therefore containing very few trees. The total tree cover of the region is, according to Hansen et al. (2013), 41.8%. The coniferous part of the woodland is dominated by various Pinus species such as Pinus sylvestris, Pinus halepensis and Pinus nigra, while the broadleaved part is dominated by Fagus sylvatica and several Quercus species such as Quercus ilex, Quercus robur and Quercus faginea (Skjøth et al., 2008). The climate of the region varies substantially due to the large variations in elevation and is, according to generalised maps for the global climate (UK Met Office, n.d.-a), in a region partly covered by temperate and partly by Mediterranean climate. This means that it is a region where winters tend to be warm and wet while summers are dry with little or no rainfall, here considerably modified by the presence of mountains. This has the effect that the annual average rainfall in the region can be below 400 mm/year or above 700 mm/year and that mean annual temperatures can be higher than 15 degrees Celsius in the Ebro Valley and lower than 12 degrees in the nearby elevated terrain (e.g. Vicente-Serrano et al., 2003).
Testing band combinations
The calculation of the wall-to-wall kappa coefficient with the corresponding national forest inventory for all 8100 band combinations shows that the highest summed kappa coefficients generally are 2.7 to 2.8, where the theoretical maximum is 5.0, and the highest kappa coefficient is found when using a combination of four bands (Table 2). Typically, the coefficients vary from 2 to 2.8, where the highest abundance is in the range 2.4-2.6, as seen in Fig. 2, which displays the kappa coefficients for the images applying combinations of four bands. Similar results were obtained for band combinations of other lengths. It is evident that there is a very large scatter between the band combinations, with some having very high kappa coefficients and others having very low kappa coefficients. This means that the driver of the mapping performance with respect to identifying forests in the five examples is not the number of bands, but the choice of bands.

[Fig. 1. (a) Map of the study area in the North. Data sources: Counties, urban areas, geographical areas, rivers (https://www.ordnancesurvey.co.uk/business-and-government/products/strategi.html), surface water (Corine Land Cover), forest areas (Morton et al., 2011). The forest polygons with an area < 1.5 ha have been filtered away to increase map readability. Map is produced by the authors. (b) Map of the study area in the South. Data sources: Counties (Eurostat NUTS, https://ec.europa.eu/eurostat/web/gisco/geodata/reference-data/administrative-units-statistical-units/nuts), urban areas (Bossard et al., 1994) (data from Corine Land Cover 2012), rivers and surface water (Digital Chart of the World, http://www.soest.hawaii.edu/wessel/dcw/), forest areas (Hansen et al., 2013) reclassified with forests containing more than 50% trees. The forest polygons with an area < 1.5 ha have been filtered away to increase map readability. Map is produced by the authors.]
The maximum kappa coefficients using combinations of between four and seven bands are almost equal. It is evident that bands 2, 3, 6 and 12 appear in many of the combinations. Band 2 is the blue band (496.6 nm, 10 m), band 3 is the green band (560.0 nm, 10 m), band 6 is a red-edge band (740.2 nm, 20 m), and band 12 is a short-wave infrared band (2202.4 nm, 20 m), and this combination is also the highest scoring combination of all bands (Table 2). Using the USGS Spectral Characteristics Viewer (https://landsat.usgs.gov/spectral-characteristics-viewer), it can be seen that these bands are particularly suitable to separate different types of vegetation. It is natural that band 4 and band 8 will not contribute much to the classification, since these two bands are already included in the analysis through the NDVI filter. Columns two and three in Table 2 show that the difference in performance between the individual images is larger than the difference in performance between the individual combinations, which indicates that the highest agreement is achieved by a different band combination for each of the Sentinel images. This result is also seen in Fig. 2, where up to 35 band combinations have a performance differing by less than 1%. This makes it difficult to choose the optimum band combination.
As a way to overcome this problem, the band combinations within 5% of the best performing combination for each image were selected, and the band combinations appearing in the top 5% for all the images are tabulated in Table 3. It is again evident that the bands 2, 3, 6 and 12 appear in many of the combinations.
As can be seen from Table 3, the best performing combination with four bands, which is also the best performing combination for the entire dataset, is among the top 5% combinations for each image. It is therefore selected for the following accuracy assessment. Given that this analysis shows a negligibly small difference in agreement with the NFIs between the selected band combination and a large number of other band combinations, this choice of band combination must be considered a provisional result. Nevertheless, the results clearly illustrate that using all available bands in Sentinel-2 for this type of land cover analysis does not provide the best results. Future work should aim at arriving at a more definitive answer to the question of choice of bands, e.g. through performing this analysis on a larger and more variable set of Sentinel-2 tiles.
Tree mapping
[Table 2. Combinations with the highest summed κ as a function of number of bands (#). κ_i is the kappa coefficient for image i. The maximum value of ∑κ_i is 5.000 (1.000 for each of the five images). The two difference columns are respectively the minimum and maximum difference in κ between the best performing combination across all five images and the best performing combination for the individual image for the same n. Rows legible in the source: # = 9: ∑κ_i = 2.787 (0.005, 0.052), bands 3, 4, 5, 6, 7, 8a, 9, 11, 12; # = 10: 2.740 (0.008, 0.062), bands 1, 3, 4, 5, 6, 7, 8a, 9, 11, 12; # = 11: 2.738 (0.012, 0.071), bands 1, 2, 3, 5, 6, 7, 8, 9, 10, 11, 12; # = 12: 2.734 (0.019, 0.103), bands 1, 2, 3, 4, 5, 6, 8, 8a, 9, 10, 11, 12; # = 13: 2.680 (0.021, 0.072), all 13 bands.]

[Fig. 2. Histogram of the sum of kappa coefficients across all five images for all band combinations with n = 4.]

[Table 3. Band combinations in the top 5% for all five images, with their summed κ: 2, 3, 4, 5, 7, 9, 12 (2.803); 2, 3, 6, 12 (2.774); 1, 3, 5, 6, 12 (2.771); 1, 2, 3, 5, 6, 12 (2.800); 1, 3, 4, 5, 6, 11, 12 (2.797); 2, 4, 5, 6, 12 (2.799); 3, 5, 6, 11, 12 (2.797); 2, 3, 4, 5, 6, 11 (2.757); 3, 4, 5, 6, 7, 9, 11, 12.]

Maps of the tree cover of broadleaved and coniferous trees, based on either the satellite or the respective national forest inventory, are shown in Fig. 3a-n. A visual inspection of the raw data reveals a number of interesting features. For tile 33VUC there is generally good agreement between the NFI and the satellite-derived map in areas with a high forest density. On the right-hand side of Fig. 3b there are a number of white areas caused by clouds in the satellite image. In the lower left part of the figure and centrally in the picture, the satellite-derived map predicts more trees than the NFI. These areas include, according to Corine Land Cover, large amounts of urban residential areas (Corine Land Cover code 112) and sport & leisure facilities (Corine Land Cover code 142), where the latter actually has vast areas covered by summer houses. Local knowledge by the authors establishes the fact that in particular the summer house areas contain large amounts of trees. However, from a land cover perspective these areas are not forests and therefore do not appear in either the national forest inventories or land cover data sets like Corine Land Cover. Nevertheless, these areas contribute substantially to the tree cover in these regions. A secondary effect in this region is minor woodlands and hedges found throughout the part of the region that is designated as agricultural landscape in both Denmark and Sweden. In this case these minor woodlands are not found in the national forest inventory or the Corine Land Cover. It is therefore a potential source of error if the Corine classes are used as a training element, as the classes are known to be neither spectrally pure nor unique (e.g. Pekkarinen et al., 2009). We have here solved that issue in two steps: 1) filtering the Corine classes by removing the part of the area that causes the problems with spectral confusion, and 2) using only the purest fraction of the entire data set as training elements. Some of the difference can also be explained by the algorithm confusing the spectral signal from trees and other types of land cover (e.g. green fields). However, this effect is considered to be of minor importance compared to the very large tree cover found in urban areas, residential areas (summer houses) and the agricultural landscape. The tile 30UWC (Fig. 3c-f) generally shows good agreement between the NFI and the satellite-derived map in the areas with a high woodland density.
However, the satellite-based map shows a higher amount of broadleaved trees in areas with low tree density (Fig. 3c and e) and a slightly smaller amount of conifers in the picture from 19.07.2016 (Fig. 3f) compared to the picture from 15.08.2016 (Fig. 3h), where the last picture has the closest resemblance to the national forest inventory (Fig. 3d). This suggests that it would be an advantage to take several scenes over the same area into account if the purpose is to create very accurate inventories using Sentinel-2 images. The higher amount of broadleaved trees could be related to orchards, as the area is well known for its cider production. Orchards are technically considered a part of the agricultural landscape and therefore not included in the NFI. However, using remote sensing they will be identified either as grassland (the underlying vegetation) or as tree cover, depending on the density of the fruit trees. In any case, this type of vegetation contributes to the overall tree cover. The upper left corners of Fig. 3g and h are missing data in the satellite image.
For tile 30VUH the coniferous forest compares reasonably well with the NFI, whereas the broadleaved class overestimates the tree cover. This is likely related to the spectral signature of these trees not being significantly different from that of their surroundings.
For tile 32VNH there is good qualitative agreement between the satellite-derived map and the NFI. However, the satellite-derived image shows regions with somewhat higher amounts of trees compared to the NFI. According to Corine Land Cover, this region also contains substantial areas of urban land cover (in particular the cities of Aarhus, Silkeborg, Randers and Horsens), and these areas have in the remote sensing picture been classified with substantial woodland cover, whereas the NFI does not include those regions. As such, part of the difference is actual trees not included in the NFI, whereas another part of the difference is spectral confusion between trees and green fields/grass.
The summary statistics for the present study and the respective NFI, plus the statistics for the studies by Kempeneers et al. (2011) and Hansen et al. (2013), are presented in Tables 4-7. Tile 33VUC is the only image where the present study yields a smaller total tree cover compared with the other datasets. As described above, this is caused by cloudy areas in the image, not included in the accompanying cloud mask but removed by the NDVI filter of the present algorithm. As seen from Section 2.1, the present algorithm does not distinguish between clouds and non-forest pixels. The reason is that this distinction is complicated and thus beyond the scope of the present study (see Deng et al. (2019); Li et al. (2019); Sui et al. (2019) for some recent examples). The present approach is designed for cloud free or almost cloud free images and with computational efficiency in mind. Besides that, incorporating multiple images over the same area in a subsequent study is expected, to some extent, to alleviate this problem. Despite this bias, the total tree cover is quite close to the other estimates, which indicates that the remaining areas have a larger tree coverage than previously thought. Part of this tree cover is technically not accounted for in the NFI, as the land use is either agricultural (e.g. orchards), urban (e.g. low density residential) or recreational (summer cottages).
For the remaining images, the relative difference between the tree cover area of the present study and the previous studies is between 8% and 79%. As can also be seen, there is a large variation between the previous estimates of the tree cover for the respective images, which contributes to the large variation in the relative difference between the previous estimates and the present study. For tile 30UWC the image dated 15.08.2016 always has a smaller tree cover compared with the image from 19.07.2016, due to the smaller area covered by the satellite on this particular day. As can be seen, even though the total area does not change much, the distribution between broadleaved and coniferous trees changes significantly, a result also seen in Fig. 3e-h. The image from 19.07.2016 has a larger cover of broadleaved trees and a smaller cover of coniferous trees compared with the national forest inventory, and vice versa for the image from 15.08.2016. The correct result is probably somewhere between the two estimates, which underlines that temporal averaging or other approaches that utilize several images in order to create accurate tree maps would yield a higher accuracy. For tile 30VUH the cover of coniferous trees is in reasonable agreement with the national forest inventory, whereas the cover of broadleaved trees is much larger. A visual inspection of the image reveals that the area of broadleaved trees in Corine Land Cover in this image is much smaller than the area of coniferous trees in Corine Land Cover. This means that there is a larger probability that clouds and other artefacts can influence the training data and thus introduce noise in the labelling procedure. This is also seen in that approximately six times as many pixels are used for the labelling of conifer trees compared with the labelling of broadleaved trees. Future work should aim at reducing this effect.
For 32VNH the present algorithm also finds a considerably larger tree cover compared with the previous studies, but given the large variation in the previous estimates, it is difficult to conclude on the validity of this estimate. However, it is known that this particular area contains a substantial amount of land cover that is technically not part of the national forest inventories (e.g. urban land) and that the tree density in these areas requires at least 10 m spatial resolution in order to be accurately mapped (Uuemaa et al., 2013). This suggests that the true tree cover in those regions is likely to be better mapped with the Sentinel-2 satellite compared to previous estimates.
Accuracy assessment
The results of the accuracy assessment for the primary land cover class for tile 30UWC can be found in Table 8 for three categories and in Table 10 for two categories. As can be seen, it was not possible to manually classify 58 pixels. Since this corresponds to approximately 5% of the data, it is not assumed to influence the results, which can also be seen in the low standard errors on all the accuracies. The overall accuracy for three classes is 90% with a standard error of 1.35%. Comparing this to the commonly used acceptance criterion of 85%, the accuracy of the map is high, even though this acceptance criterion has been questioned (Foody, 2006). The producer's and user's accuracies are quite low for coniferous trees, even though the area comparisons are quite close to each other. It is natural that the accuracy of the map will be better between the no trees category and the two tree categories than between the two tree categories, due to the spectral similarity between different types of trees. The significant fraction mapped as broadleaved trees that is actually no trees is most likely green fields, which have a similar spectral signature. A part of the pixels mapped as coniferous trees that are actually broadleaved trees is due to shadows, e.g. at forest roads or forest edges, where the shadows cause the trees to appear darker and thus fall in the coniferous category. Future work should aim at reducing this effect. When classifying the map in two classes (trees/no trees), the percentage correctly classified is 90%, again above the acceptance criterion. It is noteworthy that, even though the present map has reported substantially more tree cover compared to previous maps, as shown in Section 3.2, the accuracy assessment indicates that the tree cover based on the reference data is actually substantially higher, especially for broadleaved trees. The actual tree cover might therefore be substantially higher.
The results of the accuracy assessment for three land cover classes for tile 30TWN are shown in Table 9 and for two classes in Table 11. The overall accuracy is 83.43% with a standard error of 1.64% for three classes, and as such slightly lower than for tile 30UWC. This is expected, since tile 30UWC was part of the development process for the algorithm. In particular, the separation between broadleaved and coniferous trees is better for that tile. The accuracy for the no trees category is slightly lower compared to tile 30UWC. A part of this can be explained by 41 pixels where the manual interpreter could not determine whether the pixel was showing orchards of young trees or fruit bushes, while the tree map has classified it as no trees. This is one of the explanations why the number of unclassified pixels is slightly higher for this image, and the standard error therefore slightly larger. As stated in Section 2.4.1, pixels for this analysis were only sampled within a 10 km radius of the four cities in the image. This means that the actual accuracy for the entire image is likely to be higher, since the land cover will be more homogeneous in the rural areas. Reducing the number of classes to two (trees/no trees) gives an overall accuracy of 85.43% with a standard error of 1.87%. The phenomenon that the area of tree cover estimated from the reference data is substantially higher than the mapped area is likewise found for this tile.
Conclusion
Tree maps with high thematic accuracy can be produced from Sentinel-2. The high spatial resolution of this satellite means that a larger tree cover is generally found compared with previous estimates (on average 36%) for the five Sentinel-2 tiles in the present study, and in particular a large tree cover is found in regions officially classified as urban landscapes. The performance of the present map compared to the respective national forest inventory does not depend on the number of bands included in the analysis, but on the choice of bands, with the band combination 2, 3, 6 and 12 as the best performing combination in the present study. Likewise, the difference in performance for an individual band combination is larger between the different images than between the band combinations. With a few exceptions, the present tree map agrees well with the corresponding national forest inventory and adds to this the non-NFI tree resource. This non-NFI resource can in some regions be substantial. The thematic accuracy, for the two tiles where accuracy assessment was performed, was above or close to the commonly applied 85% threshold for three land cover classes (non-forest, broadleaved trees, and coniferous trees) at a resolution of 10 m × 10 m.

[Table 10. Accuracy assessment for tile 30UWC in percentages of area for two categories. The table includes the user's accuracy (User) and the producer's accuracy (Prod); standard errors are presented in parentheses along with the number of pixels in each category (n). Estimated overall accuracy is 90.43% with a standard error of 1.38%.]
Declaration of Competing Interest
None. | 2019-09-17T01:10:48.831Z | 2020-02-01T00:00:00.000 | {
"year": 2020,
"sha1": "b0c19f4ba6031256418798a0319fa6a791d56557",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.jag.2019.101947",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "839f454de97349741dc3786729aa4b7d3f71e14b",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
248834371 | pes2o/s2orc | v3-fos-license | On the Green function for an Aharonov-Bohm flux tube
An earlier contour expression for the Green function of a free complex scalar field in the presence of a conical singularity with localised magnetic flux is shown to yield expressions for the field correlator and defect block expansions that have been more recently found in connection with monodromy defects in conformal field theory. The Green function appears as the Picard integral representation of the Appell $F_1$ function. This is shown to transform into a confluent Horn function, corresponding to a different defect block expansion. Other transformations are discussed.
Introduction and survey
In the extension of conformal field theories to defect conformal field theories, in particular those with a monodromy defect, one soon encounters the bulk two-point field correlator, $G(x_1,x_2)=\langle\phi(x_1)\,\phi(x_2)\rangle$, in the presence of a conical defect that generates a phase shift on being encircled (the defect has codimension 2). Of most interest are interacting theories but, since explicit expressions for this correlator can be found in the case of free fields, there is some value in a further examination of this restricted situation. In an earlier literature, the defect is sometimes referred to as a cosmic string and the correlator as a Green function.
The aim of this technical note is to draw together and extend some existing free-field results and to relate them to more recent investigations. The set-up is the simplest one, i.e. a conical defect in flat space-time of $d$ dimensions with either a Lorentzian or Euclidean signature. The field is a conformally invariant complex scalar. The phase change is $2\pi\delta$ and the cone has angle $\beta$. When $\beta=2\pi$, the construction is just an Aharonov-Bohm flux tube.
The essential monodromy expressions were given in [1] for d = 2, where the Green function appeared in two equivalent forms. The fundamental one, and the most useful, is a Sommerfeld-Carslaw complexified polar angle contour integral which can then be 'evaluated', if desired, to give an eigenfunction expression involving Bessel functions. These results are classic in the case of no monodromy, δ = 0.
The explicit extension of these results to $d$ dimensions was made in [2], where the one-point vacuum averaged energy-momentum tensor was calculated in the standard way by applying a differential operator to the Green function. If done directly, the coincidence limit diverges and the infinities have to be removed by subtracting the values corresponding to the usual flat space Green function (i.e. that with $\beta=2\pi$, $\delta=0$, hereafter called $G_0$). Such was the procedure adopted in the earlier [3] (for $\delta=0$), where exact free Green functions were exhibited. In [2], the subtraction, and also knowledge of the explicit Green function, were avoided by a simple change of contour which eliminated $G_0$ from $G$ at the outset. The coincidence limit is then finite and can be evaluated by contour methods without knowing $G$. I briefly return to this point in section 5.
In recent works, I cite only [4][5][6] as being particularly relevant, the Green function is obtained, and displayed, using various methods, eigenfunction expansion being one. Therefore it might be useful to show, again, how these expressions can be derived from the contour form and so could be considered to be latent in [1,2]. In doing so, certain interesting relations emerge.
A contour derivation of the correlator
The normalisation conventions here are those of standard quantum field theory and I make no use of specific CFT notions. The Green function is then normalised as in [4,5], which differs from that in [6]. To aid comparison, my coordinate choices are polar coordinates $r, \theta$ tangential to the defect and $y$ along it, i.e. the same as [5]. The Euclidean interval between $x_1$ and $x_2$ is The standard Green function, $G_0$, is normalised by the usual textbook expression, Its dependence on the polar angle difference is written, for short, as $G_0(\theta)$. If all that is wanted is an explicit formula for the Green function, equation (4) in [2] is a suitable starting point. This reads, where the contour A has two parts, one in the upper half-plane and a reflected one in the lower. $P$ is a 're-periodising' factor given by, [1], It can be written as an image sum on Sommerfeld's Riemann surface, and related to replicas, but I do not enlarge on this aspect here. While (2) is perfectly general, it leads to closed forms only in even dimensions or when $\beta = 2\pi/n$, $n \in \mathbb{Z}$. Since it is desirable to keep $d$ arbitrary, I will consider only the case when $\beta = 2\pi$, i.e. the pure Aharonov-Bohm flux tube, which still provides sufficient analytical interest.
The useful hyperbolic angle, $\alpha_1$, is defined by, so that the interval is, which factorises as, where $h$ and $u$ are defined by, Without loss of generality, $h$ can be assumed greater than 1. The parameter $\xi$ used in [5] is related to $\alpha_1$ by $\xi = \sinh^2(\alpha_1/2)$. To proceed with the calculation of (2), the lower contour can be turned into the upper one by reversal of $\alpha$. Hence we need only compute the upper contribution. Then, changing coordinates from $\alpha$ to $u$, the contour becomes one familiar in the theory of Bessel functions, i.e. one running from $-\infty$, around the origin and back to $-\infty$. Convergence requirements at infinity dictate that this contour, C, has to enclose the unit $u$-circle, including therefore $u = e^{i\theta}$ and $u = 1/h$ as well as the origin, 0, while it excludes the points $h$ and $\infty$. These are the five singular points of the integrand. (The lower $\alpha$-contour gives a singular point at $u = e^{-i\theta}$.) Note that the contour 'splits' the light-cone singularity of the $G_0(\alpha)$ Green function. At the coincidence limit, three of these points come together, pinching the contour.
In terms of $u$, the periodising function reads, The first term is the upper $\alpha$ contour contribution and the second one the lower contribution. They are related by complex conjugation and the replacement $\delta \to 1-\delta$. Hence the first term is computationally sufficient and I denote the corresponding Green function component by $G_U$.
From (2) with (1) and (3), the Green function component is then given by, The convergence properties of the integrand allow the contour C to be swung around to the positive real $u$ axis, now running from $+\infty$ and looping around the point $u = h$ (thus enclosing two singular points).
A coordinate change to $v = h/u$ gives the equivalent form, The integral around the cut from 0 to 1 converts to a line integral and yields, which is, more or less, the final answer, except that it can be given a name by noting that Picard's integral formula for the Appell $F_1$ function is, [7], so that, The total Green function is got by adding the expression obtained by taking the complex conjugate (i.e. $\theta \to -\theta$) and sending $\delta \to 1-\delta$, i.e. $G(\delta) = G_U(\delta) + G_U(1-\delta)$.³ I will, therefore, consider only $G_U$ in the following and may, occasionally, refer to it as the Green function.
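As a quick numerical sanity check on the Picard representation invoked above, the following sketch (Python, using mpmath) compares a direct quadrature of Picard's one-dimensional Euler integral for the Appell $F_1$ with mpmath's built-in implementation. This only illustrates the standard formula, valid for Re $c$ > Re $a$ > 0; the parameter values are arbitrary and are not taken from this note.

```python
# Minimal numerical check of Picard's integral representation of Appell F1
# (the standard Euler-type formula, valid for Re(c) > Re(a) > 0); the
# parameter values below are arbitrary illustrations.
from mpmath import mp, mpf, gamma, quad, appellf1

mp.dps = 30  # working precision

def picard_f1(a, b1, b2, c, x, y):
    """F1(a; b1, b2; c; x, y) via Picard's one-dimensional Euler integral."""
    integrand = lambda t: t**(a - 1) * (1 - t)**(c - a - 1) \
        * (1 - x*t)**(-b1) * (1 - y*t)**(-b2)
    return gamma(c) / (gamma(a) * gamma(c - a)) * quad(integrand, [0, 1])

a, b1, b2, c = mpf('0.3'), mpf('0.7'), mpf('0.7'), mpf('1.3')
x, y = mpf('0.2'), mpf('0.5')

print(picard_f1(a, b1, b2, c, x, y))   # quadrature of the Picard integral
print(appellf1(a, b1, b2, c, x, y))    # mpmath's built-in Appell F1
# The two printed values should agree to the working precision.
```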
For comparison, in terms of the $x$ and $\bar{x}$ coordinates employed by [6], $h^{-1}e^{i\theta} = x$ and $G_U$ is more neatly expressed as, This formula agrees, up to normalisation conventions, with that derived by Gimenez-Grau,⁴ using CFT expansions discussed in the next section. ³ Incidentally, when physical quantities are being calculated for a complex field, charge conjugation invariance implies that one should use the combination $G(\delta) + G(-\delta)$. Since periodicity in the phase entails $G(-\delta) \equiv G(1-\delta)$, this combination gives a real value. ⁴ Private communication.
Relation to other formulations
The result (6) has been obtained without any knowledge of eigenfunctions. These are, of course, embedded in the formalism and can be extracted in various ways.⁵ The representation of $G$ in terms of 'defect blocks' computed in Bianchi et al., [4], employs eigenfunctions and can be construed as just a Fourier series expansion with coefficients related to Bessel functions. Such a form was already given in [1], and related to the contour (2). As mentioned, for no monodromy, this was done by Carslaw many years ago.
In [4], the evaluation of the Fourier coefficients as hypergeometric functions involves the Lipschitz-Hankel integral for the Laplace transform of a Bessel function expression. The original proof of this particular textbook formula involved power series expansion and term by term integration.⁶ It gives rise to a 'radial' expansion in $\operatorname{sech}^2\alpha_1$, in the notation here, and is further discussed later.
According to Hankel, there is no contour proof of his formula. However, the same integral is calculated, at some useful length, by Graf and Gubler, [9], in two ways. One is the standard method just mentioned but the other does involve a contour manipulation (the prototype of that used above) and yields a different hypergeometric form. Graf and Gubler then show the equivalence of these two expressions in a very detailed way. It is, actually, a consequence of Kummer's quadratic hypergeometric relation (see below). The radial expansion is one in $1/h^2 = e^{-2\alpha_1} = x\bar{x}$, and corresponds to the solution here, (6), which can be expanded in hypergeometric functions as given in [10], equation (2), repeated here, This can be obtained by expansion of one of the brackets in (5) and termwise integration.
Applied to the $F_1$ in (7), this yields the Fourier expansion, which are just the defect blocks written down in [6], equn. (2.3), and so, in the present scheme, have been derived from the contour integral (2). ⁵ The mode boundary conditions are built into the contour construction and, by default, give the Friedrichs extension of the Laplacian. Singular modes, as used in [4], must be treated separately. ⁶ A slightly more expansive treatment can be found in Kratzer and Franz, [8].
Transformations
The fact that the standard Lipschitz-Hankel Bessel integral cannot be derived directly by contours raises the question of the analogue of the Appell formula (6) for this alternative representation. For this, the Fourier series, as given in [5], equn. (2.12), could simply be taken. However, to keep the present calculation self-contained, I will first rederive this and then rewrite it in terms of another named double series.
As already mentioned, Kummer's quadratic relation can be applied to the hypergeometric function in (8). This relation is, in the form I need it,⁷ $h^{-a-c}\,{}_2F_1(a+c,\,c;\,a+1;\,\cdot\,)$, where $h = e^{\alpha_1}$ and $2b = 2\cosh\alpha_1 = h + h^{-1}$.
I have not been able to find the results of this transformation on the Appell $F_1$ in the literature⁸ and so a little intermediate algebra will be given. Substitution of (9) into (8) produces, (10) Use of the Γ-duplication formula allows the Pochhammer combination to be simplified, giving, ⁷ Graf and Gubler, [9], give a geometrical interpretation of this relation. ⁸ It has been used in [11] in the context of bulk channel block expansion. The Appell $F_1$ function also appears in [11] but only as a hypergeometric surrogate.
after some rearrangement and cancellations.
There are many other transformations that can be applied to $F_1$. Most are to be found in Appell and Kampé de Fériet, [13], and relate $F_1$ to itself or to the other Appell functions. For example one has, [13] p.35, equn. (9), This is actually derived from a double integral representation of $F_1$ which yields the more significant expansion,⁹ [13], p.34, equn. (8), having, as pointed out in [13], the remarkable property that the coefficients are finite polynomials in $(1 - yx^{-1})$ of degree $n$. Inserting the CFT parameters and putting $x = x$, $y = x\bar{x}$, one finds, which is a Fourier series whose coefficients are finite polynomials in $1 - \bar{x}$ of degree $n$, giving yet another defect block expansion.
As an example of a more out-of-the-way transformation, I cite a formula given by Bailey, [14], with arguments $x$ and $xy$. ⁹ Avoid confusing the generic '$x$' with the $x$ variable of [6].
The coincidence limit
One standard computation of one-point vacuum averages consists of applying differential operators to the Green function and then taking the coincidence limit. Divergences appear that have to be taken off, essentially by hand. While not difficult, this process could be avoided if the source of the infinities, the $G_0$ Green function, were to be removed prior to performing the above operation. This can easily be accomplished at the integral level by deforming the contour A in (2) to pass through the pole in the periodising function (this gives $G_0$) and then expunging the circuit around this pole to leave a modified contour which can be taken as two vertical lines in the $\alpha$-plane. In the $u$ plane, this corresponds to moving the Bessel contour, C, through the pole at $e^{i\theta}$, which is then discarded to leave a contour running mostly around the negative real $u$ axis and looping around the point $u = 1/h$. This is finite in the coincidence limit ($h \to 1$, $\theta \to 0$) and the integral becomes an Euler-type one, evaluating to Gamma and Beta functions. This will be considered in a later communication.
Comments and conclusion
It has been shown that the free-field bulk correlator for a pure monodromy defect is contained in an old contour integral which evaluates to an Appell $F_1$ function and can be expanded in several different ways into defect blocks.
Knowing that the correlator is an Appell function gives access to a large number of known transformations. Whether this is physically useful is unclear.
The contour integral applies also if the defect has a conical singularity but the resulting formulae are not so neatly developed.
Similarities and differences between helminth parasites and cancer cell lines in shaping human monocytes: Insights into parallel mechanisms of immune evasion
A number of features at the host-parasite interface are reminiscent of those that are also observed at the host-tumor interface. Both cancer cells and parasites establish a tissue microenvironment that allows for immune evasion and may reflect functional alterations of various innate cells. Here, we investigated how the phenotype and function of human monocytes are altered by exposure to cancer cell lines and whether these functional and phenotypic alterations parallel those induced by exposure to helminth parasites. Thus, human monocytes were exposed to three different cancer cell lines (breast, ovarian, or glioblastoma) or to live microfilariae (mf) of Brugia malayi, a causative agent of lymphatic filariasis. After 2 days of co-culture, monocytes exposed to cancer cell lines showed markedly upregulated expression of M1-associated (TNF-α, IL-1β), M2-associated (CCL13, CD206), Mreg-associated (IL-10, TGF-β), and angiogenesis-associated (MMP9, VEGF) genes. Similar to cancer cell lines, but less dramatically, mf altered the mRNA expression of IL-1β, CCL13, TGM2 and MMP9. When surface expression of the inhibitory ligands PDL1 and PDL2 was assessed, monocytes exposed to both cancer cell lines and to live mf significantly upregulated PDL1 and PDL2 expression. In contrast to exposure to mf, exposure to cancer cell lines increased the phagocytic ability of monocytes and reduced their ability to induce T cell proliferation and to expand Granzyme A+ CD8+ T cells. Our data suggest that despite the fact that helminth parasites and cancer cell lines are extraordinarily disparate, they share the ability to alter the phenotype of human monocytes.
Introduction

A variety of mechanisms used by tumor cells to escape the host's immune system are similar to those used by some parasites. Both parasites and tumors have developed strategies to escape the immune system by expanding T regulatory cells [1,2], by inducing the production of certain inhibitory cytokines [3,4], or by altering the function of antigen presenting cells (APCs) that, in turn, results in a diminished ability of these cells to activate T cells [1,5,6].
Monocytes and macrophages are heterogeneous populations of cells that display high plasticity and are essential for the host innate immune response. Consequently, a change in their function contributes to alterations of immune function that may lead to dysregulation of responses important in limiting cancer progression [7] and constraining some infectious diseases. Based on responses to different stimuli, macrophages can be categorized into classically activated M1 (type 1 or pro-inflammatory, activated by LPS or IFN-γ (interferon-gamma)) or alternatively activated M2 (type 2 or anti-inflammatory, activated by IL-4 or IL-13) [8]. In fact, both parasites and tumors alter the balance of these monocyte/macrophage sub-populations [6,[9][10][11][12]].
When recruited to the tumor tissue, monocytes can differentiate into tumor-associated macrophages (TAM), a heterogeneous population of myeloid cells with both antitumor (M1) and pro-tumor (M2) activities (reviewed in [13]). In fact, TAMs have a wide range of functions, including those with beneficial effects, such as phagocytosis of tumor cells and production of cytotoxic factors [13,14], and more deleterious effects, such as tumor-associated immune suppression through the expression of the inhibitory immune checkpoints PDL1 (CD274) and PDL2 (CD273) [15].
M2 macrophages, with both anti-inflammatory and tissue repair functions [16,17], largely driven by IL-4 and/or IL-13, can also be induced by helminth parasites [6,18]. Helminth-induced M2 macrophages have been shown to play a role in the control of Th1-type inflammation, worm expulsion, and wound healing in murine models [19]. Interestingly, microfilariae (mf) of Brugia malayi, the bloodborne stage of one of the helminth parasites that cause lymphatic filariasis in humans, alter monocyte populations somewhat differently in that both M1 and M2 phenotypes are induced [6,18,20,21]. Furthermore, monocyte dysfunction [20,22] of either subset in filarial infection is one of the many mechanisms proposed for the parasite antigen-specific T cell hyporesponsiveness seen in humans with lymphatic filariasis.
Because the regulation of monocyte function plays a critical role in both helminth infection and tumor progression, in the present study we assessed the similarities and differences between parasite- and cancer-induced alterations of both the phenotype and function of human monocytes, in hopes of identifying potential new targets that can be exploited by host-directed therapeutics. Breast and ovarian cancers are considered to be among the leading types of cancer in North America and Europe [23,24]. Clinical data demonstrate a strong relationship between increased monocyte/macrophage density and poor prognosis in breast and/or ovarian cancers and glioblastoma [25,26]. In addition, tumor-associated macrophages play an important role in all three cancer types [27][28][29]. Therefore, in the present study we chose breast, ovarian, and glioblastoma cancer cell lines to compare their effects on human monocytes to those of helminth parasites.
Ethics statement
The elutriated monocytes and lymphocytes from leukopacks of healthy adult donors from North America were collected by counterflow centrifugal elutriation under a protocol approved by the Institutional Review Board (IRB) of the Department of Transfusion Medicine, Clinical Center, National Institutes of Health (NIH; IRB 99-CC-0168). The healthy adult volunteers provided informed written consent.
mf preparations
Live Brugia malayi mf (provided under contract with the University of Georgia, Athens, GA) were collected by peritoneal lavage of infected jirds and separated from peritoneal cells by Ficoll diatrizoate density centrifugation. The mf were then washed repeatedly in RPMI medium with antibiotics and cultured overnight at 37˚C in 5% CO2 before use.
In vitro exposure of monocytes to cancer cell lines and live mf
CMFDA-labeled cancer cell lines (MDA, OVCAR and U87) were cultured for 24 hours prior to co-culture with monocytes. Human monocytes were cultured at 50 × 10⁶ per 6-well plate in serum-free RPMI 1640 medium supplemented with 20 mM glutamine (Lonza) and P/S for 2 h, after which the medium was removed and the adherent cells were harvested. Monocytes were then either cultured alone or exposed to live mf (50,000 per million cells, to reflect physiologically relevant concentrations), or to three different CMFDA-labeled cancer cell lines at a 1:2 (cancer cell lines:monocytes) ratio for 48hrs. For monocyte co-cultures with each cancer cell line, we chose the medium used for culturing the relevant cancer cell line alone. Therefore, for the MDA cell line and the co-culture of monocytes and MDA (mon/MDA), DMEM complete media; for OVCAR and the co-culture of monocytes and OVCAR (mon/OVCAR), RPMI complete media; and for U87 and the co-culture of monocytes and U87 (mon/U87), EMEM complete media were used. Furthermore, monocytes alone or monocytes exposed to mf were cultured in DMEM complete media, as there was no difference between the three different media in mRNA expression, cell surface expression or viability of monocytes (S1 Fig). After 48hrs, the cells were harvested by cell scraping (Corning Costar), washed once with PBS (without Ca++/Mg++), and counted. Monocytes were first incubated with human gammaglobulin (Sigma) at 10 mg/ml for 10 min at 4˚C to inhibit binding of the monoclonal antibody to the Fc receptor (FcR) and were subsequently labeled with phycoerythrin-labeled mouse anti-CD45 mAb (eBioscience, San José, CA; Cat No. 12-9459-42) at saturating concentrations for 30 min at 4˚C. The cells were then washed twice with FACS medium and sorted on a FACSAria III 6-laser, 15-parameter cell sorter (Becton Dickinson, Sparks, MD) by gating on the expression of CD45 and lack of CMFDA (CD45+/CMFDA−). Sorted monocytes were then used for gene expression or functional analysis.
Cytokine measurements
After 48hrs of monocyte co-culture with cancer cell lines or mf, exposed and unexposed sorted monocytes were cultured in DMEM media overnight without any stimulation, and the production of TNF-α (tumor necrosis factor-alpha), IP-10 (interferon gamma-induced protein 10 (CXCL10)), IL-6, CCL4 (macrophage inflammatory protein 1-β (MIP-1β)) and CCL22 (macrophage-derived chemokine (MDC)) in the culture supernatants was measured using a Multiplex human cytokine/chemokine magnetic bead panel kit (EMD Millipore, Billerica, MA) and a Luminex 100/200 system (Luminex, Austin, TX). The lower limit of detection for these assays was 3.2 pg/ml.
Phagocytosis
Phagocytic activity of the human monocytes was assessed by a phagocytosis assay kit (Molecular Probes; Invitrogen) with some modifications. Briefly, sorted monocytes (1.0 × 10⁶ cells/well) were incubated with 10⁸ inactivated E. coli Alexa 488 BioParticles for 1 hr at 37˚C, 5% CO2, in serum-free DMEM media. The cells were washed with PBS, incubated for 1 min with 0.4% trypan blue to quench any extracellular fluorescence, and washed twice with PBS. Intracellular fluorescence intensity was quantified by flow cytometry. All experiments were done with six replicates. Phagocytic activities of monocytes were expressed as percent phagocytosis relative to that seen with controls.
Flow cytometry staining
Human monocytes were cultured alone or with mf or three different CMFDA-labeled cancer cell lines (green; FITC channel) as mentioned above. After 48 hrs, cells were harvested, washed with PBS and incubated with 10 μl human IgG (10 mg/ml; Sigma-Aldrich, St. Louis, MO) for 10 min at 4˚C to inhibit nonspecific binding through FcγRs and then incubated with marker-specific mAb conjugated with PDL2-APC (eBioscience,
Flow cytometric analysis
For flow cytometric analysis, 50,000 events were acquired per tube using a BD LSRII flow cytometer (BD Biosciences, San Jose, CA). Compensation was performed in every experiment using BD CompBeads (BD Biosciences) for single-color controls and unstained cells as negative controls. Data were analyzed using FlowJo Software (Tree Star, Ashland, OR). Nonviable cells were excluded from our analysis on the basis of forward and side scatter. Fold upregulation in Mean Fluorescence Intensity (MFI) was measured for all markers.
RNA preparation and real-time RT-PCR
Exposed and unexposed sorted monocytes were used for isolation of total RNA and RT-PCR to measure gene expression. Total RNA was prepared from 8 to 15 independent donors using an RNeasy mini kit (Qiagen). RNA (1 μg) from the cells was used to generate cDNA and then assessed by standard TaqMan assays (Applied Biosystems Inc.) using an ABI 7900HT system (Applied Biosystems, Inc.). Briefly, random hexamers were used to prime RNA samples for reverse transcription using MultiScribe reverse transcriptase (Applied Biosystems Inc.), after which PCR products for all genes, as well as an endogenous 18S rRNA control, were assessed in triplicate or duplicate wells using TaqMan predeveloped assay reagents. The threshold cycle (CT), defined as the PCR cycle at which a statistically significant increase in reaction concentration is first detected, was calculated for the genes of interest and the 18S control and used to determine relative transcript levels.
Relative transcript levels were determined by the formula 1/ΔCT, where ΔCT is the difference between the CT of the target gene and that of the corresponding endogenous 18S reference. Fold change in gene expression was measured using 2^(−ΔΔCT), where ΔΔCT is the difference between the ΔCT of the gene of interest in exposed monocytes and that of the unexposed control.
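For concreteness, the following is a minimal sketch of the ΔΔCT arithmetic described above; the CT values in the example are invented purely for illustration and do not correspond to any measurement in this study.

```python
# Minimal sketch of the ddCT fold-change calculation described above;
# the CT values below are invented purely for illustration.
def fold_change(ct_gene_exposed, ct_18s_exposed,
                ct_gene_control, ct_18s_control):
    """Fold change = 2^(-ddCT), with dCT = CT(gene) - CT(18S reference)."""
    dct_exposed = ct_gene_exposed - ct_18s_exposed
    dct_control = ct_gene_control - ct_18s_control
    ddct = dct_exposed - dct_control
    return 2 ** (-ddct)

# Example: a gene whose dCT drops by 2 cycles after exposure is
# expressed ~4-fold higher than in unexposed monocytes.
print(fold_change(24.0, 10.0, 26.0, 10.0))  # -> 4.0
```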
Compensation was performed in every experiment using BD CompBeads (BD Biosciences) for single-color controls and unstained cells. Nonviable cells were excluded from the analysis based on forward and side scatter. CellTrace Violet-labeled lymphocytes were further gated on expression of CD4+ or CD8+, and proliferation was measured by flow cytometry from the dilution of the fluorescent dye. Proliferation indices were calculated by using the FlowJo proliferation analysis program (Tree Star, Ashland, OR). Granzyme A expression was measured in CD8+ T cells by staining with pacific blue anti-granzyme A antibody (BioLegend, Cat No. 515407).
Statistical analysis
Unless noted otherwise, geometric means were used as the measure of central tendency. An omnibus K2 normality test was performed to confirm that the data were not normally distributed, and the nonparametric Wilcoxon signed-rank test was then used for paired group comparisons. All analyses were performed using GraphPad Prism 6.0 (GraphPad Software, Inc., San Diego, CA).
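As an illustration of the paired, nonparametric comparison described above, the sketch below uses scipy's Wilcoxon signed-rank test on invented per-donor values; it stands in for the GraphPad analysis and is not the authors' actual code or data.

```python
# Illustrative paired comparison with the Wilcoxon signed-rank test,
# mirroring the analysis described above; the values are invented and
# stand in for one readout measured in exposed vs. unexposed monocytes
# from the same donors.
from scipy import stats

unexposed = [1.0, 0.8, 1.2, 0.9, 1.1, 1.0, 0.7, 1.3]  # per-donor baseline
exposed   = [2.1, 1.9, 2.8, 1.5, 2.2, 1.7, 1.4, 2.6]  # same donors, after co-culture

res = stats.wilcoxon(exposed, unexposed)  # paired, nonparametric
print(f"statistic={res.statistic}, p={res.pvalue:.4f}")
```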
mRNA expression of selected genes associated with inflammation, type 2, regulatory and angiogenesis in human monocytes after exposure to either cancer cell lines or mf
To assess whether cancer cell lines and helminth parasites share similar features in shaping human monocyte gene expression, human monocytes were either exposed to CMFDA-labeled cancer cell lines (breast cancer, MDA-MB-231 (MDA); ovarian cancer, OVCAR-3 (OVCAR); and glioblastoma, U87-MG (U87)) or to live mf of Brugia malayi for 48 hours, sorted for CD45+/CMFDA− monocytes, and assessed for mRNA expression by RT-PCR. To further assess the phenotype of exposed monocytes, we selected genes associated with the inflammatory response (M1), type-2 response (type 2/M2), regulatory response (Reg) or responses associated with angiogenesis (Ang) (Fig 1A and 1B and S2 Fig). Our results indicate that cancer cell lines significantly upregulated genes associated with M1 monocytes, including PDL1, TNF-α, IL-1β, IL-6, IL-8, CCL3, and prostaglandin-endoperoxide synthase 2 (PTGS2) (Fig 1A and S2 Fig).
The expression of genes associated with M2 monocytes (TGM2 (transglutaminase 2), PDL2, CD206, CCL13), regulatory monocytes (IL-10, TGF-β) and angiogenesis (MMP9 (matrix metallopeptidase-9) and VEGF (vascular endothelial growth factor)) was significantly induced in monocytes exposed to all three different cancer cell lines when compared to unexposed monocytes (Fig 1A and S2 Fig). Although there were major differences between cancer cell line-exposed and mf-exposed monocytes, mRNA expression of IL-1β, MMP9, and TGM2 was significantly upregulated in monocytes following exposure to either stimulus compared to unexposed monocytes (Fig 1B).
Cancer cell lines and mf significantly upregulate the production of IP-10 and CCL22 in human monocytes
We next measured monocyte cytokine production (Fig 2) after exposure to cancer cell lines or mf. To further assess the phenotype of exposed monocytes, we selected cytokines associated with the inflammatory response (M1), type-2 response (type 2/M2), regulatory response or responses associated with angiogenesis (Fig 2 and S1 Table). To this end, human monocytes were either exposed to CMFDA-labeled cancer cell lines (MDA, OVCAR, and U87) or to live mf of Brugia malayi for 48 hours, sorted, and rested in media for an additional 24 hours, following which cytokine production was assessed in the culture supernatant. As seen in Fig 2, both cancer cell lines and live mf significantly (p = 0.001) upregulated the production of IP-10 and CCL22, while IL-6 production was significantly upregulated only by cancer cell lines and not by mf (Fig 2). Among the cytokines/chemokines tested, the production of TNF-α, CCL4 (Fig 2A), IL-10 (Fig 2C), and VEGF (Fig 2D) was not affected by exposure to any of the stimuli.
Both mf and cancer cell lines induce the monocyte cell surface expression of PDL1, PDL2, VCAM-1, and CD206
We then determined the phenotype of the monocytes following exposure to either live mf or the cancer cell lines. The basal expression of selected cell surface markers (M1, M2, inhibitory) on either human monocytes or cancer cell lines suggests that human monocytes do not express PDL2 (M2/inhibitory), VCAM-1 (M1), or CD206 (M2), and have low expression of PDL1 (M1/inhibitory) and CD163 (M2) (Fig 3A, black lines). Of interest, after 48 hours of exposure to any of the three cancer cell lines, the expression of PDL1 (p = 0.001), PDL2 (p = 0.003), VCAM-1 (p = 0.003), and CD206 (p = 0.003) on monocytes was significantly upregulated (Fig 3A and 3B). Similar to cancer cell lines, but to a lesser extent, live mf significantly upregulated the cell surface expression of each of these markers (p = 0.001), with the exception of CD163 (Fig 3A and 3B). Longer exposure (5 days) of monocytes to mf further induced the level of cell surface PDL1 (S3A and S3B Fig).
Cancer cell lines and MCSF (but not mf) significantly induce the phagocytic ability of human monocytes
To compare the ability of cancer cell lines and mf to shape the phagocytic function of human monocytes, we measured the ability of monocytes to take up fluorescently labeled E. coli following co-culture.
The breast cancer cell line MDA significantly inhibits the proliferation of allogeneic and autologous CD4+ T cells in a PDL1-dependent manner and reduces the frequency of Granzyme A+ CD8+ T cells
The suppressive effects of tumor-associated macrophages on T cell proliferation have been shown previously [30]. To assess whether these particular cancer cell lines or live mf alter the ability of monocytes to drive T cell proliferation and mediator release, human monocytes were cultured in media alone, or with CMFDA-labeled breast cancer cell lines or live mf of Brugia malayi, for 48hr. CD45+/CMFDA−-sorted exposed and unexposed monocytes were then co-cultured with CFSE-labeled allogeneic or autologous lymphocytes in the presence of anti-CD3 for an additional 4 days. While exposure to mf did not alter the ability of monocytes to induce T cell proliferation, exposure to MDA and the other two cancer cell lines (U87 and OVCAR; S5 Fig) significantly diminished their ability to promote allogeneic and autologous CD4+ (Fig 5A and 5B), and allogeneic CD8+ (Fig 5A), T cell proliferation.
We next aimed to investigate the role of PDL1 in this diminished T cell proliferative activity. As shown in Fig 5C and 5E, blocking the PDL1 pathway significantly increased the proliferation of allogeneic CD4+ T cells, suggesting that the PDL1 upregulation in cancer cell line-exposed (but not mf-exposed) monocytes plays an important role in T cell suppression.
Because regulation of cytolytic CD8+ T cells is crucial in controlling tumor progression and growth, particularly through the release of granzymes (reviewed in [31]), we studied the effect of cancer cell line-exposed monocytes on granzyme A release in CD8+ T cells. As shown in Fig 5D, MDA- and U87-exposed monocytes, but not OVCAR- or mf-exposed monocytes, significantly decreased the percentage of Granzyme A+ allogeneic CD8+ T cells when compared to the unexposed monocytes. Furthermore, longer exposure of monocytes to mf (5 days) did not result in a decrease in allogeneic or autologous (α-CD3-dependent) CD4+ or CD8+ T cell proliferation (S6A and S6B Fig).
Discussion
Within most solid tumors, monocytes and macrophages are the major inflammatory infiltrates; they can be recruited to the tumor microenvironment by tumor-derived chemokines, cytokines and other signals [32]. The majority of these infiltrating cells differentiate into TAMs, promoting cancer cell proliferation, immunosuppression, and angiogenesis [33][34][35]. While the immunosuppressive TAMs are activated by IL-4 and IL-13 (Th2-associated cytokines), IL-10, glucocorticoids and vitamin D3, and can exert functions similar to M2 macrophages [36], recent findings suggest that some TAMs are Th1-skewed and resemble M1 macrophages [37]. In murine models, glioblastoma-associated monocytes/macrophages have been shown to produce a broad range of cytokines/chemokines with both anti- and pro-inflammatory properties [38]. However, in humans, the same monocytes have phenotypes that are largely anti-inflammatory [39]. In addition, monocytes co-cultured with breast cancer cell lines have been shown to have pro-tumor patterns of activity [40]. Therefore, TAMs are not simply restricted to M1 and M2 phenotypes and can represent a spectrum depending on tumor type, location, and microenvironment (reviewed in [41]).
In general, helminth parasites have been shown to induce an M2 response in monocyte/macrophage populations (reviewed in [42]). Although it has already been reported that both helminths and tumors alter the function of monocytes/macrophages, there are no studies assessing the similarities and differences between parasites and cancer cells in their effect on the phenotype and function of these cells. Given that both parasites and tumors share features designed to manipulate the host immune response, we aimed to study the similarities and differences between them with respect to monocytes (S1 Table).
To do so, we established a comparison between three cancer cell lines (MDA, OVCAR, and U87) and the circulating stage of the helminth parasite Brugia malayi, and then assessed the phenotype and function of human monocytes after exposure to either the cancer cell lines or the parasites. While we have looked at the gene expression and cytokine production of mf- and cancer cell line-exposed monocytes at various time points (5 or 7 days; S7 Fig), we chose 48 hours, as mf exert their profound effect on DCs and monocytes at this time point [6,9,43,44]. In our hands, and in agreement with previous studies [45,46], upon exposure of monocytes to cancer cell lines, both pro- and anti-inflammatory genes are induced (Fig 1, S2 Fig and S1 Table). In fact, all three cancer cell lines significantly enhanced the monocyte mRNA expression of genes associated with inflammatory responses, type-2 responses, regulatory responses, and angiogenesis (Fig 1 and S2 Fig). The M2 differentiation of monocytes co-cultured with breast cancer cell lines in transwell has been shown in the past [46]. In our studies, co-culture of monocytes with the supernatant of cancer cell lines resulted in enhanced expression and frequency of CD206- and PDL1-positive cells (S8 Fig). Furthermore, cancer cell line-exposed human monocytes demonstrated a significant induction of TGF-β, PGE2 and IL-10 as compared to unexposed cells (Fig 1). These immunosuppressive cytokines are known to be produced by macrophages in the tumor microenvironment to promote tumor growth by maintaining T regulatory cell differentiation [11,47]. Moreover, angiogenesis is a key event in tumor growth and progression, and TAMs are the major cell type promoting this event by producing factors such as VEGF in the tumor microenvironment [48,49]. For example, the interaction between monocytes/macrophages and ovarian cancer cells results in an increased ability of endothelial cells to promote tumor progression through angiogenesis [33]. Here our data indicate that exposure of human monocytes to all three cancer cell lines results in significant induction of VEGF and MMP9 (Fig 1), suggesting a phenotype similar to that of TAMs.
One of the major similarities between mf- and cancer cell line-exposed monocytes is the significant upregulation in the mRNA levels of IL-1β (associated with inflammation or the M1 phenotype), MMP9 (associated with angiogenesis), and TGM2 (associated with the M2 phenotype) (Fig 1B). However, the magnitude of this upregulation is less profound in mf-exposed monocytes (Fig 1A and 1B).

Fig 5. Cancer cell lines significantly diminish PDL1-dependent proliferation of allogeneic and autologous CD4+ T cells and significantly diminish the frequency of Granzyme A+ CD8+ T cells. Human monocytes were cultured in media alone, or with CMFDA-labeled MDA or live mf of Brugia malayi, for 48hr. Cells were harvested and CD45+/CMFDA− monocytes were sorted and co-cultured with CFSE-labeled A) allogeneic or B) autologous lymphocytes in the presence of soluble anti-CD3 (10ug/ml) for an additional 4 days. Percent proliferation of CD4+ and CD8+ T cells was measured by flow cytometry (n = 7) either A and B) in the absence of antibody or C) in the presence of isotype control (closed circle) or anti-PDL1 (open circle). Each line represents an independent donor. *, P < 0.05. D) Frequency and MFI of allogeneic Granzyme A+ CD8+ T cells was measured by flow cytometry. The data are expressed as the geometric mean of percent decrease in frequency and MFI of Granzyme A+/CD8+ T cells. ND = No Decrease; * P < 0.05. E) One representative set (n = 7) of flow histograms demonstrating proliferation of allogeneic CD4+ T cells either without antibody (first panel), or in the presence of isotype control or α-PDL1 (second and third panels).
Another similarity between cancer cell lines and mf is in their regulation of cytokine production. For example, while neither (mf or cancer cell lines) induces the production of CCL4, TNF-α, IL-10, or VEGF, they both significantly enhance the production of IP-10 and CCL22 in human monocytes (Fig 2). Both IP-10 and CCL22 are involved in lymphocyte chemotaxis and in recruiting regulatory T cells to the tumor microenvironment [50,51]. IP-10 also binds endothelial cells and exerts a potent angiogenic activity in tumor settings [52]. While the role of IP-10 in T cell recruitment has been shown with intracellular parasites [53], the importance of this chemokine in helminth infection is still not fully understood. In humans, CCL22 and CCL18 are the chemokines expressed by M2 macrophages [54,55], and mf of Brugia malayi upregulate the mRNA expression of both chemokines [6] and also induce the production of CCL22 (Fig 2).
An important similarity between helminth parasites and cancer cell lines demonstrated here is their ability to upregulate monocyte cell surface expression of inhibitory molecules such as PDL1 and PDL2 (Fig 3). While unexposed monocytes have low expression of PDL1, CD163, and CD206 (Fig 3A; black solid lines), exposure to mf, similar to that to cancer cell lines, significantly upregulates the cell surface expression of PDL1 (inhibitory), PDL2 (M2/inhibitory), CD206 (M2), and VCAM-1 (M1) (Fig 3A and 3B). Interestingly, while the upregulation in cell surface expression of PDL1 on monocytes was less profound with mf than with cancer cell lines, longer exposure to this parasite further increased the level of PDL1 (S3 Fig). PDL1 and PDL2 are the two ligands for a major immune-checkpoint receptor, PD1 (CD279) [56,57]. The engagement of PD1 on T cells with its ligands (PDL1/PDL2) on APCs inhibits kinases that are involved in T cell activation [56,58]. The majority of lymphocytes that infiltrate the tumor microenvironment express PD1 and acquire a phenotype of hyporesponsiveness [59,60]. On the other hand, PDL1 has been shown to be expressed on most melanoma, ovarian and many other cancer types [61]. In addition to tumors, myeloid cells in the tumor microenvironment, such as TAMs, also express high levels of PDL1 and PDL2 [62][63][64][65][66][67]. Therefore, blockade of this inhibitory pathway is essential in cancer immunotherapy (reviewed in [62]).
Our data suggest that monocytes that are exposed to MDA (Fig 5; and other cancer cell lines, S5 Fig) have a significantly decreased ability to promote allogeneic CD4+ and CD8+ and autologous CD4+ T cell proliferation as compared to unexposed monocytes (Fig 5A and 5B), suggesting a suppressive phenotype. Interestingly, blocking the PD1/PDL1 pathway with anti-PDL1 mAb reversed the suppressed proliferation in allogeneic CD4+ but not CD8+ T cells that were co-cultured with MDA-exposed monocytes (Fig 5C).
Similar to cancer settings, chronic filarial infection with continuous release of parasite antigens is associated with a lack of CD4+ T cell proliferation and production of IFN-γ and IL-2 [68]. The role of PD1/PDL1 (PDL2) in regulating T cell responses has also been extended to several infections [69]. Recent studies have suggested that macrophage expression of PDL1 is important in regulating T cell responses to influenza infection [70]. In acute malaria, the induction of PD1+ CTLA4+ effector T cells results in suppressive function and inhibition of other CD4+ T cells [69]. In our study, one major difference between helminth parasites and cancer cells is how they shape monocytes to promote T cell activation (Fig 5). In contrast to cancer cells, exposure to mf did not diminish the ability of monocytes to promote T cell proliferation (Fig 5A, 5B and 5C). Accordingly, blocking the PDL1 pathway in mf-exposed monocytes did not have any effect on T cell proliferation. Furthermore, longer exposure of monocytes to mf (5 days, S6 Fig) does not inhibit T cell proliferation. While it has been suggested that other inhibitory molecules, such as CTLA4, can play a role in the T cell hyporesponsiveness seen in filarial-infected individuals [71], how PD1/PDL1 may play a role in this suppression is not known.
The ability of cytotoxic lymphocytes to recognize and kill infected or transformed cells is an important part of both the innate and adaptive arms of the immune system. In fact, cytolytic T lymphocytes are key players in current immunotherapies and promote apoptosis of cancer cells through granule-mediated as well as receptor-mediated mechanisms (reviewed in [31]). Stimulation of these cytolytic T cells through their receptors induces the activation of effector mechanisms, including the granule exocytosis pathway, releasing granule-associated enzymes (granzymes) as well as other factors, resulting in target cell death [72]. Granzyme A and B are two important serine proteases that are involved in lymphocyte-mediated cytotoxicity [31]. In fact, it has been shown that inhibition of the function of cytolytic T lymphocytes such as CD8+ T cells by TAMs establishes a suppressive microenvironment for the infiltrating immune cells [73,74].
Here, we demonstrate that exposure of human monocytes to MDA and U87 cancer cells, but not to OVCAR or mf (Fig 5D), significantly downregulated the percentage of Granzyme A+ CD8+ T cells, suggesting a further suppressive function of these cancer cell-associated monocytes. In general, CD8+ T cells can play both effector and regulatory roles in parasitic immunity (reviewed in [75]). CD8+ T cell-mediated killing activities have been mostly directed at, and demonstrated against, a number of intracellular parasites that infect host cells [75]. How CD8+ T cells regulate immunity against extracellular parasites is not fully understood. In filarial infections, CD8+ T cells exhibited a unique transcriptome in chronically infected patients when compared to those with relatively acute infections, suggesting an importance of CD8+ cells in this infection [76]. Immune suppression in a variety of helminth infections involves regulatory T cells [42]. For example, in onchocerciasis, Granzyme A/B expression was associated with Treg induction and subsequent immune suppression [77]. Induction of Tregs was shown both in vitro [78,79] and in filarial-infected patients [80,81].
One important function of macrophages is their ability to phagocytose [82,83]. Macrophage phagocytosis plays a major role in tumor immune surveillance [84] in that antibody-dependent cellular phagocytosis mediated by macrophages contributes significantly to anti-tumor activity [85]. Macrophages that are polarized within the tumor microenvironment have increased phagocytic ability [86]. In the present study, exposure to MDA (Fig 4) and other cancer cell lines (S4 Fig) significantly increased the ability of human monocytes to phagocytose bioparticles, suggesting that human monocytes exposed to cancer cell lines in vitro behave similarly to TAMs.
Our data suggest that despite the fact that helminth parasites and tumor cell lines are extraordinarily disparate, they share the ability to alter the phenotype of human monocytes although the nature of this alteration differed (see S1 Table). Nevertheless, similarities between the two types of stimuli in eliciting macrophage phenotypes similar to that of TAMs were observed, most notably in their ability to drive the surface expression of immune inhibitory molecules such as PDL1. Finally, utilizing a multidisciplinary approach to understand the mechanisms underlying immune evasion by both tumors and parasites could be beneficial to our understanding in both fields.
Supporting information

S1 Table. Similarities and differences between cancer cell lines and filarial parasites in shaping the phenotype and function of human monocytes. (TIFF)

S1 Fig. Comparing media for monocyte cultures. Human monocytes were cultured in either complete DMEM media, complete RPMI media, or complete EMEM media for 48 hours. Cells were harvested, A) viability was measured using trypan blue exclusion, B) mRNA levels were measured by TaqMan real-time PCR and normalized to the levels of 18S rRNA, and C) surface expression of PDL1 and CD206 was measured using flow cytometry. The data are expressed as the geometric mean (n = 2). (TIFF)

S2 Fig. mRNA expression of selected genes associated with inflammation, type 2, regulatory, and angiogenesis. Human monocytes were either unexposed (Mon) or exposed to three different CMFDA-labeled cancer cell lines (MDA, OVCAR, U87), or to live mf of Brugia malayi for 48 hours. CD45+/CMFDA− monocytes were sorted and mRNA levels of selected genes associated with A) inflammation, B) type 2, C) regulatory and D) angiogenesis were measured by TaqMan real-time PCR and normalized to the levels of 18S rRNA. The data are expressed as the geometric mean with 95% confidence interval of 1/delta CT (n = 10). (TIFF)

S5 Fig. Cancer cell line-exposed monocytes diminish CD4+ T and CD8+ T cell proliferation. Human monocytes were cultured in media alone, or with CMFDA-labeled OVCAR, or CMFDA-labeled U87 for 48hr. Cells were harvested and CD45+/CMFDA− monocytes were sorted and co-cultured with CFSE-labeled A) autologous or B) allogeneic lymphocytes in the presence of soluble anti-CD3 (10ug/ml) for an additional 4 days. Percent proliferation of CD4+ and CD8+ T cells was measured by flow cytometry either A and B) in the absence of antibody or C and D) in the presence of isotype control or anti-PDL1. The data are expressed as the geometric mean (n = 2). (TIFF)
S6 Fig. Longer exposure of monocytes to mf does not inhibit T cell proliferation.
Human monocytes were cultured in media alone, or with live mf for 5 days. Cells were harvested and co-cultured with CFSE-labeled A) autologous or B) allogeneic lymphocytes in the presence of soluble anti-CD3 (10ug/ml) for an additional 4 days. Percent proliferation of CD4+ and CD8+ T cells was measured by flow cytometry either in the absence of antibody or in the presence of isotype control or anti-PDL1. The data are expressed as the geometric mean (n = 2). (TIFF)

S7 Fig. mRNA expression of selected genes associated with inflammation, type 2, regulatory, and angiogenesis following longer exposure. Human monocytes were either unexposed (Mon) or exposed to three different CMFDA-labeled cancer cell lines (MDA, OVCAR, U87), for either 5 or 7 days. CD45+/CMFDA− monocytes were sorted and mRNA levels of selected genes associated with A) inflammation, B) type 2, C) regulatory, and D) angiogenesis were measured by TaqMan real-time PCR and normalized to the levels of 18S rRNA. The data are expressed as the geometric mean of 1/delta CT (n = 2). (TIFF)
Consumer Panic Buying: Realizing Its Consequences and Repercussions on the Supply Chain
Globalization has brought not only advantages but also risks into supply chains. One lesser-studied risk is the effect of consumer behavior in crises. The recent COVID-19 pandemic has shown that even the most efficient and optimized supply chains are susceptible to consumer panic buying. There is a severe need to understand the multitude of scenarios that could manifest after a catastrophe due to the change in consumer behavior so that businesses can develop a mitigation plan. The authors have developed an agent-based model that can simulate the various outcomes of a crisis using a consumer panic buying model and a supply chain model. The model quantitatively evaluates the panic purchase intention of a consumer while assessing the impact of panic buying on the supply chain. This paper introduces the implementation of the model, focusing on output analysis of the various situational settings in disaster aftermath. A preliminary study has revealed that implementing a quota policy or uniform rationing is very effective, while controlling media reports or panic-buying consumers can reduce consumer demand significantly.
Introduction
Increasing globalization has a multitude of companies taking advantage of global sourcing, procuring inexpensive raw materials or lower labor costs from developing countries, or technical expertise from advanced nations, to increase their profitability. The supply chain, being the lifeline of most retail businesses, benefits from the many dynamics of globalization. The supply chain network of the global FMCG (Fast Moving Consumer Goods) sector is well woven from one corner of the world to another and monitored in real time with the help of advanced technology. These complex and long supply chains are inevitably subject to disruptions. The increasing risks have had companies rethinking the management of their extensive supply chains in response to various internal or external risks, such as inaccurate forecasts, transportation issues, political changes, economic instability, or natural disasters [1][2][3]. The recent COVID-19 pandemic exposed structural flaws that have indicated a requirement for organizations to reassess their risk management approaches. Lockdowns, transportation disruptions, and panic buying led to shortages of products in almost every sector, from medical and essential commodities to automotive and electronic components. While movement restrictions have caused production or distribution problems, panic buying has been a major cause of the severe shortages. The inability of supply chains to cope with the situation during the pandemic has explicitly exposed a lack of expertise and research on mitigating the risk of panic buying during emergency situations or disasters.
Human behavior is impacted when surrounded by a sense of fear or anxiety, especially during disasters or extreme situations. This is followed by an urge to act on the situation to take control [4][5][6]. Performing this action provides a sense of certainty over the situation [7]. Hence, in the case of a consumer, this 'fear of the unknown' stimulates the precautionary action of stockpiling as protection against having no stock, which is often termed 'panic buying'. Panic buying can be seen regularly before or after natural disasters such as hurricanes and earthquakes, which are mostly regional occurrences limited to the affected areas or countries. However, the recent COVID-19 pandemic has shown that this behavior can be seen in all uncertain situations and has established sufficient proof of panic purchase behavior among consumers [8]. People were hoarding available goods irrespective of necessity. Certain goods such as masks, sanitizers, and toilet paper were flying off the shelves, which resulted in stores implementing sales restrictions in most parts of the world. Panic buying of a product leads to a sudden increase in demand, which creates mayhem along the supply chains of the retail industries, as they mostly work on just-in-time techniques [9]. This disruption progresses to initiate further panic buying, which turns into a vicious circle. Essential commodities, such as food or water, are typically at higher risk, and shortages might lead to chaotic situations, increasing the number of vulnerable people. Such situations highlight the importance of controlling panic among the public, along with the presence of a resilient supply chain able to provide essentials to the consumers.
Disasters directly affect the infrastructure and operation of supply chains, and this has been discussed by researchers at large, but these studies have ignored a major cause of supply chain disruption, which is the change in consumer behavior in uncertain situations. This has led to major supply chain failures in the challenging circumstances of large-scale disasters. Hence, it is important to study and anticipate consumer behavior in crises. Several researchers have studied consumer behavior in disasters. However, the study of panic buying by sociologists or psychologists is still meagre. Yuen et al. [6] recently presented a comprehensive review of the existing literature on panic buying, highlighting the lack of research in this area. As mentioned, the supply chain can be disrupted at large due to sudden changes in consumer purchase behavior. The impact on the supply chain due to consumer panic buying is an even lesser-discussed issue. Shou et al. [3] studied consumer panic buying and quota policy under supply disruptions related to supply reliability and the cost of ignoring consumer behavior. Yoon et al. [10] studied sourcing strategies for consumer stockpiling in supply disruptions. However, these studies have not included disasters or emergency situations, which have a significant impact on both the consumer and supply chain stakeholders. The lack of a predictive consumer model or of studies of supply chain disruption due to consumer behavior was clear from the past literature [6]. To bridge the gap between supply chain risk studies and consumer behavior in disaster situations, we aimed to build a model that can study consumer panic buying and its impact on the supply chain and that can support predicting and mitigating the various consequences of large-scale panic buying. Dulam et al. [11] developed a model to analyze both the consumer panic buying of bottled water and the response of the supply chain in the disaster aftermath using an agent-based model. The consumer model has been enhanced for assessing the panic purchase intention of a consumer in an uncertain situation based on the factors that play a significant role during crises [12]. The combined simulation model can help to understand the possible aftereffects of disaster scenarios, such as expected panic buying, the possible supply chain disruption due to it, and the outcomes of the intended mitigation measures used to control the situation, which could help industries be prepared for potential disruptions. For example, stores can foresee the possible demand given the behaviors of consumers in their area and manage their inventory accordingly. The supply chain can test mitigation measures it intends to apply and anticipate consumer reactions. The government can have a thorough view of the chaos that can unfold after a large-scale disaster. The current paper exhibits the application of the model to elevate the importance and benefits of such tools.
The rest of the manuscript is divided into five sections: the next section discusses the past literature on this issue; the third section deals with the methodology used for model development; the fourth section shows the outcomes of possible disaster scenarios; the fifth section reports the discussion; and the last section provides the conclusions of this study.
Literature Review
This section discusses some of the available literature regarding consumer panic buying, followed by studies on supply chain impact due to disasters and finally on consumer panic buying and supply disturbances.
Consumer Panic Buying
The study of consumer behavior is a very popular field, as it forms the basis for the future of any business. The behavior of consumers has been evolving with changing lifestyles, and trends and patterns have been constantly studied by sociologists and psychologists. Consumer stockpiling was initially seen when product promotions or price fluctuations were prevalent. Consumers tend to stockpile when there is uncertainty in the number of deal opportunities or in regular prices. However, consumer panic buying in large-scale disasters or panic situations is a lesser studied subject, which needs more focus in the wake of an increasing number of disasters. A few of the limited works on consumer behavior in crises are mentioned below. Strahle and Bonfield [5] tried to understand collective consumer actions with a model of individual decision-making in panic situations using eight structural factors listed from previous literature. Liren et al. [13] advised that government involvement can control panic purchasing, based on an evolution mechanism and development tendency of panic purchase. Kurihara and Maruyama [14] found that unpreparedness for disaster and excessive media coverage had caused stockpiling of essential goods, from an analysis of a survey conducted after the Great East Japan Earthquake. A study by Cavallo et al. [15] reported that disasters can impact product availability directly, using online data collected from retailers. Forbes [16] found that consumers purchase increased levels of utilitarian products necessary for survival, from a study of scanner data of purchases after the Christchurch earthquake. Lindsay and Hyunju [17] found that high utilitarian and hedonic shopping can be associated with a high level of fear. Recently, panic buying has caught researchers' attention due to its widespread presence during the pandemic. Yuen et al. [6] identified the causes of panic buying to be (1) an individual's perception of threat or scarcity, (2) fear of the unknown, (3) coping behavior, and (4) social psychological factors, based on a literature review of the existing academic papers on panic buying. Loxton et al. [8] compared the spending patterns of consumers during the COVID-19 pandemic and identified similarities with previous crises and shock events. Keane and Neal [18] and Prentice et al. [19] highlighted the influence of government policies on consumer panic buying using an econometric model based on Google search data and on semantic analysis, secondary data, and big data analytics, respectively. It can be observed from the above works that the study of panic buying behavior is still in its earlier stages, requiring research that can lead to forecasting consumer behavior in times of disaster.
Supply Chain Management
Today's global supply chains are extremely complex, and a minor deviation of one element exposes the entire chain to disturbances, which impact both its performance and its long-term sustainability. As the supply chain's performance is vital to any business, supply chain disturbances can have a significant effect. Bernstein [20] wrote, "the demand for risk management has risen along with the growing number of risks." Additionally, Christopher and Peck [21] said, "in today's uncertain and turbulent markets, supply chain vulnerability has become an issue of significance for many companies and appropriate research on resilient supply chains are yet to be conducted". There is an immense body of literature in the field of supply chain risk management, which studies the causes, effects, and mitigation of the typical risks in the event of a disaster. Katsaliaki et al. [22] provided a comprehensive review of possible risks, supply chain disruptions, modelling approaches, and resilience strategies, highlighting the impact, importance, and necessity of supply chain risk management. These works are paving the way for better-equipped, resilient supply chains. Mensah and Merkuryev [23] prioritized the risks and proposed appropriate strategies and tools for understanding supply chain resilience in order to avoid those risks. Highlighting environmental risks such as natural disasters, several researchers have worked to study their effect on supply chains. Ivanov and Wendler [24] studied the existing quantitative methods and identified issues in emergency logistics management in disaster situations. Inoue and Todo [25] built a model to calculate the indirect damage, along with its propagation, during the 2011 triple disaster by simulating nationwide supply chains. Giannakis and Louis [26] modelled supply chain risks for manufacturing industries using an agent-based approach. A few researchers in recent years have studied supply disturbances under stockpiling conditions. Yoon et al. [10], while studying a retailer's single and multiple sourcing strategies for consumer stockpiling initiated by supply disruptions, noted that stronger panic buying behavior is seen when consumers have experienced similar problems. Zheng et al. [27] studied the optimal inventory ordering policy for the retailer by taking into consideration consumers' social learning behavior. Tsao and Raj [28] studied product segmentation and, by categorizing retailers in a panic-induced supply disruption scenario, found that substitution and customer segmentation can increase profits. Hobbs [29] suggested that the just-in-time supply chain model may be vulnerable to demand and supply shocks, as seen in the early months of the COVID-19 pandemic. The author raised several questions regarding the timing of the implementation of limits on purchasing, and the consumer's trust in the government and food system to manage the crisis.
The above discussions show that studies about consumer panic buying and supply chain disturbances exist, albeit the former in its early stages. However, the impact on the supply chain due to consumer panic buying has not been explored to date. This confirms the previously discussed lack of research on the topic. Furthermore, the necessity for predictive models was also highlighted in the above works. The current study addresses exactly this issue by proposing a novel means of approaching the problem.
Methodology
This section explains the model in brief, as the article's focus is the implementation of the simulation tool. The objective involves the study of consumer behavior and its impact on the supply chain. Hence, the model chiefly comprises a supply chain model and a consumer model. The supply chain involves the stakeholders and their actions to complete the functions of the supply chain. The consumer model involves the process leading to the purchase of essential commodities in disaster aftermath.
The model is influenced by the consequences of the Great East Japan Earthquake, which had a severe impact on the infrastructure, the economy, and even the people. The nuclear accident intensified the crisis due to the radiation leak. One major consequence of the disaster was panic buying, which was prevalent in a wide range of sectors, such as the food and beverage industry and the auto component sector [10,30]. Fuel, bottled water, bread, and instant meals were among the top sellers [14]. Due to the radiation leak, the Tokyo water department had warned that infants must not consume tap water due to increased radioactive iodine levels. This notice instigated people to panic-buy bottled water in unprecedented quantities. There were several media reports and articles about the stockout of food and drinking water in stores, especially in the national capital. Analysis of such situations helps us identify potential problems associated with panic buying. Hence, we used the consequences of the triple disaster as a reference for identifying the key parameters of the model.
Supply Chain Model
The supply chain is an integration of a multitude of elements, such as people, organizations, activities, resources, and information, required for the manufacturing, distribution, and sale of a product. The model implements the key functions of the supply chain members required for sustaining their businesses. A simple supply chain with four tiers was considered with five stakeholders involved in the production and supply of bottled water as supply chain agents (SCAs), which are manufacturers (M), distributors (D), individual retailers (R), chain heads (CH), and their chain retailers (R). The product moves down the hierarchy, changing hands from the sellers into their buyers in the above-mentioned order.
The model was built with a supply chain design of continuous replenishment; the sellers provide the product to their buyers daily or at regular intervals depending on their lead time. In an interval, the SCAs prepare for sale and, when a consumer places an order, make the sale depending on the availability of stock. After all the consumers complete their purchase action, the SCAs conduct an inventory check and place orders to their sellers. The above actions of inventory management are based on the conventional economic order quantity model for variable consumer demand [31]. The supply chain model is designed such that the stakeholders strive to satisfy their customer demand. However, there might be some intervals where the demand is more than the usually expected quantity, which appears as a disturbance in the supply chain. Supply chain disturbance is defined in the current work as a minor imbalance in supply and demand for an interval or two, which is normalized immediately. However, when the disturbance is continuously persistent for several intervals, it converts into a supply chain disruption.
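As a rough illustration of this replenishment logic, the Python sketch below implements an EOQ order-quantity rule with a simple reorder point; the paper's actual model is implemented in Java, and the cost parameters, reorder-point rule, and numbers here are illustrative assumptions rather than the authors' calibrated values.

import math

def eoq_order_quantity(demand_rate: float, order_cost: float, holding_cost: float) -> float:
    """Classic EOQ: the order size minimizing ordering plus holding costs."""
    return math.sqrt(2.0 * demand_rate * order_cost / holding_cost)

def place_order(inventory: float, avg_demand: float, lead_time: int,
                order_cost: float, holding_cost: float) -> float:
    """Order EOQ units whenever on-hand stock falls below expected lead-time demand."""
    reorder_point = avg_demand * lead_time
    if inventory <= reorder_point:
        return eoq_order_quantity(avg_demand, order_cost, holding_cost)
    return 0.0

# Example: a retailer selling ~120 units per interval with a 2-interval lead time
print(place_order(inventory=200, avg_demand=120, lead_time=2,
                  order_cost=50.0, holding_cost=0.4))

Because average demand is recalculated from actual sales, the reorder point and order quantity adapt automatically when consumer demand surges after a disaster.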
Consumer Model
The consumer model deals mainly with the decision-making of the consumer depending on the circumstances. Consumer decision-making is the study of how individuals make decisions on how they spend their available resources, such as time, money, and effort, on the consumption of products, which is influenced by the consumer's emotional, mental, and behavioral states. The influencing factors have been categorized as personal, social, individual, psychological, and situational. The model is built using a six-step decision-making process viz., need recognition, information search and processing, factor valuation, decision, purchase, and purchase evaluation. The household agent undertakes the above-mentioned process to make the purchase decision based on its available resources and information.
The consumer agent is a 'household', which includes its members. The main consumer model involves the decision-making process. Some sub-models are required to support the agent action, explained as follows. The product consumption model (here, water) calculates the average and per interval consumption of the household depending on the weight of each member [32,33]. The store selection model helps the consumer to select a store in its vicinity based on its preferences. The purchase quantity model calculates the required quantity based on available inventory and consumption habits. The satisfaction model evaluates consumer satisfaction based on product availability for the interval and outcome of the purchase action.
The decision-making process is phase-dependent; the household's decision-making criteria are different in the pre- and post-disaster phases. Pre-disaster decision-making is based on the available inventory and the consumption of the household. However, the post-disaster scenario unveils a multitude of psychological and situational factors evoked as a consequence of the disaster. Hence, post-disaster decision-making was developed based on a logistic transformation model, using a multiple regression analysis of a questionnaire survey. A questionnaire survey was conducted to comprehend the factors influencing the decision-making of the purchase of essential commodities, specifically bottled water, during a crisis. The questionnaire is designed such that the questions output the respondent's opinions on the influencing factors that are identified from past literature and articles published during the 2011 triple disaster [14,34,35].
Based on the survey outputs, the regression model is built on significant personal factors (explanatory variables): household size, children, past purchase difficulty, past purchase behavior, and risk and anxious temperaments, and on the effect of situational factors (response variables): sales restrictions employed by the retail stores, media reports, rumors about shortages, and neighbors' panic purchase behavior, as listed in Table 1. The regression equations obtained from the regression analysis give the model parameters (α), which, along with the situational factors, give the response variable (Y), from which, in turn, the purchase intention (PI) is obtained, as shown in Equations (1) and (2), respectively:

Y = \sum_{i} \alpha_i x_i + \varepsilon (1)

PI = \frac{1}{1 + e^{-Y}} (2)
Here, PI is the purchase intention or probability of stockpiling; Y is the response variable; x_i is the situational vector element for the ith situational factor; α_i is the linear model parameter for the ith situational response variable; ε is a constant. Figure 1 shows the process of decision-making in the post-disaster phase. The model parameters for each consumer agent are calculated, based on the regression equations obtained from the questionnaire survey, during the generation and initialization of the agents. The process in the dashed box is conducted for every interval. Once the disaster occurs, the situational variables are calculated in every interval based on the circumstances of the consumer agent. The dynamic situational factors and the pre-calculated model parameters are input into the decision-making model to arrive at the purchase decision, as explained earlier.
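A minimal sketch of this purchase-intention calculation, written in Python for illustration, is given below; the coefficient values and the scaling of the situational factors are invented placeholders, not the survey-derived parameters.

import math

def purchase_intention(alphas, situational, epsilon):
    """Logistic transformation of the linear response variable Y (Eqs. (1)-(2))."""
    y = sum(a * x for a, x in zip(alphas, situational)) + epsilon
    return 1.0 / (1.0 + math.exp(-y))

# Hypothetical agent: situational factors x = [sales restriction, media reports,
# shortage rumors, neighbors' panic buying], each scaled to [0, 1]
alphas = [0.4, 1.2, 0.9, 1.5]   # illustrative coefficients, not the fitted values
x = [0.0, 0.8, 0.5, 0.3]
pi = purchase_intention(alphas, x, epsilon=-2.0)
print(f"probability of stockpiling: {pi:.2f}")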
Simulation Design
The simulation tool, shown in Figure 2, is designed using an agent-based approach. The agent-based methodology allows one to capture the complexity of the system and the heterogeneity of human behavior. The randomly generated personal consumer attributes and the dynamic situational factors are input into the model to obtain the panic behavior of the consumers, along with how the supply chain performance (SCP) is impacted. The model involves all the actions required by the consumer agents and supply chain agents to satisfy their respective objectives. The output evaluates the changes in consumer behavior due to the disaster. The factors inducing panic buying can be identified, along with the increase in product demand. The impact on the supply chain is measured by the product availability or stockout condition of the SCAs. The combined consumer panic buying and supply chain model can be used to test several mitigation scenarios and understand their outcomes. This tool will help to test mitigation measures and learn their effectiveness, to identify the situational factors that would increase or decrease panic buying, to determine how to manage inventory to increase availability, etc. The most commonly used strategy to curb panic buying is limiting sales per person. Hence, sales restrictions and rationing have been used as control measures in the current work. The simulation runs through three time phases. After the initialization of agents, the pre-disaster phase is initialized: the SCAs maximize their utility, while the consumers purchase according to their consumption and purchase habits. The post-disaster phase is triggered at the onset of the disaster; consumer behavior changes depending on the changing circumstances, and consumers stop the consumption of tap water in fear of possible contamination. The strategy phase is initialized when a mitigation measure is employed to control the panic buying. A more detailed explanation and development of the model can be found in earlier works [11,12].
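The three-phase flow can be caricatured as a toy simulation loop, sketched below in Python; the single store, the demand rule, and all numeric settings are deliberately crude placeholders and do not reproduce the paper's Java implementation.

import random

DISASTER_AT, STRATEGY_AT, HORIZON = 30, 31, 60   # intervals, mirroring the paper's setup

class Store:
    def __init__(self, stock=1000):
        self.stock = stock

    def sell(self, qty, cap=None):
        qty = min(qty, cap) if cap is not None else qty   # per-person sales restriction
        sold = min(qty, self.stock)
        self.stock -= sold
        return sold

    def replenish(self, qty=400, capacity=1000):
        self.stock = min(self.stock + qty, capacity)

def run(households=50, strategy_cap=None, seed=1):
    random.seed(seed)
    store, unmet = Store(), 0
    for t in range(1, HORIZON + 1):
        panic = 6 if t >= DISASTER_AT else 1              # crude demand amplification
        cap = strategy_cap if (strategy_cap and t >= STRATEGY_AT) else None
        for _ in range(households):
            want = random.randint(1, 3) * panic
            unmet += want - store.sell(want, cap)
        store.replenish()
    return unmet

print("unmet demand, no strategy:", run())
print("unmet demand, cap of 3:   ", run(strategy_cap=3))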
Results and Analysis
The simulation is set in an environment with a virtual grid of 100 × 100 size. The consumer agents and SCAs are generated randomly and distributed uniformly, as shown in Figure 3, which also shows the locations of the consumer agents and the SCAs used for the simulations. The scope of the simulation is very small, considering its initial stages. Currently, for trial purposes, only 2000 households have been considered, with a population of 4208. The household count and the age distribution of the population are set according to Japanese demographic data [36]. The personal attributes of the consumer agents are generated randomly but distributed according to the questionnaire survey. In the supply chain model, there are four retail stores, two individual retail stores and two chain stores, as at least two stores are required for every 1000 households in accordance with the store density of Tokyo [37]. There is one chain head for the two chain stores, one distributor, and one manufacturer to produce the product. The agents, along with their attributes and properties, are initialized before the time is initiated. The simulation is run for a period of 60 intervals, where the disaster occurs in the 30th interval and, when employed, the strategy is initiated in the 31st interval. The initial parameters of the agents are set in such a way that the warm-up period of the simulation is reduced to around five intervals. For example, the initial average sales are initialized for the retailers based on the number of consumers in their vicinity, and a higher-level SCA's initial average sales are the cumulative average sales of its buyer agents. However, these values are recalculated at regular intervals based on actual sales. With the help of these initial settings, the warm-up period and the run time have been considerably reduced. The stochasticity of the consumer and supply chain agents makes it difficult to identify the changes due to the implementation of the different scenarios; in order to concentrate on the impact, one set of consumers is used for the production of all the results. The simulation is coded in the Java language in the Eclipse development environment, while the data analysis for the consumer model was conducted using JMP Pro statistical software. We conducted simulations for different scenarios by varying several key parameters of the model, such as the initialization of the strategy or the situational elements. The following four disaster and mitigation scenarios have been considered as the settings for the simulations: Case 1: a normal scenario without any disaster; Case 2: disaster occurs, but no strategy is implemented; Case 3: disaster occurs, and sales restrictions are implemented; Case 4: disaster occurs, and rationing is implemented. The difference between the two strategies is that, in the sales restrictions case, consumers can make multiple purchases by visiting several stores until their requirement is met, while in the rationing case the consumer is limited to one purchase per interval.
Validation
The verification and validation of agent-based models is a difficult task. A possible method is to compare the simulation outputs with factual records; however, access to such data is difficult. The performance of the supply chain model, as explained in Section 4.3, shows an effect similar to the bullwhip effect in a normal scenario: consumer demand is variable, so small variations in demand result in amplified responses along the hierarchy of the supply chain. This phenomenon can be seen in Figure 4, which validates the model. The consumer decision-making model in a disaster scenario is also verified by running the simulation with no changes in consumer behavior due to panic. In the current model, the disaster's impact on a consumer agent consists of the discontinuation of tap water consumption, an increase in purchase quantity, and the influence of situational factors. Therefore, in the verification run, consumers continue consuming tap water, the panic buying factor is 1, and the situational elements of reports and rumors do not exist, as demand has not increased. With the above setting, the SCP is evaluated, as shown in Figure 5, for the disaster case without any strategy. The performance is similar to that of the normal case, and the stark difference from the disaster case can be seen (refer to Section 4.3). The above aspects demonstrate that the model is consistent with the conceptual model.
Effect of Strategy on Consumer Satisfaction
The impact of the disaster or the strategy on the consumers is measured by the number of satisfied consumers. The satisfaction (S_f) of consumers is calculated based on product sufficiency and the outcome of the purchase action, with equal weightage to both factors. The consumers are divided into five categories based on their satisfaction: full satisfaction (S_f = 1), high satisfaction (0.5 < S_f < 1), medium satisfaction (S_f = 0.5), low satisfaction (0 < S_f < 0.5), and zero satisfaction (S_f = 0). Fully satisfied consumers are those who have sufficient inventory for the interval or who have made a full purchase; hence, the satisfaction is at the maximum level, i.e., 1. Highly satisfied consumers are those who have sufficient inventory for the interval but have purchased less than the required quantity. Medium-satisfaction consumers are those who have sufficient inventory for the interval but could not purchase the product at all. Low-satisfaction consumers do not have sufficient inventory for the interval but could make an insufficient purchase; hence, the product available to the consumer might or might not be sufficient for the interval. Finally, zero-satisfaction consumers neither have sufficient inventory nor could make any purchase; hence, the consumer has to survive the interval without access to the product. Figure 6 shows the number of consumers in each satisfaction category for each interval. It can be seen that 100% of consumers are satisfied in the normal case. In the disaster-without-strategy case, the number of fully satisfied consumers is reduced, and the number of consumers with zero satisfaction increases in the intervals after the disaster, peaking around the 40th interval with almost 50% of consumers at zero satisfaction. The implementation of the restrictions can be seen to reduce both the number of zero-satisfaction consumers and the number of intervals in which such consumers are present. While there are still around 10 intervals in which some consumers have to cope without the product, the number of zero-satisfaction consumers is reduced drastically in the last case, indicating that rationing can be helpful in increasing the reach of the product, which is necessary for the survival of the people.
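These five categories can be expressed as a small Python helper, shown below; the equal weighting follows the description above, while the exact form of the purchase-outcome term is an assumption.

def s_f(inventory_sufficient: bool, purchase_ratio: float) -> float:
    """Equal weight to product sufficiency and the outcome of the purchase action."""
    return 0.5 * float(inventory_sufficient) + 0.5 * purchase_ratio

def satisfaction_category(score: float) -> str:
    """Map a satisfaction score S_f in [0, 1] to the paper's five categories."""
    if score == 1.0:
        return "full"
    if score == 0.5:
        return "medium"
    if score == 0.0:
        return "zero"
    return "high" if score > 0.5 else "low"

print(satisfaction_category(s_f(True, 0.0)))   # sufficient stock, no purchase -> medium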
Effect on Supply Chain Performance
Supply chain performance (SCP) is one of the key outcomes of the current research. Supply chain performance can be evaluated using several factors such as efficiency, delivery time, and reliability. However, as the focus is on supply chain disruption in disaster times, we have used the base elements of inventory and sale.
Inventory-Sale Ratio
The inventory-sale ratio, R_{IS}, is the ratio of inventory to sale, where the sale is the sum of the sale and the lost sale (after a stockout), as shown in Equation (3). A lost sale is a sale opportunity lost by the SCA due to the unavailability of inventory; hence, the lost sale appears in the equation only if a stockout occurs.

R_{IS,i} = \frac{I_{SC,i}}{S_i + S_{L,i}} (3)

Here, R_{IS,i} is the inventory-sale ratio in interval i; I_{SC,i} is the current inventory of the supply chain agent in interval i; S_i is the sales in interval i; S_{L,i} is the lost sales in interval i.
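In code, Equation (3) reduces to a one-line ratio; the Python sketch below adds a guard for the zero-sales case, which is an implementation assumption.

def inventory_sale_ratio(inventory: float, sales: float, lost_sales: float) -> float:
    """Eq. (3): R_IS = I_SC / (S + S_L); lost sales count only after a stockout."""
    denom = sales + lost_sales
    return float("inf") if denom == 0 else inventory / denom

print(inventory_sale_ratio(inventory=80, sales=90, lost_sales=10))  # 0.8 -> stockout zone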
The inventory-sale ratio is a tier-level cumulative value over all the SCAs in the tier. Figure 4 shows the R_{IS} of each tier over time. This ratio indicates the efficiency of the SCA in managing its inventory. Hence, if R_{IS} > 1 (the white region in Figure 7), the on-hand inventory is greater than the demand, placing the SCA in a safe condition, and if R_{IS} < 1 (the red region), the SCA has insufficient inventory and moves into the stockout zone, which is seen as a disturbance. An inventory-sale ratio graph looks as shown in Figure 4, but to focus on the stockout zone, the inventory-sale ratio axis is shown from 0 to 3 in the graphs below. Figure 7 shows the impact on SCP in the different scenarios. It can be seen that the retail tier moves into the stockout zone even in the normal phase, indicating the presence of lost sales in some of the retail store agents. An unexpected increase in demand at a retail store caused this disturbance, and as there was no cumulative stockout, it can be regarded as a minor disturbance. However, once the disaster is triggered in the 30th interval, it can be seen in the disaster case that the demand is so high that all the SCAs are in the stockout zone for consistently more than 10 intervals, indicating a crash of the supply chain. The sales restrictions help to avert the supply chain crash but cannot avoid the disturbances, while the rationing case is effective in avoiding the disruption and maintaining the SCP. When the strategies are in place, consumer demand is not satisfied; hence, the R_{IS} of the retail stores continues to be very low, as demand is far greater than the inventory.
Stockoutness
A parameter, named stockoutness, is introduced to understand the difference in the performance of the supply chain in various cases. Stockoutness indicates the stockout condition of the agent or the depth of the SCA in the stockout zone. This parameter is a cumulative inventory-sale ratio in the stockout zone as shown in Equation (4).
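A minimal Python sketch of one plausible reading of Equation (4) is given below, computing stockoutness as the shortfall of R_IS below unity accumulated over the intervals spent in the stockout zone; the exact functional form is an assumption based on the verbal definition above.

def stockoutness(ratios):
    """Depth in the stockout zone: accumulate how far R_IS falls below 1."""
    return sum(1.0 - r for r in ratios if r < 1.0)

# Five intervals of a tier's inventory-sale ratio
print(stockoutness([1.4, 0.8, 0.5, 1.1, 0.9]))  # 0.2 + 0.5 + 0.1 = 0.8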
The stockoutness of the supply chain in the four cases is shown in Table 2 and depicted in Figure 8. It can be observed that the strategies reduce the stockoutness of the supply chain, while rationing brings the stockoutness to almost zero. As explained earlier, the stockoutness of the retail tier continues to be on the higher end, as sales are restricted. Stockoutness is used to compare the SCP in the rest of the results.
Effect of Time Variation of the Strategy Initialization
In the simulation runs of the above results, the strategy is implemented in the consecutive interval of the disaster. However, in reality, such a fast reaction might not be possible. Hence, the effect on SCP is studied by varying the interval in which the strategy is initialized. The occurrence of disasters such as typhoons and hurricanes can be known beforehand, and mitigation measures can be taken earlier. Hence, to realize the effect of varying the start of the strategy, five cases were considered with the strategy initialized in different intervals with respect to the disaster interval: three intervals before disaster occurs (D − 3), one interval before disaster occurs (D − 1), one interval after disaster occurs (D + 1), five intervals after disaster occurs (D + 5), and ten intervals after disaster (D + 10). Figures 9 and 10 show the performance of the supply chain given sales restrictions and rationing for a few of the above-mentioned cases. It can be observed that the supply chain disruption can be averted if the sales restrictions are employed as early as possible, while the supply chain recovers immediately with rationing. The stockoutness, for the five cases and the case without any strategy, was calculated and is shown in Table 3 for the sales restrictions case. Figure 11 shows that stockoutness increases as initialization of the strategy is delayed. This indicates that the early employment of sales restrictions by stores could avoid a possible supply chain disruption. The SCP is improved immediately with the help of rationing at any point in time, as can be seen in Table 4 and Figure 12; the effectiveness of rationing is evident, as stockoutness is greatly reduced in all cases when compared with the case without any strategy (disaster case in Figure 7).
Effect of Variation of Stores with Strategy
The effect on the system when the number of stores employing a strategy varies can also be studied using this tool. In the results presented thus far, all the retail stores employ the strategy, while in reality that might not be the situation: some stores will place sales restrictions, while others will not employ any strategy. Five cases were considered, in which 0%, 25%, 50%, 75%, and 100% of the stores employ a strategy. The effect on the supply chain, along with the effect on consumer satisfaction, is presented here. Figures 13 and 14 show the SCP and the consumer satisfaction for the 25%, 50%, and 75% cases with sales restrictions and rationing, respectively. The 0% (disaster) and 100% (strategy) cases can be found in Sections 3.2 and 3.3. It can be seen that, in both strategy cases, an improvement in SCP from increasing the number of stores with the strategy is not evident; however, uniform (100%) enforcement of the strategy is very effective. Figures 15 and 16 show the stockoutness of the supply chain tiers, whose values are given in Tables 5 and 6. It can be observed that the effects of increasing the number of stores employing the strategies are not apparent; moreover, partial adoption creates an imbalance among the retail stores. This is because stores that do not employ a strategy continue with regular sales, and consumers can purchase their desired quantity by visiting the stores without sales restrictions; the stores without restrictions always stock out immediately. Hence, the expected improvement from increasing the number of stores with a strategy cannot be observed.
Effect of Variation in the Amount of Reports and Rumors
The effect of decreasing the situational elements of reports and rumors was also examined. The amount of reports and rumors was varied to generate various distributions, ranging over {0,1}, {0,0.75}, {0,0.5}, {0,0.25}, and {0,0}, with a uniform pattern, as shown in Figure 17. These runs were conducted for the case of disaster with no strategy alone, to understand the impact of controlling the amount of reports and rumors by itself. The effect of varying the amount of reports and rumors on the SCP is shown in Figure 18 using the stockoutness, whose values are given in Table 7. It can be seen that there is not much effect on the SCP, especially after the range falls below 0.5. Figure 19 shows the variation in the cumulative demand of the community with varying amounts of reports and rumors. There is a significant decrease in demand when the amount of reports and rumors is reduced to 0.75 and 0.5. However, a further decrease in the situational elements has no impact, as the remaining demand comes from consumers who have stopped tap water consumption; hence, even the absence of reports and rumors results in increased demand. Figure 20 shows the effect on the satisfaction of the consumers of varying the amounts of reports and rumors. Similar to the effect on cumulative demand, decreasing the amount of reports and rumors from 1 to 0.5 helps reduce the peak of the zero-satisfaction consumer curve from around 1000 in the disaster case to around 700. However, a further decrease in the amount of reports and rumors has no effect, due to the presence of the demand from consumers who converted to bottled water following the disaster.
Effect of Variation in the Number of Panic Buyers
The presence of panic buyers increases product demand, disrupts the supply chain, negatively affects consumer satisfaction, and influences their neighbors to panic-purchase. Hence, it is important to investigate the impact of the presence of panic buyers on the system. In the current context, a consumer is a panic buyer if s/he stockpiles, i.e., if the panic buying factor (P_{bf}) is greater than 1.5. With this understanding, the survey showed that 29.6% of the respondents were panic buyers. We considered several cases by reducing the number of panic-buying consumers from 29.6% to 0%, as shown in Table 8. The last case was set such that P_{bf} is 1 for all households; hence, no consumer stockpiles or increases their purchase quantity. The distribution of the panic buying factor for the consumer agents is also shown. The simulations for these cases were run for the case of disaster with no strategy. The effect of varying the number of panic buyers on the SCP is shown through the stockoutness in Figure 21, with the values in Table 9. The impact on the supply chain of reducing the panic buyers is not very significant; however, there is a slight decrease in stockoutness as the number of panic buyers decreases. This is because the change in conditions due to the reduction of panic buyers is not significant enough to alter the SCP. Figure 22 shows the effect on cumulative demand. It can be observed that there is a gradual decrease in demand as the number of panic buyers is reduced. In the case with 0% panic buyers and P_{bf} = 1 for all consumer agents, the cumulative demand is still high; this demand is due to consumers who have stopped tap water consumption and to consumers who make a purchase in every interval because of the situational factors of reports and rumors. The effect on consumer satisfaction is shown in Figure 23. Similar to the cumulative demand, there is a gradual decrease in the number of zero-satisfaction consumers and a gradual increase in fully satisfied consumers as the panic buyers are reduced.
Discussion
The purpose of this research is to understand how consumer behavior is affected due to disasters and mitigation measures and how these changes, in turn, affect the supply chain process targeting the demand risk [22]. It can be seen from the output of the above results that the model can be used to test any desired scenario to perceive possible outcomes.
The above results demonstrate the impact on the supply chain, along with the effectiveness of the strategies considered. As reported in several past works, consumer behavior undergoes radical changes after a disaster, leading to increased demand for the product [4,6]. This change in consumer behavior has a negative effect on the supply chain, leading to disruption. The strategy of restricting sales, implemented by retail stores, is beneficial in controlling the supply chain disruption. The rationing system effectively avoids a crash of the supply chain and allows more consumers to procure the product. The model can also be used to study the time at which mitigation measures should be implemented, as it can be seen that a delay in the implementation of strategies causes a delay in recovery. Therefore, looking at the performance of the strategies, an early implementation of the measures would produce favorable outcomes; it would also help the community to recover faster from negative disaster consequences. Moreover, this model can be used to develop and test other strategies that could contain the damage caused by unforeseen occurrences.
This model helps us to understand the outcomes with varying degrees of the situational elements. The strategies, when employed by only a portion of the stores, create imbalance and reduce the effectiveness of the strategy; however, a scenario where many stores employ the strategy could be a step forward in controlling the panic situation. Regarding media reports, it was seen that restricting them by 50% led to a significant decrease in demand, which in turn led to increased availability of the product. Hence, if authorities can contain information broadcasts about panic buying and shortages, panic among consumers could be reduced considerably. Along with the media, people are highly influenced by the actions of others: the decrease in the number of panic buyers resulted in a proportional decrease in demand and, in turn, in the number of zero-satisfaction consumers.
The increased demand and unavailability of the product immediately after the disaster are in accordance with previous reports of past disasters [14,17]. The findings of this work on the strategies were found to be consistent with the previous belief regarding the efficiency of the quota policy [3,10] in bringing the situation under control, which should reassure the stakeholders about the continuation of this strategy in crises. However, the outputs particularly highlight rationing, and it could be appealing for supply chain managers and researchers to work on better implementation methods for this strategy. The impact of the media, contrary to the popular belief that it is a major influencer [8], was found to be limited in this case, as reductions in reports and rumors beyond 50% produced no further effect.
The above observations are based on the settings used for the simulation. Hence, increasing the scope or changing the settings could possibly result in a different outcome, according to the inputted setup. Nevertheless, this analysis puts forward the usage and application of the model in multiple scenarios. The current work could benefit supply chain risk managers, as this tool allows one to understand consumer behavior and predict the demand in panic buying scenarios for any product. A survey can be conducted in a community to study its behavior in case of a disaster; when fed with the actual data of the consumers in the community, this simulation tool will provide the future demand and help the stores and supply chain managers to develop business continuity plans (BCPs) for future disasters. In addition, the managers can identify weak links and distribution issues. The stores can understand the reactions of the consumers and manage their activities while maintaining their image, service, and quality. The optimal timing for implementing a mitigation measure can also be identified, and managers can test other mitigation measures to avoid supply chain disruption. This work can accommodate regular supply chain risks, such as demand risks, supply risks, and behavioral risks, as mentioned by Katsaliaki et al. [22], by using the modeling approach for quantitative analysis. The higher-level SCAs can prepare resilient measures to satisfy their customers' orders or test the effectiveness of the quota policy on the lower SCAs. Retail chains would particularly benefit from this tool, as they can acquire sufficient knowledge of the consumer behavior at their retail stores and develop quantitative plans to reduce the impact in case of panic buying. As mentioned earlier, it could be of great help to policymakers in analyzing the consequences of large-scale disasters, in order to develop strategies such that basic necessities are available to the maximum number of people. The information and broadcasting departments can regulate media articles in order to reduce panic buying. The consumer model can be utilized by studies dealing with the influence of government policies, such as Liren et al. [13], Keane and Neal [18], and Prentice et al. [19], to understand consumer reactions to the policies. Furthermore, the consumer model for panic situations could be of interest to researchers, as it could be used for identifying the reasons that instigate stockpiling, as mentioned by Yuen et al. [6]; this can help researchers to focus their studies on the prime influencers. It is known from the past literature that panic buying is a herd mentality; this model could allow us to investigate the diffusion of panic buying in a community. On the other hand, academics can study the prominence of non-panic buyers in helping to decrease mass panic buying and provide essential insights for a resilient society. Researchers can use the model to simulate panic buying among consumers, especially as a COVID-19 case study, and to strengthen the efficiency of the supply chain process.
Conclusions
Consumer behavioral changes in uncertain situations, such as large-scale disasters, have a severe impact on supply chain management. We developed a multi-agent model of consumer panic buying and the supply chain that can be used to study the outcomes of several scenarios and to obtain an optimal approach to tackling the possible problems in the disaster aftermath. The model can analyze behavioral changes and identify the reasons for the changes. The trial simulations show that the strategies are beneficial in avoiding supply chain disruptions, with rationing being the more effective measure. An earlier implementation of the quota policy lowers the impact on the supply chain, while rationing can be implemented depending on the criticality of the situation. A 50% reduction of reports can reduce the demand significantly, and a reduction of panic buyers has the potential to increase product availability. The tool could be useful in helping industries understand consumer behavioral changes and guiding them in developing optimal measures for a resilient supply chain. The model could also be beneficial for governments seeking to control chaotic post-disaster situations, where providing essentials to the citizens is one of their prime tasks. There is an increasing need for more work on panic buying, to understand it better and improve situations in times of increasing disasters.
There is a vast scope of future work given the comprehensiveness of the model. The inter-agent communication in the current model is limited, which has to be improved for more realistic outcomes. Another limitation is that supply chain stakeholders do not take any action, apart from the implementation of strategies, when the disaster occurs, which is in contrast to reality. This has to be improved using methods such as emergency safety stock. The consumer characteristics are assumed to be static in the current model, but such characteristics are dynamic in nature, especially the psychological elements, and change depending on the situational factors, circumstances, and the environment. | 2021-05-07T13:09:52.345Z | 2021-04-14T00:00:00.000 | {
"year": 2021,
"sha1": "41a4ef3067a9da16b349b681de0423e074f21269",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/13/8/4370/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "09014e7f1c46ffe8e563860f1307fd976147dd85",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
247689350 | pes2o/s2orc | v3-fos-license | In vitro screening of EPS‐producing Streptococcus thermophilus strains for their probiotic potential from Dahi
Abstract Dahi is a very common and traditional fermented dairy product in Pakistan and its neighboring countries, and it represents a rich source for the isolation of many new strains of lactic acid bacteria (LAB). The major objective of this study was to evaluate the probiotic potential of novel exopolysaccharide (EPS)‐producing strains of S. thermophilus isolated from Dahi sold in the local markets of Rawalpindi and Islamabad, Pakistan. In this study, 32 isolates of S. thermophilus were initially isolated from Dahi and, out of these, 10 identified strains were further screened for their EPS‐producing ability. Maximum EPS production was estimated for the RIY strain (133.0 ± 0.06 mg/L), followed by the RIH4 strain (103.83 ± 0.76 mg/L) and the RIRT2 strain (95.77 ± 0.22 mg/L), respectively. Thereafter, in vitro studies revealed that these newly identified EPS‐producing strains of S. thermophilus fulfilled the basic requirements for probiotic functions, including resistance to the harsh conditions of the GIT, good cell surface hydrophobicity, auto‐aggregation, and co‐aggregation, especially against L. monocytogenes. Finally, the safety assessment showed that these strains were also sensitive to clinical antibiotics, including vancomycin. Thus, these selected EPS-producing strains of S. thermophilus are potential candidates as biostabilizers in the preparation of consumer‐friendly fermented probiotic milk products.
Due to this capability, EPSs can be used in fermented foods as natural emulsifiers or food-grade hydrocolloids (Ruhmann et al., 2015). In addition to these functional properties, this bacterium is reported to have many health benefits for animals and humans, and numerous strains of S. thermophilus have been reported to possess probiotic properties. Probiotics are briefly defined by WHO/FAO (2006) as "live micro-organisms which when administered in sufficient amounts (i.e., minimum 10^6 CFU per g/mL) (Shah, 2007) give their consumer or host a specific health benefit." They have either a direct or an indirect impact on the gastrointestinal tract, including immune modulation, mitigation of diarrhea due to miscellaneous causes, and prophylaxis of gastrointestinal infections. Probiotics are also effective against various intestinal diseases such as Helicobacter pylori infections, colon cancer, and inflammatory bowel disease (Marteau et al., 2001; Tuncer & Tuncer, 2014), as well as for lactose intolerance, blood cholesterol, bacterial vaginosis, and atopic dermatitis (Shah, 2007).
Probiotic strains must possess some basic characteristics, including survival in simulated gastrointestinal (GI) tract conditions, antibacterial activity, cell aggregation, and cell surface hydrophobicity or bacterial adhesion to hydrocarbons (BATH), as a measure of intestinal colonization ability against the adhesion of enteropathogens (Monteagudo-Mera et al., 2019). Previously, higher colonization was observed for strains with high hydrophobicity (De Souza et al., 2019; Miljkovic et al., 2015, 2019). Essentially, BATH is associated with the adherence of strains, and aggregation is the clumping of cells associated with their persistence in the GI tract (Saito et al., 2019). These properties are also required for probiotic starter culture development (Guarner et al., 2005; Miljkovic et al., 2015, 2019; Vinderola & Reinheimer, 2003).
A commonly used domestic dairy product in Asian countries, including Pakistan, is "Dahi", an indigenous yogurt containing a mixture of LAB strains, with Lactobacillus bulgaricus and Streptococcus thermophilus as the major microbiota. Previous studies have confirmed the presence of LAB, including Lactobacillus bulgaricus (Ali et al., 2019), Lactobacillus acidophilus (Farid et al., 2021), and S. thermophilus (Mahmood et al., 2013), in Dahi samples collected from the markets of Rawalpindi and Islamabad, Pakistan.
These microorganisms have antibacterial activity against pathogenic bacteria and prevent gastrointestinal infections in consumers, presenting probiotic effects (Mahmood et al., 2013; Soomro & Masud, 2012). However, studies related to the isolation of EPS-producing strains of S. thermophilus from Dahi, with well-studied characteristics and probiotic features, are limited. Therefore, the aim of this study was to assess the probiotic potential of new EPS-producing strains of S. thermophilus, with biostabilizing effects, obtained from a Dahi source, which can be used in consumer-friendly dairy products.
| MATERIALS AND METHODS
Dahi samples (n = 101) were collected from the local markets of Rawalpindi and Islamabad. These samples were collected under aseptic conditions and taken immediately to the laboratory for analysis.
| Isolation and identification of S. thermophilus
The selective medium M17 (CM0817, Oxoid, England) was used to recover isolates of S. thermophilus from the Dahi samples. M17 agar medium (composed of 3.725 g M17 broth, 1.5% technological agar, and 10% lactose) was prepared in 100 ml distilled water according to the instructions of the manufacturer; the pH was adjusted to 6.9 with 6 N NaOH, the medium was mixed on a magnetic stirrer and sterilized in a digital autoclave (Hirayama, Japan) at 121°C for 15 min. The agar was then poured into sterilized Petri dishes and allowed to solidify. Inoculation of the collected samples was performed by the streaking method on M17 agar plates, and the plates were incubated at 37°C for 24-48 h. The obtained isolates were tested through Gram staining, and only Gram-positive colonies were further screened based on their morphological and biochemical properties, according to Buchanan and Gibbons (1974)
| Exopolysaccharide production of strains
The identified and characterized S. thermophilus strains were further evaluated for their exopolysaccharide production ability.
Finally, mucoid or ropy (EPS producing) colonies of S. thermophilus were selected and further assessed for their EPS production.
In order to estimate EPS production, the strains were inoculated (2%) into sterilized fermentation medium and incubated at 42°C for 24 h. The fermentation medium (100 ml) was prepared by adding 7 ml of 11% skim milk (LP0031, Oxoid, England), 3.0 g nutrient agar (Oxoid, England), and 1.0 g tryptone (Oxoid, England) to distilled water to make the volume up to 100 ml, mixing on a magnetic stirrer, and autoclaving at 121°C for 15 min.
| EPS isolation
Exopolysaccharides were isolated from the fermented medium according to the method described by Rimada and Abraham (2003), with slight modifications. The fermented sample (100 ml) was heated to boiling point (100°C) in a water bath for about 15 min in order to remove proteins (to inactivate enzymes) and polysaccharides attached to the cell walls. After cooling to room temperature, the sample was centrifuged at 8,000 rpm for 10 min to remove the cells. 17 ml of 85% trichloroacetic acid (TCA) was then added to the sample (100 ml), which was cooled to 4°C and centrifuged again at 8,000 rpm for 10 min. The EPS concentration in the supernatant was increased by precipitation with cold ethanol (−20°C) at a 1:3 ratio, and the mixture was stored overnight at 4°C. The final precipitate, obtained by centrifugation at 8,000 rpm for 10 min, was dissolved in distilled water (100 ml) and stored at 4°C. The collected EPS pellets were again suspended and filtered through a dialysis tube (molecular weight cut-off 8-14 kDa, Beijing Solarbio Science & Technology Co., Ltd., China). Dialysis was performed against water for 48 h, with the water replaced every 8th hour. For further quantity determination, the solutions were prepared according to the method of Xu et al. (2010).
| EPS quantification
EPS quantification was carried out according to the phenol-sulfuric acid method of Dubois (Dubois et al., 1956), with slight modifications. Firstly, a 5% phenol solution was prepared in distilled water; then 2 ml of the sample (EPS solution) and 1 ml of the phenol solution were mixed in a test tube. 5 ml of concentrated sulfuric acid was added to the mixture, which was left for 10 min, then shaken by vortex and incubated at 30°C for 10 min (until the development of a yellow-orange color). The control was prepared by adding 400 µl of the phenol solution to 400 µl of distilled water. Afterward, the absorbance of the samples was measured by spectrophotometer (UV-9200) at 490 nm, and the readings were compared with the control to measure the total carbohydrate content. The amount of EPS in each sample was interpolated using a glucose standard calibration line and expressed as mg/L glucose equivalent. The calibration line was prepared with a glucose solution (1 mg/ml) as the standard, using 6 different dilutions, as defined by Feldmane et al. (2013) and Muigei et al. (2013).
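As a sketch of the calibration step, the following Python snippet fits a linear standard curve and converts a sample absorbance into mg/L glucose equivalents; the standard concentrations and absorbance readings are made-up illustrative values, not the study's data.

import numpy as np

# Absorbance (490 nm) of glucose standards, e.g., 0-100 mg/L in six steps
conc = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])      # mg/L (illustrative)
a490 = np.array([0.00, 0.11, 0.22, 0.34, 0.45, 0.55])      # blank-corrected readings

slope, intercept = np.polyfit(conc, a490, 1)                # linear calibration line

def eps_mg_per_l(sample_abs: float) -> float:
    """Convert a sample's absorbance into mg/L glucose equivalents."""
    return (sample_abs - intercept) / slope

print(f"{eps_mg_per_l(0.30):.1f} mg/L glucose equivalent")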
| Evaluation of technological properties of EPS-producing strains
The strains identified as EPS producers were used to ferment milk in order to determine technological properties such as titratable acidity, curdling time, and the flavor, body, and texture of the curd. Sensory attributes (flavor, body, and texture) were determined through a sensory evaluation method, and titratable acidity by the titration method. All experiments were conducted in triplicate.
| Antibacterial activity of S. thermophilus strains
For measuring antibacterial activity, the pathogenic strains E. coli ATCC 25922, S. aureus ATCC 6538, P. aeruginosa ATCC 25923, and L. monocytogenes ATCC 19115 were obtained from the Department of Pathology, Pakistan Institute of Medical Sciences (PIMS) (Mahmood et al., 2013). Stocks of all the strains were maintained in 20% (v/v) glycerol and stored at −80°C.
For this purpose, the paper disc method was used, as described by Soomro and Masud (2012), with slight modifications. Sterilized paper discs of 6-mm diameter, made of Whatman filter paper no. 1 and carrying an adsorbed aliquot (20 µl) of cell-free supernatant, were placed on nutrient agar plates seeded with a target pathogenic strain. The pH of the nutrient agar medium was adjusted to 7.2. To obtain the cell-free supernatant, a freshly overnight-grown culture was prepared in broth medium and its pH was adjusted to 5.5 with 1 M NaOH; it was then centrifuged at 13,000 rpm for 10 min, and the supernatant was collected and passed through a syringe filter (0.2 µm) to remove bacterial cells. For comparison with the control, an ampicillin disc (10 µg) was used as the reference antibiotic. The concentration of the overnight-grown cultures of the indicator strains was adjusted according to the 0.5 McFarland turbidity standard. The plates were then incubated for 24 h at 37°C. The resulting clear inhibition zones formed around the paper discs were then measured, in diameter (mm), for the evaluation of antibacterial activity.
| Bile salt resistance and acid tolerance test
Bile salt tolerance and acid tolerance of the isolates were determined by the methods of Hassanzadazar et al. (2012) and Singhal et al. (2010), with slight modifications. For the acid tolerance test, M17 broth medium with adjusted pH values (2 and 3) was used to create in vitro acidic conditions of the gastrointestinal tract. pH 2 and 3 were adjusted with 1 N HCl, while pH 6.9 was used as the control.
Overnight-grown cultures of the S. thermophilus strains (1%) were then inoculated into M17 broth and incubated at 37°C for 5 h. Percentage acid tolerance was found by measuring the optical density (O.D.) at 600 nm, using the following formula:

\text{Acid tolerance (\%)} = \frac{OD_{600} \text{ at pH 2 or 3}}{OD_{600} \text{ of control (pH 6.9)}} \times 100

For bile salt tolerance, a fresh overnight-grown culture of the S. thermophilus strains (1%) was used for inoculation into M17 broth medium supplemented with 0.3% or 1.5% bile salts (w/v), while M17 broth without bile salt (0%) supplementation was used as the control.
Samples were incubated at 37°C for 6 h, and the optical density (O.D.) was measured at 600 nm to determine the bile tolerance percentage of the strains, using the following formula:

\text{Bile tolerance (\%)} = \frac{OD_{600} \text{ with bile salts}}{OD_{600} \text{ of control (0\% bile)}} \times 100
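Both survival percentages reduce to the same ratio in code; in the Python sketch below, the OD readings are illustrative values, not measurements from the study.

def tolerance_percent(od_test: float, od_control: float) -> float:
    """Survival (%) relative to the unstressed control, from OD600 readings."""
    return 100.0 * od_test / od_control

# e.g., readings after 5 h at pH 3 vs. the pH 6.9 control
print(f"acid tolerance: {tolerance_percent(0.62, 0.90):.0f}%")
# e.g., growth in 0.3% bile after 6 h vs. the bile-free control
print(f"bile tolerance: {tolerance_percent(0.55, 0.85):.0f}%")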
| Auto-aggregation assay
The auto-aggregation assay was performed in line with the method outlined by Kaushik et al. (2009), with slight modifications. For this purpose, cell pellets from fresh growth of the isolates were obtained by centrifugation (8,000 rpm for 10 min). The cell pellets were then washed and resuspended in 0.01 M phosphate-buffered saline (PBS).
The initial cell concentration (initial absorbance) was adjusted according to the 0.5 McFarland standard at 600 nm, and the suspension was then incubated at 37°C for 2 h. After 2 h, the suspension was gently centrifuged, an equal volume of the supernatant was removed, and its absorbance (final absorbance) was measured, while the broth was used as the control. The following formula was used for calculating the percentage auto-aggregation capability:

\text{Auto-aggregation (\%)} = \left(1 - \frac{A_{final}}{A_{initial}}\right) \times 100
| Bacterial adherence to hydrocarbons (BATH) test
The method used for determining the percentage bacterial adherence, or hydrophobicity, of the S. thermophilus strains was that described by Kaushik et al. (2009), with some modifications. Three different hydrocarbons (xylene, n-hexadecane, and dichloromethane) were selected, and the adherence percentage of the selected strains to these hydrocarbons was measured. Briefly, a fresh overnight-grown culture was centrifuged (at 8,000 rpm for 10 min) to obtain the cell pellet. The cell pellet was then washed and resuspended in 2.5 ml of 0.01 M phosphate urea magnesium (PUM) buffer. The initial absorbance of the cell suspension was set to 0.7 at 600 nm, and then 1 ml of the tested hydrocarbon (xylene, n-hexadecane, or dichloromethane) was added to the cell suspension. This suspension was incubated at 37°C for 10 min and vortexed (2 min) to mix the two phases, then incubated again at 37°C for 1 h. After the incubation period, the phases were separated and the aqueous phase was collected carefully to measure its absorbance at 600 nm, using the following formula:

\text{Hydrophobicity (\%)} = \left(1 - \frac{A_{aqueous}}{A_{initial}}\right) \times 100
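Since auto-aggregation and BATH both report the relative drop in absorbance, a single Python helper suffices; the absorbance values below are illustrative, not the study's measurements.

def percent_drop(a_initial: float, a_final: float) -> float:
    """(1 - A_final/A_initial) x 100: used for both auto-aggregation and BATH."""
    return 100.0 * (1.0 - a_final / a_initial)

# Auto-aggregation: absorbance of the upper suspension after 2 h at 37 C
print(f"auto-aggregation: {percent_drop(0.70, 0.42):.0f}%")
# BATH: absorbance of the aqueous phase after partitioning with xylene
print(f"hydrophobicity:   {percent_drop(0.70, 0.28):.0f}%")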
| Antibiotic susceptibility assay
The disc diffusion method (paper disc method) was used to determine the S. thermophilus strains' susceptibility to antibiotics, as defined by Pisano et al. (2014), with some modifications.
Ten antibiotics were selected for this test.
In this method, a bacterial lawn was prepared on agar plates, with the concentration adjusted according to the 0.5 McFarland standard, and antibiotic discs were placed on it. The plates were then incubated at 37°C for 24 h, after which the clear zones, or zones of inhibition (ZoI), were measured in diameter (mm) and compared with the interpretative zone diameters (CLSI M100-S21, 2011).
The results were indicated as susceptible, moderately susceptible, or resistant.
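The interpretation step can be sketched as a simple comparison against breakpoints, as in the Python helper below; the numeric thresholds shown are placeholders, as the real breakpoints must be taken from the CLSI M100 tables for each antibiotic.

def interpret_zoi(diameter_mm: float, resistant_max: float, susceptible_min: float) -> str:
    """Classify a zone of inhibition against CLSI-style breakpoints for one antibiotic."""
    if diameter_mm >= susceptible_min:
        return "susceptible"
    if diameter_mm <= resistant_max:
        return "resistant"
    return "moderately susceptible"

# Placeholder breakpoints, not actual CLSI values
print(interpret_zoi(18.0, resistant_max=12.0, susceptible_min=17.0))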
| Statistical study of data
The resulting data were statistically examined using the statistical package SPSS (version 16.0). For this purpose, a completely randomized design (CRD) was used, and Microsoft Excel was used for the graphical representation of the data. A two-way ANOVA followed by Tukey's test was also applied to test for statistical differences, with a level of significance of 0.05 (Han et al., 2016).
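A rough Python equivalent of this analysis, using statsmodels in place of SPSS, is sketched below; the EPS values are toy triplicates, and a one-way layout is shown for brevity, whereas the study applied a two-way ANOVA.

import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Toy triplicate EPS yields (mg/L) for three strains; real data come from the assay
df = pd.DataFrame({
    "strain": ["RIY"] * 3 + ["RIH4"] * 3 + ["RIRT2"] * 3,
    "eps":    [133.0, 132.9, 133.1, 103.1, 104.5, 103.9, 95.6, 96.0, 95.7],
})

model = ols("eps ~ C(strain)", data=df).fit()   # one factor shown for brevity
print(anova_lm(model))
print(pairwise_tukeyhsd(df["eps"], df["strain"], alpha=0.05))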
| Isolation and identification of S. thermophilus

Out of the screened isolates, 32 were identified as Gram-positive, catalase-negative cocci. All selected isolates were negative for motility and spore-forming ability, as also reported by Sharma (2014).
In order to screen S. thermophilus out of the 32 cocci isolates, the isolated Gram-positive and catalase-negative cocci were further differentiated on the basis of their growth at different temperatures and NaCl concentrations, as well as their carbon dioxide gas production from glucose, and were confirmed through the analytical profile index (API) test. Each experiment was conducted in triplicate, and only promising isolates were further propagated for selection. The isolates which grew at 45°C, could not grow at even a 2% NaCl concentration, and did not produce carbon dioxide gas from glucose (homo-fermentative) were selected, following criteria similar to those reported by Sharma (2014) and Mahmood et al. (2013). The PCR results confirmed the 10 selected strains as S. thermophilus (Ali et al., 2019; Kullen et al., 2000; Suhartatik et al., 2014).
3.2 | Exopolysaccharide production of selected strains of S. thermophilus

3.2.1 | Screening of ropy and mucoid strains to assess EPS-producing ability

The 10 strains identified as S. thermophilus were tested for their EPS production ability. For this purpose, their ropiness and mucoid nature were initially assessed through a visual observation method, that is, the ropiness test. The strains which formed long rope-like structures when picked with a sterile inoculation wire loop were considered ropy strains. According to Gomez (2006) and Zivkovic et al. (2015), this phenotypic character can be associated with the production of exopolysaccharides on solid medium; however, exopolysaccharides can be capsular polysaccharides (CPS) or ropy polysaccharides (RPS).
Capsulation was determined through staining with crystal violet and subsequent rinsing with 20% copper sulphate solution. The results obtained are shown in Table 1, and it can be seen that all ten selected strains produced EPS, as assessed by this technique. According to Behare et al. (2010), strains forming ropy polysaccharides are considered better than strains forming capsular EPS and, because of this, can be used in the dairy industry as biothickeners.
| Exopolysaccharide isolation and quantification
The EPSs produced by the tested strains (ropy or capsular) were further isolated and then quantified by the trichloroacetic acid method, followed by precipitation with the cold ethanol method. A similar method was used by Han et al. (2016) for isolating and measuring the concentration of these polysaccharides.
The results obtained for EPS concentration are summarized in Table 1. Different strains produced different amounts of extracellular polymers, with a significant difference (p < .05) among all the tested strains. The selected strains produced EPS in skim milk medium at 19.67 to 133.0 mg/L. This is consistent with Stingele et al. (1996), who reported the presence in S. thermophilus SFI6 of the epsM and epsA genes responsible for exopolysaccharide synthesis. Maximum EPS production was observed for RIY in skim milk medium (133.0 ± 0.06 mg/L; Tukey group a), followed by RIH4 (103.83 ± 0.76 mg/L; group b), while minimum production was observed for RIRT (19.67 ± 0.57 mg/L; group j).
This variation in EPS production may be attributed to the fact that exopolysaccharide production is strain dependent, which in turn may be associated with the chromosomal genes encoding EPS formation. The literature reports that the total EPS yield of lactic acid bacteria (LAB) depends on the composition of the medium and the conditions under which the organisms grow (i.e., medium, temperature, and incubation time) (Cerning et al., 1990). Gamar et al. (1997) also reported that EPS production and yield were influenced by the carbon source and its concentration. Consequently, the strains that produced the greatest quantity of EPS have the potential to replace chemical stabilizers in the dairy industry.
| Technological screening: a comparison of EPS-producing strains
Technological properties, including acidity, curdling time, body and texture of the curd, and other sensory features, are summarized in Table 2. As shown, EPS production greatly affects the sensory evaluation and the body and texture of the curd.
| Antibacterial activity of S. thermophilus strains
Our traditional fermented dairy product, Dahi, can be used as a source of probiotics because the microbial isolates included strains of S. thermophilus, a recognized probiotic bacterium (Bhowmik et al., 2009; Mahmood et al., 2013). In addition to their primary role in milk acidification, these S. thermophilus strains produce secondary metabolites such as antibacterial peptides and possess other probiotic features.
EPS-producing strains were first investigated for possible antimicrobial activity against food pathogens before other probiotic properties were determined. Four pathogenic strains were used for this purpose (as shown in Table 3), namely L. monocytogenes ATCC 19115, E. coli ATCC 25922, S. aureus ATCC 6538, and P. aeruginosa ATCC 25923, as also used previously by Mahmood et al. (2013). Determining the antibacterial activity of the S. thermophilus strains against these indicator strains is therefore a novel characteristic of this work.
The results revealed that all ten tested strains gave variable results, showing a wide range of antimicrobial activity against the different pathogenic/indicator strains, with larger or smaller zones of inhibition against one or more pathogens. These differences in the inhibitory activities of the tested strains against different indicator strains may be due to their genotypes or to environmental factors.
The results of the antibacterial activity of cell-free supernatants from the S. thermophilus strains are presented in Table 3.
| Acid tolerance
If a minimum of 10⁶ CFU of a bacterial culture tolerates a pH of 2-3, it can be a potential probiotic candidate (Nagpal et al., 2012), as the initial pH of the stomach is 1.5, rising to pH 3-4 as food enters, conditions that can persist for 4-5 h (Slavin, 2013).
Low pH (acid) tolerance of the S. thermophilus strains was measured in vitro at two pH levels (pH 2 and pH 3). Only six of the ten selected strains were found to be acid tolerant at both pH levels. The maximum tolerance under acidic conditions was observed for RIY, with 69% survival after 5 h of incubation at pH 3 and 25% at pH 2, followed by RIH4, showing 65% survival at pH 3 and 20% at pH 2 (Figure 3). RIH4 was followed by RIK, with 62% survival at pH 3 and 19% at pH 2. Strain RIRT2 showed 58% survival at pH 3 and 16% at pH 2, while strains RIRT and RIR1L showed almost identical survival rates, with 52% survival at pH 3 and 15% at pH 2. The control strain gave survival rates of 47% at pH 3 and 10% at pH 2. Overall, pH 2 was more harmful to S. thermophilus than pH 3, although cell viability declined during incubation at both levels. All six strains that remained viable at pH 3 had survival rates above 50% and are hence probable candidates for use as probiotic cultures (Liong & Shah, 2005).
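The survival percentages above are conventionally computed from viable counts. The sketch below shows one common log-scale convention; the paper does not state its exact formula, so this convention and the CFU values are assumptions for illustration.

```python
import math

def survival_pct(cfu_initial: float, cfu_final: float) -> float:
    """Survival as a percentage of log viable counts, one common convention:
    100 * log10(CFU_final) / log10(CFU_initial)."""
    return 100.0 * math.log10(cfu_final) / math.log10(cfu_initial)

# Hypothetical counts reproducing roughly the reported magnitudes: a culture
# starting at 1e9 CFU/ml that drops to ~2e6 CFU/ml after 5 h at pH 3 gives
# ~70% survival on the log scale.
print(round(survival_pct(1e9, 2e6), 1))  # ≈ 70.0
```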
Several studies have determined that S. thermophilus strains are unable to grow at low pH levels (Haller et al., 2001; Khalil, 2009; Mahmood et al., 2013; Maurad & Meriem, 2008). Tuncer and Tuncer (2014) reported that pH 1 was more lethal to S. thermophilus ST8.01 than pH 3; during incubation at pH 3 cell viability still declined, with more than 99.99% inhibition, whereas at pH 5 inhibition was 95.43% and viability was retained. Another study by Mahmood et al. (2013) found the tested strain resistant to pH greater than 2 but not to pH 1.5. Some studies on other probiotic bacteria gave findings similar to ours: Maurad and Meriem (2008) reported that L. plantarum strains survived up to 6 h of incubation at pH 2, and according to Aswathy et al. (2008), the growth of LAB, including Streptococcus, increased at pH 5, facilitating the production of fermented vegetable and milk products.

TABLE 3 Antibacterial activity of cell-free supernatants from S. thermophilus strains against different food pathogens
| Bile tolerance
Bile tolerance is one of the most essential criteria for a strain to be used as a probiotic culture (Hassanzadazar et al., 2012;Soleimanian-Zad et al., 2009;Vizoso-Pinto et al., 2006). Bile resistance and the ability of LAB to inhabit the intestinal tract appear to be correlated (Soomro & Masud, 2012). According to Aswathy et al. (2008), probiotic strains which are intended to be used for humans must have resistance to bile salts at 0.3% concentration.
| Cell aggregation
The auto-aggregation ability of probiotics is a prerequisite for their adherence to intestinal epithelial cells (Aslim et al., 2007; Collado et al., 2008). As seen in Figure 5, the cellular aggregation percentage varied across the six selected strains. Maximum auto-aggregation was found for RIRT (98.8 ± 0.6%), followed by RIY (97.8 ± 0.4%), RIRT2 (61.2 ± 1.0%), and RIH4 (53.6 ± 0.6%), while the minimum was observed for RIR1L (12.0 ± 0.5%) and RIK (8.8 ± 0.6%). These variations are probably due to strain-specific auto-aggregation ability, as also observed by other researchers (Kos et al., 2003; Todorov et al., 2009; Tuncer & Tuncer, 2014; Vlkova et al., 2008), who reported that physico-chemical properties of the cell surface, such as hydrophobicity, may affect auto-aggregation. The results in Table 4 show that high EPS-producing strains exhibited more aggregation; Aslim et al. (2007) and Darilmaz and Beyatli (2012) also reported that high EPS-producing strains exhibited significant aggregation. However, the RIRT strain, although a low EPS producer, showed high auto-aggregation ability, which may be due to strain specificity.
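Auto-aggregation is typically quantified from absorbance readings taken before and after a settling period, as in Kos et al. (2003). The sketch below assumes that convention (the paper does not spell out its formula), with hypothetical OD600 values.

```python
def auto_aggregation_pct(a0: float, at: float) -> float:
    """Auto-aggregation percentage from absorbance (OD600) readings, following
    the usual convention (1 - At/A0) * 100, where A0 is the initial absorbance
    and At the absorbance of the upper suspension after settling time t."""
    return (1.0 - at / a0) * 100.0

# Hypothetical readings: a strain whose OD600 falls from 1.00 to 0.012 after
# incubation shows ~98.8% auto-aggregation (cf. the RIRT value above).
print(round(auto_aggregation_pct(1.00, 0.012), 1))  # 98.8
```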
| Bacterial adherence to hydrocarbons (BATH)
The ability of bacteria to adhere to different hydrocarbons is a measure of bacterial hydrophobicity and is used to assess the adherence of bacterial strains to the intestinal lining. In vitro analysis of bacterial adhesion to hydrocarbons using n-hexadecane and xylene was previously carried out by Schillinger et al. (2005) and Kaushik et al. (2009), while dichloromethane was used as the hydrocarbon source by Jose et al. (2015).
In the present study, three hydrocarbons were used, namely n-hexadecane, xylene, and dichloromethane (DCM), to test the adherence percentage of the selected S. thermophilus strains, as shown in Figure 6. Among the three hydrocarbons, there was a significant difference (p < .05) between n-hexadecane and the other two (xylene and dichloromethane), whereas the difference between xylene and dichloromethane was nonsignificant (p > .05). Contrasting findings were reported by Kaushik et al. (2009) and Iyer et al. (2010). Although small differences exist in adherence percentage, the values of the present study are still higher than many other reported findings (Figure 6). According to the criterion described by Tyfa et al. (2015), strains can be classified by their degree of hydrophobicity from these adherence percentages.
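BATH hydrophobicity is usually expressed as the percentage drop in aqueous-phase absorbance after mixing with the solvent. The sketch below assumes that standard formula; the paper does not state its exact expression, and the readings are hypothetical.

```python
def bath_adherence_pct(a0: float, a_aqueous: float) -> float:
    """Microbial adhesion to hydrocarbons (BATH): percentage of cells
    partitioning into the solvent phase, (A0 - A) / A0 * 100, where A0 and A
    are the aqueous-phase absorbances before and after mixing."""
    return (a0 - a_aqueous) / a0 * 100.0

# Hypothetical OD600 readings for one strain against the three solvents used.
for solvent, a in [("n-hexadecane", 0.32), ("xylene", 0.55), ("dichloromethane", 0.53)]:
    print(solvent, round(bath_adherence_pct(1.00, a), 1), "%")
```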
| Antibiotic susceptibility
A key requirement for probiotic strains is that they should not carry transmissible antibiotic resistance genes. Ingestion of bacteria carrying such genes is undesirable, as horizontal gene transfer to recipient bacteria in the gut could lead to the development of new antibiotic-resistant pathogens (Guglielmetti et al., 2009; Saarela et al., 2000; Salminen et al., 1998). The assessment of the susceptibility of S. thermophilus strains to clinically important antibiotics is therefore important (Tuncer & Tuncer, 2014). The six selected strains were tested against 10 antibiotics by the agar diffusion method, as presented in Table 4. The strains were grouped as susceptible (S: zone of 20 mm or more), intermediate (I: 11-19 mm), or resistant (R: 0-10 mm). Comparable results have been reported by Temmerman et al. (2003), Aslim and Beyatli (2004), Tosi et al. (2007), and Mahmood et al. (2013). These differences in the degree of inhibition by various antibiotics are possibly due to differences in the environments from which the strains were isolated, as this is not an intrinsic feature of the strains, or to the antibiotics' different actions on cell components such as the cell wall, protein and DNA synthesis, DNA gyrase, and RNA polymerase (Neu, 1992).
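The grouping rule quoted above can be expressed directly in code. Note that the intermediate range (11-19 mm) is inferred here from the stated S and R cutoffs, and real CLSI interpretive breakpoints vary by antibiotic.

```python
def interpret_zone(diameter_mm: float) -> str:
    """Classify a zone of inhibition using the cutoffs quoted above
    (S >= 20 mm, R <= 10 mm, intermediate in between). Actual interpretive
    breakpoints are antibiotic-specific (CLSI tables); this is a sketch."""
    if diameter_mm >= 20:
        return "susceptible"
    if diameter_mm <= 10:
        return "resistant"
    return "intermediate"

for zone in (24.0, 15.5, 7.0):
    print(zone, "mm ->", interpret_zone(zone))
```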
| CONCLUSION
Today, the selection of natural, probiotic, EPS-producing strains is gaining importance throughout the world as a means of replacing artificial stabilizers. The present in vitro findings show that three novel EPS-producing strains of S. thermophilus (RIRT2, RIH4, and RIY), isolated from indigenous Dahi samples, fulfill the basic criteria for the selection of probiotics with additional health benefits.
Thus, these strains have a potential to be used as a source of biostabilizer starter culture for the different probiotic fermented milk products.
ACKNOWLEDGMENTS
The authors thank the Institute of Food and Nutritional Sciences.

FIGURE 6 Adherence to different hydrocarbons of S. thermophilus strains isolated from local Dahi (mean ± SD)
CONFLICT OF INTEREST
The authors declare that they do not have any conflict of interest.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are openly available | 2022-03-26T15:09:39.505Z | 2022-03-24T00:00:00.000 | {
"year": 2022,
"sha1": "8558bb18bfc0c1f30a4bd04fc597d99390d296ec",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "345197886393e61c814f52c66b10c6b03db95caa",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
10812214 | pes2o/s2orc | v3-fos-license | Efficiency Enhancement of Dye-Sensitized Solar Cells’ Performance with ZnO Nanorods Grown by Low-Temperature Hydrothermal Reaction
In this study, aligned zinc oxide (ZnO) nanorods (NRs) with various lengths (1.5–5 µm) were deposited on ZnO:Al (AZO)-coated glass substrates by using a solution phase deposition method; these NRs were prepared for application as working electrodes to increase the photovoltaic conversion efficiency of solar cells. The results were observed in detail by using X-ray diffraction, field-emission scanning electron microscopy, UV-visible spectrophotometry, electrochemical impedance spectroscopy, incident photo-to-current conversion efficiency, and solar simulation. The results indicated that when the lengths of the ZnO NRs increased, the adsorption of D-719 dyes through the ZnO NRs increased along with enhancing the short-circuit photocurrent and open-circuit voltage of the cell. An optimal power conversion efficiency of 0.64% was obtained in a dye-sensitized solar cell (DSSC) containing the ZnO NR with a length of 5 µm. The objective of this study was to facilitate the development of a ZnO-based DSSC.
Introduction
Dye-sensitized solar cells (DSSCs) belong to the third generation of solar cells. Owing to their low-cost materials and technologies, they are a promising replacement for conventional silicon-based solar cells [1]. Their highest single-cell conversion efficiency of 13% is comparable to that of Si cells [2]. Generally, TiO2 nanoparticle films coated onto fluorine-doped tin oxide (FTO) layers serve as the photoelectrode in DSSCs because of their suitable chemical affinity and surface area for dye adsorption, as well as an energy band alignment that promotes charge transfer between the electrolyte and dye [3,4]. However, one problem of DSSCs is that not all of the photogenerated electrons reach the collecting electrode, because electron transport within the nanoparticle network takes place via a series of hops to adjacent particles, and the energy lost during charge transport reduces the conversion efficiency. This trapping process slows transport and increases scattering, which greatly increases the recombination of electrons with the oxidized dye molecules and oxidized redox species, reducing efficiency. To enhance dye adsorption, the thickness of the TiO2 film should be increased. However, the recombination problem is aggravated in TiO2 nanocrystals by a depletion layer on the TiO2 nanocrystallite surface, and its severity increases as the photoelectrode film thickness increases [5]. In response to this problem, this paper proposes a ZnO-based DSSC technology as a replacement for TiO2 in solar cells. Zinc oxide has received a great deal of attention as a photoanode in DSSCs owing to its large exciton-binding energy (60 meV) and large band gap (3.37 eV) [6]. Furthermore, its electron mobility is higher than that of TiO2 by two to three orders of magnitude [7]. Therefore, ZnO is anticipated to demonstrate faster electron transport and reduced recombination losses compared with TiO2. Nevertheless, studies have reported that the overall efficiency of TiO2 DSSCs is higher than that of ZnO DSSCs, with the efficiency of cells using thin TiO2 passivation shell layers exceeding the highest reported efficiency of ZnO DSSCs [8]. The principal problem is the dye adsorption process in ZnO DSSCs: because of the highly acidic carboxylic binding groups of the dyes, dissolution of ZnO and precipitation of dye-Zn2+ complexes occur. This phenomenon results in a poor overall electron injection efficiency of the dye [9].
Several approaches exist for enhancing the efficiency of ZnO DSSCs. One method is to introduce a surface passivation layer to a mesoporous ZnO framework; nevertheless, this may aggravate the dye adsorption problems. Alternatively, conventional particulate structures can be changed by altering the internal surface area and morphology of the photoanode. Nevertheless, surface area and diffusion length are in tension. Increasing the photoanode thickness enables a higher number of dye molecules to be anchored; this, however, increases the possibility of electron recombination because of the extended distance through which electrons must diffuse to the transparent conductive oxide (TCO) collector. This trapping process increases scattering and slows electron transport, which increases the recombination of electrons with the oxidized redox species or the oxidized dye molecules, hence reducing efficiency. One probable strategy for improving electron transport in DSSCs is to replace the nanoparticle photoelectrode with a single-crystalline nanorod (or nanosheet, nanobelt, nanotip) photoelectrode. Electrons can be conducted through a direct path within a nanorod rather than by multiple-scattering transport between nanoparticles; studies have shown that electron transport is tens to hundreds of times slower in nanoparticle DSSCs than in nanorod-based DSSCs [10-12]. Therefore, many works have addressed the synthesis of TiO2 and ZnO nanostructures for applications in DSSCs [13-15].
However, the utilization of FTO may not be the best method for improving the cell performance. One problem is that the small difference in the work function between ZnO and FTO does not supply sufficient driving force for the charge injection from the ZnO nanowires to FTO, which hints that new TCO materials should be used in ZnO-based DSSCs. Lee et al. use the ZnO:Al (AZO) film to replace the FTO layer as the TCO layer [4]. Their structure was accomplished by a three-step process, TCO, seed layer, and nanostructure, but this method was slight complicated. To simplify the procedures, we used a two-step process in this study, and present a detailed discussion. These characteristics were observed using X-ray diffraction (XRD), UV-visible spectrophotometry, field-emission scanning electron microscopy (FE-SEM), electrochemical impedance spectroscopy (EIS), incident photon-to-electron conversion efficiency (IPCE), and solar simulation. Figure 1 illustrates the schematic structures of DSSCs with ZnO nanorods of various lengths, which are shown in Figure 1. First, radio-frequency sputtering was used to deposit a ZnO:Al (AZO) seed layer (approximately 300 nm) on Corning-glass substrates with a sheet resistance of 18 Ω/sq, and the defined area of the seed layer was 1 cm 2 . The Pt (H 2 PtC l6 solid content: <6%, viscosity:~50 cps, eversolar Pt-100) film was also deposited on the AZO/Corning-glass substrates by spin-coating. These substrates were used for growing ZnO NRs. The ZnO NRs were deposited using zinc nitrate (Zn(NO 3 ) 2 6H 2 O, Aldrich) and hexamethylenetetrasece (C 6 H 12 N 4 , HMT, Aldrich). Both mixtures were melted in deionized water to a concentration of 0.02 M and stored at 90˝C for 9 h. These solutions were replaced every 9 h, and the corresponding ZnO NRs were denoted by 18-and 27-h NRs. The hydrothermal chemical reactions for the ZnO NRs are expressed as follows: 3 (1)
Experimental
After the reaction was complete, the resulting ZnO NRs were rinsed with deionized water to remove residual ZnO particles and impurities. A D-719 dye, cis-bis(isothiocyanato)bis(2,2′-bipyridyl-4,4′-dicarboxylato)ruthenium(II) bis-tetrabutylammonium (Everlight Chemical Industrial Corp., Taipei, Taiwan), was dissolved in acetonitrile to prepare a 0.5 mM dye solution. Dye sensitization was carried out by soaking the ZnO photoelectrodes in the D-719 dye at room temperature for 2 h. A sandwich-type configuration was used to measure the performance of the DSSCs. An active area of 1 cm² was assembled by using a Pt-coated AZO substrate as the counter electrode, and the Pt/AZO was heated at 200 °C for 30 min in air. The DSSC was sealed with a polymer resin (Surlyn) acting as a spacer. The electrolyte (0.5 M 4-tert-butylpyridine + 0.05 M I2 + 0.5 M LiI + 0.6 M tetrabutylammonium iodide) was injected into the space between the electrodes through two holes, which were then sealed completely with Surlyn and UV gel. The influence of growth time on the structural and optical properties of the ZnO NRs was analyzed by XRD and UV-visible spectrophotometry. Surface morphologies of the ZnO nanorods were examined using a field-emission scanning electron microscope (FE-SEM). The photocurrent-voltage (I-V) characteristic curves were measured using a Keithley 2420 under AM 1.5 illumination. Electrochemical impedance spectroscopy (EIS) was measured under AM 1.5 G illumination (100 mW/cm²) with an impedance analyzer (Autolab PGSTAT 30; Metrohm Autolab, Utrecht, Netherlands) while the device was held at its open-circuit voltage (Voc). An alternating sinusoidal voltage of 10 mV amplitude was applied between the anode and cathode of the device over the frequency range 0.02-100 kHz. The external quantum efficiency (EQE) results were acquired using a system with a 300 W xenon lamp (Newport 66984) light source and a monochromator (Newport 74112; Newport Corporation, Taipei, Taiwan). The beam spot size at the sample was approximately 1 mm × 3 mm. The temperature was controlled at 25 °C during the measurements.
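For orientation, the reagent arithmetic behind the 0.02 M growth solution described above is sketched below. The 1 L batch volume is an assumption for illustration; the molar masses are standard values.

```python
# Grams of zinc nitrate hexahydrate and HMT needed per litre of deionized
# water for the 0.02 M growth solution. Batch volume is illustrative.
MOLAR_MASS = {
    "Zn(NO3)2.6H2O": 297.48,  # g/mol
    "HMT (C6H12N4)": 140.19,  # g/mol
}

def grams_needed(molarity: float, volume_l: float, molar_mass: float) -> float:
    return molarity * volume_l * molar_mass

for reagent, mm in MOLAR_MASS.items():
    print(f"{reagent}: {grams_needed(0.02, 1.0, mm):.3f} g per litre")
# Zn(NO3)2.6H2O: 5.950 g; HMT (C6H12N4): 2.804 g
```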
Results and Discussion
In this study, ZnO NRs with various lengths were grown on the AZO substrates of photoanodes to increase the optical absorption of the dye. Figure 2a shows the XRD patterns for the ZnO NRs derived from the 9-, 18-, and 27-h reactions. The crystalline structure was analyzed using XRD measurements in a θ/2θ configuration. In principle, the XRD spectra indicate that the ZnO films developed without secondary phases. All samples have the hexagonal wurtzite structure of ZnO and grew along the c-axis, as evidenced by the ZnO (002) diffraction plane in the XRD pattern. The intensity of the diffraction peak increased and the peak narrowed (i.e., its full width at half maximum, FWHM, decreased) as the length of the ZnO NRs increased, indicating improved crystallinity of the ZnO NRs. Dye uptake measurements were based on dye desorption from the photoanode for a specified 30 min using a NaOH solution, followed by UV-Vis spectroscopy. For the quantitative analysis of dye loading, the washing step for desorbing the dye from the anodes was performed using a known volume of 0.1 mM NaOH aqueous solution, and the dye detached from ZnO NRs of different lengths was measured as implemented in the literature (Figure 2b).

As mentioned, ZnO NRs with various lengths were grown on AZO substrates, and these NRs were used in DSSCs (Figure 3). Figure 3a-f illustrate FE-SEM images of the ZnO NRs from the 9-, 18-, and 27-h reactions grown on the AZO substrates, indicating that the ZnO NRs were adequately grown on the substrates with a distinctive, clear morphology. The diameters, lengths, and aspect ratios of the NRs were in the ranges of 76-110 nm, 1.5-5 µm, and 20.7-47.9, respectively. Greene et al. indicated that the growth temperature influences the upright growth of ZnO NRs [16].

Figure 4a depicts the Nyquist plots of the impedance spectra. To characterize the AZO/dye/electrolyte interface, the open-circuit voltage (Voc) levels of the DSSCs were evaluated under AM 1.5 illumination by conducting EIS measurements. The Nyquist plots indicate a small semicircle at high frequencies and a large semicircle at low frequencies; the inset in Figure 4a shows the equivalent circuit. Usually, the spectra of DSSCs exhibit three semicircles, which are ascribed to the electrochemical reaction at the Pt counter electrode, charge transfer at the TiO2/dye/electrolyte interface, and the Warburg diffusion process of I−/I3−, respectively [17,18]. In the present study, the charge transfer resistance at the ZnO/dye/electrolyte interface (Rct2) decreased as the aspect ratio of the ZnO NRs varied from 20.7 to 47.6. This may be attributable to the increase in the diameter, length, and quality of the ZnO NRs, which led to increased dye adsorption and improved electron transport into the pores of the AZO electrode (Figure 4a). Electrons that were better collected and transported had a lower probability of recombination, and the electron lifetime increased [19]. Figure 4b shows Bode phase plots indicating the characteristic frequency peaks (1-10⁴ Hz).

The characteristic frequency peak shifted to a lower frequency when the aspect ratio increased; the characteristic frequency can be considered the inverse of the electron lifetime (τe) or recombination lifetime (τr) in an AZO film [20,21]. This implies that the NRs with an aspect ratio of 47.6 (grown for 27 h) had the longest electron lifetime and a lower transport resistance in the AZO electrode. The electron lifetimes in the AZO films increased from 3.25 to 6.12 ms when the aspect ratio increased from 20.7 to 47.6. This result is consistent with the cell performance results that follow.

Figure 5a shows the J-V curves for the DSSCs containing the ZnO NRs obtained from the 9-, 18-, and 27-h reactions, indicating that the short-circuit current density (Jsc) and cell performance increase significantly with NR length. The photovoltaic performances of our DSSCs employing ZnO NRs are comparable to published literature [22-24]. A higher amount of dye was adsorbed on longer NRs than on shorter NRs, indicating that longer NRs improve photon absorption and carrier generation. These results indicate that cell performance is strongly dependent on the electrode surface area: increasing the NR length results in a larger surface area, which leads to higher dye adsorption and higher conversion efficiency. Furthermore, the Voc of the cells with longer ZnO NRs was higher than that of cells with shorter ZnO NRs.
This higher Voc is attributable to a reduction in recombination losses at the ZnO/dye interfaces. Among the cells containing ZnO NRs grown for the various periods, the cell containing the 27-h ZnO NRs demonstrated optimal performance, with a conversion efficiency (η) of 0.64%, Voc of 0.62 V, Jsc of 2.56 mA/cm², and fill factor of 0.42. The NRs also provide direct pathways from the point of photogeneration to the conducting substrate; these pathways ensure the rapid collection of carriers generated throughout the device. Figure 5b depicts the IPCE spectra of the DSSCs (D-719 dye) containing the 9-, 18-, and 27-h ZnO NRs, indicating a strong peak at 520 nm, which is attributable to the characteristic excitations of the D-719 dye. Our ZnO-based DSSCs show poor conversion efficiencies compared with conventional TiO2-based DSSCs, as shown in the inset of Figure 5a. The main reasons are the corrosion of ZnO on reaction with acid and the low amounts of dye adsorbed during fabrication. During the process, some Zn2+ ions dissolve into the solution from the surface of the ZnO nanorods; subsequent aggregation of Zn2+ ions with sensitizer dyes occurs, a phenomenon reported for several organic sensitizer dyes as well as ruthenium complexes [25,26]. Once aggregation takes place in DSSCs, the power conversion efficiency decreases dramatically [27]. Despite the lower efficiencies of our ZnO-based DSSCs, the use of ZnO nanorods still shows high potential because of its better crystallinity and higher electron mobility. To overcome the chemical instability of ZnO, the introduction of non-ruthenium-based sensitizers and the utilization of different nanotechnological ZnO architectures might be practical approaches.
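As a quick consistency check, the standard photovoltaic relation η = Voc·Jsc·FF/Pin and the Bode-plot relation τe = 1/(2πf_peak) reproduce the magnitudes reported above. The Bode peak frequency used below is a hypothetical value chosen to match the quoted 6.12 ms lifetime.

```python
import math

def efficiency_pct(voc_v: float, jsc_ma_cm2: float, ff: float,
                   pin_mw_cm2: float = 100.0) -> float:
    """Standard PV relation: eta = Voc * Jsc * FF / Pin (AM 1.5, 100 mW/cm2)."""
    return voc_v * jsc_ma_cm2 * ff / pin_mw_cm2 * 100.0

def electron_lifetime_ms(f_peak_hz: float) -> float:
    """EIS Bode analysis: tau_e = 1 / (2 * pi * f_peak)."""
    return 1000.0 / (2.0 * math.pi * f_peak_hz)

# Reported 27-h cell: Voc = 0.62 V, Jsc = 2.56 mA/cm2, FF = 0.42.
# Yields ~0.67%, close to the quoted 0.64% (differences reflect rounding
# of the reported parameters).
print(round(efficiency_pct(0.62, 2.56, 0.42), 2))

# A Bode peak near 26 Hz (hypothetical) corresponds to tau_e ~ 6.1 ms,
# matching the longest lifetime reported above.
print(round(electron_lifetime_ms(26.0), 2))
```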
Conclusions
In this study, we prepared ZnO NRs with a simple two-step process for use as photoanodes in DSSCs. The results reveal that DSSCs containing longer ZnO NRs demonstrate higher photovoltaic performance than DSSCs containing shorter ZnO NRs. Compared with shorter ZnO NRs, longer ZnO NRs exhibit a larger surface area, which enables efficient dye loading and light harvesting, reduced charge recombination, and faster electron transport. These improvements enhanced the power conversion of the DSSCs. | 2016-03-01T03:19:46.873Z | 2015-12-01T00:00:00.000 | {
"year": 2015,
"sha1": "62e808b82cf47c7667ce19060d24eb3fa05fa2a0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/8/12/5499/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "62e808b82cf47c7667ce19060d24eb3fa05fa2a0",
"s2fieldsofstudy": [
"Engineering",
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
19007494 | pes2o/s2orc | v3-fos-license | Evaluation of the national swing-bed program in rural hospitals.
The Health Care Financing Administration (HCFA) implemented a swing-bed demonstration and evaluation program for rural communities in the 1970's. The demonstration substantiated the cost effectiveness of providing long-term care in small, rural, acute care hospitals. As a result, Section 904 of the Omnibus Reconciliation Act of 1980 (Public Law 96-499) authorized the national swing-bed program, allowing rural hospitals with fewer than 50 beds to provide Medicare- and Medicaid-covered swing-bed care. A congressionally mandated evaluation of the program was conducted and the national swing-bed program was found to be cost effective. In this article, HCFA's report and recommendations to Congress are summarized in the context of the evaluation findings. HCFA recommended that the program be continued and that consideration be given to extending the option to larger hospitals. In this regard, the Omnibus Budget Reconciliation Act of 1987 (Public Law 100-203) extended the program to include rural hospitals with up to 100 beds.
Introduction
The rural swing-bed program was enacted by Congress in the Omnibus Reconciliation Act of 1980. In passing this legislation, Congress envisioned it would encourage efficient and effective use of inpatient hospital beds for the delivery of hospital, skilled nursing, or intermediate care services to Medicare and Medicaid beneficiaries in rural areas. To determine the program's impact, Congress required the Secretary of the Department of Health and Human Services to submit a report describing the program's experiences. Section 904(c) specified that the report consider:
• The extent and effect of the program on the availability and effective and economical provision of long-term care services in rural areas.
• Whether such a program should be continued.
• The results from any demonstration projects conducted under the program.
• Whether eligibility to elect the swing-bed option should be extended to other hospitals regardless of bed size or geographic location where there is a shortage of long-term care beds.
In carrying out this mandate, the Health Care Financing Administration (HCFA) contracted with the Center for Health Services Research of the University of Colorado to conduct an evaluation of the program. Conclusions from the findings are presented in this summary, along with some of the issues identified in the course of the evaluation, HCFA's recommendations on the positions to be taken at this time, and plans for further monitoring and evaluation of the program's experiences. A summary of recent legislative developments is also provided.
Findings and issues

Hospital participation
By July 1986, about 40 percent of the eligible hospitals in rural areas were certified to provide swing-bed care. The total number of certified swing-bed hospitals represented approximately 15 percent of all Medicare-certified, short-stay general hospitals in the United States. Although the national swing-bed program began slowly in the early 1980's, its growth during the 3 years before 1986 resulted in 899 hospitals certified for swing-bed care.
Swing-bed hospitals are predominantly concentrated in the larger rural land areas of the midwestern States, although western, southern, and southeastern States also have relatively high participation rates. Nine of the 11 States with no participating hospitals in July 1986 were located in or near the northeastern section of the country, largely because fewer rural communities and land areas are located in this region. In general, swing-bed hospitals and the communities in which they are located tend to be characterized by lower acute care occupancy rates, a lower ratio of physicians to elderly persons, a larger elderly population, and fewer Medicare skilled nursing facility (SNF) beds per elderly than rural hospitals and communities not participating in the program.
Use of swing beds
In 1986, 97 percent (872) of the 899 certified swing-bed hospitals were providing long-term care in swing beds. Based on 1985 utilization data, the participating hospitals averaged 50 admissions and 964 days per year of long-term care in swing beds, with an average length of stay of about 20 days. Approximately three-fourths of all swing-bed admissions were from acute care, compared with only about one-half of nursing home patients in the same communities. Of all swing-bed admissions from acute care, about two-thirds were from the acute care portion of the swing-bed hospital itself.
General purposes served
Although a few hospitals provided more than 2,500 days of swing-bed care in 1985, more than 90 percent used swing beds to provide care to relatively shortstay, long-term care patients. For the most part, swing beds are used to provide subacute long-term care to patients who are more difficult to place in community nursing homes owing to their intense needs for medical and highly skilled nursing care. These patients are either discharged home or to community nursing homes after relatively brief stays in swing-bed hospitals.
In most instances, swing beds serve as holding beds until patients are sufficiently rehabilitated to return home or until nursing home beds become available in the community. At times, however, swing beds are used to fill other gaps in the long-term care delivery system in rural communities, especially when community nursing home beds are fully occupied. In such situations, swing-bed hospitals provide more traditional long-term care, such as that commonly found in nursing homes. Even in these circumstances, swing-bed stays appear to be considerably shorter than those of nursing homes.
Community retention and access
Earlier data from the Utah swing-bed demonstration program in the 1970's indicated that a per capita increase in the number of Medicare SNF patients receiving care in their home communities occurred as a result of the swing-bed approach. More than 75 percent of the nurses and more than 90 percent of the physicians in swing-bed communities believed that the program enhanced the retention of long-term care patients in their communities.
Although nursing home administrators and representatives of the nursing home industry have expressed concern that swing beds compete directly with nursing home beds, a partial exploration of this issue found no evidence of statistically significant decreases in nursing home occupancy rates. In fact, in one of the States with the largest number of swing-bed hospitals, nursing home occupancy rates actually increased in swing-bed communities between 1982 and 1985.
Cost to hospitals
Under the assumption that hospital beds exist primarily for the provision of acute care services, their use for long-term care takes advantage of declining acute care occupancy rates and the surplus in hospital capacity. As such, based on special analyses of hospital cost reports, the cost of swing-bed care to long-term care patients was calculated as an incremental cost: the additional cost of providing long-term care given that the beds and the associated resources already exist for the provision of acute care. The incremental costs for routine and ancillary long-term care provided in swing beds in 1984 were estimated at $33 to $34 and $19 to $21 per day, respectively. These costs were below the estimated average per diem swing-bed revenues of $44 for routine care and $32 for ancillary services.
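The per-day margins implied by these figures can be checked with simple arithmetic; the sketch below merely restates the quoted 1984 cost ranges against the quoted revenues.

```python
# Back-of-envelope check of the 1984 figures quoted above: incremental cost
# ranges vs. average per diem swing-bed revenues.
services = {
    #            (cost_low, cost_high, revenue)  in $/day
    "routine":   (33.0, 34.0, 44.0),
    "ancillary": (19.0, 21.0, 32.0),
}

for name, (lo, hi, rev) in services.items():
    print(f"{name}: margin ${rev - hi:.0f}-${rev - lo:.0f} per day over incremental cost")
# routine: $10-$11 per day; ancillary: $11-$13 per day
```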
Cost to Medicare and Medicaid
It appears that a portion of Medicare use of hospital swing beds has been caused by the prospective payment system (PPS). The more specific issue of whether, under PPS, rural swing-bed hospitals have "gamed the system" by transferring patients to the SNF level of care to gain additional revenue cannot be definitively answered at this time. Available evidence does not indicate that this is a widespread practice. However, this issue is the subject of a study of the impact of PPS on the swing-bed program. The findings will be incorporated into the series of annual reports to Congress on the impact of the Medicare hospital prospective payment system.
There is consensus, however, that nursing home case mix did not change substantially in swing-bed communities before and after the implementation of the swing-bed program. Site visits have tended to confirm that many of the SNF patients receiving care in hospital swing beds would have remained for longer periods as acute care patients prior to PPS, and would at present be discharged to urban SNF's had swing beds not been available. The use of swing beds to provide Medicare SNF care results in a per-day saving to Medicare of approximately $16. This estimate is based on 1984 data and the assumption that swing-bed patients would otherwise have been placed in equal numbers in freestanding and hospital-based rural SNF's. The saving was greater to the extent that swing-bed patients would otherwise have gone to urban SNF's or to newly constructed or expanded facilities.
The overall cost (including routine and ancillary services) to the Medicare program of providing SNF care in hospital swing beds in 1985 was $26 million. If the SNF care had been rendered in places other than rural hospitals' swing beds (i.e., freestanding SNF's or in a distinct-part SNF of a hospital), routine care would have cost an average of $16 more per day. The total Medicare cost for providing such care in rural communities in 1985 would have been $5 million more. Because Medicaid covered a substantially lower portion of swing-bed care, its 1985 annual cost for long-term care services provided to swing-bed patients is estimated at slightly over $2 million.
Volume thresholds
The incremental cost of routine long-term care provided in swing beds was found to increase as a function of total swing-bed patient days. The volume threshold was defined as the point at which the incremental cost of swing-bed care exceeds swing-bed revenues (i.e., it is no longer cost effective to retain hospital beds as swing beds, or where it would appear to be more reasonable to convert them to permanent nursing home beds). This volume threshold was found to range between 1,500 and 3,000 days of swing-bed care. (In this study, swing-bed routine cost for the average swing-bed hospital began to exceed swing-bed revenues at approximately 2,000 days of swing-bed care per year.) Although hospital circumstances and State-specific reimbursement rates may increase the break-even point even beyond 3,000 days, such thresholds would appear to exist for any hospital. If a hospital's swing-bed experience indicates a strong and stable demand for long-term care, such a demand clearly indicates a hospital has moved into the nursing home business. Its cost structure, staffing needs, and the care orientation associated with swing beds are basically the same as those for a nursing home when it reaches this threshold.
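The threshold logic can be illustrated with a stylized calculation. The linear cost ramp below is purely an assumption, calibrated so that the incremental routine cost crosses the $44 revenue near 2,000 days per year, as in the average hospital described; the evaluation itself did not publish the underlying cost function.

```python
# Stylized illustration of the volume-threshold idea: incremental per diem
# routine cost rising with annual swing-bed days until it passes revenue.
def incremental_cost_per_day(annual_days: float) -> float:
    return 33.0 + 0.0055 * annual_days  # $33 base plus an assumed linear ramp

REVENUE_PER_DAY = 44.0  # average per diem routine revenue quoted above

for days in (500, 1000, 1500, 2000, 2500, 3000):
    cost = incremental_cost_per_day(days)
    status = "cost-effective" if cost < REVENUE_PER_DAY else "at or past break-even"
    print(f"{days:>5} days: ${cost:.2f}/day vs ${REVENUE_PER_DAY:.2f} -> {status}")
```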
Case mix in swing beds
Swing-bed patients have substantially shorter stays and greater rehabilitation potential than do nursing home patients. They are less frequently characterized by typical long-term care problems, such as incontinence, impaired cognitive functioning, dependence in activities of daily living (ADL's), and related psychological or social problems. However, swing-bed patients tend to have more subacute problems, such as recovery from surgery, hip fractures within the past 6 weeks, shortness of breath, and the need for intravenous catheters.
In general, the long-term care needs served by swing-bed hospitals are substantially different from those served by community nursing homes. Swing-bed hospitals tend to treat patients with subacute problems that need more intense medical and skilled care, while nursing homes tend to treat patients with problems that are more typically seen in institutional long-term care settings. The average swing-bed patient appears to be at least 20 percent more costly to care for per day than the average nursing home patient. This case-mix difference appears to be one of the reasons why swing-bed care is not regarded as a substitute for (or in competition with) nursing home care in most locations.
Relative to home health patients, swing-bed patients are more dependent in ADL's such as bathing, dressing, and using the telephone. In addition, swing-bed patients are characterized by a somewhat more intense set of medical and skilled nursing needs than home health patients, in areas such as hip fracture, stroke, and conditions requiring intravenous catheters. Swing-bed patients are also more dependent than home health patients in areas such as incontinence and mental status problems. The differences between swing-bed and home health patients in terms of subacute needs are not as substantial as those between swing-bed and nursing home patients. The greater dependency of swing-bed patients in physical and cognitive functioning renders their need for continuous skilled nursing and medical care greater than the intermittent needs of home health patients.
The results, therefore, suggest a continuum of dependency and subacute problem intensity in case mix that, respectively, characterizes swing-bed, home health, and nursing home patients. Overall, nursing home patients tend to be more dependent in traditional measures of functioning than do swing-bed patients, who, in turn, are more dependent in functioning than are home health patients. Subacute care needs appear to be strongest among swing-bed patients; although when a certain level of rehabilitation has been reached, such patients can and should be discharged to home health care. In home health settings, certain types of medical services are more frequently provided on an intermittent basis to patients with subacute needs than are services associated with more traditional nursing homes. Nonetheless, home health care also relies on a reasonable degree of independence in physical and mental functioning. Therefore, especially in rural communities where the distances that home health nurses have to travel can be substantial, swing beds offer the opportunity for continuous medical and skilled nursing care for a relatively short institutional stay to subacute patients. Often, such patients are subsequently discharged to their homes with intermittent home health care to continue the rehabilitation process.
Quality of care
Adjusting for the case-mix differences just noted, swing-bed patients were discharged home sooner and more frequently than were nursing home patients. (The frequency of written discharge plans was found to be greater for swing-bed than for nursing home patients, based on samples of patients discharged from each setting.) This discharge pattern seems to reflect the stronger rehabilitation philosophy that accompanies the provision of acute and subacute care.
Based on the criteria used in this study, subacute nursing services were found to be provided moderately better in swing-bed hospitals relative to nursing homes. However, more traditional long-term care nursing services were found to be better in nursing homes. These differences tended to persist after adjusting for case-mix differences. Physician visits, X-rays, laboratory tests, and intravenous medications all occurred more frequently for swing-bed than for nursing home patients after adjusting for case-mix differences. In all, it appears that long-term care to subacute patients is provided at least as well and probably better in swing-bed hospitals than in community nursing homes in rural areas. However, it also appears that the care typically required for longer-stay, chronically ill, or disabled long-term care patients is provided better in community nursing homes.
Hospital and program administration
The reasons most frequently cited by hospital administrators for joining the program were to meet a community's needs for long-term care and to provide better continuity of care. Increased revenues and more efficient use of staff resources were also cited frequently as reasons for joining. Generally, the problems and difficulties associated with implementing the swing-bed approach at the hospital level declined in importance as hospitals gained experience with the program. Staff resistance was often cited as a major start-up problem, particularly from the nursing staff who were concerned that the hospital would become primarily a nursing home. Dissatisfaction with reimbursement was viewed as a significant start-up and ongoing problem by many swing-bed hospital administrators. Despite the fact that incremental cost appears to be covered by the current reimbursement structure, most hospital administrators felt payment was inadequate.
Other administrative entities such as HCFA regional offices, State certification agencies, State planning agencies, Medicare Part A intermediaries and Part B carriers, Medicaid fiscal agents, and peer review organizations (PRO's) generally required few resources to incorporate the swing-bed program into their operations. Most agencies shifted personnel to the swing-bed program, and usually no new employees were added. Few agencies reported any start-up or ongoing problems associated with the administration of the program. One of the most frequently mentioned problems by agencies was the misunderstanding of regulations and program requirements by hospital personnel.
Demonstration projects
No demonstration projects to test alternative arrangements for implementing the swing-bed concept have been carried out. There has been a great deal of interest in developing a swing-bed demonstration project in urban areas, and the evaluation study recommended experimentation in urban areas; however, HCFA and representatives of hospitals wanting to participate in such a demonstration project were unable to reach agreement on key issues, and further discussions were therefore terminated. Based on the evaluation, HCFA plans to investigate the potential for developing new approaches toward the payment of acute and post-acute care in urban hospitals.
Policy issues and recommendations
This section addresses the questions specifically posed by the Congress in mandating this evaluation and discusses issues identified in the course of conducting the evaluation. HCFA's recommendations concerning the issues are listed. Following the list is a discussion of the rationale underlying the recommendations.
• The rural swing-bed program should be continued.
• At this time, eligibility to elect the swing-bed option should not be extended to urban hospitals. HCFA plans to explore alternative models for testing the payment of acute and post-acute care in urban hospitals.
• The current method for determining the rate of payment for routine long-term care in a swing-bed hospital should be retained.
• Ancillary services to patients receiving long-term care in a swing-bed hospital should continue to be reimbursed at cost.
• HCFA will continue to monitor the cost behavior of hospitals and nursing homes and the use of ancillary services in swing-bed hospitals. Alternative payment arrangements will be explored.
• HCFA will review current visit screens for physician services by place of service with the intent of developing consistent criteria specific to physician visits to patients receiving long-term care in swing-bed hospitals.
• Swing-bed hospitals furnishing more than 1,000 days of long-term care to patients with stays of 60 days or more should be required to meet all conditions of participation for SNF's.
• HCFA will undertake a review of the desirability of conducting regular surveys of long-term care services furnished in swing-bed hospitals.
• HCFA will draw on the growing experience of PRO's to determine the feasibility of developing guidelines governing the transition of patients from acute to post-acute care in swing-bed hospitals.
• Consideration should be given to extending the swing-bed option to larger rural hospitals; for example, to those hospitals with fewer than 100 beds.
Continuation of the program
Despite certain weaknesses and disadvantages of the rural swing-bed program that are usually restricted to individual communities or providers, the weight of the evidence gathered as part of the evaluation study supports a continuation of the national swing-bed program in rural hospitals. On balance, its most important attribute is that the program has increased access to cost-effective, long-term care in many rural communities throughout the country. It has been accepted by residents of rural areas and by health care professionals. Relatively few administrative difficulties have been encountered in its implementation. To eliminate the opportunity for small rural hospitals to provide long-term care in swing beds would be detrimental to many rural residents. It is doubtful that more than a small portion of Medicare SNF days of care provided in hospital swing beds would be eliminated by virtue of abolishing swing beds. Because of the incentives embedded in PPS, SNF admissions of Medicare patients to swing beds would probably translate into longer SNF stays in existing or newly constructed/converted SNF beds in rural and in more distant urban communities. This would be more costly to Medicare.
Bed-size limits
At the present time, whether the hospital meets the current bed-size limit for eligibility to participate in the swing-bed program is largely determined by applying the prior year's acute care occupancy rate to the total number of licensed beds that are not special care unit or newborn beds. Although this approach is somewhat generous in the eyes of some regulators, the resulting marginal increase in the number of swingbed hospitals compared with a more stringent method appears to be inconsequential. It does not appear appropriate to mandate a uniform method of determining eligiblity at the regional or State level. The individual circumstances of each region or State, in terms of the supply of and demand for subacute long-term care beds, can more readily be taken into consideration without a strictly enforced guideline for determining the applicable number of beds.
On the basis of this bed-size eligibility criterion, 2,236 rural hospitals were eligible to participate in the swing-bed program prior to the Omnibus Budget Reconciliation Act of 1987. There were an additional 1,023 hospitals in areas defined as rural by the U.S. Bureau of the Census. Many of these hospitals are located in rural communities where swing beds are not available and an unmet need for long-term care services appears to exist. Consideration of extending the swing-bed option to rural hospitals with fewer than 100 beds would, therefore, be appropriate. If the bed-size eligibility criterion was to be increased to include rural hospitals with fewer than 100 beds, it would increase the pool of hospitals eligible to elect the swing-bed option by 640 hospitals and increase the availability of long-term care services in rural areas.
Urban hospital eligibility
At the present time, urban hospitals are ineligible to elect the swing-bed option, regardless of size. The swing-bed approach has embedded in it the opportunity to "game the system" by discharging acute care patients to the skilled nursing level of care, thereby gaining additional revenues beyond that provided by the diagnosis-related group (DRG) payment rate. However, the evaluation's findings suggested that the differences between urban and rural communities may lead to differential use of the opportunity to "game the system." In urban areas, distances between patients' residences and providers are usually substantially shorter and consumers usually have a larger number of providers to choose from. Further, urban hospitals generally have a greater capacity to maximize revenues through more sophisticated means.
Although the evaluators recommended experimentation on the swing-bed approach in urban areas, HCFA felt that the previously stated considerations suggested a conservative posture in extending the swing-bed option to urban hospitals. For these reasons, HCFA recommends that the swing-bed option not be extended to urban hospitals at this time. HCFA plans to develop demonstration projects that test alternative methods of paying for acute and post-acute services, including combining both levels of care under one payment arrangement. These demonstrations would permit an assessment of the utility and cost effectiveness of expanding the swing-bed approach to urban hospitals.
Many urban hospitals, under the impetus of PPS, are converting part of their facilities to distinct-part SNF's to create "transitional" beds; so-called because they are used to facilitate the patient's transition between acute care and SNF and/or home care. From the viewpoint of Medicare's program costs, the conversion to distinct-part SNF's may be more costly than providing the swing-bed option, assuming that conditions to control "gaming" can be developed.
Routine care
At the present time, Medicare per diem reimbursement for swing-bed routine care is determined separately for each State as the average statewide Medicaid reimbursement rate for the applicable level of care for the preceding year. On average, this rate covers the incremental cost of providing routine long-term care in swing beds. The current method of paying for routine care should be retained. HCFA will continue to monitor the relationship of payment rates based on the current methodology to changes in the relative behavior of nursing home and hospital incremental costs to determine whether, in the interest of equity, modifications of the current methodology are indicated. Concurrently, HCFA will continue to explore alternative payment arrangements through demonstration projects or through experience gained from nursing home reimbursement systems used in several States. These arrangements may include payments based on case mix, on total stay rather than on a per diem basis, or combining the cost of acute and post-acute care.
Ancillary services
At present, ancillary services to swing-bed patients are cost reimbursed by Medicare. As with reimbursement for routine care, the present reimbursement for ancillary care was found to cover the incremental cost of ancillary services provided to swing-bed patients. A potential problem that has surfaced, albeit in only a few settings, is the abuse of the present reimbursement methodology through the excessive provision of ancillary services. Medicare reimbursement for ancillary hospital services is a direct function of the volume of the services. Consequently, discharging patients as rapidly as possible from acute care to swing-bed care, and providing the ancillary services largely after discharge from acute care, results in maximizing ancillary reimbursement.
As mentioned, this post-acute overloading of ancillary services does not appear to be taking place in swing-bed hospitals except for a few individual facilities. Because abuse appears minimal at this time, HCFA recommends retention of cost reimbursement for ancillary services. However, HCFA will continue to monitor ancillary service use in swing-bed facilities and, concurrently, explore alternative arrangements for paying the cost of ancillary services. These arrangements may include setting limits per day or per year on payments for ancillary services to swing-bed patients. The feasibility of developing and testing alternative arrangements for combining routine and ancillary services in one payment scheme is an option that could be explored in demonstration projects.
Physician services
During the evaluation it was found that physicians visit their swing-bed patients with far greater frequency than they do their nursing home patients. Less than half the physician visits to long-term care patients in hospital swing beds appear to be covered by either third-party payers or by their patients.
However, it appears that the greater attentiveness on the part of physicians had a significant positive impact on rehabilitation of post-acute patients. A pattern of wide variation was found in the number of physician visits to post-acute swing-bed patients allowed by Medicare carriers. In practice, the number of visits allowed to different types of patients is based on screens promulgated by Medicare for physician visits in nursing homes (generally, intermediate care facilities), SNF's, and hospitals. The evaluation found that the limits on swing-bed physician visits are more generous than those for physician visits to Medicare patients in certified SNF's. This appears reasonable in view of the greater intensity of care required by post-acute swing-bed patients. Limits on swing-bed physician visits that are closer to acute care physician visits appear warranted. However, the wide variation found in the limits may be inappropriate. HCFA will undertake a review of the situation, possibly adding the category of post-acute swing-bed patients to those categories for which routine screens for physician visits are used.
Quality of care and standards
Volume thresholds
Swing-bed hospitals and nursing homes in rural areas have evolved into serving two distinct, but partly overlapping, long-term care markets. Generally, following the acute hospitalization phase, swing-bed hospitals tend to treat those patients who might be characterized as "subacute." These patients require more intense and skilled nursing care services to further their recovery and rehabilitation from illness. They are discharged, on average, within 20 days. The tendency for swing-bed hospitals to avoid traditional nursing home care was found to be rather pronounced. At the subacute phase, the quality of services furnished by hospitals was found to be better overall than those services furnished by nursing homes. On the other hand, nursing homes provide higher-quality, traditional, long-term care services.
The differences found between the two types of facilities in case mix and the ability to care appropriately for the different types of patients were anticipated (on the basis of the swing-bed demonstrations in the 1970's) in the regulations that implemented the swing-bed legislation. The conditions of participation for swing-bed hospitals were predicated on the assumption "... that patients in swing-bed hospitals are less likely to become long-term residents." Accordingly, the regulations attempted to avoid imposing significant burdens on rural hospitals by requiring adherence only to those SNF standards that are necessary and appropriate to SNF patient care and do not duplicate existing hospital requirements, do not require extensive structural modifications, and are unnecessary in what is primarily a general routine inpatient hospital setting. In short, the regulations did not contemplate the swing-bed hospital as providing a significant amount of the traditional type of long-term nursing home care.
A few swing-bed hospitals provided care to persons who remained in the facility for 60 or more days. The evaluators found that these patients' needs are more akin to the "traditional" long-term nursing home patient. Swing-bed hospitals were found to be less capable of meeting these needs than were nursing homes. The evaluators, therefore, proposed the establishment of volume thresholds or levels of long-term care stipulating when a swing-bed hospital provides a significant amount of such care. At that level, special measures to assure quality of care need to be taken. HCFA agrees with this assessment and recommends that a swing-bed hospital that provides more than 1,000 days of long-term care (either at the skilled or intermediate levels of nursing care) to patients with stays of 60 days or more in 1 year should be required to meet all SNF conditions of participation.
To meet all conditions of participation as an SNF, the hospital may elect to create a distinct-part facility. If this option were chosen, the hospital's reimbursement for SNF services in the distinct-part facility would be based on incurred costs. However, despite the increased costs involved in meeting all SNF conditions of participation, some hospitals may prefer to retain the flexibility to use all of their beds for acute care although, as a swing-bed facility, they would still be paid on the basis of the previous year's Medicaid rates for "subacute" services. The hospital should be allowed to decide the course it elects to take.
A further threshold might be established at 2,000 days of long-term care to all patients. However, if the swing-bed option were extended to larger hospitals, these hospitals might be more likely to furnish 2,000 days or more of appropriate subacute care services to a larger number of patients. Extension of the threshold to 2,000 days is not being recommended at this time, although it can be reassessed in light of further experience.
Periodic review
The evaluators found greater State-to-State variations in certifying swing-bed hospitals as long-term care providers than they did in certifying nursing homes. The evaluators recommended that principles and guidelines should be established for conducting surveys in swing-bed hospitals that provide a significant amount of traditional nursing home care. HCFA will undertake a review of this issue in conjunction with the earlier recommendation on the establishment of volume thresholds and the requirement that PRO's review transfers between acute and long-term care in swing-bed hospitals as discussed in the next section. Thereafter, it will institute arrangements that will assure that the needs of long-term care patients are appropriately met in swing-bed hospitals.
Peer review organizations
HCFA's contracts with PRO's include a review of swing-bed hospitals; particularly, transfers between acute and post-acute levels of care. This review provides a mechanism for assuring that the transfer is medically appropriate. In addition, it provides a control on any "gaming" that might take place in making transfers to maximize total revenues derived from the DRG payments for the acute care phase and the payments for routine and ancillary services rendered during the post-acute phase.
At this time, the indicators for the appropriateness of these transfers are not always clear. As such, the determinations of appropriateness are made on the basis of clinical judgments. It is expected that more experience will increase the clinical base for developing guidelines governing the transition of patients from acute to subacute care in swing-bed hospitals. An ongoing study, which examines the impact of PPS on the swing-bed program, will address this issue further through analysis of PRO acute care admission, readmission, and transfer denial rates. Also, comparisons will be made of acute care readmissions from swing beds relative to nursing home or SNF beds.
Recent legislative and regulatory developments
Several significant changes in the swing-bed program were enacted under the Omnibus Budget Reconciliation Act of 1987 (Public Law 100-203). The program was extended to hospitals with up to 99 beds. However, the newly eligible hospitals with between 50 and 99 beds have two restrictions imposed on them. First, no Medicare payment may be made for skilled nursing facility (SNF) services provided to a patient in a swing bed more than 5 days after an SNF bed becomes available in the locality of the hospital, unless the patient's physician certifies that the patient's transfer to that facility would not be medically appropriate. In the absence of such certification, the hospital's designation of the swing bed as an SNF bed would become ineffective at the end of the 5 days. The hospital could not charge the Medicare beneficiary for continued care thereafter unless it gives the beneficiary a readmission notice of noncoverage. Second, a hospital may not be paid for swing-bed services to Medicare beneficiaries after the number of Medicare covered days of extended care services in a cost reporting period exceeds 15 percent of the licensed bed days available at the hospital during the reporting period.
In addition, Congress mandated a report by February 1989 on peer review organization denials of swing-bed care. The report is to include recommendations on how to encourage participation in the swing-bed program by eligible (but not participating) hospitals that have low occupancy rates and are located in areas with an unmet need for long-term care. HCFA has contracted for the collection of data for this report.
Data sources
A number of data sources were used in this study.
Although not all sources contributed directly to the results summarized in this article, they are included because they contributed indirectly by providing contextual information on issues related to the swing-bed concept. Selected data sources are discussed in several articles in the list of references.
Patient-level primary data were prospectively collected on-site at swing-bed hospitals and community nursing homes for five different samples of patients. These data were used to assess case mix, process and outcome measures of quality, resource consumption and service-use patterns for admission, discharge, and cross-sectional cohorts of patients.
Medicare cost reports were obtained for several years for 75 swing-bed hospitals and 75 comparison hospitals in 27 States. These were used to analyze routine and ancillary service costs and revenue data for swing-bed hospitals relative to comparison hospitals. Approximately 20 different types of surveys were administered by phone, mail, or during on-site visits throughout the course of the study. These surveys involved State hospital associations, swing-bed and comparison hospital administrators, State Medicaid agencies, State fiscal agents, State planning agencies, certification agencies, nursing home administrators, home health agencies, swing-bed physicians, other swing-bed staff, comparison hospital physicians, hospital directors of nursing, directors of nursing in nursing homes, Medicare Part B carriers, and Medicare Part A intermediaries.
A number of secondary data sets were used. The more important ones consisted of American Hospital Association survey tapes; the American Medical Association Physician Masterfile; the National Center for Health Statistics Master Facility Inventory; several of HCFA's files from its Medicare Statistical System (including Medicare enrollee data, the Medicare provider of service file, Medicare hospital claims data, and the Medicare SNF claims data); and U.S. Bureau of the Census population tapes. At 6-month intervals, the survey and certification branches of HCFA's regional offices were contacted to obtain the number of certified swing-bed hospitals in each State. Data collected as part of other studies (conducted by the University of Colorado Center for Health Services Research) of nursing home, swing-bed, and home health care were used for comparative purposes. | 2018-04-03T01:14:39.027Z | 1988-01-01T00:00:00.000 | {
"year": 1988,
"sha1": "8095d172522c0852721b3862bfe8fd44635c73a4",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "8095d172522c0852721b3862bfe8fd44635c73a4",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
268781463 | pes2o/s2orc | v3-fos-license | Research on Convolutional Neural Network-Based Compression Methods for Multispectral Images
Multispectral images have numerous bands, high spatial and spectral redundancy, and a large data volume, which motivates the investigation of compression methods based on convolutional neural networks that reduce the storage space consumed by an individual image and enhance compression effectiveness. This paper first reviews the recent development of image compression algorithms and of deep learning. Building on both, a framework for lossy compression of multispectral images using an end-to-end convolutional neural network is proposed. An autoencoding structure processes the three-dimensional multispectral data, extracting local spectral features and fusing spectral information with large convolutional kernels, while residual layers preserve spectral information. Rate-distortion optimization is performed to jointly optimize image distortion and compression bitrate. Finally, experiments comparing against the traditional JPEG method assess the efficacy of the proposed algorithm: MS-SSIM is improved by nearly 0.08, and the compressed images exhibit no noticeable distortion.
Introduction
In recent years, hyperspectral sensor technology has advanced rapidly, and multispectral cameras have gradually entered daily use. Multispectral images contain a wide range of spectral information and provide observation capabilities beyond human vision. However, they also require increasingly large storage space. The huge data volume poses significant challenges to image storage, transmission, management, and applications, which hampers the application and development of multispectral imaging technology. High-performance multispectral image compression algorithms have therefore become critically important. Traditional compression algorithms for multispectral images can be broadly classified into three categories: prediction-based coding methods [1], vector-quantization-based methods [2], and transform-based compression methods [3]. All traditional methods have limitations. Prediction-based encoding methods have relatively low compression ratios, and the ratios may vary significantly between images. Transform-based compression methods offer adjustable compression ratios and higher compression performance, but they may introduce block artifacts in multispectral images, degrading image quality. Vector quantization methods have relatively high complexity, limiting their application to spectral images. Recently, driven by the success of deep convolutional networks in lossy compression of natural images [4], studies have begun to explore the use of deep learning for image compression. An end-to-end multispectral image compression method based on convolutional neural networks is presented in this paper. The entire three-dimensional data cube is input to the network, and the bitstream is obtained through the encoder, quantizer, and entropy encoder; the decompressed image is then obtained through the decoder. The whole network is optimized following a rate-distortion approach. By combining residual layers and convolutional layers, spatial and spectral features are effectively extracted, enabling the learning of compact representations for multispectral images and significantly improving the compression ratio. Finally, by comparing the compression ratios and reconstruction quality of different compression methods for multispectral images, the performance of the proposed method was validated.
Research on Convolutional Neural Network-Based Compression Methods for Multispectral Images
The multispectral image compression approach is implemented with an end-to-end convolutional neural network comprising four components: an autoencoder, a quantization structure, an entropy encoder, and rate-distortion optimization. The improvements of this end-to-end network over classical convolutional neural network image compression algorithms are reflected in two aspects: the quantization structure adopts multi-level quantization to integer coefficients, which improves quantization efficiency; and a Gaussian mixture model is used for entropy coding, which has a more powerful ability to approximate distributions than a single Gaussian model.
Autoencoder
The autoencoder consists of an encoder and a decoder [5][6][7]. The encoder extracts feature information from the image and reduces its dimensionality. The autoencoder mainly comprises convolutional layers, GDN activation functions, and LeakyReLU activation functions. The architecture of the autoencoder is depicted in Figure 1.
Figure 1.
Network Structure of the Autoencoder. The "input" represents the input data, and "conv" represents the convolutional layer. The convolutional layers perform downsampling operations on the image. The GDN activation function introduces non-linear relationships between the layers of the convolutional neural network, while the LeakyReLU activation function enhances the non-linear relationships between the convolutional layers. The structure of the decoder is completely symmetric to that of the encoder. The decoder reconstructs the feature image generated by the encoder, converting it back to the original image.
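A minimal PyTorch sketch of such an encoder-decoder pair is given below. The channel widths, kernel sizes, number of layers, and the simplified GDN implementation are illustrative assumptions, not the exact configuration of the paper (which also includes residual layers and large spectral-fusion kernels).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GDN(nn.Module):
    """Simplified generalized divisive normalization:
    y_i = x_i / sqrt(beta_i + sum_j gamma_ij * x_j^2)."""
    def __init__(self, ch):
        super().__init__()
        self.beta = nn.Parameter(torch.ones(ch))
        self.gamma = nn.Parameter(0.1 * torch.eye(ch).view(ch, ch, 1, 1))

    def forward(self, x):
        # 1x1 convolution over squared activations implements the sum over j.
        norm = F.conv2d(x * x, self.gamma.abs(), self.beta.abs())
        return x / torch.sqrt(norm + 1e-9)

class Encoder(nn.Module):
    def __init__(self, bands=31, latent=192):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(bands, 128, 5, stride=2, padding=2), GDN(128),       # downsample x2
            nn.Conv2d(128, 128, 5, stride=2, padding=2), nn.LeakyReLU(0.2),
            nn.Conv2d(128, latent, 5, stride=2, padding=2),                 # latent feature map
        )
    def forward(self, x):  # x: (N, bands, H, W)
        return self.net(x)

class Decoder(nn.Module):
    """Mirror of the encoder, with transposed convolutions for upsampling."""
    def __init__(self, bands=31, latent=192):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent, 128, 5, stride=2, padding=2,
                               output_padding=1), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(128, 128, 5, stride=2, padding=2,
                               output_padding=1), GDN(128),
            nn.ConvTranspose2d(128, bands, 5, stride=2, padding=2,
                               output_padding=1),
        )
    def forward(self, y):
        return self.net(y)
```

The 31 input bands match the CAVE data used in the experiments; other datasets would change only the `bands` argument.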
Quantization structure
The feature maps obtained from the multispectral image must then be quantized. Quantization causes information loss, so the quality of the reconstructed images depends greatly on an efficient quantization structure. In this study, we utilize a multi-base quantization approach for converting coefficients into integers [8], aiming to minimize information loss during quantization and to enhance the efficiency of end-to-end training. Since rounding is non-differentiable, the quantization structure substitutes additive uniform noise during training to simulate the process, which preserves gradient propagation and keeps the whole pipeline differentiable.
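In code, the train/test switch can be as simple as the following PyTorch sketch; it shows only the uniform-noise proxy widely used in learned compression, not the full multi-base scheme of Ref. [8].

```python
import torch

def quantize(y, training):
    """Quantize latent coefficients to integers. During training, rounding
    is replaced by additive uniform noise in [-0.5, 0.5] so that gradients
    can propagate through the quantizer."""
    if training:
        return y + torch.empty_like(y).uniform_(-0.5, 0.5)
    return torch.round(y)
```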
Entropy encoder
After feature extraction and quantization by the autoencoder, residual redundancy may remain in the multispectral image representation. To enhance coding performance and eliminate this redundancy, an efficient entropy coding stage is required. This paper employs a Gaussian mixture model (GMM) for entropy estimation [9]. The distribution of a quantized symbol $\hat{y}$ under the GMM is given by equation (1):

$$p(\hat{y}) = \sum_{k=1}^{K} w_k \, \mathcal{N}(\hat{y} \mid \mu_k, \sigma_k^2), \qquad (1)$$

where $w_k$ denotes the weight of the k-th Gaussian component, $K$ denotes the number of Gaussian components, and $\mu_k$, $\sigma_k^2$ are the parameters of each Gaussian; the probability $p(\hat{y})$ drives the entropy coder.
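A sketch of the corresponding rate estimate in PyTorch follows. Evaluating the probability mass of each integer symbol on the interval [y-0.5, y+0.5] and normalizing the weights via softmax are common choices in learned compression, not necessarily the exact parameterization used in the paper.

```python
import torch
from torch.distributions import Normal

def gmm_bits(y_hat, weights, means, scales, eps=1e-9):
    """Estimated code length (in bits) of quantized symbols y_hat under the
    K-component Gaussian mixture of Eq. (1). weights/means/scales are
    tensors broadcastable to y_hat with a trailing K axis."""
    y = y_hat.unsqueeze(-1)                        # (..., 1)
    comp = Normal(means, scales.clamp(min=1e-6))   # K Gaussians
    mass = comp.cdf(y + 0.5) - comp.cdf(y - 0.5)   # per-component interval mass
    w = torch.softmax(weights, dim=-1)             # mixture weights sum to 1
    p = (w * mass).sum(dim=-1).clamp(min=eps)
    return -torch.log2(p).sum()                    # total bits for the tensor
```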
Rate-distortion optimization
In end-to-end coding, the joint tuning of image distortion and compression bitrate is referred to as rate-distortion optimization. The effectiveness of the whole structure relies heavily on accurate estimation of the bitrate and the image distortion, so both must be addressed carefully when optimizing the compression network for multispectral images. Rate-distortion optimization trades off bitrate against image quality, as shown in equation (2):

$$L = \lambda D + R, \qquad (2)$$

where $D$ is the distortion measured by the mean square error, $\lambda$ is a balancing factor, and $R$ represents the rate loss. The distortion term is calculated as shown in equation (3):

$$D = \frac{1}{N} \sum_{n=1}^{N} \| x_n - y_n \|_2^2, \qquad (3)$$

where $x_n$ is the input 3D image, $y_n$ is the recovered 3D hyperspectral image, and $N$ denotes the batch size. The rate is estimated as

$$R = -\frac{1}{N} \sum_{n=1}^{N} \mathbb{E}\left[ \log_2 p(\hat{y}_n) \right], \qquad (4)$$

where $p(\cdot)$ is the probability density function of the continuous distribution obtained after spline interpolation of the intermediate feature map. The more sampling points are used, the more accurate the rate estimate becomes.
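Continuing the PyTorch sketches above, the combined objective can be written in a few lines; normalizing the rate by the number of spatial pixels (bits per pixel) is an assumption of this sketch.

```python
import torch

def rd_loss(x, x_rec, total_bits, lam):
    """Rate-distortion objective of Eq. (2): L = lam * D + R, with D the
    batch mean squared error (Eq. 3) and R the bitrate in bits per pixel."""
    num_pixels = x.shape[0] * x.shape[-2] * x.shape[-1]  # N * H * W
    mse = torch.mean((x - x_rec) ** 2)
    bpp = total_bits / num_pixels
    return lam * mse + bpp
```

Sweeping `lam` over a range of values traces out the rate-distortion curve, which is how the series of models over the balancing factors [0.005, 3.1] described below is obtained.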
Experimental Results and Analysis
The CAVE multispectral dataset was used in this study. It consists of 31 bands from 400 nm to 700 nm, each band having a spatial size of 512 × 512 pixels. We selected 24 scenes from the dataset as the training set for the end-to-end convolutional-neural-network-based multispectral image compression method, and the remaining 8 scenes were used as the test set. Examples of training and test scenes can be seen in Figure 2. As the balancing factor increases, the corresponding S-bpp (bits per pixel) also increases. The Adam algorithm was used to update gradients, and the total number of iterations exceeded 1,000,000 steps. The learning rate decayed slowly from A to B at a rate of 1/10, and a batch size of 2 was used. Because the compression capability and performance of JPEG at such low rates are limited, the evaluation compared only the MS-SSIM (Multi-Scale Structural Similarity) index at the extreme compression limit of JPEG, together with visual comparisons of the decompressed images. Figure 4 compares (a) the original image, (b) the image under JPEG compression at its limit, and (c) the decompressed image produced by the end-to-end convolutional-neural-network-based method with λ = 3.1. Visually, JPEG compression at the current S-bpp introduces significant blocking artifacts, resulting in severe distortion and difficulty in distinguishing image content. In contrast, the method used in this paper, at a similar compression ratio, preserves clearly visible image details and texture information and effectively eliminates visual artifacts such as ringing and aliasing. The decompressed image exhibits high quality and demonstrates good compression performance at low bitrates.
Conclusion
Inspired by deep compression frameworks for natural images, this paper proposes an end-to-end convolutional-neural-network-based approach for multispectral image compression. The multispectral image is input to the encoder as a three-dimensional tensor, enabling the learning of spatial-spectral fusion features. An arithmetic-coding-based entropy encoder further reduces the data volume, and decoding the intermediate features yields the decompressed three-dimensional image. The network's loss function adopts a rate-distortion optimization method. The CAVE dataset is used for both training and testing, and a comparison is made against the JPEG method. Experimental results show significant improvements in MS-SSIM over traditional methods at low bitrates. The proposed method preserves more detail and texture information without introducing visual issues such as blocking artifacts and blurriness, producing images closer to the original. Moreover, compared to traditional methods, the spectral information is better preserved, closely resembling the original image's spectral curve. By combining lossy compression of multispectral images with deep convolutional frameworks, this paper demonstrates the potential of deep learning in multispectral image compression. However, the framework's generalization performance is limited by the reliance on the CAVE dataset for training and testing; future work could involve joint training on multiple multispectral image datasets to enhance the network's performance.
Figure 2.
CAVE Dataset Illustrative Diagram. The research was conducted on an NVIDIA GeForce GTX 1650 GPU with 8 GB of memory. A series of individual compression models were trained over the range of balancing factors [0.005, 3.1]. | 2024-03-31T15:33:52.864Z | 2024-03-01T00:00:00.000 | {
"year": 2024,
"sha1": "f604ca10c57a103d50fcff49a9c096e68b5012e7",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/2717/1/012006/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "36b7a26b3d7c0173b980d98aabd91de490b44218",
"s2fieldsofstudy": [
"Computer Science",
"Environmental Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
10668941 | pes2o/s2orc | v3-fos-license | ISO far-infrared observations of rich galaxy clusters II. Sersic 159-03
The far-infrared emission from rich galaxy clusters is investigated. Maps have been obtained by ISO at 60, 100, 135, and 200 microns using the PHT-C camera. Ground-based imaging and spectroscopy were also acquired. Here we present the results for the cooling flow cluster Sérsic 159-03. An infrared source coincident with the dominant cD galaxy is found. Some off-center sources are also present, but without any obvious counterparts.
Introduction
The first paper in this series (Hansen et al. 1999, Paper I) presented infrared data for the Abell 2670 cluster. We identified 3 far-infrared sources apparently related to star-forming galaxies in the cluster. The present paper concerns the rich cluster Sérsic 159-03. The central part of the Sérsic 159-03 cluster was mapped by the Infrared Space Observatory (ISO) satellite, using the PHT-C camera (Lemke et al. 1996) at 60µm, 100µm, 135µm, and 200µm. The observations were performed twice with slightly different position angles, which gives an opportunity to do independent detections and to study possible instrumental effects.
The Sérsic 159-03 cluster (Abell S1101, z = 0.0564) is of richness class 0, Bautz-Morgan type III with a central dominant cD galaxy (Abell et al. 1989). A cooling flow is present, and Allen and Fabian (1997) found a mass deposition rate of $\dot{M} = 231^{+11}_{-10}\,M_{\odot}\,\mathrm{yr}^{-1}$ from ROSAT PSPC data. The cooling flow is centered on the cD galaxy, which exhibits nebular line emission. Crawford and Fabian (1992) obtained optical spectra and found from line-ratio diagrams that the ratios obtained along the slit bridged the gap between class I and class II in the scheme of Heckman et al. (1989). Their spectra had position angle 90°. West of the center they discovered a detached filament of emission having extreme class II characteristics. They argued that the different line ratios are due to changes in ionization properties. Below, in Fig. 4, we show the extent of the nebular emission. In a subsequent paper Crawford and Fabian (1993) included IUE data to obtain the optical-ultraviolet continuum. They announced that a strong Lyα line is present in the IUE spectrum.
The ISO data
A rectangular area centered on the cD galaxy of Sérsic 159-03 was mapped by ISO on May 7, 1996, during revolution 173. The projected Z-axis of the spacecraft had a position angle of 54.4° on the sky (measured from north through east). The observation was repeated June 4, 1996, during revolution 200, but this time with position angle 69.5°. The observing mode was PHT 32, as for Abell 2670 (Paper I). The 9-pixel C100 detector was used for 60µm and 100µm to map an area of 10.0′ × 3.8′. For 135µm and 200µm the 4-pixel C200 detector was applied to cover a mapped area of 11.0′ × 4.6′. The target dedicated times were 1467 seconds for C100 and 1852 seconds for C200.
Fig. 1. [...] counter-clockwise on the sky with respect to the revolution 173 maps. The C100 maps (left) cover 10.0′ × 3.8′ while the C200 maps (right) cover 11.0′ × 4.6′. The features marked with numbers in the 60µm maps are regarded as real sources. An optical image of the field is shown in Fig. 2.
As described in Paper I, we apply the ISOPHOT Interactive Analysis software (PIA) for the reduction work. We also perform parallel reductions using our own least squares reduction procedure (LSQ, cf. Paper I). Although LSQ does not use sophisticated methods to correct for various effects (e.g. glitches from cosmic rays are simply discarded), we find it valuable for comparisons with the PIA reductions when evaluating the reality of features visible in the frames. The conclusion is that the PIA-reduced images presented here (Fig. 1) do not contain noticeable artifacts from glitches. As in Paper I, we present the data maps with pixel sizes 15″ × 46″ for C100 and 30″ × 92″ for C200, but the instrumental resolution is only about 50″ for C100 and 95″ for C200 (Paper I). The uncertainty of the maps increases towards the left and right borders due to the way the mapping was performed.
Optical data
Optical imaging and spectroscopy were performed in September 1996 using the DFOSC instrument on the Danish 1.54m telescope at La Silla. The field around the center is shown in Fig. 3. In order to image the distribution of the nebular emission we obtained narrow band exposures through a filter (λ6908, FWHM = 98Å, 1 hour) covering the redshifted Hα+[N II] lines and an off-band filter (λ6801, FWHM = 98Å, 1 hour). After scaling and subtraction a Hα+[N II] image is obtained. The central part of this image is shown in Fig. 4.
Details about the spectroscopy are found in Table 1. The slit was positioned on the cD nucleus with two different position angles.
Results
The general brightness distribution in the maps is described most easily for the C200 maps. The 135µm and 200µm maps are rather similar. An enhancement is seen at the center in all four maps concordant with the position of the cD. A maximum is present in the upper left corners. After rotating the revolution 200 maps 15° into coincidence with the rev. 173 maps we find these maxima to overlap, suggesting the presence of one or more real sources. Similarly there are maxima in the upper right corners. Their positions and relative brightness in the maps can be understood if a source is present in the upper right corner of the rev. 200 maps, but just outside the rev. 173 field. A third characteristic feature is the brightness minimum to the lower left (i.e. south) of the center of the C200 maps. Again, when we compare the maps after rotation the reality of this minimum is confirmed. We conclude that the brightness distribution seen in the C200 maps is real.
The C100 maps have the advantage of better resolution, which improves the possibility of identifying optical counterparts. However, the reality of the peaks in the 100µm maps is not convincing when the maps are compared after rotation. Generally the peaks occur at different locations. Even the central source is doubtful: the rev. 200 map shows a weak enhancement slightly displaced to the right of the center, but the rev. 173 map shows a minimum at the same location.
Fig. 4. A filament of emission points from the nucleus along position angle ≈ 20°, flaring towards north some 5″ from the center. Emission towards the southwest is also seen. The filament discovered by Crawford and Fabian (1992) is clearly visible pointing outwards between 6″ and 13″ west of the center. If the image is smoothed, faint emission becomes evident all the way from the center to the filament. Other faint filaments become visible as well, e.g. one associated with the blue object seen in the contours in the upper left part of the figure.
A comparison between the 60µm maps is more successful. Both show a central enhancement (C100-1), although slightly displaced to the right (north) in the rev. 173 map. The maximum brightness (object C100-2) occurs in both maps near the upper left corners, and the peaks overlap after rotation. In Fig. 2 the approximate positions of overlap are marked by numbers for the off-center sources. The rev. 200 map has a peak (C100-3) in the upper right corner which may be related to the source present in the C200 maps. Furthermore, the peak (C100-4) in the right part of the rev. 200 60µm map overlaps with an enhancement in the rev. 173 map. There are disagreements as well, however. The peak obvious in the rev. 173 map below C100-4 (confirmed by the LSQ reductions) is not visible in the rev. 200 map. We conclude that the 60µm sources C100-1, C100-2, C100-3, and C100-4 are likely to be real, but that the present reduction software still produces artifacts calling for caution in the interpretation.
In Paper I we found that aperture photometry of the faint sources suffers significantly from the uncertainty in the evaluation of the background level. We therefore prefer to position, scale and subtract the PSF from the maps. The success in removing the source is then evaluated by eye. By varying the scaling we estimate the maximum and minimum acceptable flux. The median and its deviation from the limits are given in Table 2 for our identified infrared sources. We assume that the two sources in the upper corners of the C200 maps are identical to C100-2 and C100-3. The reality of C100-1 at 100µm may be questionable. C100-3 is outside the field in the rev. 173 map.
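The PSF-scaling flux estimate described above can be sketched in a few lines of NumPy; since the acceptance of a residual is judged by eye in the paper, the rms threshold used here is only a stand-in, and the function names are illustrative.

```python
import numpy as np

def psf_flux_range(map_img, psf, trial_fluxes, accept_rms):
    """Subtract a positioned, scaled PSF from the map for a grid of trial
    fluxes; return (min, median, max) of the scalings whose residual is
    acceptably flat. The rms criterion stands in for the by-eye judgement."""
    ok = [s for s in trial_fluxes
          if np.std(map_img - s * psf) <= accept_rms]
    if not ok:
        return None
    return min(ok), float(np.median(ok)), max(ok)
```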
The cD galaxy
The central infrared source, C100-1, is detected in all maps except at 100µm. The measured fluxes in the two independent observations also agree within the limits. We therefore regard the source as real. A comparison with the list of Jura et al. (1987) shows that the luminosity of Sérsic 159-03 at 60µm is larger than that of other early-type galaxies detected by IRAS by an order of magnitude or more, except the extraordinarily bright galaxy NGC 1275, which is the center of the Perseus cluster cooling flow, and which is undergoing an encounter with another galaxy (e.g. Nørgaard-Nielsen et al., 1993).
In a previous paper (Hansen et al. 1995) we presented a model for the infrared emission from Hydra A measured by IRAS. We assumed that most of the mass cooling out of the cluster gas ends up in low mass stars forming in the flow. We further assumed that dust grains were able to grow in the cool pre-stellar clouds, converting a fraction y of the mass into grains. If the mechanism is effective we expect y ≈ 1%. After a star has formed, the remaining material is dispersed in the hot cluster gas. If a fraction f is recycled to the hot phase, a dust mass of y × f × Ṁ is continuously injected into the cluster gas. A priori we expect f to be approximately 1-50%. The grains are destroyed by sputtering on a time scale τ_d, and a steady state is obtained. At any time a dust mass of M_d = y × f × Ṁ × τ_d is present. The grains are heated by hot electrons (in the inner galaxy the photon field may also be important), and the infrared emission can be evaluated. The present data do not allow testing of more elaborate models having radial distributions of e.g. the dust temperature. We therefore only make a simple estimate using mean values.
For Hydra A we found that y = 1% and f = 11% reproduced the observed IRAS flux. In Table 3, which gives the calculated fluxes, we repeat the calculations for Sérsic 159-03, but with f reduced to 2%. Considering the crude model and the uncertainty of the measurements, we find the agreement with the observed values in Table 2 satisfactory. This result has some significance although f has been used as a free parameter to obtain concordance. If a value of f much larger than unity had been necessary to fit the observations, the model would have had to be rejected. Also, a value significantly lower than 1% would have made the model unconvincing.
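The steady-state bookkeeping of the model is simple enough to evaluate directly. The sketch below uses y = 1% and f = 2% from the text, together with the quoted Ṁ = 231 M_⊙ yr⁻¹, while the sputtering time τ_d is an assumed, illustrative value, not one quoted here.

```python
def steady_state_dust_mass(y=0.01, f=0.02, mdot=231.0, tau_d=1.0e7):
    """M_d = y * f * Mdot * tau_d, in solar masses.
    mdot is in M_sun/yr; tau_d = 1e7 yr is an assumed sputtering timescale."""
    return y * f * mdot * tau_d

print(steady_state_dust_mass())  # ~4.6e5 M_sun for the illustrative tau_d
```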
A possible disagreement with the model is, however, the small extent of the source. One would expect the infrared emission to show some distribution within the cooling radius, which is 1.89′. Although the resolution at 60µm is 50″, C100-1 is indistinguishable from a point source in all our measurements. The reason could be that (1) the star formation is concentrated to the center (as seems to be the case for Hydra A, see Hansen et al. 1995), (2) the model does not apply, or (3) instrumental effects prevent detection of a faint, extended distribution of FIR emission.
Alternative possibilities are that C100-1 is related to the active nucleus, as inferred from the presence of a radio source (Large et al. 1981; Wright et al. 1994), or that dust has been introduced into the system by a recent merger event. A hint may be that all three measurable images of the revolution 173 maps show a tendency to be displaced from the center by ≈10″ to the north, where nebular line emission is seen (Fig. 4). The cD galaxy shows no signs of dust lanes, but exhibits a constant distribution in colour (Fig. 3). There are, however, two objects in the upper left part of Fig. 3 which are bluer than the cD. The brightest and bluest of these looks disturbed, possibly due to tidal interaction. The spectra taken with P.A. = 21° cover the object and contain emission lines. The emission is weak in Fig. 4 because the lines are shifted away from the peak transmission of the filter. Relative to the cD we find the velocity of the object to be +1800 ± 200 km s⁻¹. The galaxy may have plunged through the cD and contains young stars.
The origin of the optical filaments in Fig. 4 is a puzzle. It may be captured material from mergers, related to radio plasma, or connected to the cooling flow. The relative velocities do not support any particular model. The velocities have been measured from our spectra, and they are quite low as seen from Table 4. Donahue and Voit (1993) obtained spectra of the nuclear emission from the Sérsic 159-03 cD galaxy. They argued that the lack of [Ca ii] λ7291 emission indicates that Ca is depleted onto dust grains. We have added all our spectra of the center together and all of the filaments. No [Ca ii] emission was visible in any of the two resulting spectra. We then shifted the [N ii]λ6583 to the expected position of [Ca ii] and added the shifted line after scaling with various constants. In this way we find that no [Ca ii] emission stronger than 0.20 times [N ii]λ6583 is present. Figure 1 of Donahue and Voit (1993) predicts (from ionization calculations) that this ratio should never be smaller than 0.24. Although marginal compared to the case of Hydra A the The presence of dust in the nebular gas does not necessarily exclude that it originates from the cooling cluster gas. Dust may grow in dense, cool clouds in connection with star formation. For the nebular gas in Hydra A Donahue and Voit (1993) found a much tighter limit on the [Ca ii] line strongly suggesting the presence of dust. In Hydra A the nebular gas is concentrated to a central disklike structure of several kpc where vigorous star formation has taken place, and Hansen et al. (1995) argue that it is a result of the cooling flow (see also McNamara, 1995). In Sérsic 159-03 the extended nature of the filaments and the presence of the blue, star forming object is more in favour of a merger scenario, however.
Off-center infrared sources
There are no striking optical identifications for the off-center sources. The position of C100-2 is relatively well determined by the overlap of the two observations. The nearest object visible in Fig. 2 is ≈0.5′ to the south-west, is unresolved and of blue colour. It is not a known QSO (no QSO is closer than 30′ in the NASA/IPAC Extragalactic Database), and it is just outside the overlap of the two observations. There are several faint optical objects in the area of C100-3, but none shows up in our data with characteristics favouring a candidateship. The difficulties in pointing out candidates are even more pronounced for C100-4, which agrees poorly with the nearest faint objects in Fig. 2. However, C100-4 is also the most uncertain of the sources, as it is only visible at 60µm.
Conclusion
The availability of two observations covering essentially the same field at several wavelengths allows us to identify 4 faint (≈ 0.1 Jy) far-infrared sources with some confidence. A central source, C100-1, is attributed to the cD galaxy which contains optical filaments, but our optical images do not reveal significant evidence of dust lanes. The fluxes measured for C100-1 are of the same order of magnitude as expected from dust related to star formation in the cooling flow. For the non-central sources we cannot point out any particular optical candidates in contrast to the results from the Abell 2670 field (paper i) where galaxies with enhanced star formation were found coincident with the infrared sources. | 2014-10-01T00:00:00.000Z | 2000-02-22T00:00:00.000 | {
"year": 2000,
"sha1": "57f468f4ec49e9ae57693f55cd2da98627d61b6b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "57f468f4ec49e9ae57693f55cd2da98627d61b6b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
118256440 | pes2o/s2orc | v3-fos-license | Tunneling conductance in half-metal/conical magnet/superconductor junctions in the adiabatic and non-adiabatic regime: self-consistent calculations
The tunneling conductance in half-metal/conical magnet/superconductor (HM/CM/SC) junctions is investigated by the use of the combined Blonder-Tinkham-Klapwijk (BTK) formalism and the Bogoliubov-de Gennes (BdG) equations. We show that the conductance calculated self-consistently differs significantly from the one calculated in the non-self-consistent framework. The use of the self-consistent procedure ensures that charge conservation is satisfied. Due to the spin band separation in the HM, the conductance in the subgap region is mainly determined by the anomalous Andreev reflection, the probability of which strongly depends on the spin transmission in the CM layer. We show that the spin of an electron injected from the HM can be transmitted through the CM to the SC adiabatically or non-adiabatically, depending on the period of the exchange field modulation. We find that the conductance in the subgap region oscillates as a function of the CM layer thickness, wherein the oscillations transform from irregular, in the non-adiabatic regime, to regular in the adiabatic case. In the non-adiabatic regime the decrease of the exchange field amplitude in the CM leads to the emergence of a conductance peak for one particular CM thickness, in agreement with experiment [J. W. A. Robinson, J. D. S. Witt and M. G. Blamire, Science 329, 5987]. For both transport regimes the conductance is analyzed over a broad range of parameters determining the spiral magnetization in the CM.
I. INTRODUCTION
In recent years the quantum transport in ferromagnet/superconductor (FM/SC) junctions has attracted growing interest due to the possible existence of spin-triplet pairing [1][2][3][4] and novel transport phenomena related to this unique superconducting state. 5,6 For s-wave superconductors with a spatially symmetric orbital part of the Cooper pair wave function, the Pauli principle requires that the spin part is antisymmetric, which means that the spin singlet seems to be the only possible state for the Cooper pair. However, many years ago Berezinskii 7 proposed the possible existence of a spin-triplet state in a system with s-wave interaction which does not violate the Pauli principle. The triplet pairing correlations proposed by Berezinskii are odd in time (or frequency) and can appear in systems with some sort of time-reversal symmetry breaking mechanism. Recent studies suggest that spin-triplet Cooper pair correlations can be induced and observed experimentally in FM/SC junctions with a spin-active or magnetically inhomogeneous interface. [8][9][10] In normal metal/superconductor (NM/SC) junctions the electrons incident on the interface from the NM side are reflected as holes with opposite spins. This mechanism, known as Andreev reflection, 11 leads to the proximity effect: the superconducting pairing correlations penetrate into the normal metal over a distance as long as one micron at low temperature. 12 The proximity effect changes significantly if we replace the normal metal by a ferromagnet. The exchange interaction in the ferromagnet results in different Fermi wave vectors for the electrons with opposite spins forming the Cooper pairs. This wave vector mismatch is compensated by the non-zero total momentum of the electron pairs, giving rise to oscillations of the spin-singlet superconducting correlations in the ferromagnet, 5,13 known as FFLO oscillations. 14,15 Since the exchange field tends to align the electronic spins along the field direction, the spin-singlet superconducting correlations in the ferromagnet are strongly suppressed, leading to a short-range penetration length. In contrast to the short-range proximity effect for the spin-singlet state, the spin-triplet state with m = ±1, with both electronic spins of the Cooper pair directed along the exchange field, is robust against the pair breaking induced by the exchange interaction. Therefore the spin-triplet superconducting correlations (m = ±1), if they exist, can penetrate the ferromagnet over a distance comparable to that observed in NM/SC junctions. This phenomenon, called the long-range proximity effect, was predicted theoretically by Bergeret et al. (see Refs. 6, 8, and 9). It was found 6,8,9 that in FM/SC multilayer junctions with a spin-active or magnetically inhomogeneous interface (where spin-flip processes are possible) all three components m = 0 and m = ±1 of the spin-triplet state can arise. Despite several theoretical studies on the spin-triplet pairing induced by a spin-active interface, including the effect of a domain wall, 16 spin-orbit coupling 17 or a spin-dependent potential, 18 to date the direct evidence of the spin-triplet supercurrent has been observed in multilayer FM/FM/SC systems with a noncollinear magnetization of the ferromagnetic layers. [19][20][21] A first experimental hint for the long-range proximity effect was reported in a half-metal Josephson junction based on CrO2. 22
However, since the measured critical current varied by two orders of magnitude in similar samples, the results of this experiment needed to be confirmed. Strong evidence for the long-range proximity effect was then reported in Josephson junctions based on Co. 23 The dependence of the critical current on the Co layer thickness, which agrees with the theoretical expectations, provides a strong experimental confirmation of the existence of spin-triplet pairing in FM/SC heterojunctions. Further studies on the spin-triplet pairing concerned FM/SC/FM and FM/FM/SC junctions with a relative magnetization between the ferromagnets. The spin-triplet pairing in clean FM/SC/FM nanostructures with an arbitrary angle between the magnetizations of the FM layers was theoretically studied by Halterman et al. in Refs. 24-27. The authors used self-consistent solutions of the microscopic Bogoliubov-de Gennes (BdG) equations and analyzed the spin-triplet correlations as a function of the relative magnetization between the magnets. The self-consistent calculations allowed them to confirm the experimentally observed angular dependence of the critical temperature T_c, which monotonically increases due to the presence of the long-range spin-triplet correlations: T_c reaches a minimum if the relative magnetization is parallel and a maximum for antiparallel magnetization. 27 A different behavior was observed for FM/FM/SC nanostructures, for which the critical temperature is minimized for perpendicular alignment of the magnetization. 28,29 Research on the spin-triplet pairing in FM/FM/SC junctions has recently been extended to systems with conical (helical) ferromagnets (CM). Efforts to control the long-range triplet supercurrent have recently been demonstrated in Josephson junctions based on a holmium (Ho)-cobalt (Co)-holmium (Ho) multilayer setup. 30 A nonmonotonic dependence of the critical supercurrent on the Ho layer thickness, d_CM, has been observed, with peaks for d_CM = 4.5 nm and 10 nm. With increasing Co layer thickness a slow decay of the critical current has been reported, in agreement with theoretical calculations. 31 Nevertheless, the theoretical model presented in Ref. 31 does not explain the complex dependence of the critical supercurrent on the Ho thickness. The nonmonotonic behavior of I_C(d_CM) has been obtained by Halász et al. in Ref. 32, who performed calculations in the clean limit using Eliashberg equations. A similar dependence has also been demonstrated by the use of the Blonder-Tinkham-Klapwijk (BTK) approach. 33,34 In the mentioned theoretical works 32-34 the proximity effect at the CM/SC interface has been neglected, meaning that the superconducting pair potential has been assumed to be a step function. However, as shown by recent studies, 35 only self-consistent calculations of the tunneling conductance guarantee that the charge conservation law is satisfied. It means that one cannot properly determine the tunneling conductance in FM/CM/SC heterostructures by using a non-self-consistent framework; the full self-consistent approach is needed. Self-consistent calculations of the spin-triplet correlations in two-layered CM/SC junctions have been presented in Refs. 36 and 37. Nevertheless, these studies concern only the spin-triplet correlations between the CM and SC.
They do not include the analysis of the tunneling conductance (transport calculations), the influence of the FM layer attached to the CM or the influence of the CM layer thickness. Summing up, the theoretical analysis of the tunneling conductance through the FM/CM/SC heterojunctions with the inclusion of the proximity effect in the full self-consistent framework has not been presented until now.
In the present paper we report the full self-consistent calculations of the tunneling conductance in the HM/CM/SC junctions. The charge transport in the considered system is mainly determined by the anomalous Andreev reflection, the probability of which strongly depends on the spin transmission in the CM layer. We consider the conductance in two cases in which the spin transport is adiabatic and non-adiabatic. The conductance is analyzed over a broad range of parameters determining the spiral magnetization in the CM. We show that the tunneling conductance in the HM/CM/SC junctions strongly depends on the spin transport regime. The paper is organized as follows: in Sec. II we introduce the basic concepts of the theoretical scheme based on the self-consistent solution of the BdG equations and the BTK formalism. In Sec. III we present the results while the summary is included in Sec. IV.
II. THEORETICAL METHOD
We consider the FM/CM/SC structure schematically illustrated in Fig. 1. In the x-z plane the system is assumed to be infinite, while the y axis is perpendicular to the layers, whose lengths are denoted by d_FM, d_CM, and d_SC, respectively. The value and direction of the exchange field h, denoted by the red arrow in Fig. 1, depend on position. It is directed along the z axis in the ferromagnet, h = (0, 0, h_FM), while in the conical ferromagnet h is given by

$$\mathbf{h}(y) = h_{CM} \left[ \cos\alpha \, \hat{y} + \sin\alpha \left( \sin\!\left(\frac{\beta y}{a} + \beta_0\right) \hat{x} + \cos\!\left(\frac{\beta y}{a} + \beta_0\right) \hat{z} \right) \right], \qquad (1)$$

where h_CM is the exchange field amplitude, β_0 determines the angle of the relative magnetization of the CM layer measured from the FM layer at the FM/CM interface, while α and β are the cone and rotation angles whose physical meaning is depicted in Fig. 1(b). From Eq. (1) the spatial period of the helical exchange field is λ = 2πa/β, where a is the lattice constant.
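For concreteness, the field profile of Eq. (1) as reconstructed above can be evaluated with a few lines of NumPy; the component assignment follows that reconstruction (cone about the growth axis y, transverse component aligned with the FM z axis at the interface for β_0 = 0) and should be checked against Fig. 1(b).

```python
import numpy as np

def conical_field(y, h_cm, alpha, beta, beta0, a=0.35):
    """Exchange field profile of Eq. (1): cone half-angle alpha about the
    growth axis y, transverse component rotating with phase beta*y/a + beta0
    (helix period 2*pi*a/beta). Returns (..., 3) array of (hx, hy, hz)."""
    phi = beta * np.asarray(y) / a + beta0
    hx = h_cm * np.sin(alpha) * np.sin(phi)
    hy = h_cm * np.cos(alpha) * np.ones_like(phi)
    hz = h_cm * np.sin(alpha) * np.cos(phi)
    return np.stack([hx, hy, hz], axis=-1)
```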
In the present paper we consider the charge transport through the FM/CM/SC junction and analyze the tunneling conductance as a function of the system parameters. As mentioned above, the correct analysis of the conductance behavior requires the inclusion of the proximity effect. This can be done only by full self-consistent calculations in which the pair potential distribution is determined from the microscopic BdG equations. The self-consistent procedure used in the paper consists of two steps. First, we determine the self-consistent pair potential ∆(y) in the nanostructure by solving the BdG equations. Then, ∆(y) is used to calculate the tunneling conductance within the BTK approach. [38][39][40] Below, both these steps are described in detail.
A. Self-consistent pair potential ∆(y) calculations
The effective BCS Hamiltonian of the considered system is given by

$$\hat{H}_{eff} = \int d^3r \Big\{ \sum_{s,s'} \hat{\Psi}^{\dagger}_{s}(\mathbf{r}) \Big[ \Big( -\frac{\hbar^2}{2m}\nabla^2 - \mu(\mathbf{r}) \Big) \delta_{ss'} - \big( \mathbf{h}(\mathbf{r}) \cdot \boldsymbol{\sigma} \big)_{ss'} \Big] \hat{\Psi}_{s'}(\mathbf{r}) + \Delta(\mathbf{r}) \hat{\Psi}^{\dagger}_{\uparrow}(\mathbf{r}) \hat{\Psi}^{\dagger}_{\downarrow}(\mathbf{r}) + \Delta^{*}(\mathbf{r}) \hat{\Psi}_{\downarrow}(\mathbf{r}) \hat{\Psi}_{\uparrow}(\mathbf{r}) \Big\}, \qquad (2)$$

where $\hat{\Psi}^{\dagger}_{s}(\mathbf{r})$, $\hat{\Psi}_{s}(\mathbf{r})$ are the creation and annihilation operators with spin s, $\mathbf{h}(\mathbf{r}) = (h_x(\mathbf{r}), h_y(\mathbf{r}), h_z(\mathbf{r}))$ is the exchange field, $\boldsymbol{\sigma} = (\sigma_x, \sigma_y, \sigma_z)$ is the vector of Pauli matrices, $\mu(\mathbf{r})$ is the chemical potential, and $\Delta(\mathbf{r})$ is the spin-singlet pair potential in real space defined as

$$\Delta(\mathbf{r}) = g(\mathbf{r}) \langle \hat{\Psi}_{\downarrow}(\mathbf{r}) \hat{\Psi}_{\uparrow}(\mathbf{r}) \rangle, \qquad (3)$$

where $g(\mathbf{r})$ is the phonon-mediated electron-electron coupling constant. The generalized Bogoliubov transformation

$$\hat{\Psi}_{s}(\mathbf{r}) = \sum_{n} \left[ u_{ns}(\mathbf{r}) \gamma_n + \eta_s v^{*}_{ns}(\mathbf{r}) \gamma^{\dagger}_n \right], \qquad (4)$$

where $\gamma_n$ and $\gamma^{\dagger}_n$ are the quasiparticle annihilation and creation operators, $u_{ns}(\mathbf{r})$ and $v_{ns}(\mathbf{r})$ are the electron and hole components of the amplitude vector and $\eta_s = 1(-1)$ for spin down (up), reduces the Hamiltonian (2) to diagonal form. By the commutation relation

$$[\hat{H}_{eff}, \gamma_n] = -E_n \gamma_n, \qquad [\hat{H}_{eff}, \gamma^{\dagger}_n] = E_n \gamma^{\dagger}_n, \qquad (5)$$

where $H_e = -\frac{\hbar^2}{2m}\nabla^2 - \mu(\mathbf{r})$ is the single-electron Hamiltonian, and using the fact that the system is infinite in the x-z plane (so that the in-plane wave vector $k_{\parallel}$ is conserved and the amplitudes depend on y only), with

$$H_e = -\frac{\hbar^2}{2m}\frac{d^2}{dy^2} + \frac{\hbar^2 k_{\parallel}^2}{2m} - \mu(y), \qquad (6)$$

we obtain the BdG equations in the quasi-one-dimensional form

$$\begin{pmatrix} H_e - h_z & -h_x + i h_y & 0 & \Delta \\ -h_x - i h_y & H_e + h_z & \Delta & 0 \\ 0 & \Delta^{*} & -(H_e - h_z) & -h_x - i h_y \\ \Delta^{*} & 0 & -h_x + i h_y & -(H_e + h_z) \end{pmatrix} \begin{pmatrix} u_{n\uparrow}(y) \\ u_{n\downarrow}(y) \\ v_{n\uparrow}(y) \\ v_{n\downarrow}(y) \end{pmatrix} = E_n \begin{pmatrix} u_{n\uparrow}(y) \\ u_{n\downarrow}(y) \\ v_{n\uparrow}(y) \\ v_{n\downarrow}(y) \end{pmatrix}. \qquad (7)$$

Equations (7) are coupled with the expression for the pair potential given by

$$\Delta(y) = \frac{g(y)}{2} \sum_{n} \left[ u_{n\uparrow}(y) v^{*}_{n\downarrow}(y) + u_{n\downarrow}(y) v^{*}_{n\uparrow}(y) \right] \left[ 1 - 2 f(E_n) \right], \qquad (8)$$

where $f(E)$ is the Fermi-Dirac distribution. The summation in Eq. (8) is carried out only over the electronic states with energy $E_n$ inside the Debye window $|E_n| < \hbar\omega_D$, where $\omega_D$ is the Debye frequency. In our approach $g(\mathbf{r})$ is assumed to be nonzero only in the SC layer.
The self-consistent procedure used to solve the BdG equations (7) is similar to these reported in previous papers. [24][25][26]36 The main difference is that the assumed basis functions have the form of the plane waves. Such choiceis needed in the transport calculations because it guarantees nonzero current at the boundaries of the system. The self-consistent procedure can be described as follows. First, the BdG equations (7) are diagonalized in the basis of the plane waves whereũ nq↑ ,ũ nq↓ ,ṽ nq↑ ,ṽ nq↓ are the expansion coefficients, k q = 2πq/L is the wave vector with q being an integer while L is the total length of the nanostructure. Then, using the calculated wave functions (u n↑ , u n↓ , v n↑ , v n↓ ) T the new pair potential ∆(y) is determined on the basis of Eq. (8). This new ∆(y) distribution is used in the next iteration in which we again solve the BdG equations and determine ∆(y). This procedure is repeated until the convergence is reached. Due to the high computational complexity of such scheme the parallel implementation of the numerical procedure is required. Finally, we also calculate the magnetization vector m given by the formula where µ B are the Bohr magneton.
B. Tunneling conductance calculations
The tunneling conductance calculations have been performed within the tight-binding approximation using the Kwant package. 41 For this purpose we have transformed the BdG equations (7) into the discretized form on the grid y ν = νa with lattice constant a (ν = 1, 2, . . .). We introduce the discrete representation of the quasi-particle wave vector as follows: |Ψ(y ν ) = |u ↑ (y ν ) , |u ↓ (y ν ) , |v ↑ (y ν ) , |v ↓ (y ν ) T ≡ |Ψ ν , Introducing a set ρ ρ ρ of Pauli-like matrices in electron-hole space, the discretized tight-binding form of the Hamiltonian in Eq. (7) is given by Let us assume that the electron with spin-up is injected from the FM into the SC through the CM layer. There are five possible scattering processes: normal reflection with spin conservation (R ↑↑ ee ), normal reflection with spin-flip (R ↓↑ ee ), reflection as a hole with opposite spin (normal Andreev reflection, R ↓↑ he ), reflection as a hole with spin conservation (anomalous Andreev reflection, R ↑↑ he ) and transmission as a quasi-particle T e↑ . In the above, R ↑(↓)↑(↓) e(h)e(h) denotes the reflection probability where upper and lower right index corresponds to the state of an incident particle while upper and lower left index is associated with the reflected one, T e↑ is the transmission probability where upper indexes indicate the state of incident particle. Analogous scattering processes can be distinguished for the spin-down electron injected from the FM. Their probabilities are marked by R ↓↓ ee , R ↑↓ ee , R ↑↓ he , R ↓↓ he and T e↓ , respectively. According to the BTK approach the current through the FM/CM/SC junction can be calculated from the formula where V is the bias voltage and f (E) is the Fermi-Dirac distribution. At low temperature the energy dependent where h F M , expressed in units of µ, corresponds to the spin polarization at the Fermi level in the FM layer. In our calculations the reflection probabilities in Eq. (15) are determined by the use of the Kwant package 41 which requires the implementation of the discretized tightbinding Hamiltonian given by Eq. (13). In the paper, we consider the forward tunneling conductance with the angle of the incident electron θ = 0 and neglect the scattering potential at the interfaces.
III. RESULTS AND DISCUSSION
In this section we analyze the tunneling conductance through the FM/CM/SC junctions by the use of the full self-consistent approach presented in Sec. II. Since the first experimental evidence for the long-range proximity effect (spin-triplet pairing) was reported in a half-metal Josephson junction, 22 we restrict our analysis to the case in which the ferromagnetic layer is embedded in a halfmetal (HM), h F M = 1. In our calculations we neglect the Fermi wave-vector mismatch between the layers assuming a constant value of the chemical potential µ throughout the nanostructure. Its value is used as the energy unit. In the calculations we adopt the following values of the parameters: zero temperature energy gap in the bulk ∆ 0 = 0.01, Debye energyhω D = 0.1, temperature k b T ≈ 10 −5 and the lattice constant a = 0.35 nm corresponding to the conical magnet Holmium. 30 Other parameters determining the magnetic configuration of the MC, namely the exchange field amplitude h CM , the cone angle α, the rotation angles β and β 0 , as well as the CM layer thickness d CM are used to analyze the tunneling conductance through the HM/CM/SC junctions and vary from one simulation to another.
As predicted by Bergeret et al. 8,9 the join effects of the Andreev reflection and the proximity at the FM/SC interface allow for the coexistence of the spin-singlet pairing correlations (| ↑↓ − | ↑↓ )/ √ 2 and the spin-triplet pairing correlations with the total spin projection m = 0 (| ↑↓ + | ↑↓ )/ √ 2). If a magnetically inhomogeneous layer, such as the CM layer, is present between the FM and SC, the spin-triplet state with m = 0 can be rotated to the state with m = 1 (| ↑↑ ). It means that the existence of the spin-triplet pairing with m = 1 strongly depends on the spin transmission in the CM layer. This, in turn, is determined by the exchange field which in the CM has a rotating component varying with the period λ. Depending on λ the spins of electrons injected from the FM can be transmitted through the CM adiabatically -the spin orientation follows the spatial modulation of the exchange field, or non-adiabatically -the period of the exchange field modulation is so short that the electron spin is not able to adopt to the field changes. The degree of adiabaticity can be defined by the parameter Q = ω L /ω h , where ω h = 2πV F /λ is the magnetic field modulation frequency in the electron's frame of reference, V F is the Fermi velocity and ω L = h CM /h is the frequency of the spin Larmor precession. In the adiabatic regime, Q >> 1. Below, we analyze the tunneling conductance through the HM/CM/SC junctions in both adiabatic and non-adiabatic regimes.
A. Non-adiabatic regime
All results presented in this subsection have been obtained for the rotational angle β = 30 o which corresponds to the spatial period of the helical exchange field λ = 3.4 nm measured in the Holmium. 30 For this value of λ the spin transport through the considered system is non-adiabatic, Q < 1. In Fig. 2 we present the pair potential and the magnetization in the nanostructure calculated for h CM = 1, α = 90 • and the thickness of the conical magnet layer d CM = 20 nm. As shown in Fig. 2(a) the self-consistent pair potential significantly differs from the one used in the non-self-consistent approach. Due to the proximity effect ∆(y) does not have the step-like form but smoothly increases in the SC region reaching its bulk value ∆ 0 for a distance grater than the coherence length. Similarly, as the magnetism alters the superconductivity near the CM/SC interface, the superconductivity also influences the magnetism. This so-called reverse proximity effect allows to penetrate the magnetization into the SC region as presented in Fig. 2(b,c). Almost a tenfold reduction of the magnetization amplitude in the CM layer (red lines), as compared to the exchange field (blue dashed lines), results from the fact that the chosen value of λ corresponds to the non-adiabatic transport regime. In this regime the changes of the exchange field seen by elec-trons flowing through the nanostructure are so fast that the their spin do not have enough time to adopt to these changes. As a result, the electron spin rotates around the exchange field irregularly. Note that, in accordance with Eqs. (10)- (12), the magnetization is expressed as a sum of the averaged spin over states with different wave vectors. Since spins of these states rotate around h with different irregular frequency, this sum averages to low value. In subsection B we will show that the suppression of the magnetization does not dependent on the conical magnet thickness d CM but, as expected, is mainly determined by the spatial period of the helical exchange field λ. For the self-consistent pair potential ∆(y) we calculate the tunneling conductance using the procedure described in Sec. II B. Figure 3 presents the normalized conductance G as a function of energy for different thicknesses of the CM layer calculated by the use of the non-selfconsistent (dashed lines) and the self-consistent (solid lines) procedure. For comparison, we also mark the conductance calculated without the CM (gray lines). As we see the dashed and solid gray lines overlap which results from the fact that the conductance in this case is nonzero only for the high-energy limit, above the energy gap, for which the electron incident into the superconductor does not experience much difference between the step-like pair potential and the smooth pair potential from the self-consistent approach. Results presented in Fig. 3 clearly show that the self-consistent conductance is considerably different than this obtained in the non-selfconsistent framework. The most pronounced difference between them is observed in the subgap energy range. Based on the results presented in Fig. 2 and 3 one can formulate the following conclusion: to properly determine the tunneling conductance in the HM/CM/SC heterojunctions the full self-consistent calculations including the proximity effect are needed.
To explain the conductance behavior presented in Fig. 3 (we assume no scattering potential at the interface). The situation diametrically changes if we put the conical magnet between the HM and SC. As predicted by Bergeret et al. 8,9 the magnetic inhomogeneity at the FM/SC interface can induce the non-zero correlations of all three components m = 0 and m = ±1 of the spin-triplet state. As a consequence, one appears an extra scattering mechanism, called the anomalous Andreev reflection in which electron incident into the SC is reflected as a hole with the same spin (in contrast to the normal Andreev reflection in which the incident electron and the reflected hole have opposite spins). For the HM/CM/SC junctions, this new scattering mechanism, if it exists, leads to the nonzero conductance in the subgap region as presented in Fig. 3. As one can see the conductance strongly depends on the thickness of the conical magnet layer, d CM , i.e. its value reaches minimum for d CM = λ and maximum for d CM = 1.5λ, respectively. In Fig. 4 we present the reflection and transmission probabilities (R ↑↑ ee ), (R ↓↑ ee ), R ↓↑ he , R ↑↑ he , T e↑ as a function of energy for these two distinguished thicknesses. We see that for the CM thickness d CM = 1.5λ the increase of the conductance in the subgap region is mainly determined by the increase of the anomalous Andreev reflection probability R ↑↑ he . On the other hand the probability R ↑↑ he is suppressed for d CM = λ for which the normal reflection with spin conservation (R ↑↑ ee ) emerges and also contributes to the conductance value leading to its decrease. Regardless of the CM layer thickness, for low energy, the anomalous Andreev reflection probability R ↑↑ he drops to zero while the normal reflection R ↑↑ ee increases to unity. This results in the zero conductance at zero energy as demonstrated in Fig. 3. Now, we discuss the thickness dependence of the con-ductance G(d CM ), important from the viewpoint of experiments in which the critical current is measured as a function of the CM layer thickness. In Fig. 5 we present the tunneling conductance as a function of energy and d CM . We see that the conductance oscillates as Fig. 6(a) the non-self-consistent dependence G(d CM ) for E/∆ 0 = 0.02 are also presented. As we see the peaks in conductance calculated in the non-self-consistent framework are greater than the corresponding peaks calculated self-consistently. Moreover, the conductance decay rate (with increasing d CM ) is slower than in the self-consistent approach.
The irregular nonmonotonic dependence of the conductance G(d CM ) presented in Figs. 5 and 6 can be explained as follows. If the spin-active region (the CM layer) is present at the FM/SC interface the anomalous Andreev reflections can appear giving raise to the nonzero conductance in the subgap region. 8,9 The probability R ↑↑ he depends mainly in the spin transition in the CM layer. Note, that the strength of the spin-flip scattering in the CM is proportional to the off-diagonal matrix elements of the Hamiltonian (7) which have the form −h x (y)+ih z (y).
Since h x (y) and h z (y) vary periodically, the strength of the spin-flip scattering also oscillates with increasing CM layer thickness. Nevertheless in the non-adiabatic regime the modification of the exchange field seen by electrons flowing through the nanostructure is so fast that the electronic spins are not able to follow these changes. It entails the irregular oscillations of the anomalous Andreev reflection probability and, in consequence, the irregular oscillations of the conductance depicted in Figs. 5 and 6.
In delineating the role of the spin-triplet pairing in the charge transport through the HM/CM/SC junctions, it is necessary to understand the behavior of the conductance under the influence of the magnetic configuration in the CM layer determined by the value of the exchange field amplitude h CM and the angles α and β 0 (see Eq. 1). In Fig. 7 pression of the conductance in the subgap region with decreasing exchange field amplitude h CM . The comparison of Fig. 7 and Fig. 5 allows to conclude that the strength of this conductance suppression depends on the CM layer thickness. It is minimal for d CM = 2.5λ. Note that, for h CM = 0.2 the conductance peak for d CM = 2.5λ is still well pronounced [ Fig. 7(a)]. Further reduction of the amplitude h CM leads to the situation in which the conductance peak survives only for d CM = 2.5λ in consistency with the experimental measurements reporting the peak of the critical current exactly for this value of the CM layer thickness. Although the conductance for d CM = 2.5λ decreases slower than for other thicknesses, even for this value of d CM the conductance in the subgap region is suppressed with decreasing h CM . This suppression is depicted in Fig. 8 which presents G(E) for different values of the exchange field amplitude h CM . We see that in the limit h CM → 0, as expected, the conductance in the subgap region tends to zero. We should also no- As depicted in the insert of Fig. 8 for the HM/CM/SC junctions the dependence G(E = ∆ 0 , h CM ) is an increasing function of h CM in contrast to the FM/SC structure. This results from the fact that the conductance for the considered energy is mainly determined by the anomalous Andreev reflections whose probability increases with increasing amplitude of the spiral magnetic configuration in the CM layer. The magnetic configuration in the CM layer can be modified not only by changing the amplitude h CM but also by changing the spatial configuration determined by the angles α, β and β 0 . Figure 9(b) presents the conductance G(E) for different angles β 0 of a relative magnetization of the CM layer measured from the HM layer at the HM/CM interface. In panel (a) we present the z-component of the magnetization m z (z) for three different angles β 0 . Note that m z (y) is not discontinuous but changes smoothly at the interfaces HM/CM and CM/SC. It saturates to unity in the HM region on the left-hand and penetrates the SC region on the righthand. Although the amplitude of the exchange field h is the same for all three cases, the conductance in the subgap region decreases with increasing β 0 [see Fig. 9(b)]. This behavior can be easily understood by considering two factors. The fist is directly related to the oscillatory dependence of the conductance with the CM layer thickness. In fact, the introduction of a relative magnetization between the HM and CM layers corresponds to the phase shift in the oscillatory dependence of the helical magnetic configuration. Therefore, in the first approximation, the dependence G(d CM ) should be shifted in argument by ∆d CM = 2πa/β 0 . This shift causes the conductance for d CM = 2.5λ (corresponds to the maximum value for β 0 = 0) shifts to lower value related to G(2.5λ + ∆d). The second factor is the increase of the normal reflection probability at the HM/CM interface resulting from the discontinuity of the exchange field. 
The presented dependence G(β 0 ) is important in the simulation of the real HM/CM/SC structure since the conical magnets used in the multilayer setup have several ways to orient the magnetic moments with respect to the halfmetal magnetization depending on the magnetic coupling at the HM/CM interface.
All results presented so far have been obtained for α = 90 o for which the y-component of the exchange field h y is zero -the magnetization in the CM layer rotates in the x−z plane. Now, we analyze the conductance for the nonzero value of h y . In Fig. 10 the subgap region decays with decreasing the cone angle, whereas the decay rate strongly depends on d CM . For d = 2α it is stronger than for d = 2.5α.
B. Adiabatic regime
In this subsection we analyze the the tunneling conductance in the HM/CM/SC junctions in the adiabatic regime for a long period λ. In subsection III A we have demonstrated that in the non-adiabatic regime the magnetization in the CM layer is strongly suppressed compared to the helical exchange field. As presented in Fig. 2 (b,c) the amplitude of m x (y) and m z (y) modulation in the CM is about ten times smaller than the amplitude of the helical exchange field. Such a strong suppression have been obtained for a short period of the exchange field modulation λ = 3.4 nm corresponding to Q < 1 (non-adiabatic regime). It has been suggested that the magnitude of this suppression can be used as the additional parameter to measure the degree of adiabaticity. In Fig. 12 we demonstrate the self-consistent zcomponent of the magnetization for different values of λ assuming h CM = 1. For comparison the distributions of the exchange field are plotted by the blue dashed lines. As presented in Fig. 12 (compare the red and blue lines), the suppression of the magnetization in the CM layer is more pronounced for a short period of the exchange field modulation λ (non-adiabatic regime) and almost completely disappears for a long λ (adiabatic regime). For λ = 15 nm there is no difference between the magnetization m z (y) and the exchange field h z (y) except the boundaries of the CM layer where m z (y) smoothly changes penetrating the SC region due to the reverse proximity effect. Fig. 12 (e) presents the ratio of the amplitudes m max z /h max z in the CM as a function of the exchange field modulation period λ. As one can see the ratio saturates to the value m max z /h max z = 1 for λ grater than 15 nm for which the transport through the heterojunction can be assumed to be adiabatic. The further analysis presented in this subsection will be carried out in the adiabatic regime for λ = 15 nm corresponding to Q ≈ 10. Figure 13 shows the normalized conductance as a function of energy and CM layer thickness for the cone angles α = 30 o , 60 o and 90 o . For α = 90 o corresponding to h y = 0 [ Fig.13(c)] the conductance in the subgap region oscillates with a period λ/2. It reaches maximum for d CM being an integer multiple of λ/2. In this case (we assume β 0 = 0) the spin of electrons injected from the HM has the same direction as the exchange field in the CM. In the adiabatic regime the spin of electrons flowing through the nanostructure follows the exchange field. Therefore, in this case the anomalous Andreev reflection probability is exactly proportional to the off-diagonal elements −h x (y) + ih y (y) which oscillate leading to the regular oscillations of the conductance presented in Fig. 13(c). This behavior considerably differs from the irregular conductance oscillations presented in Fig 5 for non-adiabatic regime. Moreover, note, that the value of the conductance in the subgap region is lower than this obtained in the non-adiabatic regime (compare with Fig. 5) which is caused by the enhancement of the normal reflection probability due to increase of the exchange field modulation period λ.
For the cone angle α = 90 o , corresponding to the nonzero h y , the spin of electrons injected from the HM is non-collinear with the exchange field at the HM/CM interface. Therefore, the electronic spin starts to precesses around the exchange field direction with the Larmor frequency which, in the adiabatic regime, is much higher than the frequency of the exchange field modula-tion experienced by electrons flowing through the CM. For λ = 15 nm (Q ≈ 10) one period of the exchange field modulation corresponds to ten full-rotations of electron spin around the exchange field direction. Therefore, for the non-zero h y the spin behavior in the CM layer is determined by joint effects: Larmor precession and the exchange field modulation. For certain energies and the CM thicknesses this complex spin behavior leads to the enhancement of normal reflection probability with spin flip R ↓↑ ee presented in Fig. 14(b). Note, that the probability of this scattering mechanism in the non-adiabatic regime is close to zero (Fig. 4). According to Eq. 15, in the range in which R ↓↑ ee increases, the conductance is suppressed leading to the characteristics G(E, d CM ) demon- Fig. 13(a). In this figure the conductance suppression for the energy above ∆ 0 corresponds to green areas. Nevertheless this suppression expands also on the subgap region for which R ↓↑ ee is even grater than above ∆ 0 [ Fig. 14(b)]. The evolution of the conductance with increasing cone angle α is presented in Fig. 15. We see that the conductance in the subgap region initially decreases and then clear conductance peak for α = 30 o appears. The position of this peak is shifted on the energy scale with increasing cone angle α.
strated in
Finally, in Fig. 16 we present the conductance map G(E, d CM ) calculated for α = 90 o and the exchange field amplitude d CM = 0.2. As shown in previous subsection in the non-adiabatic regime the decrease of h CM results in decrease of the conductance in subgap region. This conductance suppression is different for different CM thicknesses (see Fig. 7) leading to the conductance peak for d CM = 2.5λ in consistency with the experimental results. 30 In the adiabatic regime the exchange field amplitude h CM affects the conductance in a different manner. It leads to the conductance decay for the thicknesses being an integer multiples of λ whereas for d CM = (N/2)λ the conductance remains almost unchanged. Therefore, the conductance in the sugap region oscillates regularly with the period λ as presented in Fig. 16.
IV. SUMMARY
We present the detailed analysis of the transport properties in the HM/CM/SC junction within the fully selfconsistent framework based on the combined BTK formalism and the BdG equations. For comparison, the calculations have been also carried out with the use of the non-self consistent scheme. One has been shown that the peaks in the CM layer thickness dependence of the conductance are significantly reduced when the selfconsistent procedure is applied (cf. Fig. 6). It is clear from our analysis that to properly determine the tunneling conductance in the HM/CM/SC heterojunctions the full self-consistent calculations including the proximity effect should be carried out.
Due to the spin band separation in the HM, the anomalous Andreev reflection mechanism which appears in structures with magnetically inhomogeneous layer (such as the CM layer) results in the nonzero conductance within the subgap region (cf. Fig. 3). Its probability strongly depends on the spin transmission in the CM layer. Therefore, we analyze the influence of the exchange field modulation λ on the behavior of spins of electrons transfered through the CM layer. With this respect, we show that one can distinguish between two regimes. In the non-adiabatic regime (low value of the exchange field modulation λ) the changes of the exchange field are so fast that the spins of electrons are not able to adopt and, as a result, they rotate around h irregularly and as an average give a small value of magnetization in comparison to the amplitude of the exchange field. On the other hand, in the adiabatic regime (high value of λ) the magnetization coming from the spins of electrons is almost identical to the exchange field position dependence within the CM layer (cf. Fig 12). The conductance behavior as function of the energy and the CM layer thickness has different behavior in the two mentioned regimes (cf Figs. 5 and 13(c)). The regular oscillations observed in the adiabatic regime are caused by the fact that anomalous Andreev reflection probability is exactly proportional to the off-diagonal elements of the Hamiltonian −h x (y)+ih y (y). This leads to constant hight of the peaks in the CM layer thickness conductance dependence, whereas for the case of the non-adiabatic regime the hight of those peaks is decreasing with increasing thickness, d CM , in consistency with the experiment observation (cf. Fig. 6). Moreover in the non-adiabatic regime the decrease of the exchange filed amplitude results in the well pronounced conductance peak for d CM = 2.5λ for which the peak of the critical current was observed in the experiment. 30 The influence of other parameters characterizing the exchange field behavior in the CM layer are also analyzed (such as the α, β and β 0 angles). It is shown that the conduc-tance is strongly affected by the geometrical structure of the exchange field determined by the cone angle α and the rotational a4.21a (PWD, AO, DPC) hacked | 2015-07-20T14:50:59.000Z | 2015-07-20T00:00:00.000 | {
"year": 2016,
"sha1": "dcdd2a37c5ea92b0ce8448300dabd54e998c8c56",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1507.05515",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "dcdd2a37c5ea92b0ce8448300dabd54e998c8c56",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
118592218 | pes2o/s2orc | v3-fos-license | Symmetries and boundary theories for chiral Projected Entangled Pair States
We investigate the topological character of lattice chiral Gaussian fermionic states in two dimensions possessing the simplest descriptions in terms of projected entangled-pair states (PEPS). They are ground states of two different kinds of Hamiltonians. The first one, $\mathcal H_\mathrm{ff}$, is local, frustration-free, and gapless. It can be interpreted as describing a quantum phase transition between different topological phases. The second one, $\mathcal H_\mathrm{fb}$ is gapped, and has hopping terms scaling as $1/r^3$ with the distance $r$. The gap is robust against local perturbations, which allows us to define a Chern number for the PEPS. As for (non-chiral) topological PEPS, the non-trivial topological properties can be traced down to the existence of a symmetry in the virtual modes that are used to build the state. Based on that symmetry, we construct string-like operators acting on the virtual modes that can be continuously deformed without changing the state. On the torus, the symmetry implies that the ground state space of the local parent Hamiltonian is two-fold degenerate. By adding a string wrapping around the torus one can change one of the ground states into the other. We use the special properties of PEPS to build the boundary theory and show how the symmetry results in the appearance of chiral modes, and a universal correction to the area law for the zero R\'{e}nyi entropy.
I. INTRODUCTION
Topological states 1,2 are quantum states of matter with intriguing properties. They include non-chiral states with topological order, such as the toric code 3 and string net models 4 , as well as chiral topological states. The latter have broken time-reversal symmetry, and possess non-vanishing topological invariants. They include celebrated examples like integer and fractional quantum Hall states, as well as Chern insulators 5 and topological superconductors 6,7 . They display chiral edge modes which are protected against local perturbations, and cannot be adiabatically connected to states with different values of the topological invariants.
Among others, a remarkable open problem in this field is to classify all topological phases; that is, the equivalence classes of local Hamiltonians that can be connected by a (symmetry preserving) gapped path. For their free fermion versions in arbitrary dimensions, a full classification has been already obtained 6,7 . For interacting spins, this goal has only been achieved in one dimension [8][9][10] , based on the fact that ground states of 1D gapped local Hamiltonians are efficiently represented by Matrix Product States (MPS) 11 . In dimensions higher than one, this problem remains open. Still, recent developments reveal that there exist deep intrinsic connections between quantum entanglement and topological states. For instance, topological order is reflected in the universal correction to the entanglement area law, also called topological entanglement entropy. A further proposal has been put forward by Li and Haldane 12 , who suggested that the entanglement spectrum, that is, the eigenvalues of the reduced density operator of a subsystem, contains more valuable information than the topological entanglement entropy.
Projected Entangled Pair States (PEPS) 13 , higher di-mensional generalizations of MPS, are a natural tool for investigating topological states. By construction, they contain the necessary amount of entanglement required by the entanglement area law. Furthermore, many known topological states, such as the toric code 3 , resonating valence-bond states 14 , and string nets 4 , possess exact PEPS descriptions [15][16][17][18] . Despite the lack of local order parameters, PEPS nevertheless provide a local description for topological states, with the global topological properties being encoded in a single PEPS tensor. For some of the above examples, the connection of topology and the PEPS tensor has been made precise as originating from a symmetry of the PEPS tensor 16 (see also Ref. 19). This symmetry only affects the virtual particles used to build the PEPS, unlike the physical symmetries of the PEPS. It can be grown to arbitrary regions, and has several intriguing consequences: (i) it leads to the topological entanglement entropy; (ii) it gives rise to a universal part 20 in the boundary Hamiltonian 21 acting on the auxiliary particles at a virtual boundary, whose eigenvalues are related to the entanglement spectrum of the subsystem; (iii) it provides topological protection of the edge modes 22 ; (iv) it gives rise to string operators that provide a mapping between the different topological sectors; (v) it can also be used to build string operators for anyonic excitations and to determine the braiding statistics; (vi) it determines the ground state degeneracy of the parent Hamiltonian.
Chiral topological states are very different from the above mentioned topological states preserving timereversal symmetry (known as nonchiral topological states), in that they necessarily have chiral gapless edge modes, which cannot be gapped out by weak perturbations due to the lack of a back-scattering channel. There have been doubts that PEPS can describe chiral topological states, until explicit examples with exact PEPS rep-resentations have been obtained very recently 23,24 . These chiral PEPS examples are topological insulators and topological superconductors characterized by nonzero Chern numbers, albeit with correlations decaying as an inverse power law. In view of all that, very natural questions arise as whether these chiral topological PEPS fit into the general characterization scheme in terms of a symmetry of a single PEPS tensor, and whether useful information characterizing topological order manifests itself in the boundary Hamiltonian.
In this work, we answer these questions in an affirmative way for a family of topological superconductors similar to the one introduced in Ref. 23. Those are chiral Gaussian fermionic PEPS (GFPEPS), which are free fermionic tensor network states 25,26 . We give general procedures to build the boundary Hamiltonians and analyze their properties (see also Ref. 24). We first show how to determine the boundary Hamiltonian for GFPEPS, and how the Chern number can be obtained by counting the number of chiral modes defined on the boundary Hamiltonian. Then we show that, as in the case of topological (non-chiral) PEPS, there exists a symmetry in the virtual modes that can be grown to any arbitrary region. We connect this symmetry to the chiral modes on the boundary, and show that it also gives rise to a universal correction to the area law in the zero Rényi entropy (although not in the von Neumann one), and to the boundary Hamiltonian.
Following Refs. 23 and 24, we also build two kinds of (parent) Hamiltonian for which the chiral GFPEPS are ground states, and analyze their properties. The first one, H ff , is Gaussian, local, frustration free, and gapless. In terms of that Hamiltonian, our states can be interpreted as being at the quantum phase transition between different phases characterized by different Chern numbers. That Hamiltonian is two-fold degenerate on the torus. We use the symmetry to build string operators that allow us to characterize the ground states of H ff , and that can be continuously deformed without changing the state. The second one, H fb , is also Gaussian, although gapped, and has a unique ground state on the torus. It is not local, possessing hopping terms decaying as 1/|r| 3 with the distance |r|. We show that it is topologically stable to the addition of local perturbations. This allows us to consider the state as truly topological, and to define a Chern number which we find equals -1. We also compute the momentum polarization 27 and show that it has the expected properties for a topological state. In addition, we provide a numerical example of a GFPEPS with two Majorana bonds (the number of Majorana bonds corresponds to twice the logarithm of the bond dimension in the normal PEPS language) and Chern number 2 having two symmetries of the above kind.
Given the variety of results obtained in this paper and the different techniques used to derive them, we start with a Section that gives an overview of all of them, and connects them to known properties of topological PEPS. The specific derivations and explicit statements and proofs are given in the following Sections. In Sec. III, a general framework for studying the boundary and edge theories of GFPEPS is developed. Their relation to the Chern number is established. In Sec. IV, we give different examples of GFPEPS, some of them topological and some of them not, in order to provide comparison between the two cases. In Sec. V we completely characterize all PEPS with one Majorana bond with topological character. It turns out that in this case the Chern number can only be 0 or ±1; for the latter case we derive necessary and sufficient criteria. In Sec. VI we prove a necessary and sufficient condition on the symmetry that a PEPS tensor has to possess in order to give rise to a chiral edge state for one Majorana bond, and show how those symmetries can be grown to larger regions and to build string-like operators.
II. DEFINITIONS AND RESULTS
This Section gives an overview of the main results of this paper. It also reviews in a self-contained way the basic ingredients that are required to derive the results, and to interpret them. It is divided in four subsections. The first one contains the definition of GFPEPS, which are the basic objects in our study. It also contains two Hamiltonians for which they are the ground state. The first one is gapped, has power-law hopping terms, and is the one that appears more naturally in the context of topological insulators and superconductors. The second one follows from the PEPS formalism, is gapless, and has a degenerate ground state. In the second subsection we present a simple family of GFPEPS, similar to the one introduced in Ref. 23, which we will extensively use to illustrate our findings. The third one contains the construction of boundary and edge theories for GFPEPS, which we explicitly use for the simple family. In the last subsection, we make a connection between the behavior observed for this family, and the one that is known for topological (non-chiral) PEPS (Ref. 16). In particular, we show that one can understand it in terms of string operators acting on the so-called virtual particles, which can be moved and deformed without changing the state.
Throughout this Section we will concentrate on the simplest GFPEPS -those which have the smallest possible bond dimension (which will correspond to one Majorana bond, see below). This will allow us to simplify the description and formulas. However, all the constructions given here can be easily generalized to larger bond dimensions, and this will be done in the following Sections. Some of the results, however, explicitly apply to one Majorana bond, so that we will specialize to that case in the following Sections too.
A. Gaussian Fermionic PEPS and parent Hamiltonians
We consider a square N v × N h lattice of a single fermionic mode per site, with annihilation operators a j , where j is a vector denoting the lattice site. We will consider a state, Φ, of a particular form, and Hamiltonians for which it is the ground state.
Gaussian Fermionic PEPS
We revise here the GFPEPS introduced in Ref. 25. We will first show how a GFPEPS, Φ, of the fermionic modes is constructed (see Fig. 1).
The basic object in this construction is a fiducial state, Ψ 1 , of one fermionic (physical) mode, and four additional (virtual) Majorana modes 28 , all of them at site j (Fig.1a).
The corresponding mode operators, c j,L , c j,R , c j,U , and c j,D (L, R, U , and D, stand for left, right, up and down, respectively), fulfill standard anticommutation relations, {c i,α , c j,β } = 2δ i,j δ α,β , are Hermitian and anticommute with the other fermionic operators. The state Ψ 1 is arbitrary, except for the fact that it must be Gaussian and have a well defined parity. This means that it can be written as where H j is a quadratic operator in all the mode operators, and the Ω denotes the vacuum of the virtual and physical modes. One can easily parametrize H, and thus Ψ 1 , but this will not be necessary here, since we will make use of the fact that the state is Gaussian, for which a more appropriate parametrization exists. The state of the physical fermions, Φ, can be obtained by concatenating all the Ψ 1 at different sites in the way we explain now and is illustrated in Fig. 1. First, take two consecutive lattice sites in the same column, j and n, and project the up virtual mode of the first and the down of the latter onto a particular state, i.e.
(see Fig.1b). Here ω jn = 1 2 (1 + ic j,D c n,U ), which ensures that c U and c D are maximally entangled (forming a pure fermionic state) 29 . Since the modes that we project on are in a well defined state after the projection, we can omit them in the following. In order to simplify the notation, we will denote by ω jn |Ψ the state obtained by applying ω jn and discarding the corresponding modes, and we will say that we have projected onto ω jn . We will also omit the indices representing the lattice sites whenever this does not lead to confusion.
We proceed in the same way, concatenating all the sites corresponding to a column by projecting out the consecutive up and down virtual modes onto the state defined by ω jn . The resulting state is Ψ Nv , since we have N v sites in a column (see Fig.1c). This state contains N v physical fermionic modes, as well as 2N v + 2 virtual Majorana modes, N v on the left, N v on the right, one up and one down. Since we will consider here periodic boundary conditions along the vertical direction, we also project out the up and down virtual modes, obtaining Φ 1 , a state that corresponds to one column (and thus the subindex). Such a state contains N v physical fermionic modes, as well as 2N v virtual Majorana modes (see Fig.1d). By construction, the state is translationally invariant along the vertical direction.
In order to obtain the state on the lattice, we have to follow a similar procedure in the horizontal direction (see Fig.1e). For that, we take the states of two consecutive columns, and project each of the right virtual modes (at site j) of one and the corresponding left virtual mode (at site n) of the other onto ω ′ jn = 1 2 (1 + ic j,R c n,L ). The resulting state, Φ 2 , contains 2N v physical fermionic modes, as well as 2N v virtual Majorana modes. We continue adding columns in the same way, until we obtain Φ N h , containing N v × N h physical fermionic modes and 2N v virtual Majorana ones (see Fig.1f).
In order to obtain a translationally invariant state in the horizontal direction too, we have to project each remaining virtual pair of modes on the left and the right onto the state defined by ω ′ jn . In this case, we will say that we have a state, Φ, on the torus. Otherwise, we can project the virtual modes on the left and the right onto some other state. If we took a product state (of left and right virtual modes) that is translationally invariant in vertical direction itself, we will still keep that property in the vertical direction and the state Φ will be defined on a cylinder. A subtle point is that, when we perform this last projection in order to generate the physical state Φ, the result may vanish. This happens, for instance, in some of the examples considered in this paper in the torus case. There, we will have to introduce a string operator in the virtual modes for those particular sizes of our system.
The state Φ on the torus is fully characterized by the fiducial state Ψ 1 (and therefore by H), since the construction is carried out by concatenating them with a specific procedure. For the cylinder, Φ also depends on the states we choose to close the virtual boundaries. From now on we will work on the torus, unless explicitly stated otherwise.
Since the fiducial state Ψ 1 is Gaussian and our construction keeps the Gaussian nature, all the states defined above will be Gaussian. For that reason, instead of expressing Ψ and Φ in the Hilbert space on which the mode operators act, we characterize them in terms of their covariance matrices (CMs). In order to do so, we write each physical fermionic mode operator in terms of two Majorana operators, fulfilling the corresponding anticommutation relations. For a (generally mixed) Gaussian state ρ in a set of Majorana modes, c l , the CM, γ, is defined through This is a real antisymmetric matrix, fulfilling γ ⊤ γ ≤ 1 1, where 1 1 is the identity matrix. The equality (γ 2 = −1 1) is reached iff the state ρ is pure. Thus, the original state Ψ 1 will have a CM with four blocks, where A, D are 2 × 2 and 4 × 4 antisymmetric matrices, respectively, B is a 2×4 matrix, and they are constrained by γ 2 1 = −1 1 (since the state Ψ 1 is pure). Hence, the state Φ is completely characterized by those matrices. Concatenating states as explained above can be easily done in terms of the CMs (see Ref. 25 and Sec. III A below).
If we consider the indices l (and m) in Eq. (4) as joint indices of the site coordinates r = (x, y) (and r ′ ) and the index of the two Majorana modes located at site r (r ′ ), the 2 × 2 block of γ of a GFPEPS for given sites r and r ′ fulfills since the construction of the GFPEPS is translationally invariant. Thus, it is convenient to carry out a discrete Fourier transform on γ. The result is, as outlined in Ref. 25, a block-diagonal matrix with blocks labelled by the momentum vector k = (k x , k y ). Due to the purity of the state, they are of the form withd i (k) ∈ R and |d(k)| = 1. The above construction can be trivially extended to more general GFPEPS, where there are 4χ virtual Majorana modes and f fermions per site. In Sec. III we will show how to carry out such a construction for that general case. The case considered in this Section, χ = 1, is much simpler to describe and already possesses all the ingredients to give rise to topological chiral states.
Parent Hamiltonians
One can easily construct Hamiltonians for which Φ is the ground state. For that, we can follow two different approaches. The first one takes advantage of the fact that Φ is a Gaussian state, whereas the second uses that it is a PEPS.
Our first Hamiltonian is the "flat band" Hamiltonian where γ is the CM of the state Φ, and e are the Majorana modes built out of the physical fermionic modes. Since Φ is pure, γ 2 = −1 1 and thus it has eigenvalues ±i.
Hence, H fb contains two bands separated by a bandgap of magnitude 2, which are flat. As γ is antisymmetric, there exists an orthogonal matrix O such that O ⊤ γO is block diagonal. Using this, one can easily convince oneself that Φ is the unique ground state of H fb . Note also that the Hamiltonian H fb will not be local in general, since γ l,m = 0 for all l, m. We also remark that for general γ the single particle spectrum of a Hamiltonian of the form (8) is given by the eigenvalues of −iγ. We transform Eq. (8) to reciprocal space and write it in terms of the Fourier transformed Majorana modes (with (r, α) corresponding to the joint index l above), so that it takes the form where G α,β (k) is given in Eq. (7). The second Hamiltonian can be constructed by invoking the general theory of PEPS (see, e.g. Ref. 16). We can always find a local, positive operator, h ≥ 0, acting on a sufficiently large plaquette, that annihilates our state, i.e. h j |Φ = 0. Here j denotes the position of the plaquette. In the case of a GFPEPS, h j can be chosen to be local. Furthermore, since the state is translationally invariant, we can take Now, this Hamiltonian is local (i.e., a sum of terms acting on finite regions, the plaquettes), frustration free (thus the subscript), and it is clear that Φ is a ground state. However, there may still be other ground states, and, additionally, H ff may have a gapless continuous spectrum (in the thermodynamic limit). For the topological states considered later on, we will see that H fb is intimately connected to the chiral properties at the edges, as it is well known for topological insulators and superconductors 31,32 . The other one, H ff will share other topological properties that makes it akin to Kitaev's toric code 3 and its generalizations.
Parameterization of the GFPEPS
Now, we review a family of chiral topological GFPEPS similar to that introduced in Ref. 23, which is characterized by a parameter, λ ∈ [0, 1]. The fiducial state Ψ 1 is given by Here, b is an annihilation operator acting on the virtual modes as follows where The corresponding CM γ 1 [Eq. (5)] is We have sorted the Majorana mode operators as e 1 , e 2 , c L , c R , c U , c D .
Later on we will consider other states, topological or not, to illustrate the properties of the boundary theories. However, the family of states given here will be a central object of our analysis, since it already possesses all the basic ingredients. As it is evident from the definition, the fiducial state Ψ 1 in Eq. (12) is an entangled state between the physical and one virtual mode, except for λ = 0, 1, whereas for λ = 1/2 it is maximally entangled. It has certain symmetries, which will be of utmost importance to understand the topological features of the state Φ it generates. Explicitly, The operators a, b, and d 1 define three fermionic modes (one physical, and three virtual). Equations (16) just reflect the fact that for a Gaussian state the physical mode can be entangled at most to one virtual mode, since we can always find a basis in which one virtual mode is disentangled. The latter is precisely the one annihilated by d 1 . In fact, (16) completely defines the state Ψ 1 .
At k = (0, 0) both the numerators and the common denominator are zero. In Appendix A, we show that due to this non-analycity, correlations in real space decay like the inverse of the distance cubed (up to possible logarithmic corrections).
Frustration free Hamiltonian: fragility
The frustration free parent Hamiltonian for this model is obtained by explicitly calculating the state Ψ 2,2 obtained when four Ψ 1 on a 2 × 2 plaquette are concatenated without closing the boundaries in horizontal or vertical direction. Thereafter, one calculates the fermionic operator a , acting only on the physical level, which annihilates Ψ 2,2 , a |Ψ 2,2 = 0 (it turns out that exactly −π −π/2 0 π/2 π −π −π/2 0 π/2 π −4 one such operator exists for any λ ∈ (0, 1)). This can be done conveniently in the CM formalism. The parent Hamiltonian, H ff , can then be obtained by setting in Eq. (11). For λ = 1/2, for instance, we have a = e a,1,1 (2 + i) + e b,1,1 − e a,1,2 (1 + 2i) + ie b,1,2 − e a,2,1 + e b,2,1 (−2 + i) + ie a,2,2 + e b,2,2 (1 − 2i), where e a,x,y denotes the first physical Majorana mode located at the site with coordinates (x, y) and e b,x,y the second one. The single-particle spectrum for that case is displayed in Fig. 3. Note that there is a band-touching point at k = (0, 0), and thus this Hamiltonian is gapless and has a continuous many-body spectrum. That is, it is exactly two-fold degenerate for finite systems, and in the thermodynamic limit it possesses a continuous spectrum right on top of the ground state. The frustration free Hamiltonian H ff does not have a protected chiral edge mode, as it is gapless in the bulk: Let us add a translationally invariant perturbation [with variable GFPEPS parameter λ ∈ (0, 1)], x,y [µ 0 e a,x,y e b,x,y + ν 0 (e a,x+1,y e b,x,y − e a,x,y e b,x+1,y + e a,x,y+1 e b,x,y − e a,x,y e 2,x,y+1 )] (23) where µ 0 , ν 0 ∈ R. Note that only µ 0 = ν 0 = 0 corresponds to a GFPEPS ground state. After carrying out a Fourier transform, the Hamiltonian can be brought into the form FIG. 4. Phase diagram of the perturbed Hamiltoniañ H ff (λ, µ0, ν0) (see text) for µ0, ν0 close to zero and λ ∈ (0, 1) arbitrary. The vertical gapless line corresponds to a quadratic band touching, whereas the horizontal gapless line (µ0 > 0) corresponds to four Dirac points. All other points in the phase diagram are gapped with the shown Chern numbers.
with σ i the Pauli matrices, the Chern number can be calculated via 33 Depending on the signs of the parameters µ 0 and ν 0 , the Hamiltonian can be driven by infinitesimally small perturbations to gapped phases with Chern number C = 0 (trivial), C = −1 or C = −2 as shown in Fig. 4. This phase diagram does not depend on the parameter λ as long as |µ 0 | and |ν 0 | are sufficiently small. Hence, with respect to the frustration free Hamiltonian, the states defined by Eq. (12) describe critical points in the transition between different topological phases with Chern numbers C = −2 and C = −1 and a topologically trivial phase (C = 0).
We conclude that the frustration free Hamiltonian is gapless and thus not topologically protected. Instead, it is at the critical point between free fermionic topological phases with different Chern numbers.
Flat band Hamiltonian: robustness
Let us now consider the stablity of the flat band Hamiltonian H fb against perturbations. First, we will show analytically that the Hamiltonian is robust even against long-ranged translationally invariant perturbations; and second, we will demonstrate numerically the stability against local disorder. This shows that the Hamiltonian is topologically protected and its Chern number is therefore a meaningful quantity.
Let us first consider translational invariant perturbations where we assume that the perturbation decays faster than 1/|r| 3 in real space (with |r| the distance). Then, it can be shown (see, e.g., Ref. 34 Let us now turn towards the stability of H fb against random disorder, which we have verified numerically. To this end, we randomly added local disorder terms j µ j a † j a j (µ j ∈ [−1, 1]) to the flat band Hamiltonian for λ = 1/2 defined on an N v × N v torus (N h = N v ) as a function of its length N v . In Fig. 5 we plot the energy gap obtained for 225 random realizations for each system size N v . As can be gathered from the figure, its gap stays non-vanishing in the thermodynamic limit, indicating that it is topologically protected against disorder.
To summarize, the gap of the flat band Hamiltonian H fb is topologically protected against the addition of onsite disorder and (small) translationally invariant perturbations whose hoppings decay faster than the inverse of the distance cubed. Its Chern number is −1.
C. Boundary and Edge Theories
In Ref. 35 a formalism was introduced for spin PEPS to map the state in some region R to its boundary. This bulk-boundary correspondence associates to each PEPS a boundary Hamiltonian, H b , that acts on the virtual particles. The Hamiltonian faithfully reflects the properties of the original PEPS. In particular, for the toric code 3 , or the resonating valence-bond states 14 , that boundary Hamiltonian features their topological character 36 . In this Section we review that theory for GFPEPS and show how one can determine H b for GFPEPS.
Chiral topological insulators and superconductors, on the other hand, are characterized by the presence of chiral edge modes, featuring robustness against certain bulk perturbations. Here, we also analyze how those features are reflected in H b , as well as the relation of that Hamiltonian with that found for the toric code.
Boundary Theories
Given the GFPEPS Φ, let us take a region R of the lattice, trace all the degrees of freedom of the complementary region,R, and denote by ρ R the resulting mixed state. As it was shown in Ref. 35, ρ R can be isometrically mapped onto a state of the virtual particles (or modes) that are at the boundary of the region R. That is, there exists an isometry V R , such that where σ R is a mixed state defined on those virtual modes.
Here we will take as region R a cylinder with N columns, see Fig. 6. There we have drawn the (red) physical fermions, as well as the (blue) virtual Majorana modes, as they appear in the construction explained above (Fig. 1).
The state σ R is Gaussian and is thus also characterized by a CM, which we will denote by Σ N . In Sec. III we will show how to determine it in terms of γ 1 . Here, we just quote the results. We can write where is the so-called boundary Hamiltonian, with c j the Majorana operators acting on the left and right boundaries, The spectrum of H b N coincides with the so-called entanglement spectrum 12 . Here we will be interested in the corresponding single-particle spectrum, i.e. that of H b N . Since H b N is translationally invariant in the vertical direction, we can easily diagonalize it by using Fourier transformed Majorana modes. It is convenient to definê separately for the left and right virtual modes, so that H b N displays a simple form in their terms. Here, the quasi-momentum is k y = 2πn/N v , with n = −N v /2 + 1, . . . , N v /2. Up to a factor of two, the operatorsĉ † ky = c −ky fulfill canonical commutation relations for fermionic operators, {ĉ ky ,ĉ † k ′ y } = 2δ ky,k ′ y , for k y = 0, π. For k y = 0, π, they are Majorana operators (i.e.,ĉ † 0 =ĉ 0 , andĉ † π =ĉ π ). This latter fact is crucial to understand the topological properties of the original state Φ, as we will discuss in Sec. V.
The single-particle spectrum (dispersion relation, since we have translational invariance) will be labeled by k y . For the GFPEPS determined by Eq. (12) for λ ∈ (0, 1) we will show that in the limit N → ∞ one can write correspond to virtual fermionic modes on the left and right, respectively, which are decorrelated from each other. For k y = 0 and k y = π, however, there is a single unpaired Majorana mode in each boundary. For the above family of chiral GFPEPS, the k y = 0 Majorana modes pair up, giving rise to an entangled state between the left and the right boundaries, which is why we obtain the structure of Eq. (30) for the single-particle boundary Hamiltonian.
The Chern number, C (up to a sign), is given by the number of right-movers minus the number of left-movers on one of the boundaries. For the simple case considered in this Section, with one Majorana bond, |C| = 0, 1. For GFPEPS with more Majorana bonds, one can build the boundary Hamiltonian in the same fashion, as we will show in the next Section. In that case, the Chern number is determined ditto, but it may be larger than one.
In Fig. 7 we plot the single-particle dispersion relation of the right boundary as a function of k y , for the state generated by (12) for different values of λ and N → ∞ (we will provide an analytical formula for that limit in Sec. V). It displays chirality, and the Chern number is −1. The mode at k y = π has zero "energy", indicating 0 π/2 π 3/2 π 2 π −10 Dispersion relation corresponding to the right boundary Hamiltonian for the chiral state defined via Eq. (12). We plot −iĤ R ∞ (ky) (which is a 1 × 1 matrix), for λ = 1/4 (blue solid line), λ = 1/2 (green dashed line) and λ = 3/4 (red dash-dotted line) and N → ∞. For convenience, we have plotted it for ky ∈ [0, 2π). Note the divergence at ky = 0, where there is a maximally entangled virtual Majorana pair between the left and the right boundary. The lines cross the Fermi level from above at ky = ±π, thus C = −1.
that the state of the left and right Majorana modes with such a momentum are in a completely mixed state. If we construct a fermionic operator using those two modes, the boundary state σ R at momentum π has infinite temperature, and thus is an equal mixture of zero and one occupation. If we do the same with the modes at k y = 0, the opposite is true, namely they are in a pure state (the vacuum mode of the fermion mode built out of the two Majorana modes from the left and the right). Thus, as anticipated, the left and right boundaries are in an entangled state, which reflects the topological properties of the state. In Sec. III we will show that all the features displayed by this example are intimately related.
As a second example, we take a state that does not display any topological features. Its explicit form is given in Sec. IV C. The dispersion relation for the right boundary is shown in Fig. 8. Since the energy band of the boundary Hamiltonian does not connect the valence and conduction band for any µ, the Chern number is zero. Furthermore, both at k y = 0, π the "energy" vanishes, showing that the right and left boundaries are unentangled.
In Sec. IV, we present further examples: We give an example of a GFPEPS displaying C = 2. We also investigate the Chern insulator presented in Ref. 23, provide a topologically trivial GFPEPS as well as the non-chiral state introduced in Ref. 25.
Edge theories
The definition of the boundary theory used above may look a bit artificial; the Hamiltonian H b N does not gener- FIG. 8. Dispersion relation at the right boundary for the nonchiral state defined via Eq. (56). We plot −iĤ R ∞ (ky) (which is a 1 × 1 matrix), for µ = 1/4 (blue solid line), µ = 1/2 (green dashed line) and µ = 3/4 (red dash-dotted line) and N → ∞. It crosses the Fermi level twice with slopes of different signs, hence C = 0.
ate any dynamics, but is just the logarithm of the density operator, and thus comes from the interpretation of the boundary operator as a Gibbs state. However, it is well known 37 that for free fermionic (i.e. Gaussian) states, its spectrum is intimately related to the one of another Hamiltonian that indeed generates the dynamics at the physical edges of the system in question. In the PEPS representation, there is a way of constructing such an edge Hamiltonian 22 , which we review here and we explicitly illustrate such a relation.
Let us consider the flat band Hamiltonian (8), but in the case of a cylinder with open boundary conditions. For that, we restrict the sum in Eq. (8) to the modes that correspond to region R (the cylinder in Fig. 6), and denote by H R the corresponding Hamiltonian. The state Φ N (see Fig. 1f) has extra (virtual) modes, which we can project onto an arbitrary state, say φ v . The energy (in absolute value) of the resulting state will typically be much smaller than the gap of the system on the torus. Thus, there is a subspace spanned by all the states resulting from this construction with a low energy. By choosing a set of linearly independent vectors φ v , and orthonormalizing the resulting state, we can project H R onto that subspace. This is precisely the procedure given in Ref. 22, and the resulting Hamiltonian, which has as many degrees of freedom as there are virtual Majorana modes, is the edge Hamiltonian, H e N . We now write and in Sec. III C we show that one obtains that H e N = Σ N . Thus, up to a scale transformation (cf. Eq. (28)), we see that the edge Hamiltonian is nothing but the boundary Hamiltonian, whenever we take the flat-band Hamil-tonian as the parent Hamiltonian of our GFPEPS. This agrees with the statement of Ref. 37, and indicates that our results on the boundary Hamiltonian can be translated to the edge Hamiltonian constructed in the outlined way.
D. Symmetries, degeneracy, and Topological Entropy
Here, we will first briefly review how the topological properties of PEPS in spin systems are reflected in the symmetries of the corresponding fiducial state Ψ 1 . Then we will show that for the GFPEPS considered in previous subsections, a similar behavior is present.
Spins
For PEPS in spin systems, all the properties are encoded in the single tensor which is used to build the state. In the language used in this paper, this tensor is equivalent to Ψ 1 , since it is given by its coefficients in a basis. In particular, for topological states like the double models 36 , there exist operators U g , where g is an element of a group G and U g a unitary representation of it, acting on the virtual particles which leave Ψ 1 invariant. Those operators can be concatenated to string operators defined on the virtual modes on the boundary, so that for any state appearing during the construction of the PEPS Φ, there exist other operators fulfilling the same property. Those operators can be built starting out from U g in a systematic way. This implies that for any region R, there exists operators U g acting on the virtual particles at the boundary, such that For double models the operators U g can be written as products of operators acting on each of the virtual particles of the boundary. From Eq. (32) it follows that σ R is supported on a proper subspace of the virtual system, that corresponding to the eigenvalue 1 of all U g , i.e., Here P is a non-local operator which projects onto that subspace. This fact has two consequences: (i) the zero Rényi entropy (which is the logarithm of the dimension of that subspace) does not coincide with the logarithm of the dimension of the Hilbert space of the virtual particles on the boundary of R; (ii) there is a non-local constraint on the boundary and edge Hamiltonian. Those two features are thus related to the topological character of the PEPS. Note that (i) may also imply in some cases that there is a correction to the area law, what is usually called the topological entropy. That is, the von Neumann entropy of σ R scales like the number of virtual particles on the boundary of R minus a universal constant, which is directly related to the topological properties of the model under study. The property (ii) acts as a superselection rule in the boundary and edge theories, since any perturbation in the bulk will not change that subspace. Additionally, in the spin lattices studied in Ref. 36, H b N is local (contains hoppings that decay exponentially with the distance) whenever the frustration free parent Hamiltonian of the state Φ is gapped.
Another consequence of (32) is apparent if we take a PEPS defined on the torus. Then, we can attach different string operators U g and U g ′ around the two different cuts of the torus (see Fig. 6, right). This means that during the construction of the PEPS, we apply those operators to the virtual particles at the position where the strings appear before applying the projections ω and ω ′ . Because of the symmetry, those string operators can be moved without changing the state. However, they cannot be discarded given the topology of the torus. The states for each pair of U g and U g ′ are ground states of the parent, frustration free Hamiltonian of the PEPS as well, and for some particular g, g ′ they are linearly independent. Thus, that Hamiltonian is degenerate and in fact all its ground states can be generated by applying the string operators on circles around the torus. Furthermore, anyonic excitations can be understood as the extreme points of open strings, and the braiding properties related to the group G.
Fermionic systems
Now we show that an analogous phenomenon is present in our chiral topological models. That is, as PEPS, they also possess a symmetry in Ψ 1 which is inherited for larger regions, and that gives rise to properties (i) and (ii). Besides that, the parent Hamiltonian H ff is degenerate on the torus, and the different ground states can be obtained by attaching to the virtual modes string operators around the torus. The strings can be deformed, without changing the state. However, there are some differences, too. First of all, the von Neumann entropy of σ R does not display a universal correction, which we attribute to the long-range properties of the parent Hamiltonian H fb of the state Φ (see Refs. 23 and 24). For the same reason, the hoppings in H b N decay according to a power law. Furthermore, the ground-state subspace of the parent Hamiltonian, H ff , is doubly degenerate on the torus, and some topologically inequivalent string configurations give rise to the same state.
Let us consider any region R, and denote by Ψ R the state obtained by projecting all the virtual modes within region R onto the state generated by ω jn or ω ′ jn , as they appear in the PEPS construction. We arrive at a state of the physical modes in R and the virtual ones sitting at the boundary of R. For instance, if we take as R a cylinder with N columns, the state Ψ R = Φ N (see Fig. 6).
We can write where ω ∂R,∂R projects out all the virtual modes at the boundaries of R and its complementR.
If a contour C encloses a connected region R, for chiral GFPEPS with one Majorana bond, there is a fermionic operator d C such that For any contour, we will say that the state is a GFPEPS with a string along the contour C. In Sec. VI D we will show how this string operator can be deformed continuously for a chiral GFPEPS without changing the state we are building. However, if a contour wraps up around one of the sections of the torus, we cannot get rid of it by continuous deformations. Let us denote by C h,v contours wrapping the torus horizontally and vertically, respectively. We show in Sec. VI D that if we build the family of chiral GFPEPS starting out from Ψ 1 according to Eq. (12), we obtain Φ = 0 after the last projection. However, the states obtained if we add a certain string along any of those contours coincide, Φ C h ∝ Φ Cv , and in the following that is the state that we will consider. We also show that if we insert string operators along the two contours C h and C v , the state Φ C h ,Cv we obtain is orthogonal to the previous one, but it is also a ground state of H ff .
The frustration free Hamiltonian has certainly very interesting properties, although we cannot determine them unambiguously given our results. It is not only at a quantum phase transition point between free fermionic (gapped) phases with Chern numbers C = 0, −1 and −2, but it furthermore carries features of states described by PEPS with long-range topological order: Its ground state manifold is obtained by inserting strings along the nontrivial loops of the torus. Hence, our results also allow to interpret the local parent Hamiltonian as being at the edge of a topologically ordered interacting phase.
The existence of the operators d C in Eq. (35) for any simply connected region R has another important consequence. It follows that we can build a unitary oper- (32) is fulfilled for the boundary operator. As a consequence, we also have Eq. (33) Note that in our case G = Z 2 is represented by {1 1, U }. Thus, we conclude that the properties of the previous paragraph (i) (topological correction to zero Rényi entropy) and (ii) (non-local constraint on boundary and edge Hamiltonian) are fulfilled as in the standard PEPS case. Note that if R lies on a cylinder as in Fig. 6, we can also give the interpretation that, as in the case of a Majorana chain, there are two Majorana modes at the boundaries building a fermionic mode in the (pure) vacuum state. As a consequence, we can write σ R for the cylinder as in Eq. (33), where P projects onto the subspace where that mode is in the vacuum.
In addition to the zero Rényi entropy S 0 (N v ), we have also numerically computed the von Neumann entropy S vN (N v ) for the example given in Eq. (12) for λ = 1/2. Both are shown in Fig. 9 as a function of N v : While the zero Rényi entropy clearly shows a topological correction of ln(2), similar to the toric code model, the von Neumann entropy does not exhibit such a correction. As we prove in Appendix B, this follows from the fact that S vN (N v ) forms a discrete approximation to the integral over the modewise entropy, which is sufficiently smooth in k y to ensure fast convergence. The same happens for all Rényi entropies S α except for α = 0. This is consistent with the result of, e.g., Ref. 38 (where, however, only non-chiral topological states have been considered).
In order to further investigate the topological properties of our model, we have also computed the so-called momentum polarization 27 (see also Refs. 39, 40 and 41), which measures the topological spin and chiral central charge of an edge 42 . For a state |ϕ on a cylinder, it is defined as µ(N v ) = ϕ|T L |ϕ , where T L is the translation operator on the left half of the cylinder. It can thus be rephrased in terms of the (many-body) entanglement spectrum λ ℓ of the left half, which implies that in the framework of PEPS, it can be naturally evaluated on the virtual boundary between the two parts of the system. In particular, for GFPEPS it can be expressed as a function of the (single-particle) spectrum of the boundary Hamiltonian H b N , as shown in Fig. 7. In Ref. 27, it has been shown that (for systems with CFT edges) µ(N v ) = exp(−αN v − 2πiτ /N v + . . . ), with a non- universal α, and a universal τ which carries information about the topological properties of the system. In Appendix B, we prove that for GFPEPS, µ(N v ) exactly follows the above behavior, and τ is indeed universal: Remarkably, it only depends on whether the boundary Hamiltonian exhibits a divergence, but not at all on its exact form. In particular, for our example, we analytically obtain a τ which corresponds to a chiral central charge of c = 1/2, independently of λ, in accordance with expectations.
Finally, an interesting behavior is also observed for the boundary Hamiltonian, Eq. (27), for N → ∞. On the right boundary we perform the Fourier transform to position space [H R ∞ ] n,m . Then, for y ≫ 1, |[H R ∞ ] n,n+y | ∝ log(y)/y+O(1/y), see Appendix C. Thus, the decay is not exponential as it is the case for gapped phases in spins, but follows a power law. We plot the hopping amplitudes |[H R ∞ ] 1,1+y | of the above chiral family for λ = 1/2 in Fig. 10.
III. DETAILED ANALYSIS
In this Section, we provide a detailed derivation of the boundary and edge theories for GFPEPS. We start in Sec. III A by formally introducing GFPEPS, and then provide the derivation of boundary theories (III B) and edge theories (III C) for GFPEPS.
A. GFPEPS
The construction of GFPEPS given in Sec. II A can be defined more generally for f physical fermionic modes per site and χ Majorana bonds between them. We again start with an N h × N v lattice, now with χ left, right, up and down Majorana modes per site, c j,L,κ , c j,R,κ , c j,U,κ and c j,D,κ , respectively, where κ = 1, . . . , χ is the index of the Majorana bonds. At each site j they are jointly with the physical modes in a Gaussian state as in Eq. (1). The procedure to construct the GFPEPS is the same, except that there are now χ virtual bonds between any two neighboring sites, i.e., here we have to set for the vertical and horizontal bonds, respectively. We will again denote by ω jn | ( ω ′ jn |) the map which applies ω jn (ω ′ jn ) and discards the corresponding virtual modes. For simplicity, in the following we will call the states generated by the operators (37) out of the vacuum maximally entangled states. The remaining procedure of how to concatenate them is the same as in Sec. II A, cf. also Fig. 1.
In this scenario the CM is likewise given by Eq. (5), just that the blocks A, B, and D now have sizes 2f × 2f , 2f × 4χ, and 4χ × 4χ, respectively. We are interested in how to determine the CMs of the different states Ψ Nv , Φ N , and Φ involved in the construction of the GFPEPS. It is based on two operations (see Fig. 1): (i) building the state of l + m modes out of two states of l and m modes, respectively, i.e., taking tensor products; (ii) projecting some of the modes onto some state (given by ω or/and ω ′ ). Apart from that, we will also extensively use in other parts of this paper: (iii) tracing out some modes.
In terms of the CM, those operations are performed as follows 43 . (i)-joining two systems: the resulting CM is a 2 × 2 block diagonal matrix, where the two diagonal blocks are given by the CM of the state of the l and m modes, respectively. The operation (ii)-projecting out some of the modes, is slightly more elaborate. Let us consider an arbitrary state (pure or mixed) with CM γ 1 with blocks A, B, D [as in Eq. (5)], and we want to project the last modes (corresponding to matrix D) onto some other state of CM ω. The resulting CM is given by 25,43 Typically, we will have to project onto the states generated by (37). Their CM is very simple, Finally, in the case of operation (iii)-tracing out some of the modes, one simply has to take the corresponding subblock of the CM. This block is the CM of the reduced state. For instance, if one traces out the physical degrees of freedom of the state described by the CM (5), one obtains a (generally mixed) state defined on the virtual degrees of freedom with CM D. Conversely, one can also build the CM of a purification of a mixed state D, as Operations (i) and (ii) can be used to build the CM of the state Φ out of that of Ψ 1 . In this Section we will extensively use all presented operations to construct the boundary and edge states and Hamiltonians.
B. Boundary Theories
Boundary Theories in GFPEPS
We will now show how to derive boundary theories in the framework of fermionic Gaussian states, by only us-ing their description in terms of CMs rather than the full state. We consider a bipartition of the PEPS Φ into two regions R andR (Fig. 11) and are interested in the reduced state ρ R = trR(|Φ Φ|). We proceed as follows. First, we consider the states where all virtual bonds within those regions have been projected out, leaving only virtual particles at the boundaries of those regions (which are denoted by ∂R and ∂R, respectively) unpaired. Hence, we are left with two states, which are defined on the physical degrees of freedom of these regions plus the virtual degrees of freedom of the respective boundaries (see Fig. 11b,c). We define their CMs as respectively, where the first (second) block corresponds to the physical (virtual) degrees of freedom. The whole GFPEPS Φ could be obtained by pairwise projecting their virtual degrees of freedom on maximally entangled states, and thus, according to Eq. (38), its CM is The CM of ρ R is given by the (1,1) block of Eq. (42), that is As explained in Sec. II C, we are interested in a state σ ∂R defined on the virtual degrees of freedom located on ∂R, which is isometric to ρ R . Naively, one could think that its CM is given by the (2,2) block of Γ, i.e., G, which corresponds to a reduced state acting on that boundary. However, this is not the case in general, since the state described by the CM G is usually not isometric to ρ R . As outlined in Ref. 35, σ ∂R is given by a symmetrized version which takes into account ∂R and ∂R. In fact, we can construct σ ∂R by first finding the appropriate purification of ρ R , and then tracing the physical modes. We will carry out that task in two steps. First, we will conveniently rotate the basis of the physical Majorana modes in region R and afterwards truncate the redundant degrees of freedom (projection). Both taken together correspond to the application of an isometry on ρ R .
We start with an orthogonal basis change in the basis of physical Majorana modes {e l } in region R. The new ones are given by an orthogonal matrix M , This obviously does not change the spectrum of ρ R . By performing this basis change, the CM Γ gets modified to Note that this CM corresponds to a pure state, as Γ does. We choose M in such a way that Γ ′ decouples into a purification of the virtual state and a trivial part on the remaining physical level. This is always possible if the region R contains more degrees of freedom than ∂R and can be done practically by using a singular value decomposition of F . Then, where Z is the CM of a pure state defined on the physical level and the remaining non-trivial part of Γ ′ corresponds to a purification of G (note that the first and second block correspond to the physical degrees of freedom and only the third block to the virtual ones). We discard the decoupled physical part and project the virtual degrees of freedom (together with those of regionR, given byḠ) on the maximally entangled state. This yields the relevant part of Eq. (43), which is the CM of σ R which is defined on the modes at the boundary. (We denote it by Σ N , since R will be typically taken to lie on a cylinder, cf. Fig. 6, with N columns. However, Eq. (46) is true for any bipartition R,R.) In order to obtain the boundary Hamiltonian A crucial point to observe in the result for the boundary theory is that Σ N only depends on the CMs G and G, which characterizes the reduced state of the virtual degrees of freedom at the boundaries of R andR. We can therefore trace the physical degrees of freedom from the beginning and only ever need to consider G andḠ. While this observation is also true for general PEPS, it is particularly useful when working with GFPEPS in terms of CMs, as it allows us to completely neglect the physical part of the CM right from the beginning.
Let us finally briefly comment on the relation of the boundary theory as given by Σ N to the construction of the boundary theory for general PEPS derived in Ref. 35. There, the part of the PEPS which describes R (corresponding to the CM Γ) is interpreted as a linear map X R from the boundary to the bulk degrees of freedom, which is then decomposed as X R = V R P R , with V R an isometry and P R = τ ⊤ R , where τ R is the reduced density matrix of R on the virtual system (corresponding to G). This is exactly identical to the decomposition (45); in particular, M describes the isometry V R , and the (2+3,2+3) block of Γ ′ describes the map ν → τ ⊤ R ν τ ⊤ R (realized by projecting the (3,3) part onto ν). Finally,Ḡ describes the analogous state τR obtained from the part R, and thus, Σ N is exactly identical to the boundary theory τ ⊤ R τR τ ⊤ R derived in Ref. 35.
Boundary Theories on the torus
We will focus now on the situation where the GFPEPS is placed on a square lattice on a long torus, where we take the length of the torus to infinity. The two regions R andR are then obtained by cutting the torus into two halves, and are thus given by (identical) long cylinders with diameter N v and length N → ∞, cf. Fig. 6. As we have seen, the central object in the description is the CM G at the boundary of region R, ∂R, obtained after tracing out the physical system (and correspondingly forR). In the case of a cylinder, R is given by the left and right boundary of the cylinder together. In the following, we will show how to determine G given the CM γ 1 defining the GFPEPS, without having to construct the CM of the whole state Φ N .
As we have seen in the preceding Subsection, the boundary theory is entirely determined by the CM of the virtual part of the initial state Ψ 1 . We thus start by decomposing the CM of the virtual system of Ψ 1 into Here, V corresponds to the vertical and H to the horizontal Majorana modes, respectively. We now concatenate one column of tensors, closing its vertical boundary, leaving us with a CM which describes the left and right virtual indices of the column (cf. Fig. 1b-d). This is done by employing Eq. (38) for the corresponding subblocks of V of each pair of (cyclically) consecutive states Ψ 1,j and Ψ 1,k . Due to translational invariance, this is conveniently expressed in the Fourier basis (with k y the quasimomentum in y-direction): In this basis the D's of one column form a block-diagonal matrix, whilê ω(k y ) = 0 e iky 1 1 χ −e −iky 1 1 χ 0 (1 1 χ denoting the χ × χ identity matrix) since the ω's of one column form a circulant matrix with the two blocks coupling the "up" and "down" indices of adjacent V 's. In Fourier space, the CM describing the left and right virtual modes of one column is thuŝ (We use the hat to denote dependence on k y in the following; the subscript N ofD N indicates the number of columns.) Taking advantage of the fact that the matrix inverse can be written in terms of determinants, one immediately finds that each entry ofD 1 is a complex ratio of trigonometric polynomials (i.e., polynomials in e ±iky ) with a degree bounded by the dimension ofω, i.e., 2χ. The matrixD 1 consists itself of four blocks, corresponding to the left and right indices, respectively. Let us now see what happens if we contract two columns. We will consider the general case where the two columns can be different -for instance, each of them could have been derived by contracting some number of single columns Φ 1 ; this will allow us to easily derive recursion relations. We thus have two columns described bŷ with a column of maximally entangled states connecting them: The CM of both blocks concatenated is then according to Eq. (38) (50) Using the Schur complement formula for the matrix inverse in the middle, this gives a recursion relation for the blocksR,Ŝ, andT , which serves several purposes. In particular, by choosingD =D ′ , we can obtain an iteration formula forD 2 ℓ describing 2 ℓ columns, which quickly converges towards the infinite cylinder limitD ∞ , thus being very useful for numerical study. Moreover, as we will see in Sec. V, in certain cases it can also be used to analyze the convergence of the transfer operator, or, by choosingD ′ =D ′′ andD =D 1 , to determine the explicit form of the fixed pointD ∞ .
Finally, given the fixed pointD ∞ , as well asD ∞ corresponding to the boundary ∂R, it is now straightforward to determine the boundary Hamiltonian using eqs. (46) and (28) for N → ∞. Note that in the particular case of a torus which we consider,D ∞ can be obtained from D ∞ by exchanging the blocks corresponding to the left and right boundary.
Derivation of edge theory
We will now turn our attention towards the edge Hamiltonian, which describes the effective low-energy physics obtained at an edge of the system.
As explained in Sec. II C, the GFPEPS Φ is the ground state of the flat band Hamiltonian H fb = − i 4 l,m γ l,m e l e m , Eq. (8), where γ is the CM of the whole state Φ, Eq. (42). The restriction of H fb to a region R of the system is then given by where the sum now only runs over modes in R, and γ R is determined by Eq. (43).
Let us now perform the basis transformation M , Eq. (44): Following Eq. (43), the CM of R, γ R , is then transformed to with Σ N given by Eq. (46), and at the same time, H R is transformed into an isomorphic Hamiltonian We thus see that the spectrum of H ′ R (and thus of H R ) consists of two parts: First, the (1,1)-block of γ ′ R corresponds to bulk modes at energy ±1. Second, the (2,2) bock Σ N corresponds to modes at generally smaller energy, which are thus related to restricting H fb to region R; those modes are related to the boundary degrees of freedom via the purification in the (2+3,2+3) block of Γ ′ , Eq. (45). We thus find that the edge Hamiltonian, i.e., the low-energy part of the truncated flat band Hamiltonian, is given by with H e N = − i The derivation of the edge Hamiltonian in this section is again identical to the edge Hamiltonian introduced for general PEPS in Ref. 22. Using the same notation as in the last paragraph of Sec. III B 1, the edge Hamiltonian for general PEPS is obtained by projecting the physical Hamiltonian onto the boundary using the isometry V R . This projection is exactly accomplished by rotating with M and subsequently considering only the (2,2) block of γ ′ R , and thus, the edge Hamiltonian obtained here is identical to the one of Ref. 22, with the bulk Hamiltonian taken to be the flat band Hamiltonian.
Localization of edge modes
In the case of a cylinder, on which we focus, the edge Hamiltonian H e N is supported on the auxiliary modes both on the left and the right edge (cf. Fig. 1f). However, as we will show in the following, the edge Hamiltonian (as well as the boundary theory) on the two edges decouples for almost all k y , and moreover, the corresponding physical edge modes are localized at the same edge as the virtual modes. An important consequence of that is that we can use the virtual edge Hamiltonian to compute the Chern number of the system, as it is known that the Chern number corresponds to the winding number of the edge modes localized at one of the edges of the system 44,45 .
In order to answer both of these questions, we will first need to demonstrate some properties of the CM Γ ≡ Γ N , Eq. (41), which describes the GFPEPS Φ N (Fig. 1f) on a cylinder of length N ≫ 1. Since the system is translational invariant in vertical direction, we can equally well carry out our analysis in Fourier space, and we will do so in the following. By combining Eqs. (5), (47), and (48) we immediately find that Φ 1 is described by a CM of the formΓ withR 1 ,Ŝ 1 , andT 1 defined in Eq. (49). The concatenation of N columns is then given by the Schur complement witĥ where we have moved the virtual modes on the left (right) boundary to the left (right) corner of the CM, as indicated by the lines above.
Let us now first show that the two virtual edges are decoupled. To this end, we consider the reduced state of the virtual system of Φ N , which is given by the CM G in Eq. (41); evidently, vanishing off-diagonal blocks in G (andḠ) imply that any coupling between the two boundaries in Σ N , Eq. (46), vanishes as well. G is given by the two outer blocks ofΓ N . Obviously, the only way in which these two blocks can couple is viaV −1 N .
We now invoke a result on the inverse of banded matrices 46 : Given a banded matrix A b , it holds that where β < 1 depends on the ratio of the largest and smallest eigenvalue of A b A † b (and β → 1 if the ratio diverges). Using this result, we find that the coupling between the two edges in G is exponentially suppressed in the length N of the cylinder, as desired, as long as the ratio of the eigenvalues ofV NV † N does not diverge. Its largest eigenvalue is clearly bounded by 4, sinceV N is the sum of two CMs. To lower bound the smallest eigenvalue, observe thatV NV † N is again a banded Toeplitz matrix, which we can regard as a subblock of a larger circulant matrix. This circulant matrix can in turn be diagonalized using a Fourier transform, and we find that it is of the form (D 1 +ω −1 (k x , k y ))(D 1 +ω −1 (k x , k y )) † . On the other hand, det D 1 +ω −1 (k x , k y ) is exactly the energy spectrum of the local parent Hamiltonian as constructed in Ref. 25, and thus,V −1 N ≡V −1 N (k y ) has exponentially decaying entries if and only if the parent Hamiltonian is gapped for the given value of k y (which is the case for almost all k y ).
As we have seen, (almost) all virtual edge modes on the left and right of the cylinder decouple. In the following, we will show that also the physical modes corresponding to these edge modes are exponentially localized around the corresponding boundary. To this end, we fix N ≫ 1 and consider the CM Γ ′ , Eq. (45), which is obtained by an orthogonal transformation from the original CM Γ ≡ Γ N . In Γ ′ , the edge modes are supported on the (2, 2) block, and we need to figure out how the inverse of the orthogonal transformation M , Eq. (44), maps these back to the physical modes.
To this end, note that in order to prepare an arbitrary state in the (2, 2) block of Γ ′ , Eq. (45), we just need to project the (3, 3) block on an (unphysical) CM X via the Schur complement formula Eq. (38). In particular, we can use this to occupy or deplete a specific mode. (We assume χ to be even; otherwise, one can simply group pairs of modes.) Consequently, by projecting the original CM Γ N onto the very same X, we will exactly occupy or deplete the corresponding physical mode. Now, we can make use of Eq. (53), together with the aforementioned result on inverses of banded matrices: Given X and X ′ such that projecting onto X (X ′ ) occupies (depletes) a certain mode at one boundary, and denoting by γ(X) [γ(X ′ )] the corresponding CMs after the projection, we have that whereŶ andẐ denote the corresponding submatrices of Γ N . Importantly,Ŷ decays exponentially in distance as it is a column ofΓ N . Since we also have that with l v l c l and l w l c l the creation/annihilation operator for the corresponding physical mode, it follows that v l and w l decay exponentially with the distance from the corresponding boundary, i.e., the physical edge mode corresponding to a given virtual edge mode is localized around that edge.
IV. FURTHER EXAMPLES
In this Section, we will present further examples for both chiral and non-chiral GFPEPS, and discuss their respective boundary theories. In Subsection A, we discuss a Chern insulator with C = −1; in Subsection B, we discuss a model with C = 2 which has entangled edge modes at incommensurate values of k y ; and in Subsections C and D, we discuss two non-chiral models.
A. GFPEPS describing a Chern insulator with C = −1 In the following, we study the family of chiral GF-PEPS presented in Ref. 23, which are particle number conserving and describe a Chern insulator with C = −1. They can be decoupled into two copies of a topological superconductor which closely related to the family of Eq. (12) 47 . This family has f = 2 physical fermionic modes per site, χ = 2 Majorana bonds, and γ 1 [Eq. (5)] is defined via where 1 1 = ( 1 0 0 1 ), W = 0 1 −1 0 and η ∈ (0, 1). The ordering of the physical Majorana modes is (c 1↑ , c 2↑ , c 1↓ , c 2↓ ), and the blocks of D are ordered according to left, right, up, and down virtual modes.
The boundaryΣ ∞ (k y ) can be computed using the results of Sec. V, and we find it to be of the form ∓1 0 , the sign depending on whether the horizontal length N of the cylinder is even or odd. In Fig. 12, we show the spectrum of the boundary Hamiltonian of the above model (top panel). Moreover, we illustrate how for N → ∞ the edge Hamiltonian for a single edge converges (middle panel) and how the coupling between the two edges vanishes (bottom panel).
The Chern number can now be determined by counting the number of times the bands of −iΣ R ∞ (k y ) ⊗ 1 1 (or, alternatively, of −iĤ R ∞ (k y )⊗1 1) cross the Fermi level. Obviously, the spectrum of the boundary and edge Hamiltonian consists of two bands lying on top of each other. In the language of topological superconductors, this would give rise to a Chern number of −2. However, since we assume particle number conservation (as we deal with a Chern insulator), the Chern number is given by the number of fermionic chiral modes of the edge or boundary Hamiltonian, respectively. There is only one such fermionic chiral mode (annihilation operatorâ ky ), which is obtained by combining the two chiral Majorana modes on the right edge,ĉ 1,ky andĉ 2,ky , with equal dispersion toâ ky = 1 2 (ĉ 1,ky − iĉ 2,ky ). In this case, combining the Majorana modes does not make the system topologically trivial, since both of them have the same chirality. Therefore, the (particle number conserving) Chern number is C = −1.
B. GFPEPS with Chern number C = 2
In the following, we provide an example of a topological superconductor with χ = 2 and Chern number C = 2. The model has been constructed numerically such that it exhibits discontinuities inΣ ∞ (k y ) and thus pure fermionic modes between the edges maximally entangled modes between the edges at k y = ±1; it thus demonstrates that for χ > 1, there is no constraint (in terms of simple fractions of π) on the possible values of k y . The CM D of the example is given by It has been obtained by numerically optimizing D such that one of the eigenvalues ofΣ R N (k y ) (where N = 2 29 ) jumps from ±i to ∓i for some k y ∈ [0.999, 1.001], while restricting half of the eigenvalues of D to be between ±0.6i such as to prevent D from converging to a pure state. AsΣ R N (−k y ) = [Σ R N (k y )] * (with * indicating the complex conjugate), this automatically yields another identical discontinuity at k y = −1. Note that D can be purified to a state with f = 2 physical fermions. The spectrum of −iĤ R ∞ (k y ) is plotted in Fig. 13. Due to the discontinuities at k y ±1, it crosses the Fermi energy twice from below, thus describing a topological supercon-−π −π/2 0 π/2 π −10 −5 ductor with Chern number C = 2. At k y = ±1, one of the eigenvalues of −iĤ R ∞ (k y ) diverges, and thusΣ LR ∞ (k y ) is non-trivial, coupling one of the two virtual Majorana modes between the left and the right end of the cylinder.
C. GFPEPS with Chern number C = 0 The following example provides a family of nontopological GFPEPS with Chern number C = 0. It has one parameter µ, and its matrix D is given by with f (µ) = 1 − 3µ 2 + µ 2 2 and µ ∈ (0, 1). (A and B can be obtained by choosing an arbitrary purification.) We find that the left and right boundary, Eq. (30), decouple for all k y . The dispersion relation for the right boundary is shown in Fig. 8. Since the energy band of the boundary Hamiltonian crosses the Fermi energy once with positive and once with negative slope for all µ ∈ (0, 1), the Chern number is always zero. D. GFPEPS with flat entanglement spectrum and C = 0 The last example we consider is taken from Ref. 25; it does not display any topological features. It is given by SinceD N =D N , and thus G =Ḡ, the entanglement spectrum and edge Hamiltonian of this model are totally flat, i.e., Σ N = 0 according to Eq. (46), and the Chern number is zero.
In this Section, we will use the recursion relation (50) to explicitly derive the boundary and edge theories for GFPEPS with one Majorana mode per bond, χ = 1. We will then use this result to show that the presence of chiral edge modes is related to the occurrence of Majorana modes maximally correlated between the two edges, i.e., a fermionic mode in a pure state shared between the two edges.
We start by deriving a closed expression for the boundary and edge Hamiltonian for χ = 1. In this case, with scalar functionsr ≡r(k y ) ∈ R,t ≡t(k y ) ∈ R, and s ≡ŝ(k y ). Note that for given k y , the eigenvalues need not to come in complex conjugate pairs. However, they are still bounded by one, which implies that forrt ≥ 0, which in turn implies that for allr andt, 1 −rt ≥ |ŝ| with equality iff |ŝ| = 1 .
In order to obtain the boundary theory, we need to combine the expression for Σ N , Eq. (46), with the fact thatĜ andĜ are given byĜ = ir ∞ ⊕ it ∞ andĜ = it ∞ ⊕ir ∞ , with the exception of the singular points in k yspace where |ŝ ∞ | = 1. In particular, the two boundaries can be described independently almost everywhere, and we obtain for the edge theory of the right with the boundary Hamiltonian given byĤ R ∞ (k y ) = 2 arctan(Σ R ∞ (k y )); for the opposite edge,r andt need to be interchanged. For the points with |ŝ 1 | = |ŝ ∞ | = 1, on the other hand, the two boundaries are in a maximally entangled state of the Majorana modes with the corresponding k y .
Clearly,Σ R ∞ (k y ) [Eq. (65)] is continuous unless the denominator becomes zero. For the latter to happen, one first needs thatr 1t1 ≥ 0, and with this,∆ 2 1 − 4r 1t1 = 0 is equivalent to 1 − r 1t1 = |ŝ 1 |, which using Eq. (59) implies that |ŝ 1 | = 1, which can only be the case for k y = k 0 y = 0, π. In order to analyze howΣ R ∞ (k y ) behaves around such a point, we expand to first order in δk y = k y − k 0 y : Then,r 1 =r ′ 1 δk y + O(δk 2 y ), t 1 =t ′ 1 δk y + O(δk 2 y ), and |ŝ 1 | = 1 + O(δk 2 y ) (since |ŝ 1 | ≤ 1). One immediately finds that this is,Σ R ∞ (k y ) exhibits a discontinuity unlessr ′ 1 =t ′ 1 . In order to relater ′ 1 andt ′ 1 , we observe that the eigenvalues ofD 1 around k 0 y are i(±1 + 1 2 (r ′ 1 +t ′ 1 )δk y + O(δk 2 y )), and thusr ′ 1 +t ′ 1 = 0, which implies that Σ R ∞ (k y ) = i sign(δk y ) sign(r ′ 1 (k 0 y )) ; this is, the edge Hamiltonian exhibits a jump between ±1, and the boundary Hamiltonian derived from the entanglement spectrum diverges, as we have seen in the examples. The case of vanishing first order terms,r ′ 1 =t ′ 1 = 0, can be dealt with using the explicit form ofD 1 for χ = 1, which yields thatr 1 =t 1 = 0 vanish identically for all k y , making the fixed point trivial; ifr ′ 1 changes its sign, this corresponds to a transition point between C = +1 and C = −1. Note that according to Eq. (48),r 1 =t 1 = 0 happens if and only if K is either diagonal or off-diagonal (as the other terms are antihermitian 2 × 2 matrices). This means that the virtual CM D does not couple the left with the down Majorana mode and the right with the up Majorana mode (or the other way round).
We thus find that |ŝ 1 (k 0 y )| = 1 at k 0 y = 0 or k 0 y = π is equivalent to having a discontinuity in the edge Hamiltonian, which jumps between ±1. Since H e N is otherwise continuous, and we will see that for χ = 1, |ŝ 1 (k 0 y )| = 1 can occur for at most one k y (see Sec. VI B), it follows that |ŝ 1 (k 0 y )| = 1, i.e., the existence of a maximally entangled mode between the left and right edge of the cylinder D 1 at k y = k 0 y is equivalent to having a chiral mode at the edge.
VI. SYMMETRY AND CHIRALITY
As we have seen in the preceding section, the existence of a chiral edge mode is equivalent to the existence of a maximally entangled Majorana mode between the left and right edge of the cylinder at k 0 y = 0 or k 0 y = π. In the following, we will show that this mode can be understood as arising from a local symmetry of the state Ψ 1 which defines the GFPEPS (Eq. (1)).
Concretely, in part A we will demonstrate that a certain symmetry of Ψ 1 leads to a maximally entangled Majorana pair between the left and right edge and thus a chiral edge state. In part B we will show the oppositethat a maximally entangled Majorana pair between the left and the right implies Ψ 1 having a certain symmetry. In part C we uncover these kinds of symmetries in the examples presented in the previous sections. In part D we consider again the example given by Eq. (12) and outline how strings of symmetry operators can be used to construct all ground states of its frustration free parent Hamiltonian H ff .
We will generally restrict the discussion in this Section to the case of χ = 1 Majorana mode per bond, though some of the results (in particular in Subsection A) directly generalize to larger χ.
A. Sufficiency of local symmetry
We start by showing how a symmetry in Ψ 1 induces a symmetry on a whole column, Φ 1 , and how this subsequently gives rise to a maximally correlated mode between the two edges of a cylinder. Since Ψ 1 is a pure Gaussian state where four virtual Majorana modes are entangled with one physical fermionic mode, there must be a virtual fermionic mode which is in the vacuum, i.e., on the virtual system which annihilates Ψ 1 , as already discussed in Sec. II D. [d 1 corresponds to the eigenvector of D, Eq. (5), with eigenvalue −i, and describes a fermionic mode]. We will refer to d 1 as a symmetry, since it corresponds to a Z 2 symmetry of Ψ 1 with On the other hand, for the virtual fermionic modes ω 12 (the indices denoting the vertical positions), Eq. (37), it holds that ω 12 |(1 − ic 1,D c 2,U ) = 0 and thus By combining Eqs. (67) and eq. (68), we can now study how the symmetry (66) behaves when we concatenate two or more sites by projecting onto ω 12 | (we assume α U = 0 for now, and define θ := iα D /α U ): with d 2 = α L (c 1,L +θc 2,L )+α R (c 1,R +θc 2,R )+α U c 1,U +θα D c 2,D the symmetry of the concatenated state Ψ 2 , Fig. 1b and Fig. 14b. The argument can be easily iterated, and we find that Let us now see what happens when we close the boundary between sites N v and 1, which yields |Φ 1 ≡ ω Nv,1 |Ψ Nv , Fig. 1d: Since ω Nv,1 |(c Nv,D + ic 1,U ) = 0, we find that First, |α U | = |α D |, and second, the momentum k 0 y of d (defined via e ik 0 y = θ) must be commensurate with the lattice size. Whenever these requirements are fulfilled, we thus find that the local symmetry d 1 , Eq. (66), gives rise to a symmetry d ∝ α LĉL,ky + α RĉR,ky , Eq. (69), on the whole column (i.e., onD 1 ), at momentum e ik 0 y = iα D /α U . Note that we only need to assume that either α U or α D is non-zero; if both are zero, the condition (67) implies that the horizontal virtual modes entirely decouple from the physical system, and the GFPEPS describes a product of one-dimensional vertical chains.
We have thus found that a certain local symmetry induces a symmetry on a column Φ 1 , which forces the Majorana modes with a specific momentum on both ends of the column to be correlated. This is equivalent to demanding that for this k y = k 0 y ,D 1 (k 0 y ) has an eigenvalue −i. For k 0 y = 0, π, this implies that |ŝ 1 (k 0 y )| = 1, as the diagonal elements ofD 1 (k 0 y ) are zero due tô D 1 (−k y ) =D * 1 (k y ). The symmetry of a single column is passed on when concatenating columns, this is, when going from Φ 1 to Φ N , Fig. 1d-f, in analogy to the arguments given before. In order for this to lead to a coupling between the two edge modes in the limit of an infinite cylinder, as observed in the examples with chiral edge modes, it is additionally required that |α L | = |α R |. Otherwise the symmetry becomes localized at a single boundary. This can be understood by exchanging horizontal and vertical directions, leading to e ik 0 x = iα R /α L (and k 0 x = 0, π, too). As we have seen in the last Section, a coupling between the left and the right edge for χ = 1 can only emerge, if k 0 y = 0, π (and analogously k 0 x = 0, π). Thus, we have to require α D /α U = ±i, α R /α L = ±i for a symmetry leading to a chiral edge state. Since there can only be one such symmetry for χ = 1 (otherwise the virtual and physical system decouple), we conclude that there can be a maximally entangled Majorana mode only for k 0 y = 0 or k 0 y = π, but not for both of them (and similarly for k x ). We thus find that d 1 must be of the form (α L , α U = 0) in order to be stable under concatenation. Let us finally show that in order to have a non-trivial Chern number, there is an additional constraint on α L and α U , namely that This can be directly verified by explicitly constructing D (givend 1 , the only remaining freedom is the eigenvalue of the non-pure mode), where one finds that the diagonal (off-diagonal) elements of K [cf. Eq. (47)] vanish exactly if arg( αL αU ) = 0, π [arg( αL αU ) = ± π 2 ]. As we have seen in the last section, this in turn is equivalent to a trivial (completely flat) edge spectrum, and thus to a trivial Chern number.
In summary, we find that we have a non-trivial Chern number whenever we have exactly one symmetry d 1 which satisfies Eqs. (71) and (72).
B. Necessity of an on-site symmetry
Let us now show the converse statement of the previous subsection: We will show that for χ = 1, a maximally entangled Majorana pair between the left and right boundary of a cylinder at k 0 y = 0, π, which is equivalent to the presence of a chiral edge mode, implies the existence of a local symmetry of the form Eq. (71).
Following the results in Sec. V, the presence of a maximally entangled Majorana pair on the boundary of a cylinder of arbitrary length is equivalent to the presence of the symmetry on a single column (i.e., a cylinder of length N = 1), that is, According to Eq. (48), we also havê where the upper sign is for k 0 y = 0 and the lower for k 0 y = π (and is unrelated to the sign in Eq. (73)). We choose in both cases the upper sign; the other cases can be treated analogously. Then, Eq. (74) tells us that ω v |Ψ 1 (with ω v | corresponding to the projection on 1 2 (1 + ic D c U )) is in a maximally entangled state of the two horizontal Majorana modes. This maximally entangled state fulfills with ω h | corresponding to the projection on 1 2 (1+ic R c L ). We now parameterize the reduced density matrix of the virtual system ρ vir in the basis {|Ω vir , |ω h , |ω v , |ω h , ω v } (|Ω vir denoting the projection on the vacuum of the virtual particles and their subsequent discard). According to Eq. (75) its matrix representation is From it, we can calculate the elements of D via D p,q = i 2 tr (ρ vir [c p , c q ]) with p, q = L, R, U, D, cf. Eq. (4), and obtain (77) The fact that ρ vir describes a Gaussian state is used by inserting this into Eq. (74), which gives Given this restriction, one can check that D in Eq. (77) has an eigenvalue −i with the corresponding symmetry After considering all possible sign cases in Eqs. (73), (74), one arrives at Eq. (71). We thus find that for a GF-PEPS with χ = 1, a (unique) symmetry of this form with arg( αL αU ) / ∈ {0, π, ± π 2 } and α L , α U = 0 is both necessary and sufficient to have a divergence in the boundary spectrum, and thus a Chern number C = ±1. The states simultaneously fulfilling Eq. (71) and arg( αL αU ) ∈ {0, π, ± π 2 }, on the other hand, are the transition points between GFPEPS with Chern number C = −1 and C = +1.
C. Symmetries in the considered examples
We will now study the symmetries in the examples given in Sec. IV and relate them to chiral edge modes in the light of the results of the previous subsections.
for any η ∈ (0, 1). Thus, the state Ψ 1 possesses two symmetries, d of operators which reveals the symmetries of the model, we start from the state Φ 1 on one column, which has zero modes at momenta k y = ±1. We first focus on the symmetry at k y = k 0 y = 1, where we find that horizontal modes of Φ 1 at momentum k 0 y are annihilated by an operator 48 for the values of the α's). This suggests to try to construct a d (+) 1 which contains the above operator: it turns out that x 0 d indeed contains an operator of this form, which at the same time acts on the vertical modes as κ α . We proceed identically for k y = −k 0 y = −1, and obtain a pair of (non-orthogonal) symmetries We thus find that also for this model, the existence of divergences in the entanglement spectrum and thus of chiral edge modes is closely related to local symmetries in Ψ 1 with the corresponding momenta. Note that since all coefficients α are different, the only way to grow this symmetries following the procedure of Sec. VI A is to concatenate either exclusively d 1 , which therefore gives rise to maximally entangled Majorana pairs between the two boundaries with definite momenta ±k 0 y and ±k 0 x , respectively.
Generic GFPEPS with Chern number C = 0
Let us now consider the non-chiral family of states discussed in Sec. IV C. As it has only one physical mode, there must be a symmetry d 1 such that d 1 |Ψ 1 = 0. It can be calculated to be As it is not of the form Eq. (71) required for chiral edge states, the Chern number of the family is zero.
GFPEPS with flat entanglement spectrum and C = 0
Let us finally consider the example of Sec. IV D, which has a flat entanglement spectrum. It has a symmetry Since it is not at momentum 0 or π, there cannot be entangled Majorana modes between the left and the right edge of a long cylinder. However, as the amplitudes are equal, the symmetry is stable under concatenation, and must therefore still be present in an infinite cylinder. The explanation is that in the limit N → ∞, a second symmetry at k 0 y = π 2 arises, such that on each edge the two modes at k y = ± π 2 can pair up locally.
D. Symmetry and ground space
The GFPEPS models discussed in this paper appear as ground states of two types of Hamiltonians: On the one hand, there is is the flat band Hamiltonian H fb , Eq. (8), which by construction has the GFPEPS Φ as its unique ground state. On the other hand, we can construct the local parent Hamiltonian H ff , Eq. (11), which is gapless for the chiral examples considered, i.e., for any finite system size, it is exactly doubly degenerate with energy splittings to higher energies that are the inverse of a polynomial in the system size. In the following, we will show how this ground space can be parametrized by using the virtual symmetry d 1 of the local state Ψ 1 . This is in close analogy to the case of conventional PEPS with topological order, where the ground space can be parametrized by putting loops of symmetry operators on the virtual bonds in horizontal and vertical direction around the torus on which the GFPEPS is defined.
In the following, we will consider the example of Sec. II B and show how to parametrize its doubly degenerate ground space in terms of strings of symmetry operators. For simplicity, we will set λ = 1/2. Let us start by recalling Eq. (16), which defines operators u, w, and d 1 such that , with a the physical mode, and b = 1 Eqs. (14) and (17).
Let us now consider a lattice of size N h × N v , and concatenate all the Ψ 1 in this region by projecting onto ω jn | and ω ′ jn | on all the horizontal and vertical links, respectively, but without closing either of the boundaries, resulting in a state Ψ N h ×Nv . Following the arguments given in Sec. VI A, projecting onto the maximally entangled states concatenates the symmetry operators u, w and d 1 , which gives rise to three symmetries for the In summary, we find that it is possible to parametrize the two-dimensional ground state subspace of the model using the string operators given by the virtual symmetry of Ψ 1 : One of the ground states is obtained by inserting a single string (either horizontally or vertically), while the other ground state is obtained by inserting both a horizontal and a vertical string.
VII. CONCLUSIONS AND OUTLOOK
In this paper, we have established a framework for boundary and edge theories for Gaussian fermionic Projected Entangled Pair States (GFPEPS), and applied it to the study of chiral fermionic PEPS, and in particular their underlying symmetry structure.
We have introduced two different kinds of Hamiltonians, the boundary Hamiltonian H b N and the edge Hamiltonian H e N . The former reproduces the entanglement spectrum of the reduced density matrix of a region as a thermal state exp(−H b N ), while the latter contains the low energy physics of the truncated flat band Hamiltonian H fb . We have shown that in the context of GF-PEPS, both of these Hamiltonians act on the auxiliary degrees of freedom at the boundary, which naturally imposes a one-dimensional structure, and that they are related in a simple way. As the physical edge modes corresponding to H e N are localized at the same edge of a cylinder, the number of chiral edge modes and thus the Chern number of a GFPEPS can be read off the virtual boundary and edge Hamiltonian. We have also provided constructive methods for analytically and numerically determining H b N and H e N for general GFPEPS, and in particular on infinite cylinders and tori.
We have subsequently provided a full analysis of the edge and boundary Hamiltonian for the case of GFPEPS with one Majorana mode per bond, χ = 1. We have put particular emphasis on the case of GFPEPS with chiral edge modes, where we have shown that the presence of chiral edge modes is equivalent to a maximally entangled state between the virtual Majorana modes at the two boundaries of a cylinder, which leads to a divergence in the entanglement spectrum at the corresponding momentum. Subsequently, we have related this global virtual symmetry in the GFPEPS to a local virtual symmetry in the PEPS tensor Ψ 1 . Identifying such symmetries has proven extremely powerful in the case of non-chiral topological models, where it has allowed for a comprehensive understanding of ground state degeneracy, topological entropy, excitations, and more from a simple local symmetry and the strings formed by it. We have shown that the virtual symmetry of chiral GFPEPS is similarly powerful, as it explains the origin of chiral edge modes, the topological correction to the Rényi entropy, and it allows to parametrize the ground state space of the gapless parent Hamiltonian using strings formed by the symmetry. It is an interesting question to understand further implications of the symmetry, such as the excitations obtained from open strings, or the role played by symmetries for fermionic PEPS with higher bond dimension χ. Our numerical results indeed suggest that the same type of symmetries underlies chiral edge modes for χ > 1.
Understanding the local symmetries underlying chiral topological order is of particular interest when going to interacting models, since these local symmetries will still give rise to maximally entangled Majorana modes between distant edges even for interacting models; keeping the symmetry structure of the local PEPS tensor untouched thus seems to be a crucial ingredient when adding interactions. This can in particular be achieved by taking several copies of a chiral GFPEPS and coupling the copies on the physical level without changing the auxiliary modes, for instance by a Gutzwiller projection (cf. Ref. 24), similar to the way in which fractional Chern insulators are constructed; we are currently pursuing research in this direction.
An important fact which we will need in the proof is the following relation between the decay of Fourier coefficients and the smoothness of the corresponding Fourier series, stated for the relevant case of two dimensions: Given that the Fourier coefficients decay faster than |r|^(−(2+d)) (i.e., they are upper bounded by a constant times |r|^(−(2+d+δ)) for some δ > 0), it follows that the Fourier series is d times continuously differentiable (continuous if d = 0); see, e.g., Proposition 3.2.12 in Ref. 34. Let us start by considering the behavior of d̂(k) around the non-analytical point k = (0, 0). For simplicity, we again restrict ourselves to λ = 1/2, but the arguments for other λ are the same. We expand the numerators and denominators in Eqs. (18) and (19) to second order and those in Eq. (20) to fourth order around k = (0, 0); the resulting expansions show that the d̂_{x,y}(k) are continuous, but not continuously differentiable at k = (0, 0), whereas d̂_z(k) is both (and only its second derivative is discontinuous). This implies that the d̂_{x,y} cannot asymptotically decay faster than 1/|r|³ in real space, since otherwise their Fourier transform would be continuously differentiable. This demonstrates the claimed lower bound on the decay of the correlations.
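As a brief sketch of why this decay–smoothness criterion holds (a standard Fourier-analysis argument, not specific to this model): with the assumed bound on the coefficients, the series of d-th derivatives converges absolutely, since in two dimensions the number of lattice vectors with |r| ≈ R grows only linearly in R,

\[
\sum_{r \neq 0} |r|^{d}\,|d_r| \;\lesssim\; \sum_{R=1}^{\infty} R \cdot R^{d} \cdot R^{-(2+d+\delta)} \;=\; \sum_{R=1}^{\infty} R^{-1-\delta} \;<\; \infty .
\]

Uniform convergence of the term-by-term differentiated series then implies that the Fourier series is d times continuously differentiable.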
The upper bound is obtained by formally carrying out the Fourier transform and bounding the terms obtained after partial integration. To simplify notation, we suppress the index x or y in d̂_{x,y}(k) (the result applies to both of them and also to the overall hopping amplitude of the Hamiltonian). Let us assume that the site coordinates r = (x, y) fulfill |x| ≥ |y| (with x ≠ 0); in the opposite case the line of reasoning is the same. We integrate the Fourier transform of d̂(k) twice by parts with respect to k_x, where BZ denotes the first Brillouin zone, that is, (−π, π] × (−π, π]. Let us first show that the resulting double integral (A4) is well defined, although its integrand might diverge at k = (0, 0): For that, we will demonstrate the bounds (A5), with constants c, c′ > 0. In order to show the first bound, we realize that the relevant derivatives of d̂(k), multiplied by (k_x² + k_y²), cannot diverge anywhere but at k = (0, 0). We expand them for d̂(k) = d̂_x(k) around this point by setting k = (|k| cos(φ), |k| sin(φ)) and obtain Eq. (A7). Therefore, the limit |k| → 0 exists for all φ and is uniformly bounded, and as a result, the expressions on the left-hand sides of Eqs. (A6) and (A7) are bounded for any k ∈ BZ. The same holds for d̂(k) = d̂_y(k).
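For orientation, the double partial integration invoked above takes the following schematic form (a sketch assuming periodicity of d̂ over the Brillouin zone, so that the boundary terms cancel; the overall sign depends on the Fourier convention):

\[
d_r \;=\; \frac{1}{(2\pi)^2} \int_{\mathrm{BZ}} \hat d(\mathbf k)\, e^{i \mathbf k \cdot \mathbf r}\, d^2k
\;=\; -\frac{1}{x^2}\, \frac{1}{(2\pi)^2} \int_{\mathrm{BZ}} \frac{\partial^2 \hat d(\mathbf k)}{\partial k_x^2}\, e^{i \mathbf k \cdot \mathbf r}\, d^2k ,
\]

each integration by parts in k_x trading one factor of 1/(ix) for one derivative of d̂; this is what produces the 1/x² prefactor used in the bounds below.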
Since the left-hand sides of Eqs. (A6) and (A7) do not diverge for any k and are defined on a finite region (the first Brillouin zone), the bounds (A5) are correct. The first bound implies that the double integral (A4) is defined (and finite). It will be convenient to split the integral (A4) into two parts, one ranging over the full circle C_ǫ of radius ǫ centered at k = (0, 0) and the other over the rest. The first part is bounded in absolute value by ∫_{C_ǫ} (c/|k|) d²k = 2πcǫ. Thus, employing another partial integration on the remaining part and using the bounds on the second and third derivatives of d̂(k), we obtain

|d_r| < (2πcǫ)/x² + (1/|x|³) [2πc + 2πc′ (ln(√2 π) − ln(ǫ))].
After optimizing over ǫ (taking ǫ ∝ 1/|x|) and realizing that |x| ≥ |r|/√2, this leads to

|d_r| < (a + b ln(|r|)) / |r|³   (A12)

with a, b > 0. The decay of d̂_z in real space is faster, since its derivatives start diverging only at a higher order. Hence, the hoppings decay at least as fast as ln(|r|)/|r|³ and, therefore, for large |r| essentially as the inverse distance cubed.

Appendix B: Momentum polarization and topological entropy

In this Appendix, we derive analytical expressions for two quantities which probe topological order based on the entanglement spectrum, namely the momentum polarization and the topological entropy, for the case of non-interacting fermions, i.e., Gaussian states. First, we will prove that the universal contribution to the momentum polarization 27 is exactly determined by the number of divergences in the entanglement spectrum (Ĥ^b_N(k_y) in the case of GFPEPS); and second, we will prove that there is no additive topological correction to the von Neumann entropy S_vN of the entanglement spectrum. Let us stress that both of these arguments rely only on a few properties of the entanglement spectrum and the corresponding boundary Hamiltonian, and are thus not restricted to the case of GFPEPS.
Both these proofs are based on the Euler–Maclaurin formulas, which for our purposes say the following: Given a function f : [0, 2π] → C which is 3 times continuously differentiable, the sum of f over N_v equally spaced points can be replaced by (N_v/2π) times its integral plus a correction of order 1/N_v involving only f′(2π) − f′(0), up to an error of higher order [Eqs. (B1) and (B2), corresponding to the samplings k = 2πn/N_v and k = 2π(n + 1/2)/N_v, respectively].

Let us now first discuss how to compute the momentum polarization; for clarity, we will focus on two copies of the superconductor defined in Sec. II B, but the arguments can be readily adapted. For a state |ϕ⟩ on a long cylinder which is partitioned into two cylinders A and B, the momentum polarization 27 is µ(N_v) = ⟨ϕ|T_A|ϕ⟩, where T_A translates part A of the system around the cylinder axis, and N_v is the circumference of the cylinder; it is expected to scale as exp[−αN_v + (2πi/N_v)(h_a − c/24)], where c is the chiral central charge, h_a the topological spin, and α ∈ C is non-universal. It is immediate to see that this definition is equivalent to evaluating µ(N_v) = Σ_ℓ λ_ℓ e^{ik_ℓ}, where λ_ℓ is the entanglement spectrum of A, i.e., |ϕ⟩ = Σ_ℓ |λ_ℓ| |ϕ^A_ℓ⟩|ϕ^B_ℓ⟩, and k_ℓ is the momentum of |ϕ^A_ℓ⟩. In PEPS, the entanglement spectrum corresponds to a state on the boundary degrees of freedom, and therefore this expression can be evaluated directly at the boundary. Concretely, in the case of two states with one fermion per bond (i.e., χ = 2), such as two copies of the superconductor of Sec. II B, the entanglement spectrum corresponds to the thermal state of the non-interacting Hamiltonian H^b_N, so that the momentum polarization is given by

log(µ(N_v)) = Σ_k log[(e^{−ω_k} + e^{ik + ω_k}) / (e^{−ω_k} + e^{ω_k})] =: Σ_k f(k),   (B3)

where ω_k is the energy of the boundary mode with momentum k ≡ k_y, as shown in Fig. 7. To evaluate the sum (B3), we use the Euler–Maclaurin formulas, where f(k) is defined via the summand in (B3) on the open interval (0, 2π) and continuously extended to [0, 2π]. In order to ensure continuity of f, we follow the different branches of the logarithm (i.e., we add 2πi as appropriate). Moreover, for examples with a gapless mode at k = π (such as the examples of Sec. II B), f(k) diverges there, which can be fixed by replacing e^{ik} by e^{2ik} above (and subsequently correcting for the factor of 2 obtained in the scaling).
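For reference, the midpoint variant of the Euler–Maclaurin formula relevant for the second sampling reads, for f ∈ C³([0, 2π]) (a standard result, quoted here as a sketch; signs depend on conventions):

\[
\sum_{n=0}^{N_v-1} f\!\left(\frac{2\pi\,(n+\tfrac12)}{N_v}\right)
= \frac{N_v}{2\pi}\int_0^{2\pi} f(k)\,dk \;-\; \frac{2\pi}{24\,N_v}\,\bigl(f'(2\pi)-f'(0)\bigr) \;+\; \text{corrections that vanish faster than } 1/N_v .
\]

Applied to the summand of (B3), this produces a term linear in N_v (the non-universal α below) and a universal 1/N_v correction proportional to f′(2π) − f′(0), which is exactly the structure exploited in the following.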
For the examples considered, the functions f obtained this way are indeed 3 times continuously differentiable. Which of the two Euler–Maclaurin equations we use depends on whether the sum in (B3) runs over k = 2πn/N_v or k = 2π(n + 1/2)/N_v (n = 0, …, N_v − 1), which is connected to the choice of boundary conditions. We will focus on the case k = 2π(n + 1/2)/N_v, but let us note that the difference in the relevant subleading terms is merely a factor of −2 in the 1/N_v term (which in the examples relates to a non-zero topological spin h_a) and a trivial additive term proportional to f(2π) − f(0), which relates to the treatment of the branches of the logarithm.
With this choice of k, using (B2) we find that

log(µ(N_v)) = αN_v − (2πi/N_v) τ + o(1/N_v),

where α = (1/2π) ∫₀^{2π} f(x) dx is non-universal, and τ = (1/24i)(f′(2π) − f′(0)). It is now easy to check that for k₀ = 0, 2π,

f′(k) = i e^{2ω_k}/(1 + e^{2ω_k}) + O(k − k₀)

for k near k₀, and thus a divergence in the entanglement spectrum at k₀ = 0, such as for the example of Sec. II B, implies that f′(2π) − f′(0) = ±i. We thus find that τ is universal, with its value only depending on the presence of a divergence in the entanglement spectrum, but not on the exact form of ω_k. In particular, with τ = c/24, we find a chiral central charge of c = 1 for two copies of the superconductor, which amounts to c = 1/2 for a single copy of the topological superconductor. Note that the Euler–Maclaurin formulas can be easily adapted to deal with more discontinuities and with different values of k, by expanding f(k) in terms of Bernoulli polynomials; thus, the outlined approach allows for the analytical calculation of the momentum polarization for general free fermionic systems with several boundary modes and arbitrary fluxes through the torus.

Let us conclude by discussing the scaling of the topological entropy, which is given by

S_vN(N_v) = Σ_k g(k),  g(k) = −p_k log p_k − (1 − p_k) log(1 − p_k),  p_k = e^{−ω_k}/(e^{−ω_k} + e^{ω_k})

(in particular, g(k) → 0 for k → 0, 2π). For the cases discussed in the paper, g′(k) is continuous and periodic, but its second derivative diverges; thus, the error term in the Euler–Maclaurin formula can only be guaranteed to be of order o(1/N_v). Yet, this is sufficient, as we are only interested in constant corrections to the entanglement entropy, and one immediately finds that both for periodic and antiperiodic boundary conditions, S_vN(N_v) = aN_v + o(1/N_v), with a non-universal a = (1/2π) ∫₀^{2π} g(k) dk, and no constant topological correction.
Appendix C: Polynomial decay of the boundary Hamiltonian hoppings
In this part of the Appendix, we prove that the hopping amplitudes |[H^R_∞]_{1,1+y}| of the boundary Hamiltonian of the example of Sec. II B, shown in Fig. 10, decay as ln(y)/y. We start by calculating the single-particle entanglement spectrum on the right boundary: For that, we employ Eq. (48) to calculate D̂_1(k_y) for the topological superconductor defined by Eq. (12), and from it Σ̂^R_∞(k_y) via Eq. (65) as a function of λ. The result is

Σ̂^R_∞(k_y) = 2iλ² sin(k_y) g(k_y) / (|1 − λ − e^{ik_y}|⁴ + 4λ⁴ sin²(k_y)),   (C1)

with g(k_y) some second-order polynomial in cos(k_y). For λ ≠ 0 this function is analytic as long as k_y is not an integer multiple of π. One can check that g(π) = 0 for any λ ∈ (0, 1), so Σ̂^R_∞(π) = 0 and the only possible non-analytical point is k_y = 0. As shown in Sec. V, these are the only k_y-points where |Σ̂^R_∞(k_y)| = 1 is possible and where hence the spectrum of the boundary Hamiltonian can diverge. One can check from the explicit function g(k_y) that g(δk_y) = g(−δk_y) = g₀ δk_y² (1 + O(δk_y²)) for λ ∈ (0, 1) (where g₀ depends on λ). Therefore, Σ̂^R_∞(δk_y) can be expanded around δk_y = 0, and, owing to Eq. (28) for N → ∞, the single-particle spectrum follows. Hence, we can expand

−iĤ^R_∞(δk_y) = [ln(16λ⁴/((2 − λ)⁴ g₀²)) − 2 ln(δk_y) − ln(1 + O(δk_y²))] sgn(δk_y),

and we see that the non-analyticity is only due to the term 2 ln(δk_y), the other ones being analytic around k_y = 0. The Fourier coefficients of an analytic function defined on (−π, π] decay exponentially. Thus, the algebraic decay of |[H^R_∞]_{1,1+y}| is due to the diverging term we singled out, with the prefactor of the 1/y contribution depending on whether y is even or odd but constant otherwise. | 2014-08-25T20:02:21.000Z | 2014-05-02T00:00:00.000 | {
"year": 2014,
"sha1": "de17bfd16d4b302d9c987b7f8ce41d74f09f0e02",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1405.0447",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "de17bfd16d4b302d9c987b7f8ce41d74f09f0e02",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
167210275 | pes2o/s2orc | v3-fos-license | Mechanism of Action and Clinical Attributes of Auryxia® (Ferric Citrate)
Chronic kidney disease (CKD) is a major cause of morbidity and premature mortality and represents a significant global public health issue. Underlying this burden are the many complications of CKD, including mineral and bone disorders, anemia, and accelerated cardiovascular disease. Hyperphosphatemia and elevated levels of fibroblast growth factor 23 (FGF23) have been identified as key independent risk factors for the adverse cardiovascular outcomes that frequently occur in patients with CKD. Auryxia® (ferric citrate; Keryx Biopharmaceuticals, Inc., Boston, MA, USA) is an iron-based compound with distinctive chemical characteristics and a mechanism of action that render it dually effective as a therapy in patients with CKD; it has been approved as a phosphate binder for the control of serum phosphate levels in adult CKD patients treated with dialysis and as an iron replacement product for the treatment of iron deficiency anemia in adult CKD patients not treated with dialysis. This review focuses on Auryxia, its mechanism of action, and the clinical attributes that differentiate it from other, non-pharmaceutical-grade, commercially available forms of ferric citrate and from other commonly used phosphate binder and iron supplement therapies for patients with CKD. Consistent with the chemistry and mechanism of action of Auryxia, multiple clinical studies have demonstrated its efficacy in both lowering serum phosphate levels and improving iron parameters in patients with CKD. Levels of FGF23 decrease significantly with Auryxia treatment, but the effects associated with the cardiovascular system remain to be evaluated in longer-term studies.
Introduction
Chronic kidney disease (CKD) represents a major global public health concern [1]. The United States Renal Data System estimated that in 2017, 30 million American adults had CKD [2]. CKD is associated with serious comorbidities, most importantly cardiovascular disease [1,2]. Furthermore, the financial burden of CKD is high in the USA, with total Medicare expenditures exceeding $64 billion in 2015 [2]. Magnifying this burden is the undertreatment of CKD [3], especially in the earlier stages of the disease, highlighting the need for better awareness of all therapeutic options with regard to their efficacy, safety, tolerability, and cost effectiveness.
Perturbations of Bone and Mineral Metabolism and Iron Parameters in Chronic Kidney Disease
Alterations of bone and mineral metabolism and anemia are common, occur early in the course of CKD, and, when left untreated, carry an increased risk for adverse outcomes [4,5]. Approximately 36% of hemodialysis patients in the USA have hyperphosphatemia (when defined as a phosphate concentration > 5.5 mg/dL) [6], although older surveys cite higher percentages [7][8][9]. Hyperphosphatemia is an independent risk factor for cardiovascular events and has been shown to be associated with a higher mortality rate [10,11]; however, it is important to note that the evidence supporting the association between phosphorus levels and mortality is derived from observational studies, which are potentially subject to confounding.

A National Health and Nutrition Examination Survey estimated that 15.4% of patients with CKD in the USA are anemic, defined as having blood hemoglobin levels ≤ 12 g/dL in women and ≤ 13 g/dL in men, representing 4.8 million affected individuals [4]. As with hyperphosphatemia, patients with CKD who also have anemia are at increased risk of cardiovascular disease and death, among other comorbidities [4]. In theory, successful treatment of hyperphosphatemia and anemia in patients with CKD could help avoid adverse clinical outcomes; however, with the exception of renal replacement, no interventions have yet been proven to improve outcomes.

The pathophysiologic networks that regulate bone and mineral metabolism and iron distribution are complex. An increasing number of studies support the idea that fibroblast growth factor 23 (FGF23), a key phosphate-regulating hormone, is associated with adverse outcomes in CKD [12]. Along with parathyroid hormone (PTH), FGF23 is an important factor involved in the disordered bone and mineral metabolism that contributes to CKD morbidity and mortality [5,12]. Normally, FGF23 regulates phosphate and vitamin D metabolism through a negative endocrine feedback loop it shares with PTH; however, production of 1,25-dihydroxyvitamin D 3 is inhibited by FGF23 but stimulated by PTH, whereas both FGF23 and PTH share the phosphaturic effect [12][13][14]. In patients with early CKD, as renal function begins to decrease, compensatory increases in FGF23 and PTH may help maintain normal phosphate balance, mainly by stimulating greater urinary phosphate excretion [13,15]. These increases in FGF23 and PTH in early CKD precede frank hyperphosphatemia, which is observed only with advanced renal disease after the compensatory mechanisms have been overwhelmed [13,15]. In a cross-sectional study of patients representing a spectrum of CKD severity, increased FGF23 levels were significantly associated with deteriorating renal function [15]. For reasons that are not yet clear, FGF23 also may be related to the development of iron deficiency anemia; in a recent cohort study, elevated concentrations of FGF23 were associated with a higher incidence of anemia, particularly in patients with iron deficiency [16]. Importantly, increased serum FGF23 concentrations are associated with increased risk of cardiovascular events and death in CKD [12]. Several studies have highlighted the relationships of inflammation and iron deficiency anemia, both commonly present in CKD, with FGF23 levels that rise early in the course of the disease [17][18][19]. Thus, interventions that decrease levels of FGF23 may favorably affect several important outcomes in patients with CKD, although this has yet to be established in long-term clinical studies.
Treatment of Hyperphosphatemia and Iron Deficiency Anemia in Chronic Kidney Disease
With the high prevalence of hyperphosphatemia among hemodialysis patients in the USA, restoration of phosphate balance using phosphate binders has long been a therapeutic goal in CKD management [7][8][9]. In the 1970s to 1990s, aluminum-based and then calcium-based phosphate binders were often used for the control of hyperphosphatemia in patients with end-stage renal disease [7,20]. However, associations with aluminum toxicity for aluminum-based binders and hypercalcemia and metastatic calcification for calcium-based binders motivated the development of calcium-free, aluminum-free phosphate binders [7,[20][21][22]. In fact, the Kidney Disease: Improving Global Outcomes (KDIGO) 2017 guidelines for CKD-mineral bone disorder recommend restricting the dose of calcium-based phosphate binders in adult patients with stage 3-5 CKD receiving phosphate-lowering treatment [23]. However, the Dialysis Clinical Outcomes Revisited trial, which compared mortality among hemodialysis patients treated with calcium-based phosphate binders and sevelamer hydrochloride, failed to demonstrate the superiority of the non-calcium-containing sevelamer hydrochloride with respect to death and hospitalization rates [24]. Ultimately, improvements in clinical outcomes in patients with hyperphosphatemia may not be achieved by improving phosphate balance alone.

Iron deficiency anemia in CKD has long been treated with iron supplementation using either oral or intravenous (IV) therapy [25]. The decision between oral and IV iron should involve an assessment of benefit and risk. Most oral iron formulations may be less expensive but are often associated with adverse gastrointestinal (GI) effects, whereas IV iron may be more effective than oral iron but carries the risks of adverse reactions during IV administration and potential for long-term iron overload [25][26][27][28]. IV administration may be the preferred route of administration of iron for hemodialysis patients, partly because it is easily administered during hemodialysis; however, this is not necessarily true for patients who are not on dialysis (or are on peritoneal dialysis) [29].
Auryxia: an Iron-Based Treatment for Hyperphosphatemia and Iron Deficiency Anemia in Chronic Kidney Disease
Iron-based compounds, such as ferric citrate (Auryxia ® ; Keryx Biopharmaceuticals, Inc., Boston, MA, USA), represent a new category of phosphate binders [30]. Auryxia has the advantage of dual functionality (i.e. controlling serum phosphate levels and treating iron deficiency anemia) [31].
In the USA, Auryxia is indicated as a phosphate binder for the control of serum phosphate levels in adult patients with CKD treated with dialysis and as an iron replacement product for the treatment of iron deficiency anemia in adult patients with CKD not on dialysis [31]. Similar ferric citrate products are approved for use in other countries under different brand names. In the European Union, ferric citrate coordination complex (Fexeric ® ; Keryx Biopharma UK Ltd, London, UK) is indicated for the control of hyperphosphatemia in adult patients with CKD [32]. In Japan, ferric citrate hydrate (Riona ® ; Torii Pharmaceutical Co., Ltd., Tokyo, Japan) is indicated to treat hyperphosphatemia in patients with CKD [33,34]. In Taiwan, ferric citrate (Nephoxil ® ; Panion & BF Biotech Inc., Taipei, Taiwan) is indicated for controlling hyperphosphatemia in adult patients with CKD undergoing hemodialysis [35]. This review focuses on Auryxia, its mechanism of action, and the clinical attributes that differentiate it from other, non-pharmaceutical-grade, commercially available forms of ferric citrate and from other commonly used phosphate binder and iron supplement therapies for patients with CKD.
Chemical Composition of the Active Pharmaceutical Ingredient (API) in Auryxia
The API of Auryxia is not a single compound but rather a solid mixture of ferric citrate coordination complexes (FCCCs) with the following chemical formula: iron (+ 3), x (anion of 1,2,3-propanetricarboxylic acid, 2-hydroxy-), y (H 2 O), where x ranges from 0.70 to 0.87 and y ranges from 1.9 to 3.3 [31,36]. As opposed to iron salts, which readily dissociate into their component ions in water, the bonds that coordinate the central metal atom with the surrounding ligands allow the complex to retain its identity as a unit with properties different from those of its components. Furthermore, Auryxia has different properties from those of commercial-grade ferric citrate [36], which was the material investigated in early studies. Most important among these differences among ferric citrate preparations is that the API of Auryxia has a defined molar ratio of ferric iron (Fe 3+ ) to citrate anions [37], whereas ferric citrate from commercial sources may have variable molar ratios of ferric iron, citric acid, and associated hydrates [36]. It is important to first establish what is known about ferric citrate in general before understanding more about the specific physicochemical profile of the pharmaceutical-grade ferric citrate that constitutes the API of Auryxia.
Structural Chemistry of Ferric Citrate
As already noted, "ferric citrate" is a common name for a large group of metallo-complexes comprising ferric ions (Fe 3+ ) and citrate ligands with various degrees of protonation. These complexes are characterized by different iron nuclearities, ratios of iron to citrate, and ligand coordination modes. Naturally occurring ferric citrate complexes play an important role in iron solubilization, mobilization, and utilization in all forms of life [38]. Citric acid is an α-hydroxy tricarboxylic acid ( Fig. 1), capable of binding ferric ions and forming a series of stable species in aqueous solution over a wide pH range [39]. This action prevents the hydrolysis of ferric ions that leads to the formation of insoluble ferric oxides and ferric hydroxides under physiological pH. It is known that iron levels in physiological (i.e. biological, living) systems are regulated by citric acid chelation of ferric ions or through redox reactions of ferric citrate [40,41]. In medicine, it has long been known that citric acid enhances the bioavailability of iron [42], and several iron citrate preparations are commercially available for use as iron supplements in foods. Because ferric citrate can form a series of interrelated complexes, it is soluble over a broad range of pH in the stomach and intestine (where phosphate binding occurs) and in the duodenum (the primary site of oral iron absorption) [43].
Although ferric citrate complexes have been investigated and used clinically as iron supplements [44], no structural data were available until 1994. High solubility in aqueous solution and the many diverse species of ferric citrate complexes had been the main obstacles to obtaining single crystals for structural elucidation by X-ray crystallography. By introducing a variety of cations into solutions prepared by mixing ferric salts and sodium citrate or citric acid in water, several crystalline compounds were obtained and subjected to structural studies. To date, the crystal structures of five ferric citrate complexes are known in the literature [49,50]. Earlier work had shown the remarkable kinetic stability of complex 2, as demonstrated by the recrystallization of its pyridinium salt after the dissolution of the crude material in warm pure water; crystals usually grew slowly at room temperature over a period of approximately 3 to 6 days [46]. Such kinetic stability suggests that the complex may be stable in aqueous solution over long time frames. Later mass spectrometry experiments in aqueous solution likewise found a spectrum of complexes, among which again a dinuclear species was prominent [37].
Ferric Citrate as API of Auryxia
The API of Auryxia has a defined molar ratio of ferric iron to citrate anions, predominantly in the molar ratio of 2:2, whereas ferric citrate from commercial sources may have variable molar ratios of ferric iron and citric acid [36]. The major component of the API in Auryxia corresponds to the dinuclear complex 2 (Fig. 2) [36].
Unlike simple iron salts, the FCCCs in Auryxia help maintain ferric iron in solution at the varying conditions in different portions of the GI tract (i.e. at various pH levels) [36]. The relatively high solubility of the pharmaceutical-grade ferric citrate in Auryxia contrasts with some commercial iron salts, particularly at high pH; for example, ferrous (Fe 2+ ) sulfate, an iron supplement commonly used to treat iron deficiency anemia, is practically insoluble at high pH [36,43,44]. An additional distinguishing characteristic of the FCCCs in Auryxia is their large surface area, which is at least 16 times that of commercial-grade ferric citrate [51]. High surface area generally favors rapid disintegration and therefore dissolution [55]. In the case of Auryxia, the rate of dissolution of its API at pH 8 is 3.08 times the rate of commercial-grade ferric citrate; the solubility of the Auryxia FCCCs is vital to the absorption of their iron [36,51].
Absorption of Iron from Auryxia
Once the ferric citrate in Auryxia is ingested, the exact chemistry and structure of the resulting mixed citrate and ferric phosphate compounds in the stomach and intestines is unknown. It has long been thought that the low pH of the stomach is important for delivering soluble iron into the intestinal tract [43], although the solubility of Auryxia over a broad range of pH might make that factor less critical. Auryxia is believed to use the conventionally described and highly regulated enterocytic pathway of iron absorption [56], in which ferric iron (such as in the API of Auryxia) is enzymatically reduced to the ferrous state, absorbed primarily in the duodenum, and finally transported into plasma and made available for erythropoiesis (Fig. 3) [57][58][59]. In contrast, when ferrous sulfate is used for iron supplementation, the reductive step in iron absorption may be bypassed, potentially leading to more rapid absorption, transferrin saturation, and the release of substantial amounts of iron not bound to transferrin [60,61]. Preliminary data from ongoing work suggest that the "conventionally" described pathway is the main route for iron absorption, but some contributions from other pathways (e.g. paracellular pathway [62], transcellular pathway [63], gut microbiota [64]) have not been excluded (T. Ganz, private oral communication, March 4, 2019 [manuscript in preparation]). Of further interest is that ferric iron, such as in Auryxia and unlike ferrous iron, is not easily oxidized [65]. Ferrous iron, during oxidation, can catalyze the formation of free radicals, causing GI mucosal cell damage and erosions of the GI mucosa that likely account for the reported increased incidence of GI adverse effects with ferrous iron compared with ferric iron products (Fig. 4) [65][66][67][68][69][70][71].

Fig. 2 Chemical structures of ferric citrate coordination complex anions determined by X-ray crystallography. Green-blue spheres represent iron atoms, red spheres represent oxygen atoms, and gray spheres represent carbon atoms. Cit citrate. Based on data from Matzapetakis et al. [45]; Shweky et al. [46]; Bino et al. [47]; and Tenne et al. [48]
Phosphate Binding Capacity of Auryxia
Although some details of the iron chemical species present in the GI tract after ingestion of Auryxia are unclear, the subsequent impact of the FCCCs on phosphate levels is well established. The ferric iron from the API of Auryxia binds dietary phosphorus in the GI tract to form insoluble ferric phosphate, which precipitates and is excreted, thus decreasing intestinal phosphorus absorption and lowering blood phosphate levels [31,72]. Multiple clinical studies have demonstrated the efficacy of Auryxia in lowering serum phosphate levels across the spectrum of CKD [72][73][74][75][76]. For example, results from a Phase 3, randomized, controlled trial in patients with CKD treated with dialysis showed that Auryxia effectively reduced serum phosphate compared with placebo (analysis of covariance adjusted least squares mean treatment difference: − 2.18 mg/dL [95% CI: − 2.59, − 1.77]; p < 0.0001; Fig. 5) [72]. These findings were confirmed in a recent retrospective chart review that showed that Auryxia reduced and maintained serum phosphate levels [73]. Although the main focus of the studies was on control of hyperphosphatemia, improvements in iron parameters, such as increases in ferritin levels and transferrin saturation (TSAT), also were noted [72][73][74]77]. Another consequence of the precipitation reaction between ferric iron (in Auryxia) and dietary phosphate is the release of citrate, which can be absorbed and converted to bicarbonate, in theory helping to correct metabolic acidosis, a common complication of CKD [78,79].
A non-calcium, iron-based phosphate binder, sucroferric oxyhydroxide (Velphoro ® ; Fresenius Medical Care North America, Waltham, MA, USA), was shown to effectively lower and maintain serum phosphate levels for over 1 year in patients receiving dialysis [80]. Treatment with sucroferric oxyhydroxide did not significantly affect iron-related parameters such as TSAT, serum iron, and blood hemoglobin concentrations, which remained unchanged over the long-term [80].
Iron Supplementation with Auryxia
The efficacy of Auryxia for the treatment of iron deficiency anemia in patients with non-dialysis-dependent CKD (NDD-CKD) was tested in a Phase 3 placebo-controlled trial [56]. In that study, treatment with Auryxia significantly improved hemoglobin levels, TSAT, and serum ferritin levels versus placebo (p < 0.001; Fig. 6). Diarrhea was the most common adverse reaction leading to discontinuation of Auryxia (2.6% of patients) in the 16-week, placebo-controlled trial; 12 patients (10%) in the Auryxia treatment group discontinued Auryxia because of an adverse reaction, compared with 10 patients (9%) in the placebo control arm [31]. Tolerability in the GI tract with Auryxia is consistent with the expected low level of reactive oxygen species, anticipated based on chemical considerations and regulated absorption, relative to ferrous iron products. Furthermore, the chelate effect (such as provided by the coordination of ferric iron by citrate) has been observed to improve GI tolerability in a study of a different oral iron product [81]. Notably, although citrate (a component of Auryxia) is known to promote absorption of aluminum in the GI tract, no changes in mean serum aluminum levels were seen after treatment with Auryxia [56,82,83].

Fig. 6 Iron parameters in patients with iron deficiency anemia and non-dialysis-dependent chronic kidney disease. *Between-group difference of 0.84 g/dL (95% CI 0.58, 1.10; p < 0.001); † between-group difference of 18.4% (95% CI 14.6, 22.2; p < 0.001); ‡ between-group difference of 170.3 ng/mL (95% CI 144.9, 195.7; p < 0.001). Adapted from Fishbane et al. [56]
Iron Overload
As mentioned previously, long-term administration of medicinal iron raises a concern about eventual iron overload, and the risk is higher with IV iron products than with oral products, because IV administration bypasses the physiologic regulation of iron absorption [25][26][27][28]. Indeed, with the exception of one patient who also received IV iron, iron overload has not been observed in patients after treatment with Auryxia in clinical studies [31]. Furthermore, in patients with CKD treated with dialysis, treatment with Auryxia led to few serious adverse events in organ systems usually affected by iron overload [72]. Ferric iron, such as in Auryxia, is delivered relatively slowly and consistently, likely allowing the iron-regulatory peptide hormone, hepcidin, to slowly upregulate, as reflected in maintenance of clinically appropriate levels of iron markers [44,84].
The iron-storage protein, ferritin, and the iron-carrier protein, transferrin, are both critical for iron homeostasis and have traditionally served as markers of iron status; however, there are caveats for their use as indicators of iron overload [57,[85][86][87][88]. Neither serum ferritin nor TSAT alone is accurate as a standalone measure of iron overload; serum ferritin is severely affected by inflammation, and TSAT, which is proportional to serum iron, is affected by the timing of the sample relative to IV or oral iron administration [86][87][88]. However, waiting at least 48 hours to draw blood after iron administration [89] allows a more accurate picture of iron overload using TSAT; TSAT tends to stabilize within 2 weeks after beginning IV iron administration, so that high TSAT values at this point or later suggest iron overload or the inability to regulate iron; this timeline has not been studied extensively with oral iron [90][91][92]. In patients with iron deficiency anemia and NDD-CKD who were treated with Auryxia, 17.9% had transient elevations of TSAT ≥ 70%, yet none had iron overload [56]. Persistent increases in serum ferritin in the absence of clinically identifiable episodes of inflammation, but accompanied by increased TSAT, together may suggest iron overload. The only definitive outcome that indicates clinically significant iron overload is dysfunction of the end organs.
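As a simple illustration of how TSAT is computed in practice, the following minimal R sketch uses the standard clinical formula, TSAT (%) = serum iron / total iron-binding capacity × 100; the input values are illustrative placeholders, not data from the studies cited above:

tsat <- function(serum_iron_ugdl, tibc_ugdl) {
  # transferrin saturation (%) from serum iron and TIBC, both in ug/dL
  100 * serum_iron_ugdl / tibc_ugdl
}
tsat(250, 320)   # 78.1, a value that would exceed the 70% threshold noted above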
Auryxia and Riona share the same API, although the dosage forms and strengths differ (210 mg ferric iron per pill of Auryxia; approximately 62 mg ferric iron per pill of Riona) [31,93]. The United States Food and Drug Administration (FDA) label for Auryxia carries a warning for iron overload recommending that iron parameters (e.g. serum ferritin and TSAT) should be assessed prior to initiating Auryxia and should be monitored while on therapy [31]. In a 56-week, Phase 3, randomized, controlled trial in patients with CKD treated with dialysis in which concomitant use of IV iron was permitted, 19% of patients treated with Auryxia had at least one measurement of serum ferritin > 1500 ng/mL, as compared with 10% of patients treated with the active controls (sevelamer carbonate and/or calcium acetate) [72]. Therefore, the FDA label includes a recommendation that patients receiving IV iron may require a reduction in dose or discontinuation of the IV iron therapy [31]. Because elevations in serum ferritin have been observed in previous studies of Auryxia and Riona [72,94], a recent retrospective analysis of a Japanese Phase 3 trial in hemodialysis patients with hyperphosphatemia investigated the factors that may be associated with these elevations; the factor most strongly associated with elevations in serum ferritin, second only to the dose of Riona, was reduction in the dose of erythropoiesis-stimulating agent (ESA), presumably causing decreased utilization of iron for erythropoiesis [82,84]. Importantly, no studies of Auryxia and Riona supported an association with clinically significant iron overload despite the observed elevations in serum ferritin levels [72,82,84,94,95].
Effects on Iron Levels When Used to Treat Hyperphosphatemia
The chemical attributes of Auryxia and its biological interactions are reflected in its clinical characteristics. For example, when used to treat hyperphosphatemia, it may also have beneficial effects on measures of iron status in disease settings where functional or true iron deficiency and hyperphosphatemia coexist. In patients with CKD treated with dialysis, studies have shown that treatment with Auryxia reduces serum phosphate and improves iron parameters [72,95]. In a Phase 3, 56-week, placebo- and active-controlled trial in patients with dialysis-dependent CKD, Auryxia (dosed and titrated to maintain serum phosphate control) significantly improved iron parameters compared with active controls (sevelamer carbonate and/or calcium acetate) as early as Week 12 (ferritin: mean difference of 281.8 ± 42.9 ng/mL at week 52, p < 0.001; TSAT: mean difference of 9.55% ± 1.58%, p < 0.001) [96]. Additionally, studies have suggested that Auryxia may reduce the need for IV iron when used as a phosphate binder in patients with CKD treated with dialysis [72,96,97]. Lower percentages of patients required IV iron in the Auryxia group compared with an active control group at all time points over the 52-week active control period; at the end of this period, 85.4% and 69.0% of patients in the Auryxia and active control groups, respectively, had not received IV iron at week 52 (p < 0.001) [96]. The cumulative dose of IV iron was lower in the Auryxia group than in the active control group over the entire 52-week period [96].
The effects of treatment with Auryxia could lead to clinical outcomes that are different from those with other therapies. Of note, treatment to a higher hemoglobin target has been associated with increased risk of vascular thrombosis, and higher ESA doses have been explored as a plausible mechanism for this increased risk [98]; therefore, a reduction of ESA dose may lower risk of cardiovascular events, thereby decreasing hospitalization. Infections are also an important driver of readmission after hospitalization in patients with CKD who are receiving hemodialysis [99]. In the aforementioned 56-week, Phase 3 trial in patients with end-stage renal disease on dialysis, treatment with Auryxia was associated with reductions in overall hospitalizations, cardiac-related hospitalizations, and infectionrelated hospitalizations as compared with active control [100]. Treatment with Auryxia also reduced health-care costs; the cost savings were attributed to a decrease in use of IV iron and ESA and to a reduction in the number of hemodialysis sessions that were missed as a result of hospitalizations [101,102].
Thus, the ability of Auryxia to deliver iron, because of its chemical composition, is an added benefit when it is already being used to control phosphate levels and may improve clinical outcomes that are unrelated to hyperphosphatemia. The benefit of Auryxia as a therapy for the treatment of iron deficiency may be particularly important in light of recent research that has implicated IV iron therapy in non-alcoholic fatty liver disease (NAFLD) [103,104]. In a longitudinal study of 7 patients undergoing IV iron therapy, hepatic proton density fat fraction and liver iron concentration levels increased significantly, suggesting that iron overload in these dialysis patients may have led to or exacerbated NAFLD [103].
Effects on Serum Phosphate Levels When Used to Treat Iron Deficiency Anemia
Additional studies have shown that treatment with Auryxia improves iron parameters in patients with CKD and iron deficiency anemia without negatively perturbing serum phosphate when kidney function is sufficient to maintain normal phosphate levels [56,105]. A post hoc analysis of the Phase 3 trial in patients with iron deficiency anemia and NDD-CKD showed that, when given as an iron replacement product, Auryxia did not significantly reduce serum phosphate compared with placebo among patients with baseline serum phosphate concentrations within the population reference range of 2.5-4.5 mg/dL (Fig. 7) [105]. The effect of Auryxia on serum phosphorus depends on baseline phosphate levels, declining only when the initial levels are excessive; the greatest reductions in serum phosphate were seen in patients with the highest baseline serum phosphate concentrations [105]. Therefore, as a converse of the situation in which Auryxia is used primarily as a phosphate binder, treating patients with iron deficiency anemia using Auryxia may have the additional benefit of reducing high serum phosphate and FGF23 levels. The evident lack of hypophosphatemia is interesting; one possibility is that patients with residual renal function may avoid hypophosphatemia during Auryxia treatment by retaining more phosphate in the kidneys to compensate for losses by excretion in the gut. In fact, in a Phase 2 study, when patients with CKD stages 3-5 were given Auryxia as a phosphate binder, both mean serum phosphate and 24-hour urinary phosphate significantly decreased compared with control [75].
Effects on FGF23 Levels
The dual function of Auryxia in reducing serum phosphate and treating iron deficiency simultaneously decreases circulating levels of FGF23, a key phosphate-regulating hormone whose plasma concentration increases as CKD progresses [12,56,105]. In a Phase 3 study in patients with NDD-CKD and iron deficiency anemia, levels of FGF23 were reduced significantly more between baseline and Week 16 with Auryxia treatment compared with placebo ( Fig. 8) [56]. This was true both for the intact bioactive form of FGF23 (iFGF23) and for its carboxy-terminal cleavage product (cFGF23), whose biological activities are under investigation [106]. Similarly, an open-label Japanese study in patients on hemodialysis showed that treatment with Riona compared with lanthanum carbonate led to significantly lower levels of iFGF23, independent of phosphate levels (change from baseline in iFGF23: − 6160 vs − 1118 pg/mL, respectively; p = 0.026) [97]. Although this conjecture is still speculative, reduction of FGF23 levels could have a beneficial effect on the progression of CKD and cardiovascular disease. In mice treated with Auryxia, the concentrations of FGF23 decreased and CKD disease progression was slowed [107]. More definitive clinical trials will be needed to validate the hypothesis that ferric citrate slows CKD progression by lowering FGF23 [108].
Summary
Consistent with its chemistry and mechanism of action, Auryxia is effective both in reducing serum phosphate levels in patients treated with dialysis and in improving iron parameters in CKD patients not on dialysis. Overall, across both CKD patient populations, treatment with Auryxia was considered to have good safety and was similarly well tolerated compared with other study treatments. In dialysis patients, 21% discontinued Auryxia due to an adverse reaction versus 14% who discontinued active control (which patients had tolerated before enrollment); in patients with NDD-CKD, 10% discontinued Auryxia due to an adverse reaction versus 9% who discontinued placebo [31]. The balance of these characteristics distinguishes Auryxia from commercial-grade ferric citrate and differentiates it from other common therapies used to treat hyperphosphatemia and iron deficiency anemia in patients with CKD.

Fig. 8 Changes in FGF23 levels. cFGF23 carboxy-terminal cleavage product of fibroblast growth factor 23, iFGF23 intact fibroblast growth factor 23, IQR interquartile range. Based on data from the study reported in Fishbane et al. [56]

AB contributed by studying the chemical structure of Auryxia, and IBS studies the use of Auryxia in children.
Compliance with Ethical Standards
Conflict of interest TG is a scientific cofounder and shareholder of Intrinsic LifeSciences and Silarus Pharma. He is a consultant for Ablynx, Ambys, Akebia Therapeutics, Gilead, Global Blood Therapeutics, Keryx Biopharmaceuticals, La Jolla Pharmaceutical Company, Sierra Oncology and Vifor. He has received Grant support from Akebia Therapeutics and Keryx Biopharmaceuticals. AB was a consultant to Keryx and received support for characterizing the structural composition of Auryxia. IBS is a consultant for Akebia Therapeutics, Keryx Biopharmaceuticals, Amgen, and Ultragenyx. He has received Grant support from Amgen and AbbVie.
Funding Support was provided by Keryx Biopharmaceuticals, Inc., which is now a wholly owned subsidiary of Akebia Therapeutics, Inc.
Open Access This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. | 2019-05-28T13:55:30.132Z | 2019-05-27T00:00:00.000 | {
"year": 2019,
"sha1": "d621605ce8e46c27764581d0e310c261d7300637",
"oa_license": "CCBYNC",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40265-019-01125-w.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "d621605ce8e46c27764581d0e310c261d7300637",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233303595 | pes2o/s2orc | v3-fos-license | Gene network analyses unveil possible molecular basis underlying drug-induced glaucoma
Background: Drug-induced glaucoma (DIG) is a serious adverse drug reaction that can cause irreversible blindness. To date, the molecular mechanism of DIG remains largely unclear due to the medical complexity of glaucoma onset.
Methods: In this study, we conducted data mining of tremendous historical adverse drug events and genome-wide drug-regulated gene signatures to identify glaucoma-associated drugs. Upon these drugs, we carried out serial network analyses, including the weighted gene co-expression network analysis (WGCNA), to illustrate the gene interaction network underlying DIG. Furthermore, we applied pathogenic risk assessment to discover potential biomarker genes for DIG.
Results: As the results, we discovered 13 highly glaucoma-associated drugs, a glaucoma-related gene network, and 55 glaucoma-susceptible genes. These genes likely play central roles in triggering DIGs via an integrative mechanism of phototransduction dysfunction, intracellular calcium homeostasis disruption, and retinal ganglion cell death. Further pathogenic risk analysis showed that a panel of nine genes, particularly the OTOF gene, could serve as potential biomarkers for early-onset DIG prognosis.
Conclusions: This study systematically elucidates the possible molecular basis underlying DIGs for the first time. It also provides prognosis clues for early-onset glaucoma and thus assists in designing better therapeutic regimens.
Supplementary Information: The online version contains supplementary material available at 10.1186/s12920-021-00960-9.
Background
Glaucoma is a severe disease that affects more than 70 million people worldwide. As a leading cause of irreversible blindness, glaucoma manifests clinically as optic disc atrophy, visual field defects, and loss of visual acuity, imposing great inconvenience and profound hardship on patients [1]. Drug-induced glaucoma (DIG) is a special kind of glaucoma that may be caused by many commonly used drugs, such as dexamethasone, paclitaxel, fluoxetine, and perphenazine in routine therapy [2,3]. Generally, DIGs can be classified into two types: open-angle glaucoma (OAG) with elevated intraocular pressure (IOP), which is mainly caused by steroids such as betamethasone, prednisolone, and dexamethasone; and closed-angle glaucoma (CAG) with a blocked trabecular meshwork outflow path, which is mainly caused by non-steroidal drugs such as anticholinergic and epinephrine drugs. Some antitumor drugs, such as docetaxel and paclitaxel, can also induce OAG via mechanisms that remain unclear [2][3][4]. As of December 2019, a total of 13,961 cases (including 12,195 serious cases) of DIGs had been reported in the FDA Adverse Event Reporting System (FAERS) of the United States.
In recent years, extensive attention has been paid to discovering the molecular mechanisms underlying glaucoma. Both animal models and genome-wide association studies (GWAS) have been adopted to explore pathways or genes susceptible to glaucoma, with some encouraging results. For instance, Weinreb linked glaucoma with multiple pathways of lipid metabolism, inflammation, autoimmunity, and oxidative stress [1]. Razeghinejad attributed closed-angle glaucoma to forward movement of the iris-lens diaphragm [3]. Clark proposed that glucocorticoid-induced OAGs are likely driven by inhibition of matrix metalloproteinases [5]. Danford suggested that primary OAGs are related to MYOC, OPTN, CAV1, and CAV2 [6]. However, these efforts failed to identify a strong genetic factor in glaucoma development. Surgically simulated glaucoma in animal models differs to some extent from real clinical cases, which may introduce unpredictable errors. Taking advantage of high-throughput technology, GWAS studies have discovered a list of genes and genomic mutations susceptible to DIG on a large scale [7]. We summarized these efforts in Additional file 1: Table S1, which briefly describes four potential DIG-associated genes (MYOC, GPNMB, HCG22, and TSPO). Regrettably, GWAS applications are largely limited by sample size and financial budgets. Moreover, many of the susceptibility genes identified by GWAS studies have relatively low penetrance for glaucoma because of the inherent limitations of GWAS itself, for instance, the inability to fully explain the genetic risk of common diseases and the difficulty of establishing true causal associations [8]. Therefore, more susceptibility genes await discovery in larger DIG cohorts, and, at the same time, well-defined phenotypes are needed to delineate the complex genetic architecture of glaucoma [9].
In the present study, we attempt to integrate and analyze heterogeneous data from clinical records and transcriptomes to identify drugs highly associated with DIG, unveil the gene interactions underlying glaucoma development, extract the pivotal pathways and genes underlying glaucoma, and eventually discover potential gene biomarkers for DIG prognosis.
Data source and preprocessing

The drug-ADR relations
The drug-ADR relations were derived from the Adverse Drug Reaction Classification System (ADReCSv2.0) [10].
Overall, 173 distinct glaucoma-inducing drugs were extracted from the ADReCS, together with their comprehensive drug information and ADR information. The drug indication information followed the Anatomical Therapeutic Chemical Classification System (ATC code).
The drug-treated gene expression profiles
The drug-treated gene expression profiles were downloaded from the LINCS project [11]. Overall, 1,319,138 gene expression profiles were obtained from the LINCS database, covering 25,149 chemicals, 12,328 genes, 76 cell types, and 5,420 combinational conditions (cell line, drug dosage, and exposure time). For every drug-treated profile, the gene expression perturbation relative to the control was determined by the LINCS L1000 project using the Characteristic Direction (CD) geometrical approach [12], and the significance (p value) of the perturbation was calculated [11]. In addition, the top perturbed genes were selected as the differentially expressed genes (DEGs). The profiles were then preprocessed as follows: (1) experiments without biological replicates were excluded; (2) for every drug, the most perturbed profile (minimum p value, with p < 0.01) was chosen as the representative. After the data preprocessing, overall 1,114 gene expression profiles were retained for later analysis, covering 299 chemicals (272 approved, 4 experimental, and investigational drugs), 12,328 genes, 9 cell types, and 27 combinational experiment conditions (drug dosage and exposure time) (Additional file 2: Table S2).
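To make the two filtering steps concrete, the following R sketch applies them to a hypothetical data frame "profiles" with columns drug, p_value, n_replicates, and profile_id (the column names are illustrative assumptions, not fields of the LINCS release):

library(dplyr)

representatives <- profiles %>%
  filter(n_replicates > 1, p_value < 0.01) %>%      # step (1): replicated, significant profiles
  group_by(drug) %>%
  slice_min(p_value, n = 1, with_ties = FALSE) %>%  # step (2): most perturbed profile per drug
  ungroup()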
Mining drug-glaucoma association
To obtain reliable drug-glaucoma associations, we downloaded 9,772,360 adverse drug event (ADE) reports, dating from January 2004 to December 2018, from the FDA Adverse Event Reporting System (FAERS) of the United States. After excluding reports from unreliable sources and reports of drugs with more than one active ingredient, a total of 2,069,653 reports were used for the association analysis, covering 1,095 distinct single-ingredient drugs and 10,287 adverse reactions. The ADR terms were standardized by reference to the ADReCS. Subsequently, we calculated the odds ratio (OR) for every drug-ADR pair following the standard procedure. After ADR standardization, this analysis yielded 106 distinct drug-glaucoma associations.
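As an illustration of the disproportionality calculation described above, the following R sketch computes the reporting odds ratio and its Fisher's exact test p value for one drug-glaucoma pair from a 2 × 2 contingency table (the counts are hypothetical placeholders, not actual FAERS numbers):

n11 <- 40        # reports with the drug and glaucoma
n10 <- 960       # reports with the drug, without glaucoma
n01 <- 2000      # reports with glaucoma, other drugs
n00 <- 2066653   # all remaining reports

or <- (n11 / n10) / (n01 / n00)   # reporting odds ratio
p  <- fisher.test(matrix(c(n11, n10, n01, n00), nrow = 2, byrow = TRUE))$p.value

A pair would then be retained as significant under the criteria used later in this study if or > 2 and p < 0.05.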
Tissue propensity analysis
We manually mapped the nine cell types under study onto the System Organ Class (SOC) level of the ATC system, with each cell type corresponding to only one organ. As a result, four organs were involved (Additional file 3: Table S3). For each drug, its ATC code at the SOC level was taken as its expected site-of-action (SOA). For every drug involved in this analysis, we calculated the average expression change of its DEGs in response to the drug treatment. The average perturbations (the average expression changes of DEGs) of drugs in the SOA and non-SOA groups in each cell type were then statistically compared.
The weighted gene co-expression network analysis (WGCNA)
The weighted gene co-expression network analysis (WGCNA) was conducted on the 13 selected drug-treated gene expression profiles using the R package "WGCNA" [13], and the soft threshold for network construction was set to 5 (with scale-free fit coefficient R 2 = 0.88). Upon the gene network, the eigengene dendrogram was constructed using average linkage hierarchical clustering and the dynamic tree-cutting algorithm [14]. A dissimilarity threshold of 0.3 (or similarity threshold of 0.7) was set for the eigengene dendrogram to reduce network complexity by merging similar modules. The genes in the same module were taken as co-expressed genes. For each gene module, the module eigengene (ME) was defined as the first principal component of the module's expression matrix.
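A minimal R sketch of this pipeline is shown below, assuming "expr" is a hypothetical samples × genes expression matrix built from the 13 drug-treated profiles; the network type is not stated in the text, so the WGCNA default (unsigned) is used here:

library(WGCNA)

adj  <- adjacency(expr, power = 5)                    # soft threshold beta = 5
tom  <- TOMsimilarity(adj)                            # topological overlap matrix
tree <- hclust(as.dist(1 - tom), method = "average")  # average linkage clustering
mods <- cutreeDynamic(dendro = tree, distM = 1 - tom) # dynamic tree cutting
merged <- mergeCloseModules(expr, labels2colors(mods),
                            cutHeight = 0.3)          # merge at dissimilarity 0.3
MEs <- merged$newMEs   # module eigengenes (first PC of each module)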
Identification of glaucoma-associated genes
For the selected gene module, we quantitatively measured the module membership (MM), i.e., the Pearson correlation between each gene's expression profile and the ME. The highly connected genes (MM > 0.95) were selected as core genes for the subsequent protein-protein interaction (PPI) analysis. The PPI data for the highly connected genes were obtained from the STRING database (v10.5, species: Homo sapiens), requiring a PPI enrichment significance of p < 0.01 [15]. Upon the PPI data, we re-constructed the PPI network using the Cytoscape software (version 3.7.1) [16], from which we extracted the hub genes (Maximal Clique Centrality ≥ 5 and betweenness centrality > 0) as the potential glaucoma-associated genes via the CytoHubba plugin.
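The hub scoring can also be reproduced outside Cytoscape; the R sketch below scores a hypothetical named igraph network "g" of the MM > 0.95 genes, using betweenness centrality and the Maximal Clique Centrality of the CytoHubba paper, MCC(v) = sum over maximal cliques C containing v of (|C| − 1)! (the special case for nodes without edges is omitted):

library(igraph)

bc  <- betweenness(g)
mcc <- setNames(numeric(vcount(g)), V(g)$name)
for (C in max_cliques(g)) {
  ids <- as_ids(C)                                 # names of the clique members
  mcc[ids] <- mcc[ids] + factorial(length(ids) - 1)
}
hubs <- names(which(mcc >= 5 & bc[names(mcc)] > 0))  # thresholds from the text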
Functional enrichment analysis
Gene Ontology (GO) and KEGG pathway enrichment analyses were conducted on the genes of interest using the R package clusterProfiler [17]. Fisher's exact test was used to evaluate the significance of the biological processes or pathways. The false discovery rate (FDR) correction method of Benjamini and Hochberg was used to account for multiple testing in the enrichment analysis.
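A minimal clusterProfiler call along these lines (assuming "genes" is a hypothetical character vector of Entrez gene IDs from the selected module) might look as follows:

library(clusterProfiler)
library(org.Hs.eg.db)

ego <- enrichGO(gene = genes, OrgDb = org.Hs.eg.db, ont = "BP",
                pAdjustMethod = "BH", pvalueCutoff = 0.05)   # GO biological process
ekg <- enrichKEGG(gene = genes, organism = "hsa",
                  pAdjustMethod = "BH")                      # KEGG pathways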
The permutation test
We performed a permutation test to measure the significance of gene expression differences between treatment with the 13 glaucoma-associated drugs and treatment with 63 non-glaucoma-inducing drugs. The drugs in both groups all belonged to the ATC category "S" (Sensory organs). For each of the studied genes, the permutation test was performed independently under the null hypothesis of no gene expression difference between the two drug groups. We simulated the drug group composition by randomly resampling the group members 10,000 times, calculated the gene expression difference between the drug groups for each resampling, and thereby determined the distribution of gene expression differences. At the same time, we calculated the real gene expression difference between the two drug groups for each gene. Subsequently, we located the real difference within the permuted difference distribution and determined the significance p value as the proportion of permuted samples less than the real difference value. A p < 0.05 rejects the null hypothesis, indicating a significant gene expression difference between the two drug groups.
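For one gene, this test can be sketched in R as follows, with "x" a hypothetical vector of the gene's expression changes across the 76 drugs and "grp" the 0/1 group labels (1 marking the 13 glaucoma-associated drugs); the one-sided p value mirrors the counting convention stated above, while a two-sided variant would compare absolute differences:

obs  <- mean(x[grp == 1]) - mean(x[grp == 0])   # real between-group difference
null <- replicate(10000, {
  g <- sample(grp)                              # shuffle the group labels
  mean(x[g == 1]) - mean(x[g == 0])
})
p <- mean(null < obs)   # proportion of permuted differences below the real one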
Pathogenic risk assessment
The pathogenic risk assessment was performed by calculating the odds ratio (OR) of each selected gene for glaucoma. The odds ratio analysis took the 77 gene expression profiles as the independent variables x and the binary outcome of glaucoma (1 for glaucoma and 0 for non-glaucoma) as the response variable Y [12]. We assumed a linear relationship between the independent variables x and the log-odds of Y = 1, with p = P(Y = 1). This linear relationship is given by

ℓ = ln(p / (1 − p)) = β₀ + β₁x₁ + … + βₙxₙ,

where ℓ stands for the log-odds and β_i for the parameters of the logistic regression model. According to the relationship between the response variable Y and the independent variables x, we drew the receiver operating characteristic (ROC) curve and calculated the area under the curve (AUC) for each variable x. In this study, the logistic regression was performed using the R function glm (R version 3.6.0), and the AUC was calculated with the R function colAUC from caTools (v1.18.0).
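Along the lines described here, a single-gene assessment can be sketched as follows (assuming a hypothetical data frame "df" with 77 rows, a column "expr" holding one gene's expression change, and a binary column "glaucoma"):

library(caTools)

fit <- glm(glaucoma ~ expr, data = df, family = binomial)  # logistic regression
exp(coef(fit)["expr"])        # odds ratio per unit change in expression
colAUC(df$expr, df$glaucoma)  # AUC of the single-variable ROC curve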
Identify the glaucoma-associated drugs
Overall, 173 glaucoma-inducing drugs were derived from the ADReCS v2.0 database. These drugs can induce glaucoma at different occurrence rates in patients, as estimated from the drug labels. To quantify the drug-glaucoma associations, we analyzed 9,772,360 real-world historical ADEs from the FAERS using odds ratio analysis and Fisher's exact test. A list of 25 significant (OR > 2 and p < 0.05) drug-glaucoma associations is summarized in Additional file 4: Table S4. In this list, 13 drugs exhibited strong associations with glaucoma (OR > 10 and p < 0.01). Additional analysis found that drugs of the ATC code "S" (Sensory organs) category were more significantly associated with glaucoma than non-S category drugs (p < 0.02) (Fig. 1); the average ORs for S category and non-S category drugs were 21.13 and 14.74, respectively. Furthermore, we conducted a comparative analysis of genome-wide gene expression changes across different cell lines in response to drug treatment. The results showed that gene expression was more perturbed in the cells constituting the sites of action (SOAs) of the drugs than in cell lines from other tissues/organs (Fig. 2). This suggested that many high-occurrence ADRs were likely caused by the augmented effects of gene expression perturbation at the SOA. We therefore speculated that DIGs might tend to occur in the sensory organs, particularly the eyes. Accordingly, 13 drugs satisfying both criteria, belonging to the S category and having a strong association with glaucoma, were selected for later mechanistic analyses (Additional file 5: Table S5).
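For illustration, one drug-glaucoma association could be scored from ADE report counts as sketched below; the function name and the example counts are hypothetical, standing in for the 2x2 contingency table derived from FAERS for each drug.

```r
# Score one drug-glaucoma association from ADE report counts:
# reports with/without the drug, crossed with glaucoma/other events
score_association <- function(drug_glc, drug_other,
                              nodrug_glc, nodrug_other) {
  tab <- matrix(c(drug_glc, nodrug_glc, drug_other, nodrug_other),
                nrow = 2,
                dimnames = list(drug     = c("yes", "no"),
                                glaucoma = c("yes", "no")))
  list(OR = (tab["yes", "yes"] * tab["no", "no"]) /
            (tab["yes", "no"]  * tab["no", "yes"]),
       p  = fisher.test(tab)$p.value)
}

# Associations with OR > 2 and p < 0.05 would be kept, as in the text
score_association(40, 960, 2000, 997000)  # hypothetical counts
```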
Before moving forward, we measured the structural similarity of the 13 selected glaucoma-associated drugs using the Tanimoto coefficient, with each chemical structure represented as a binary fingerprint. The measurement revealed low similarity between drugs (average Tanimoto coefficient = 0.43). We also evaluated the dispersion of the physicochemical properties of the 13 drugs using the coefficient of variation (CV); these results likewise supported high variability (average CV = 0.358). Both analyses showed that the 13 glaucoma-associated drugs are highly diverse in structure and physicochemical properties; consequently, it is unlikely that these drugs induce glaucoma simply by targeting the same protein.
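Both diversity measures are simple to reproduce, as in the sketch below; `fps` (a binary fingerprint matrix with one drug per row) and `props` (a numeric matrix of physicochemical properties) are assumed inputs.

```r
# Tanimoto coefficient for two binary fingerprints:
# shared on-bits over total on-bits
tanimoto <- function(a, b) sum(a & b) / sum(a | b)

# Average pairwise Tanimoto similarity across the 13 drugs
pairs   <- combn(nrow(fps), 2)
mean_tc <- mean(apply(pairs, 2,
                      function(ij) tanimoto(fps[ij[1], ], fps[ij[2], ])))

# Coefficient of variation for each physicochemical property,
# then averaged over properties
mean_cv <- mean(apply(props, 2, function(p) sd(p) / mean(p)))
```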
Unveil the gene interactions underlying glaucoma development
To unveil the gene interactions underlying DIGs, we carried out WGCNA on gene expression profiles under treatment with the 13 selected glaucoma-associated drugs as described in the "Methods" section. The analysis extracted 72 gene modules (subnetworks) from the whole gene co-expression network via average-linkage hierarchical clustering and the dynamic tree-cutting algorithm. To reduce network complexity, we set a dissimilarity threshold of 0.3 (similarity = 0.7) to merge gene modules in the dendrogram of module eigengenes (MEs), eventually obtaining 27 modules (Additional file 7: Figure S1). These 27 gene modules or subnetworks represented the functional gene groups responding to treatment with the 13 glaucoma-associated drugs. Subsequent functional enrichment analyses of these 27 gene modules identified one module (involving 5,968 genes) potentially linked with DIG. This gene module contained biological function terms related to the sensory organs, such as "visual perception" (GO:0007601), "sensory perception of light stimulus" (GO:0050953), and "Phototransduction" (hsa04744). We further compared the functional enrichment results with two prior studies of glaucoma [6,18]. Six GO terms (in the top 15) and 14 KEGG pathways (in the top 20) were common to both prior studies and this study, affirming that the selected module was the potential DIG-related gene module (Additional file 8: Figure S2).

Fig. 1. The drug-glaucoma association strengths were determined from 2,069,653 historical ADEs of the FAERS database using odds ratio analysis. A one-sided Wilcoxon test was used to determine the significance of the difference between the two groups; the star symbol (*) stands for p < 0.05. Drugs of category S had higher association strength than those of other drug categories.
Identify glaucoma-associated genes
To identify potential DIG-associated genes/proteins in the DIG-related gene module, we carried out serial network analyses step by step. First, we calculated the intra-module network connectivity for all genes in the DIG-related module, which identified 326 highly connected genes satisfying the criterion of connectivity > 0.95. These highly connected genes likely played centralized roles in the DIG-related gene interaction network. We then searched these 326 genes against the STRING database, which identified 495 gene/protein interactions meeting the criterion of PPI enrichment significance p < 0.01. From the 326 genes and their 495 interactions, we constructed a PPI network using the Cytoscape software. On the basis of this PPI network, we further extracted the most enriched (most highly connected) core subnetwork using the Cytoscape tool CytoHubba. This core subnetwork included 55 hub genes, which were most likely associated with DIG. Of them, 13 genes had previously been reported to be associated with sensory functions (Fig. 3). These 13 glaucoma-associated genes can be divided into three functional groups: phototransduction-related genes (GNGT1, OPN1SW, PDE6A, PDC, CRX, CNGB1, GUCY2F), glutamate receptor (GR) genes (GRIN2B, GRIK3, GRIK4), and 5-hydroxytryptamine receptor (HTR) genes (HTR1A, HTR1E, HTR7). Their connections with sensory functions and glaucoma are summarized in Table 1. It is a reasonable assumption that the 55 glaucoma-associated genes react more actively to treatment with the 13 glaucoma-associated drugs than to the 63 non-glaucoma-inducing drugs. To test this assumption, we carried out the permutation test. All 55 genes showed lower expression in response to the glaucoma-associated drugs, 50 significantly (p < 0.05) and 5 insignificantly. Furthermore, we observed a positive correlation (Pearson correlation coefficient = 0.56, p = 1.04e−5) between the differential significance (−log10 p value) and the glaucoma-gene association strength (OR) across these 55 genes (Additional file 9: Figure S3). Of the 13 reported glaucoma-associated genes, 12 had significantly (p < 0.05) lower expression under treatment with the glaucoma-associated drugs than with the non-glaucoma-associated drugs; the remaining gene, PDE6A, showed clearly lower but statistically insignificant (p = 0.061) expression (Fig. 4). All these results further supported that down-regulation of the 55 genes likely accounts for DIGs.

Fig. 2. The tissue propensity of drug-induced glaucoma. The X-axis stands for the nine different cell types and the Y-axis for the average perturbation of the DEGs. The pink box stands for the expected site of action (SOA) of the drugs; the blue box stands for tissues/organs other than the SOA. A one-sided Wilcoxon test was used to determine the significance of the difference between SOAs and other tissues/organs; the star symbol (*) stands for p < 0.05. The results showed that the drugs tended to perturb gene expression mostly at their SOA.

Down-regulation of these genes by glaucoma-associated drugs has also been reported in previous studies. For instance, dexamethasone and triamcinolone are commonly used in clinical practice to treat a wide range of retinal pathologies, and both can induce glaucoma [26]. In mice, OPN1SW, GRIN2B, HTR1A, and HTR7 were found to be downregulated in the retina after post-intravitreal injections of triamcinolone [27].
CNGB1, HTR1A, HTR7, OPN1SW, and CRX were downregulated by dexamethasone in the mouse retinal pigment epithelium [28].
Discover potential biomarker genes for DIG
A good gene biomarker should indicate disease incidence and also be highly sensitive to the relevant drug treatment. The above analyses confirmed the association of 55 genes with glaucoma. Here, we further quantitatively examined the pathogenic risk of these genes for glaucoma and their sensitivity to the glaucoma-associated drugs. For this purpose, we calculated the odds ratio (OR) to glaucoma for each of the 55 genes and, in parallel, determined the AUC of each gene alone in differentiating glaucoma-associated drugs from non-glaucoma-inducing drugs. As illustrated in Fig. 5 and Additional file 6: Table S6, 9 of the 55 genes were both highly risky for glaucoma and highly sensitive to glaucoma-associated drugs (OR > 6 and AUC > 0.7). These nine genes (OTOF, CRX, GLRA1, XCR1, TAS2R13, PDC, GIPR, GNAT3, and TACR1) could constitute a gene panel for diagnosing DIG. In particular, the otoferlin gene (OTOF) carried the highest risk for glaucoma (OR = 9.727) and showed remarkable ability in discriminating glaucoma-inducing from non-glaucoma-inducing drugs (AUC = 0.755). Hence, OTOF may serve as a potential biomarker gene for indicating DIGs.

Fig. 3. The glaucoma-related gene interaction network. This network was constructed on the basis of 326 highly connected genes and their 495 protein-protein interactions using Cytoscape. The purple nodes in the middle circle are 42 glaucoma-associated genes, and the innermost nodes are 13 DIG-associated genes supported by literature surveillance.
Discussion
Drug-induced glaucoma (DIG) has long been encountered in routine drug therapy but is not yet fully understood. One major reason is that glaucoma-inducing drugs provide no suggestive chemical or protein-target clues. In this study, we identified 13 drugs strongly associated with glaucoma by mining 9,772,360 historical ADEs in the FAERS. Unsurprisingly, these 13 drugs showed no significant structural or physicochemical similarity, which makes it hard to explain DIGs in the conventional way of drug-target interactions. Alternatively, we analyzed the shared gene expression patterns induced by the 13 DIG-associated drugs using the WGCNA method, through which we uncovered a potential DIG-related gene module (network) and 55 potential glaucoma-associated genes. Subsequent network analyses and expression difference analysis further consolidated that the 55 genes might play central roles in glaucoma development. Of these glaucoma-associated genes, 13 were literature-reported and can be divided into three functional gene groups according to their possible roles in glaucoma development: seven phototransduction-related genes, three GR genes, and three HTR genes. According to these functional gene groups, we propose three possible (but not exclusive) routes to induce glaucoma: phototransduction dysfunction, intracellular calcium homeostasis disruption, and retinal ganglion cell (RGC) death.
In clinical practice, glaucoma is accompanied by several symptoms, such as elevated intraocular pressure (IOP), visual field defects, and optic nerve damage [1,3]. Phototransduction is one of the main processes in vision formation. Of the seven phototransduction-related genes, GNGT1 (guanine nucleotide-binding protein G(T) subunit gamma-T1) is specifically expressed in retinal rod outer segment cells, participating in the signal transduction process of cyclic GTP-specific phosphodiesterase as a modulator or transducer of light transduction. Down-regulation of GNGT1 may block light conduction, leading to visual field defects and vision loss. A previous study showed that elimination of GNGT1 caused a dramatic decrease in the detected light signal in intact mouse rods, a striking decline in rod visual sensitivity, and eventually severe impairment of nocturnal vision [29]. The other phototransduction-related genes, CNGB1, CRX, GUCY2F, OPN1SW, PDC, and PDE6A, may participate in the regulation of state transitions and the phototransduction cascade via close interaction with GNGT1. Glutamate receptors (GRs) and 5-hydroxytryptamine receptors (HTRs) are pivotal proteins in maintaining the homeostasis of various neurotransmitters, the transmission of visual information, and the activity of nerve cells. A prior study found an abnormal increase of glutamate concentration in the vitreous bodies of humans and monkeys with glaucoma [30]. Over-activation of NMDA glutamate receptors (GRIN2B) affects intracellular calcium homeostasis and induces cell apoptosis via caspases and PARP [31,32], particularly the death of RGCs. The RGC is thought to be the cell type most susceptible to clinical glaucoma [33], and RGC apoptosis is often observed in human glaucoma and experimental animal models of glaucoma [34]. The kainic acid (KA) glutamate receptors (GRIK3 and GRIK4) are acknowledged to mediate glutamate-mediated neural signals between photoreceptors and bipolar cells [24,35]; abnormal regulation of GRIKs may therefore lead to defective transmission of visual information. For the three HTRs (HTR1A, HTR1E, and HTR7), limited work has been undertaken to link them with glaucoma. However, prior work used HTR2 (serotonin-2 receptor) agonists to reduce ocular hypertension in a chronic glaucoma model [36], hinting that the HTRs might be involved in IOP regulation and glaucoma.

Fig. 4. The permutation test of 13 glaucoma-associated genes. The test was conducted on 13 glaucoma-associated drugs and 63 non-glaucoma-inducing drugs. The density plot illustrates the distribution of gene expression differences after 10,000 resamplings; the red line stands for the position of the real expression difference on the distribution curve. The boxplot illustrates the average gene expression changes under the treatment of the two drug groups. The symbol * stands for a significance p value < 0.05 and ** for p < 0.01.
For the 55 glaucoma-associated genes, we further measured their pathogenic risk for glaucoma as well as their sensitivity to the glaucoma-associated drugs. These analyses quantitatively ranked gene susceptibility to glaucoma and suggested a nine-gene panel (OTOF, CRX, GLRA1, XCR1, TAS2R13, PDC, GIPR, GNAT3, and TACR1) with remarkable capability in differentiating glaucoma onset. OTOF encodes an integral membrane protein implicated in a late stage of exocytosis, playing a ubiquitous role in synaptic vesicle trafficking. It is known to cause neurosensory nonsyndromic recessive deafness via modulation of γ-aminobutyric acid (GABA) activity, GABA being a metabolite of the excitatory neurotransmitter glutamate [37]. It is suspected that defective OTOF activity would markedly interfere with GABA and glutamate metabolism [37], for example causing abnormal accumulation of glutamate in the retina. Excessive glutamate would likely induce glutamate excitotoxicity in retinal neurons through overstimulation of glutamate receptors [38,39]. Both the cone-rod homeobox (CRX) gene and the phosducin (PDC) gene are phototransduction-related genes. CRX is a photoreceptor-specific transcription factor necessary for the maintenance of normal cone and rod function. PDC encodes a phosphoprotein in the rod cells of the retina, which may participate in the regulation of visual phototransduction or the integration of photoreceptor metabolism. Several previous studies indicated that defects of CRX or PDC might account for photoreceptor degeneration, leading to cone swelling and loss of photoreceptor cells, and eventually causing blindness [40-43]. The gastric inhibitory polypeptide receptor (GIPR) stimulates insulin release in the presence of elevated glucose; variants of GIPR have been linked with diabetes-related primary OAG [44]. For the remaining five biomarker genes, no substantial evidence has yet been raised to link them directly with glaucoma, which calls for further experimental investigation.

Fig. 5. The pathogenic risk assessment of 55 glaucoma-associated genes. The bar length is positively proportional to the odds ratio (OR) of the gene to glaucoma; the bar color is proportional to the AUC value for the gene's ability in differentiating the glaucoma-associated drugs from the non-glaucoma-inducing drugs. The color of the gene symbol corresponds to the gene group: green stands for phototransduction-related genes, yellow for glutamate receptor genes, red for 5-hydroxytryptamine receptor genes, and black for the others.

This work has its limitations. First, the gene network study has the advantage of bypassing the conventional challenge of discovering the drug-target interactions beneath DIGs; at the same time, it ignores the diverse upstream mechanisms that can trigger DIGs, for instance genetic characteristics, pathogenic conditions, and drug types such as corticosteroids and anticholinergic drugs. Moreover, the results of this work could be limited by data availability, as some drugs were excluded from the WGCNA analysis owing to the lack of sufficient drug-treated gene expression profiles. Nevertheless, this work provides a new insight into the systematic understanding of drug-induced glaucoma from a gene interaction perspective, and the same strategy can readily be applied to mechanistic investigation of other severe adverse drug reactions.
It will also help prevent drug-induced glaucoma in clinical practice by suggesting potential biomarkers for glaucoma prognosis.
Conclusions
In summary, the development of DIG is a sophisticated process that involves multiple genes and pathways. Current molecular understanding of DIGs is mostly focused on sporadic genes or pathways with undetermined impacts on glaucoma. In this work, we developed a gene network analysis strategy that illustrates three possible molecular routes to induce glaucoma. These three routes are not isolated but connected within a glaucoma-related gene network. For the first time, we propose a list of genes highly susceptible to glaucoma. In particular, OTOF shows promising prognostic potential for drug-induced glaucoma in the clinic.
"year": 2021,
"sha1": "d816f8fb84cdf99c764349b0129c7069160cb6ba",
"oa_license": "CCBY",
"oa_url": "https://bmcmedgenomics.biomedcentral.com/track/pdf/10.1186/s12920-021-00960-9",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "024d91159b4ed4fa48d9e6cd7b9da21756cbef4f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Left Ventricular Assist Device Implantation as Bridge to Heart Transplantation
Heart transplantation is the method of choice for surgical treatment of the terminal stage of cardiac insufficiency. The shortage of donors experienced by all health systems worldwide has led to intensive development of devices for permanent mechanical circulatory support. Implantable devices, such as the LVAD circulatory pump, are widely accepted as a therapeutic option for improving the quality of life and survival of patients with terminal heart failure. Indications for implantation include bridge to transplant (BTT), bridge to candidacy (BTC) and destination therapy (DT). This article presents a case of successful surgical treatment of terminal heart failure. The patient received a left ventricular circulatory support device to maintain vital parameters and bridge the period to heart transplantation.
Introduction
Devices for permanent mechanical circulatory support (MCS) are widely used today owing to the limited number of available donors and the limited effectiveness of conservative clinical methods in providing an adequate therapeutic response in terminal cardiac failure. In practice, left ventricular assist devices are used most often, but the MCS concept also includes devices supporting the right ventricle, devices for biventricular support and the total artificial heart. 1
Case Report
A left ventricular assist device was implanted in a 29-year-old patient with a medical history of cardiomyopathy of unknown cause, diagnosed eight years earlier. Five years earlier, the patient had undergone radiofrequency ablation, and four years earlier, a CRT-P had been implanted. Ten days prior to admission, the patient was hospitalized at a regional health center after cardiac arrest due to ventricular fibrillation and successful resuscitation. On admission to our clinic, the patient showed signs of dyspnea and anxiety and was in cardiac decompensation with low stroke volume and progressive multiple organ dysfunction despite all the therapeutic measures taken. The patient was categorized as NYHA Class IV, INTERMACS Class 3.
Echocardiography on admission showed a normal aortic root diameter with a tricuspid aortic valve and trace AR. A dilated left ventricle (7.8/7.0 cm), globally hypocontractile, with normal wall thickness, reduced global systolic function (EF 22% by Teichholz, 10% by Simpson) and paradoxical septal movement was verified. The mitral apparatus had regular morphology, with mild MR (1+) into a left atrium of regular dimensions. The right ventricle was of regular size, with good systolic and longitudinal function, TAPSE 25 mm, FAC 38%. Trace TR was registered through the tricuspid valve. Right heart pressure was indirectly estimated at 21 mmHg.
After prioritized diagnostic procedures, it was found that the patient was a candidate for heart transplantation. Taking into consideration that at that moment there was no available donor, a decision was made to implant a left ventricular assist device (LVAD).
After median sternotomy and heparinization, cardiopulmonary bypass was instituted once an adequate activated clotting time (480 s) had been achieved. The LVAD was prepared for implantation under sterile conditions (assembly of components, rinsing and de-airing) (Figure 1). The HeartWare device was implanted without aortic cross-clamping or cardiac arrest. The inflow cannula was placed at the apex of the left ventricle using the pre-attached fixation ring. The driveline was tunneled through the skin in the left upper quadrant of the abdomen and connected to the power source. The outflow graft of the pump was anastomosed to the ascending aorta. The device was placed entirely within the pericardium, and there was no need to open the pleural or peritoneal spaces. After de-airing, the pump was started and the patient was gradually weaned from cardiopulmonary bypass (Figure 2). The pump speed was set at 2500 rpm, with a pump flow of up to 4.9 L/min and pump power of 3.4 W; the hematocrit was 32%. The alternate controller was also set at a speed of 2500 rpm.
A month later, the patient was discharged. Anticoagulant therapy at discharge included warfarin with a target INR of 2-3, as well as acetylsalicylic acid at a dose of 100 mg per day. His other medications (from the last check-up four months earlier) included Ramipril 5 mg daily, Amlodipine besylate 5 mg daily, Spironolactone 25 mg daily, Furosemide 20 mg twice a day, Pantoprazole 40 mg twice a day, Amiodarone 200 mg daily, Levothyroxine 50 mcg daily, and Atorvastatin 20 mg daily.
During the check-ups after a month, two months, six months and a year, the patient did not show signs of heart failure and the LVAD parameters on the controller were stable. The patient is on the heart transplant waiting list in the Republic of Serbia.
Discussion
Implantation of the LVAD, apart from being a life-saving treatment, allows a large number of patients in the terminal stage of heart failure to live long enough, and with good quality of life, until heart transplantation (bridge to transplantation, for patients who are candidates for HTx), or to have an acceptable quality of life with the device as definitive therapy (for patients who are not candidates for HTx). In fewer cases, cardiac function recovers after device implantation and the device is then explanted (bridge to recovery).
Implantation of the LVAD has, on the basis of previous results, been recognized as a valuable alternative to cardiac transplantation. Furthermore, the need for heart transplantation as first-choice therapy can be reduced when taking into account the post-transplant survival results in the group of patients who had an LVAD implanted as a bridge to transplantation. 2 The number of LVAD devices implanted worldwide has been steadily increasing owing to the significant improvement in survival rates in recent years. New systems are easier to implant and longer-lasting, and patients lead a "customized" normal life in their homes while they wait for cardiac transplantation or carry the LVAD as definitive therapy. 3 The HeartWare left ventricular support system is a miniature centrifugal pump that achieves a continuous blood flow of up to 10 liters per minute. Optimized blood flow is accomplished using a hybrid magnetic-hydrodynamic, hemocompatible centrifugal system. The HeartWare pump, implanted at the apex of the left ventricle with its outflow graft connected to the aorta, is connected to the controller and the power source (batteries) through a thin flexible driveline, most commonly brought out through the skin in the left upper quadrant of the abdomen. The pump controller enables precise estimation of flow and records significant hemodynamic parameters, on the basis of which pump operation is adjusted. 4 (Figure 3) The challenge facing multidisciplinary teams involved in the diagnosis and treatment of terminal heart failure lies in identifying patients who can benefit from LVAD implantation, taking into account the possibility of heart transplantation and the appropriate timing of implantation. The decision is made on the basis of clinical parameters, examinations conducted according to protocols, and operative risk assessment. 5 In the patient who is the subject of this case report, conservative treatment of heart failure with progressive multiple organ dysfunction did not yield satisfactory results. Echocardiographic analysis showed weakened left ventricular function with preserved right ventricular function, which is one of the prerequisites for a successful post-implantation outcome. Since no suitable donor was available, it was decided to implant an LVAD as a bridge to heart transplantation in order to maintain vital parameters. The procedure proceeded without complications and the patient was discharged after one month. All check-ups during the year after implantation showed regular hemodynamic status; the patient is fully physically active and on the heart transplant waiting list.
Conclusion
The use of a left ventricular assist device as a bridge to transplantation continues to demonstrate a high rate of one-year survival with satisfactory post-implantation quality of life. The incidence of complications is reduced compared with earlier devices evaluated in previous studies, even though the duration of pump support has lengthened.
"year": 2017,
"sha1": "2a45bbb737f862c71a84b5077f1f8c55638ebace",
"oa_license": "CCBY",
"oa_url": "https://scindeks-clanci.ceon.rs/data/pdf/2490-3329/2017/2490-33291702141T.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "a8184d1c666e73643c30562410e5e05a5a422051",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Conserved and differential transcriptional responses of peroxisome associated pathways to drought, dehydration and ABA
Peroxisome biogenesis, peroxisome β-oxidation and antioxidant enzymes show conserved upregulation in response to ABA and water deficit-inducing treatments, highlighting an overlooked role for peroxisomes in response to drought.
Introduction
Water deficiency is a severe constraint on crop production worldwide (Boyer, 1982); for example, drought regularly limits wheat production in almost 50% of the cropped area. This issue is of increasing concern and is amplified by climate change, population growth and urbanization, which impact water availability for agriculture and therefore global food security (Godfray et al., 2010). Consequently, new insight into the molecular mechanisms of the response to drought is an important but challenging goal for the improvement of drought-tolerant plant varieties (Claeys and Inzé, 2013). Abscisic acid (ABA) is a major player in coordinating the adaptation of plants to adverse conditions as well as functioning in many plant developmental processes (Leung and Giraudat, 1998; Fujita et al., 2011; Stevenson et al., 2016). ABA mediates physiological processes such as stomatal closure, osmolyte accumulation, and the synthesis of stress-related proteins, as well as compounds associated with the scavenging of reactive oxygen species that are implicated in desiccation-related membrane damage (Ingram and Bartels, 1996; Hoekstra et al., 2001). ABA is required for the induction of genes in response to dehydration stress (Nakashima et al., 2009). Moreover, exogenous application of ABA induces a number of genes that respond to dehydration and cold stress (Zhu, 2002; Shinozaki et al., 2003; Cuming et al., 2007). However, not all genes that are induced by dehydration and cold stress respond to the exogenous application of ABA (Zhu, 2002; Yamaguchi-Shinozaki and Shinozaki, 2006). This suggests the existence of ABA-independent and ABA-dependent signal transduction pathways that convert the initial stress signal into cellular responses (Zhu, 2002).
The recruitment of ABA to regulate responses to water stress emerged with the evolution of land plants, which are monophyletic in origin, descending from a single successful colonisation of terrestrial habitats by a charophyte algal ancestor ca. 470Ma (Delwiche and Cooper, 2015). The conquest of land necessarily required adaptations enabling these ancestral plants to survive the highly variable conditions characteristic of the terrestrial habitats, most notably exposure to ultraviolet radiation, salinity, dehydration and temperature variation. Lacking the anatomic adaptations characteristic of extant tracheophytes, survival of the earliest colonisers must have been cellular and biochemical in nature. A common cellular consequence of these environmental stresses is the formation of reactive oxygen species (ROS). Consequently, possession of antioxidant mechanisms must have ranked highly in the suite of adaptations that supported the transition from aquatic to terrestrial habitats, enabling both ROS signaling and defence against ROS toxicity. Such adaptations remain important today, being widespread and highly conserved in nature among all classes of land plant and central to many environmental stress responses (Mittler et al., 2011;Noctor et al., 2014).
Peroxisomes are both major sources of ROS and sites of important antioxidant defences (Noctor et al., 2002). They contain antioxidant molecules such as ascorbate and glutathione, and antioxidant enzymes including ascorbate peroxidase, dehydro- and monodehydroascorbate reductase, glutathione reductase and catalase. Changes in the activities of these enzymes are regulated by various stress conditions (del Rio et al., 1998). Accordingly, peroxisomes have been suggested to play important roles in defence against abiotic and biotic stress in plants (Willekens et al., 1997; del Rio et al., 1998). They are involved in lipid mobilization through β-oxidation and the glyoxylate cycle, photorespiration, nitrogen metabolism, and the synthesis and metabolism of plant hormones. Peroxisomes import membrane and soluble proteins from the cytosol to maintain and modulate their functions (reviewed in Cross et al., 2016). The biogenesis of peroxisomes requires a group of protein factors referred to as peroxins, encoded by PEX genes (Distel et al., 1996). Two types of targeting signals have been identified for peroxisomal matrix enzymes: PTS1, a C-terminal tripeptide, and PTS2, an N-terminal nonapeptide (Reumann et al., 2016). Peroxisome membrane proteins are inserted post-translationally by the action of the chaperone/receptor PEX19 and its docking factor PEX3. Some membrane proteins may also be targeted to peroxisomes via the ER in a process that also requires PEX3 (Cross et al., 2016).
Peroxisomes are remarkably dynamic, responding to environmental and cellular cues by alterations in size, number, and proteomic content. As well as importing proteins from the cytosol, peroxisomes proliferate by division in a process dependent upon the PEX11 family (Orth et al., 2007;Kamisugi et al., 2016). Plant peroxisome proliferation has been reported in response to hydrogen peroxide, pathogens or ozone (Morré et al., 1990;Lopez-Huertas et al., 2000;Oksanen et al., 2004), and during senescence (Pastori and Del Rio, 1997).
To investigate the evolving roles of peroxisomes in the perception of, and response to, abiotic stress, we focused on drought and its consequences: dehydration stress, ABA production and ROS metabolism. We have taken a genome-wide, cross-species approach, utilising information gained from a modern angiosperm and from a bryophyte (the most ancient group of land plants) to compare the transcriptional responses of PTS1-targeted peroxisome proteins, antioxidants and PEX genes. We benefit from the plethora of genomic resources available for the well-characterised angiosperm and bryophyte models Arabidopsis thaliana and Physcomitrella patens, and extend these studies to the globally preeminent crop species wheat (Triticum aestivum), for which comparable resources are only now being developed (Uauy, 2017). Due to its large hexaploid genome, wheat is a much more challenging species to study than haploid Physcomitrella and diploid Arabidopsis thaliana; we therefore used the rich data and extensive information from these two model species to demonstrate that genes encoding peroxisome-targeted proteins are disproportionately upregulated, and that upregulation of peroxisomal β-oxidation is a conserved response to drought, dehydration and ABA. Additionally, peroxisome biogenesis appears to be upregulated, with increased expression of isoforms of PEX3 and PEX11 seen in both moss and wheat, with clear differences between drought-sensitive and drought-tolerant cultivars. Interestingly, increased expression of the glyoxylate cycle enzymes ICL and MS is seen in moss and wheat but not in Arabidopsis.
Compiling Arabidopsis peroxisomal genes and identification of homologs in moss and wheat
Arabidopsis proteins predicted to be targeted to peroxisomes were retrieved from AraPerox 1.2. The antioxidant genes, their descriptions and localization information were compiled manually from The Arabidopsis Information Resource (www.arabidopsis.org). The Arabidopsis antioxidant enzymes annotated as peroxisomal were used to identify non-peroxisomal isoenzymes, and some additional known non-peroxisomal components of the antioxidant network from Arabidopsis were also added. This resulted in a list of 51 Arabidopsis proteins, representing 10 families.
To identify homologs of the genes encoding PTS1-containing proteins, PEX proteins and antioxidant enzymes ('PTS1', 'PEX' and 'Antox' genes respectively) in P. patens and wheat, the whole protein sequence content of Arabidopsis thaliana was obtained from TAIR, and the Arabidopsis proteins were used to search the Physcomitrella and wheat genomes at http://phytozome.jgi.doe.gov/ (E-value < 1e-10 and < 1e-5, respectively) by TBLASTN. All sequences of unique hits in wheat or moss were then used for a reciprocal BLASTP search of the Arabidopsis proteome. Due to the large hexaploid wheat genome, wheat homologs for PTS1-containing proteins and antioxidant enzymes were obtained by separate queries using BioMart (http://www.gramene.org/) (Gupta et al., 2016) and the TAIR Arabidopsis dataset. Data were filtered to obtain corresponding homologs in Triticum aestivum, and gene stable IDs were then converted manually to the corresponding Ensembl gene IDs by TBLASTN analysis of protein sequences against the wheat genome at Phytozome (E-value < 1e-5) to identify the best BLAST match for each locus. PredPlantPTS1 (http://ppp.gobics.de/) was used for the prediction of PTS1 signals in the moss and wheat homologs of the genes of putative PTS1 proteins. All candidate homologs were verified with the help of the CDD and ExPASy databases (https://www.expasy.org/) (Gasteiger et al., 2003) to confirm the presence of the expected conserved domains. Moss and wheat proteins identified from BLAST searches were accepted only if they contained the corresponding Arabidopsis domains; multiple sequence alignments were then used to confirm the conserved domains of the identified sequences. Retrieved wheat sequences were corrected when a portion of the protein was missing due to incorrect gene model prediction. Sequences showing large truncations that could not be completed by further BLAST searches were excluded.
Peroxisomal gene expression in moss under ABA, dehydration and mannitol
The gene expression profiles of the Physcomitrella peroxisomal (PTS1, PEX) and Antox genes responding to ABA, osmotic and dehydration stress were obtained using the RNA-seq data deposited in the Gene Expression Omnibus database under accession number GSE72583 and in the NCBI Sequence Read Archive (accession number SRP063055; BioProject PRJNA294412) (Stevenson et al., 2016). To assess statistical significance, hypergeometric probabilities were evaluated for the number of genes in the data set of interest (e.g. PTS1, PEX or Antox) up-regulated ≥2-fold change (FC) by the experimental treatment, relative to the total number of genes up-regulated ≥2FC in the entire gene set for that treatment. The heatmaps were drawn using the Morpheus software (https://clue.io/morpheus/) (Minguet et al., 2015).
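A minimal sketch of this over-representation test in R is given below; the variable names are illustrative, and the one-sided tail computed by phyper corresponds to the hypergeometric probability described in the text.

```r
# k: genes in the set of interest (e.g. PTS1) up-regulated >= 2 FC
# K: total size of the gene set of interest
# n: all genes up-regulated >= 2 FC in the whole dataset
# N: total number of genes in the dataset
hyper_p <- function(k, K, n, N) {
  # P(X >= k) for X ~ Hypergeometric(N, K, n)
  phyper(k - 1, K, N - K, n, lower.tail = FALSE)
}

hyper_p(k = 40, K = 108, n = 2000, N = 26000)  # hypothetical counts
```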
Peroxisomal gene expression in Arabidopsis in response to ABA treatment
Arabidopsis RNA-seq expression data were downloaded from the Gene Expression Omnibus (GEO, http://www.ncbi.nlm.nih.gov/geo/) (GEO accession number: GSE65739; SRA accession number: SRP053346), and 4 samples, two biological replicates of 10-day-old Arabidopsis seedlings mock-treated (GSM1603932, GSM1603936) or treated with 50 μM ABA (GSM1603933, GSM1603937) (Weng et al., 2016), were selected to study expression of our candidate genes. Processed data files were downloaded.
Peroxisomal gene expression in Triticum aestivum under drought stress
Wheat transcriptome profiling and gene expression data were retrieved from the Gene Expression Omnibus (GEO, http://www.ncbi.nlm.nih.gov/geo/) (GEO accession number: GSE30436) (Kadam et al., 2012). Twelve samples were selected to study expression of candidate genes in two bulked populations of wheat recombinant inbred lines which differed in their susceptibility to drought, a 'drought sensitive Bulk' and a 'drought tolerant Bulk'. The sample accession numbers are as follows: GSM754878, GSM754879, GSM754880, GSM754884, GSM754885, GSM754886, GSM754890, GSM754891, GSM754892, GSM754896, GSM754897, GSM754898; three samples were used as biological replicates for each treatment. CEL files were downloaded and the processed data values for the selected samples were used to calculate FC in the tolerant and sensitive genotypes. Probesets corresponding to PTS1, PEX and Antox genes were searched using the online PLEXdb BLAST tool available at Affymetrix (http://www.affymetrix.com/).
Plant materials and growth conditions
The Egyptian wheat (Triticum aestivum) variety Giza 168 was obtained from the Agricultural Research Centre (ARC), Giza, Egypt. The British variety Oakley was obtained from KWS UK Ltd. To test osmotic tolerance, seeds were exposed to 20% (w/v) PEG-6000 as an osmotic-stress-inducing medium. Thirty seeds were germinated on filter paper in petri dishes wetted with 7 ml of distilled water or 20% PEG solution, using three replicates for each variety and treatment, then the number of germinated seeds was counted to calculate the germination percentage (see Supplementary Fig. S1 at JXB online). Germination was scored when radicles reached 5 mm in length.
To analyse ABA responses of wheat plants seeds were germinated in pots containing compost in a growth chamber at 20 °C, 16 h photoperiod, 60% RH and watered twice per week. ABA (100 µM) was applied as foliar sprays at 9 days after sowing (DAS). Gas exchange parameters were determined for control and ABA-treated plants 24 h following ABA application using a commercial, open-flow gas exchange measurement system (LI-6400P, LI-COR Inc., Lincoln, NE). Biochemical methods were used for measuring osmolyte concentrations one week after ABA treatment as follows: soluble sugars were extracted according to (Schortemeyer et al., 1997) and determined according to (Schlüter and Crawford, 2001), proline was determined according to (Bates et al., 1973), glycine-betaine was determined according to (Grieve and Grattan, 1983) and amino acids concentrations were determined according to (Sircelj et al., 2005).
For moss, WT protonemal tissue was sub-cultured at weekly intervals on cellophane overlays on solid BCD medium containing 5 mM diammonium tartrate and trace elements (BCDAT) (Knight et al., 2002). For ABA treatment, tissue was transferred to BCDAT supplemented with 10^−5 M ABA for 1 h. Tissue was harvested and squeezed dry before freezing in liquid nitrogen and storage at −70 °C prior to RNA isolation. Protonemal tissue from a line expressing a peroxisome-targeted mRFP was sub-cultured on cellophane overlays on solid BCD medium containing 1 mM CaCl2 and 5 mM diammonium tartrate. Seven-day-old protonemal tissue on the cellophane discs was transferred to petri dishes containing BCDAT with or without 10^−5 M ABA for 6 h before counting peroxisomes by fluorescence microscopy. The number of peroxisomes per cell was determined for at least 23 randomly selected cells. Analysis of variance (ANOVA) was performed to identify significant differences between treatments at a significance level of P ≤ 0.05, and comparisons between treatments were made using the least significant difference (LSD) test at P ≤ 0.05.
RNA extraction and qPCR
About 0.1 g of wheat leaf tissue was harvested 24 h after ABA treatment and homogenized in liquid nitrogen, and total RNA was extracted using an RNeasy mini kit (Qiagen) and treated with RNase-Free DNase (Qiagen) following the kit protocol. For moss samples, RNA was extracted according to Knight et al. (2002). For rt-PCR, RNA (10 µg) was digested with 1 unit of RQ1 DNase (Promega) for 10 min at room temperature and purified by phenol-chloroform extraction and ethanol precipitation. RNA for all samples was quantified by NanoDrop spectrophotometry, and the purity and integrity of total RNA were assessed on an Agilent BioAnalyzer. Complementary DNA was synthesized from 1 µg of RNA using the Bio-Rad iScript Select cDNA synthesis kit. The reaction mixture was diluted 30-fold with water, and 2 µl aliquots were used for PCR amplification. Quantitative real-time polymerase chain reaction (qPCR) was performed with the diluted cDNA samples in a 20 μl reaction mixture containing 10 µl Bio-Rad iQ SYBR 2X mix and 300 nM PCR primers. PCR was performed on a Bio-Rad CFX system as follows: denaturation for 2 min at 95 °C, followed by 40 cycles of 10 s at 95 °C and 30 s at 60 °C. The PCR amplification efficiency was determined for each primer combination, calculated automatically by the Bio-Rad CFX Manager software from the standard concentrations and dilutions entered into the program. The standard curve comprised 5 serial dilutions of a mixture of all sample cDNAs. PCR efficiencies ranged from 87 to 110%. Three biological replicates and three technical replicates were used for each treatment. No signals were detected in any no-template control (NTC) reaction. Relative transcript levels were calculated using the 2^−ΔΔCt method, with the wheat glyceraldehyde-3-phosphate dehydrogenase (GAPDH) and moss Clathrin Coat Assembly Protein AP50 (CAP50; Pp3c27_2250V3.1) genes as internal controls. Primer pairs are listed in Supplementary Table S1.
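The relative-expression calculation reduces to the arithmetic sketched below; the Ct values in the usage line are hypothetical, and in practice each Ct would be the mean of the technical replicates, with GAPDH (wheat) or CAP50 (moss) as the reference gene.

```r
# 2^-ddCt: fold change of the target gene in treated vs control tissue,
# each normalised to the reference gene
rel_expression <- function(ct_target_trt, ct_ref_trt,
                           ct_target_ctl, ct_ref_ctl) {
  d_ct_trt <- ct_target_trt - ct_ref_trt   # dCt, treated
  d_ct_ctl <- ct_target_ctl - ct_ref_ctl   # dCt, control
  2^-(d_ct_trt - d_ct_ctl)                 # ddCt -> fold change
}

rel_expression(24.1, 18.0, 26.3, 18.1)  # hypothetical Ct values
```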
Identification of peroxisome associated genes in wheat and moss
Three sets of genes associated with peroxisome biogenesis and function were identified in wheat and moss using the corresponding Arabidopsis proteins as queries. These 3 gene sets coded for i) proteins carrying a predicted peroxisome targeting signal type 1 (PTS1) sequence at the C-terminus ('genes of putative PTS1 proteins'), ii) enzymes involved in the cellular antioxidant network that have been described as peroxisomal, their non-peroxisomal homologs and some additional non-peroxisomal antioxidant enzymes ('Antox proteins'), and iii) PEX genes involved in peroxisome biogenesis. Table 1 summarises the number of genes in these gene sets in Arabidopsis and their corresponding homologs in moss and wheat. In total, the 340 genes of putative PTS1 proteins from Arabidopsis identified 1052 homologs in wheat and 282 homologs in moss. Some of these homologs were present in multiple copies, and only 185 gene products were predicted to contain a PTS1 in wheat and 108 in moss. Supplementary Tables S2 and S3 show the identified genes of putative PTS1 proteins in moss and wheat respectively.
The 51 'Antox' genes from Arabidopsis identified 94 homologs in wheat and 49 in moss (Table 2). For the Arabidopsis proteins, the known/predicted location according to SUBA, the subcellular localization database of Arabidopsis proteins (Hooper et al., 2017), is given in Supplementary Table S4; for the identified moss and wheat homologues, the favoured subcellular location predicted using Plant-mPLoc (Chou and Shen, 2010) is given in Supplementary Table S5. The PEX gene complement of wheat and moss has been described previously (Cross et al., 2016).
Expression of peroxisome related genes in moss in response to ABA, osmotic stress and dehydration
In order to investigate the response of peroxisome-related pathways under ABA and drought stress conditions in moss, we analyzed the changes in expression of the putative PTS1, Antox and PEX genes listed in Supplementary Tables S2 and S4 and in Cross et al. (2016) in the Physcomitrella RNA-seq datasets. The genes upregulated by >2-fold change in each of these gene sets under conditions of 10 µM ABA treatment, osmotic stress (10% mannitol) or dehydration (70% loss of fresh weight) are shown in Fig. 1, and the gene identification numbers of each set are indicated in Supplementary Table S6. While there are commonalities, each stress has its own distinct signature (Fig. 1A). The numbers of up- and down-regulated genes under the 3 conditions are shown in Fig. 1B. Using hypergeometric probability, statistically significant numbers of genes of putative PTS1 proteins were up-regulated >2FC in response to ABA, dehydration and mannitol compared to the total number of up-regulated genes in the whole dataset under each condition, whereas the numbers of up-regulated Antox and PEX genes were statistically significant only under osmotic stress by mannitol.
To validate the effect of ABA treatment on gene expression, rt-qPCR was carried out. Catalase was upregulated by more than three orders of magnitude upon ABA treatment (Fig. 3). Acyl-CoA oxidase 1 (ACX1) and AIM1, markers for the β-oxidation pathway, were upregulated ~3-fold and >20-fold respectively. Malate synthase, a marker for the glyoxylate cycle, was upregulated >5-fold by ABA treatment (Fig. 3).

[Table 1 footnotes] *For 46 PTS1 proteins of Arabidopsis the corresponding homologs in wheat could not be identified due to major differences in protein sequences in the Gramene and Phytozome databases, as described in the Methods. †Catalase (CAT) genes are included in the antioxidant gene set but they also possess a non-canonical PTS1 which, although functional, deviates from the PTS1 consensus.

Induction of peroxisome biogenesis in Physcomitrella patens by ABA

Fig. 4 shows the expression of the 3 members of the PEX3 gene family and 5 members of the PEX11 family in response to ABA treatment. PEX3-3 was the most highly induced, showing a ~10-fold increase in transcript level, although PEX3-1 and PEX3-2 also showed increased transcript levels in response to ABA. The PEX11 family members showed strikingly different responses to ABA: PEX11-1 was unresponsive, PEX11-3 and PEX11-4 were down-regulated, whilst PEX11-5 and PEX11-6 were upregulated ~5-fold and ~2.5-fold respectively, consistent with the RNA-seq data. As PEX3 and PEX11 both have roles in peroxisome proliferation/division, peroxisomes were counted in P. patens chloronemata expressing a peroxisome-targeted RFP that had been ABA-treated for 6 hours (Fig. 5A, B). The mean peroxisome number per cell in ABA-treated samples was 24, compared to 16 in untreated cells, which was significant at the P=0.05 level (Fig. 5C).

Fig. 2. Diagram showing metabolic relationships between selected peroxisomal proteins and enzymes upregulated by drought or ABA. Protein names are shown in black typeface and metabolites in green. 4CL1 and AAE17 activate (unknown) substrates for β-oxidation, which utilizes the enzymes acyl-CoA oxidase (ACX)1 and the multifunctional proteins MFP2 and/or AIM1. ECH1a is a poorly characterized member of the enoyl-CoA hydratase/isomerase family. ACX (and other peroxisomal enzymes) produce hydrogen peroxide, which is broken down by catalase (CAT). The acetyl-CoA produced can enter the glyoxylate pathway; malate synthase (MLS) and isocitrate lyase (ICL) are the unique enzymes of this pathway, which produces malate that is exported as a substrate for gluconeogenesis. Phosphoenolpyruvate carboxykinase (PEPCK) catalyses the first committed step of this pathway, leading to production of sucrose and/or compatible osmolytes. Succinate produced by the ICL reaction is exported to mitochondria for further metabolism. In Arabidopsis, citrate can be exported directly for respiration (not shown). On the right-hand side of the diagram a simplified representation of the ascorbate-glutathione cycle is shown. Asc, ascorbate; DHA, dehydroascorbate; GSH, reduced glutathione; GSSG, oxidized glutathione; DHAR, dehydroascorbate reductase; GR1, glutathione reductase; ICDH, NADP+-dependent isocitrate dehydrogenase. Ascorbate peroxidase (APX) and monodehydroascorbate reductase (MDAR) are membrane-bound proteins that participate in removal of hydrogen peroxide and regeneration of ascorbate. PEX3 is involved in import of peroxisome membrane proteins (including components of the import machinery for matrix proteins) and PEX11 is involved in peroxisome division.
Expression of peroxisome related genes in Arabidopsis thaliana in response to ABA
For comparison with the ABA responses seen in moss, RNA-seq data for ABA-treated 10-day-old Arabidopsis seedlings were examined. The results for the PTS1, antioxidant and PEX gene sets are summarised in Supplementary Tables S7, S8 and S9 respectively. Genes which showed upregulation by all 3 treatments in moss (see Supplementary Table S6 at JXB online) and which were also upregulated >2FC by ABA in Arabidopsis are listed in Table 3. These include two adenylate activating enzymes (AAE17, At5g23050, and AAE12, At1g65890) and the β-oxidation enzyme ACX1 (At4g16760). A glutathione reductase (GR1, At3g24170) and a dehydroascorbate reductase (DHAR1, At1g19570) associated with the peroxisomal ascorbate-glutathione cycle also showed up-regulation of >2FC. However, unlike in moss, the glyoxylate cycle genes (ICL and MS) were not upregulated by ABA in Arabidopsis (Supplementary Table S7 and Table 3). Of the PEX11 family, only PEX11d (At3g61070) showed up-regulation >2FC under ABA treatment.
Expression of peroxisome related genes in Triticum aestivum in drought tolerant and sensitive genotypes under drought stress
To extend the comparison of peroxisomal responses to wheat, homologs for PTS1 proteins, antioxidant enzymes and PEX proteins were identified using the corresponding Affymetrix probe IDs (see Supplementary Tables S10, S11 and S12 at JXB online). A microarray dataset of two sets of bulked recombinant inbred lines which differed in their drought tolerance (Kadam et al., 2012) was analysed for differences in expression of these 3 sets of genes. The FC values calculated for treated samples compared to control samples are presented in Fig. 6. Of the upregulated genes of putative PTS1 proteins, 9 were shared, 3 were unique to the tolerant and 14 unique to the sensitive genotype (Fig. 6A). A total of 23 genes of putative PTS1 proteins were significantly up-regulated ≥2FC in the drought-sensitive genotype, with a hypergeometric probability of 0.01, and 8 genes were down-regulated (Fig. 6B). In the drought-tolerant genotype 12 genes of putative PTS1 proteins were upregulated and 7 were down-regulated. It was notable that ICL, MS and AAE were commonly up-regulated in the sensitive and tolerant genotypes but with a higher FC in the sensitive genotype, although the tolerant genotype had a higher level of expression under the control condition (Supplementary Table S10 and Table 3). Out of 34 Affymetrix wheat probes for Antox genes (Supplementary Table S11), 3 genes were upregulated in the tolerant samples (Fig. 6A). APX5 and CAT were the Antox genes commonly up-regulated under drought conditions in both cultivars. The wheat array contained only 17 probes for PEX genes (Supplementary Table S12).
Only PEX11d was up-regulated ≥2FC in the drought-tolerant wheat, while PEX11a and PEX11d were upregulated in the sensitive cultivar. No PEX genes were down-regulated ≥2FC in either cultivar (Fig. 6B). Table 3 shows the genes in wheat that are upregulated by drought whose homologues are also up-regulated by ABA in Arabidopsis and by ABA, dehydration and mannitol in moss.
Gene expression of peroxisome related genes in drought tolerant and sensitive cultivars of wheat in response to ABA
To further explore the relationship between the expression of peroxisome-related genes and drought tolerance, two wheat genotypes differing in their performance under drought stress were selected on the basis of a germination tolerance test using PEG-6000 and calculation of water content percentage (see Supplementary Fig. S1 at JXB online). These data suggest Giza is less tolerant to drought than Oakley. Carbon dioxide assimilation was measured in both cultivars with and without 100 µM ABA treatment after 24 hours (Fig. 7A). In both cultivars, CO2 assimilation decreased as a result of stomatal closure by ABA, and this is reflected by the decrease in stomatal conductance and transpiration in both cultivars; however, both cultivars maintained stable levels of internal carbon. Giza, the more sensitive variety, was the most affected in terms of CO2 assimilation (Fig. 7A). Osmolyte accumulation after 7 days varied between the cultivars (Fig. 7B). Soluble sugars significantly decreased in Giza upon ABA treatment, presumably as a result of decreased photosynthesis, whereas in Oakley the soluble sugars did not significantly change compared to the untreated control. Proline and glycine betaine accumulated in both cultivars in response to ABA. Free amino acids were elevated in response to ABA in Giza, which may reflect proteolysis; this response was much less marked in Oakley (Fig. 7B). Expression of PEX3 and the PEX11 family was studied in these leaf samples 24 hours after ABA treatment by rt-qPCR (Fig. 8). PEX3 (the conserved region of the 3 PEX3 genes Traes_5DS_BB388ED7C, Traes_5AS_6B30155C7 and Traes_5BS_8560EC011) was strongly upregulated by ABA treatment in the tolerant variety Oakley but not in the sensitive cultivar Giza. Three PEX11d isoforms (d-1, Traes_4AL_D9FFAAA1A; d-3, Traes_7AS_D51B7852F; and d-4, Traes_7DS_DA11E7020) were also upregulated in Oakley but not in Giza.

Fig. 5. Increased peroxisome biogenesis by ABA in P. patens chloronemata. A: control; B: ABA-treated; C: peroxisome number ± SE. Twenty-three cells were used to count the peroxisomes, and analysis of variance (ANOVA) at a significance level of P ≤ 0.05 was used to identify significant differences between the treatments, with comparisons made using the least significant difference (LSD) test at P ≤ 0.05.
Discussion
This study has collated an inventory of predicted peroxisomal PTS1-targeted proteins (genes of putative PTS1 proteins) from moss and wheat. It has also collated information on the network of antioxidant enzymes (Antox genes) from these species, covering both probable peroxisomal isoforms and their non-peroxisomal homologues. Using this information, along with previously collated information on the PEX gene complement of these species, we have examined the transcriptional responses of these gene sets to drought, dehydration and ABA with the intention of identifying evolutionarily conserved responses.
The peroxisome proteome of moss and wheat
The results summarized in Table 1 show that not all PTS1 proteins in Arabidopsis have obvious homologues in wheat and moss, whilst among the homologous proteins identified in these two species, some appear to lack a PTS1. In some cases this could be due to a false positive prediction of peroxisome targeting of the corresponding Arabidopsis protein, as not all have been experimentally validated. Also possible are false negative predictions arising from as yet unknown variation in PTS1 usage in moss and wheat, and the potential for 'piggy-back import' (Kataya et al., 2015), where a protein lacking a PTS1 can be co-imported with a partner that has a PTS1. Nevertheless, despite these caveats, the data suggest that the peroxisome proteome is quite variable between organisms. This fits with the proposal that peroxisome targeting by the PTS1 pathway can evolve relatively rapidly through alternative splicing, point mutation and stop-codon read-through (Reumann et al., 2016). The overrepresentation of PTS1 gene transcripts amongst those upregulated across species points to the importance of peroxisome processes in the response to drought and ABA.
Evidence for conserved upregulation of peroxisomal β-oxidation
Members of the acyl adenylate activating family of enzymes were upregulated in all three species. These enzymes activate diverse substrates for entry into β-oxidation. AAE17 in Arabidopsis is the closest relative of AAE18, which activates the synthetic pro-hormone 2,4-dichlorophenoxybutyric acid (2,4-DB) for β-oxidation, but the natural substrates are not known for either enzyme (Wiszniewski et al., 2009). rtQPCR confirmed significant induction of ACX1 and AIM1 by ABA treatment in moss (Fig. 3). ACX1 is also induced by ABA in Arabidopsis (see Supplementary Table S7 and Table 3 at JXB online) and by wounding, dehydration and jasmonic acid (JA) (Castillo et al., 2004).

[Table 3 (fragment). Moss genes up-regulated >2FC by ABA, dehydration and mannitol, and whether their homologues are up-regulated by ABA in Arabidopsis (Weng et al., 2016) or by drought in wheat (Kadam et al., 2012). √ and x denote shared (≥2FC) up-regulation with moss or not, respectively; ND = not detected (gene not found in the array or seq data). Listed entries include abnormal inflorescence meristem 1 (fatty acid multifunctional protein, AIM1) and enoyl-CoA hydratase-like protein a (ECHIa). AAE17, 12, 7 and 10 and PEX11D and E are AAE and PEX family members, respectively, expressed in Arabidopsis and wheat.]
Jasmonates have been reported to interact with ABA signaling in drought stress (see Riemann et al., 2015 for a recent review). However, drought is proposed to lead to a block in the conversion of OPDA to JA, and OPDA is itself an important signal that regulates guard cell aperture both co-operatively with, and independently of, ABA (Savchenko et al., 2014); it therefore seems unlikely that ABA increases JA production via upregulation of peroxisomal β-oxidation. Neither OPR3 nor OPCL1, which are key enzymes in the pathway, changes in Arabidopsis on ABA treatment (see Supplementary Table S7 at JXB online), and Physcomitrella patens is reported to contain cyclopentanones (OPDA) but not jasmonates (Stumpe et al., 2010). Why then should peroxisomal β-oxidation be upregulated? Stress responses trigger lipid-dependent signaling (Hou et al., 2016), and peroxisomal β-oxidation could be involved in the turnover of some of these molecules or in the degradation of peroxidated membrane lipids formed as a result of oxidative stress. The as yet unknown substrates activated by the AAEs, or their metabolites, may also be important signals or mitigators of stress responses.
The glyoxylate cycle: upregulation for production of carbohydrates?
Fatty acid breakdown produces acetyl-CoA, which can enter the glyoxylate cycle (Eastmond et al., 2000; Eastmond and Graham, 2001). Induction of isocitrate lyase and malate synthase provides a route for the synthesis of malate, which can be converted to oxaloacetate, the starting point for gluconeogenesis (Fig. 2). Consistent with this hypothesis, the moss gene Pp3c4_25090, encoding a PEP carboxykinase (the first committed step of gluconeogenesis), was upregulated 3.2-, 4.4- and 2.7-fold under ABA, mannitol and dehydration treatment, respectively. Gluconeogenesis can provide soluble sugars for respiration or osmotic balance under water-deficit conditions, and sucrose accumulates to high levels in P. patens following ABA treatment (Oldenhof et al., 2006). Strikingly, the glyoxylate cycle enzymes were not induced upon drought or ABA treatment in Arabidopsis (Li and Hu, 2015; see Supplementary Table S7 at JXB online). Non-coordinate induction of β-oxidation and the glyoxylate cycle was seen during starvation or senescence in Arabidopsis (Charlton et al., 2005a), supporting the notion that lipid is broken down and respired there (Pracharoenwattana et al., 2005), in contrast to other species where it feeds into the glyoxylate cycle.
Branched chain amino acid metabolism
Another source of substrates for β-oxidation in non-lipid-storing tissue is the degradation of branched-chain amino acids (BCAAs). The expression of BCAT5, the first key enzyme in degrading BCAAs, and of IVDH (isovaleryl-CoA dehydrogenase), the enzyme that converts acyl-CoA to enoyl-CoA (two copies of BCAT5 and two copies of IVDH in moss), shows strong upregulation under ABA, dehydration and mannitol treatment, as does ECHIa, an enoyl-CoA hydratase. In Arabidopsis the pathway for degradation of BCAAs is predominantly mitochondrial (Binder, 2010), but at least one step of valine degradation, catalyzed by hydroxyisobutyryl-CoA hydrolase (encoded by the CHY1 gene), is peroxisomal (Zolman et al., 2001). Intriguingly, chy1 mutants are defective in cold responses, are more sensitive to dark-induced damage and accumulate ROS, phenotypes which can be suppressed by exogenous application of sucrose, suggesting a role in osmoprotection and/or maintenance of carbohydrate levels (Dong et al., 2009).
A role for peroxisomes in stomatal movement
An early response to water stress is stomatal closure, and recent data point to a role for peroxisome metabolism in guard cells in regulating stomatal movement. Peroxisomal β-oxidation of stored lipids contributes to ATP production for stomatal opening in both Arabidopsis and the lycophyte Selaginella (McLachlan et al., 2016). Furthermore, an Arabidopsis mutant defective in peroxisomal NADP+-dependent isocitrate dehydrogenase showed deficient stomatal opening, which was rescued by ascorbate. This led to the proposal that loss of peroxisomal NADP+-dependent ICDH activity impacts the ascorbate-glutathione cycle, leading to increased cytosolic H2O2 (Leterrier et al., 2016). An ICDH isoform (Pp3c20_22810) with a putative PTS1 sequence was upregulated by all treatments in moss.
Peroxisomes and ROS
Stomatal closure under water deficit increases photorespiratory flux and, as a consequence, the production of hydrogen peroxide. Interestingly, though, none of the candidate glycolate oxidases was upregulated >2-fold. Antioxidant responses are complex and sometimes contradictory (Noctor et al., 2014). One significant complication is the likely compartment-specific production of ROS and antioxidants, spatial information which is lost upon biochemical extraction. In the current study, focusing on the response of likely peroxisome-targeted enzymes provides an alternative, compartment-specific view of the response. Considering the genes commonly upregulated across the three stress treatments in moss, acyl-CoA oxidase and the copper amine oxidases generate H2O2. Copper-containing amine oxidases (CuAOs) are involved in the oxidative deamination of polyamines, ubiquitous polycationic compounds involved in crucial events in the life of the cell (Tavladoraki et al., 2012). Two catalase isoforms and virtually the complete ascorbate-glutathione cycle were upregulated. Glutathione reductase 1 in Arabidopsis is dual-targeted to the cytosol and peroxisomes (Kataya and Reumann, 2010), and a peroxisomal isoform of NADP+-dependent isocitrate dehydrogenase contributes to the ascorbate-glutathione cycle in peroxisomes (Jimenez et al., 1997; Reumann et al., 2007). Measurement of changes in organelle redox state using roGFP showed that drought primarily affected chloroplast and mitochondrial redox potential, whereas peroxisome redox potential was more affected in the dark and was exacerbated by treatment with 3-aminotriazole, which inhibits catalase (Bratt et al., 2016). This suggests an effective peroxisomal antioxidant defence under drought conditions. Upregulation of catalase was seen in moss and wheat under all conditions, whilst ACX1, GR and DHAR2 showed common upregulation between moss and Arabidopsis (Table 3). Wheat (Ford et al., 2011) and moss (Cui et al., 2012; Wang et al., 2009) proteomic studies also support the upregulation of antioxidants, and of polyamine biosynthesis (Cheng et al., 2015), under drought stress.
Peroxisome biogenesis
Peroxisomes multiply by division of pre-existing peroxisomes in a process requiring the PEX11 family of membrane proteins, and may also form de novo from the ER in a pathway that requires PEX3. PEX3 in addition recruits peroxisomal membrane proteins, including membrane-bound enzymes and components of the import machinery for peroxisome matrix proteins. All three PEX3 genes in moss were upregulated by ABA treatment, and different members of the PEX11 gene family were differentially expressed, suggesting specialization of function. Consistent with this, ABA triggered peroxisome proliferation in protonemal tissue (Fig. 5).
In Arabidopsis, PEX11b, c and d are upregulated under hypoxia and biotic stress, while only PEX11b and PEX11d are upregulated by ABA and PEX11c is down-regulated (Li and Hu, 2015). PEX11e induction in response to salt stress requires components of the ABA signaling pathway (Charlton et al., 2005b). Salt stress, like drought, imposes a dehydration stress, but also an ionic stress, and triggers peroxisome proliferation (Mitsuya et al., 2010). Arabidopsis PEX11b stimulates peroxisome proliferation in response to light (Desai and Hu, 2008). High light, drought, salt and ABA all trigger ROS production, which transcriptionally activates some PEX genes (Lopez-Huertas et al., 2000). Exogenous ROS application leads to the formation of peroxules, and PEX11a is involved in this process (Rodríguez-Serrano et al., 2016). In wheat, striking differences were seen between the drought-sensitive cultivar Giza and the drought-tolerant cultivar Oakley with respect to PEX gene expression: PEX3 and PEX11d1, d3 and d4 were strongly upregulated by ABA in the tolerant cultivar but not in the sensitive one, whereas PEX11b was upregulated in the sensitive cultivar (Fig. 8).
Conclusions
Peroxisome biogenesis and genes of putative PTS1 proteins are upregulated in response to drought, dehydration and ABA across evolutionary distant plant species. While the specifics of the responses differ, core pathways of PEX3/11 and β-oxidation are conserved. This suggests an important and evolutionarily ancient role for peroxisomes in stress perception and response. As differential regulation of PEX3 and PEX11 family members is correlated with better drought tolerance, the accumulation of multiple gene copies has perhaps allowed elaboration in the control of peroxisomal biogenesis in response to stress. Collectively our findings give new insights into the role of peroxisomes and peroxisome associated processes in response to drought and ABA across a wide evolutionary distance and suggest that the role of peroxisomes in perceiving and responding to drought stress is worthy of further investigation.
Supplementary Data
Supplementary data are available at JXB online.
Supplementary Fig. S1. Germination % of two wheat varieties, Giza 168 and Oakley, under 20% PEG-6000.
Supplementary Table S1. Primer sequences used in this study.
Supplementary Table S2. List of Physcomitrella patens homologues of proven and predicted Arabidopsis PTS1 proteins.
"year": 2018,
"sha1": "2dfa5d413d34912828cd557ec14288521432bb7a",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/jxb/article-pdf/69/20/4971/25736046/ery266.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "2dfa5d413d34912828cd557ec14288521432bb7a",
"s2fieldsofstudy": [
"Agricultural And Food Sciences",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
An optimization of the FPGA based wavelet trigger in radio detection of cosmic rays
Zbigniew Szadkowski
Abstract-For the observation of ultra-high energy cosmic rays (UHECRs) by the detection of their coherent radio emission, an FPGA-based wavelet trigger is being developed. Using radio detection, the electromagnetic part of an air shower in the atmosphere may be studied in detail, thus providing information complementary to that obtained by water Cherenkov detectors, which are predominantly sensitive to the muonic content of an air shower at ground. For an extensive radio detector array, due to the limited communication data rate, a sophisticated self-trigger is necessary. A wavelet trigger investigating on-line the power of signals is promising; however, its implementation requires some optimizations. The digitized signals are converted from the time to the frequency domain by a 32-point FFT procedure, then multiplied by wavelet transforms. Altera® FFT routines convert ADC data as blocks of 2^N samples, and the FFT coefficients are provided in a serial stream over 2^N time bins. The estimated signal power strongly depends on the relative positions of the FFT of the data and of the wavelet transforms in the frequency domain, so an additional procedure has to find the most efficient selection of the sample block to reach a response corresponding to the maximal signal power. If a set of FFT coefficients were available in each clock cycle, the signal power could be estimated in each clock cycle as well, and the additional tuning procedure would not be necessary. The paper describes an implementation of the 32-point FFT algorithm in an Altera® FPGA providing all 32 complex DFT coefficients for the wavelet trigger in each clock cycle, as well as the resource occupancy, timing and power consumption for several variants implementing up to 12 wavelet engines. Measurements on an Altera® development kit fully confirmed our expectations based on simulated configurations. The presented results give a green light for the development of a Front-End Board prototype based on the newest Cyclone® V FPGA with the wavelet trigger for radio detection of cosmic rays.
I. INTRODUCTION
Results from various cosmic ray experiments located at ground level point to the need for very large aperture detection systems for ultra-high energy cosmic rays. With its nearly 100% duty cycle, its high angular resolution, and its sensitivity to the longitudinal air-shower evolution, the radio technique is particularly well suited for the detection of Ultra High-Energy Cosmic Rays (UHECRs) in large-scale arrays. The present challenges are to understand the emission mechanisms and the features of the radio signal, and to develop an adequate measuring instrument. Electron-positron pairs generated in the shower development are separated and deflected by the Earth's magnetic field and hence produce electromagnetic emission [1], [2]. During shower development, charged particles are concentrated in a shower disk of a few meters thickness. This results in coherent radio emission up to about 100 MHz. Short but coherent radio pulses of 10 ns up to a few 100 ns duration are generated, with an electric field strength increasing approximately linearly with the energy of the primary cosmic particle inducing the extended air shower (EAS), i.e. a quadratic dependence of the radio pulse energy on the primary particle energy. In contrast to the fluorescence technique, with a duty cycle of about 12% (fluorescence detectors can work only during moonless nights), the radio technique allows nearly full-time measurements and long-range observations due to the high transparency of the air to radio signals in the investigated frequency range.
The radio detection technique will be complementary to the water Cherenkov detectors and allows a more precise study of the electromagnetic part of air showers in the atmosphere. In addition to a strong physics motivation, many technical aspects relating to the efficiency, saturation effects and dynamic range, the precision for timing, the stability of the hardware developed, deployed and used, as well as the data collecting and system-health monitoring processes will be studied and optimized [3].
One of the currently developed techniques is the estimation of the radio signal power based on wavelet transforms.
II. WAVELETS
The Fourier transform is useful for analysing the frequency content of a signal over the whole record, but it cannot capture changes of the frequency response with respect to time. The Short-Time Fourier Transform (STFT) uses a window function to catch the frequency content in a time interval. Although the STFT can be used to observe the change in frequency response with respect to time, the fixed width of the window function leads to a fixed resolution. Moreover, from the uncertainty principle we know that the product of the time-domain resolution and the frequency resolution is constant, so we cannot obtain high resolution in both the time and frequency domains at the same time. The wavelet transform is one solution to this problem: by changing the location and scaling of the mother wavelet, it introduces a multi-resolution concept. By implementing several windows of variable width, the wavelet transform can capture both the short-duration, high-frequency and the long-duration, low-frequency information simultaneously. It is more flexible than the STFT and particularly useful for the analysis of transients, aperiodicity and other non-stationary signal features.
Let us investigate a time series X, with values x_n at time index n, each value separated from the next by a constant time step δt. The continuous wavelet transform of the series is its convolution with a scaled and translated version of a mother wavelet ψ:

W_n(s) = Σ_{n'=0}^{N−1} x_{n'} ψ*[(n' − n)δt / s]   (2)

where the asterisk (*) denotes the complex conjugate. The above sum can be evaluated for various values of the scale s (usually taken to be multiples of the lowest possible frequency), as well as all values of n between the start and end dates. It is possible to compute the wavelet transform in the time domain according to (2). However, it is much simpler to use the fact that the wavelet transform is the convolution between the two functions X and ψ, and to carry out the wavelet transform in Fourier space using the Fast Fourier Transform (FFT). In the Fourier domain, the wavelet transform is:

W_n(s) = Σ_{k=0}^{N−1} X_k Ψ*(s ω_k) e^{i ω_k n δt}   (3)

where X_k is the discrete Fourier transform of x_n, Ψ the Fourier transform of the wavelet, and ω_k the angular frequency of the k-th bin. Unlike the convolution, the FFT method allows the computation of all n points simultaneously, and can be efficiently coded using any standard FFT package. The wavelet coefficients allow an estimation of the signal power. The global wavelet spectrum, defined as the time average of the wavelet power over a series of points, can be expressed as [10]:

W̄²(s) = (1/N) Σ_{n=0}^{N−1} |W_n(s)|²

A sum of products of the Fourier coefficients calculated by an FFT32 routine from the ADC data (x_n) in each clock cycle with the pre-calculated Fourier coefficients of a reference wavelet thus gives an estimation of the power for the selected type of wavelet. Only a single FFT32 routine for the on-line calculation of the Fourier coefficients of the data is needed; the Fourier coefficients of the various wavelets can be calculated in advance and stored as constants for the final power estimation.
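As a minimal, non-authoritative sketch of this FFT-domain power estimation, the following R code forms the per-bin products of eq. (3) for one data block. The wavelet shape, window length and data are illustrative assumptions, not the transforms used in the actual FPGA design.

```r
N   <- 32
n   <- 0:(N - 1)
# Toy Morlet-like mother wavelet (an assumption for illustration only)
psi <- exp(-((n - 16)^2) / 18) * cos(pi * (n - 16) / 4)
Psi <- fft(psi)              # reference-wavelet Fourier coefficients, precomputed once
x   <- rnorm(N)              # stand-in for one block of ADC samples
X   <- fft(x)                # data Fourier coefficients (the FFT32 routine in hardware)
W   <- X * Conj(Psi)         # convolution theorem: per-bin products as in eq. (3)
power <- sum(Mod(W)^2) / N   # global wavelet power estimate for this block
```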
A fundamental limitation for on-line wavelet analysis in the FPGA is the number of embedded DSP multipliers; implementing multiplication in logic elements instead is rather inefficient.
Many functions can be chosen as the mother wavelet: Haar, Mexican Hat, Morlet, Daubechies, Meyer and others [4], [5]. The selection of the mother wavelet should be matched to the most frequent shapes of the radio signals to obtain the maximal efficiency of the wavelet trigger approach. A preliminary analysis suggests that the potentially most promising wavelet families are the Morlet (4) and Mexican Hat (5) ones.
III. ALTERA® FFT LIBRARY FUNCTIONS
Commercially offered FFT processors for FPGA applications require several clock cycles to accomplish the calculation of all complex DFT coefficients. Each stage of the decomposition typically shares the same hardware, with the data being read from memory, passed through the FFT processor and written back to memory. Each pass through the FFT processor has to be performed log_r N times. Popular choices of the radix are r = 2, 4 and 16. Increasing the radix of the decomposition reduces the number of passes required through the FFT processor at the expense of device resources. Such an approach is widely useful for many applications where timing is not crucial. However, there are areas where the FFT coefficients (based on a new set of samples) have to be known in each clock cycle; there, commercial FFT processors, unfortunately, cannot be used. This approach requires special algorithms optimized for a particular solution [6], [7], [8].
The Quartus® II environment for Altera® FPGA programming provides parametrized FFT routines with various architectures: streaming, variable streaming, burst and buffered burst. The variable streaming architecture also offers fixed- or floating-point algorithms with natural or bit-reversed order. However, all routines deliver the FFT coefficients in serial form; no Altera® routine calculates all FFT coefficients simultaneously.
If the FFT coefficients are spread out in time, the wavelet transform can also be calculated serially (in a single clock cycle only a single pair X_k is multiplied by a single pair ψ*); however, the product then strongly depends on the relative position of X_k and ψ*. If the variables are shifted with respect to each other, even a strong signal may give a negligible final contribution. Some additional procedure is needed to tune the wavelet transform with respect to the Fourier transform of the ADC samples.
This problem is automatically solved if all Fourier coefficients are provided simultaneously in each clock cycle. A synchronous multiplication with the Fourier coefficients of the wavelets then gives the required power estimation independently of the relative configuration of these variables. The Fourier coefficients of the selected wavelets are fixed, and a sliding window of N ADC samples yields a complete set of Fourier coefficients in each clock cycle. This assures that, for some set of samples (if a signal appears), the product of both transforms gives a significant contribution and can be used as a trigger.
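A toy R sketch of this sliding-window idea follows: with a full coefficient set available for every one-sample shift, a power estimate exists for every "clock cycle". The wavelet, the simulated ADC stream and the threshold are illustrative assumptions only.

```r
N   <- 32
n   <- 0:(N - 1)
psi <- exp(-((n - 16)^2) / 18) * cos(pi * (n - 16) / 4)  # toy mother wavelet
Psi <- fft(psi)                                          # fixed wavelet coefficients
adc <- rnorm(256)                                        # stand-in ADC stream

# One power estimate per one-sample shift, mimicking one estimate per clock cycle
power_trace <- sapply(1:(length(adc) - N + 1), function(i) {
  X <- fft(adc[i:(i + N - 1)])
  sum(Mod(X * Conj(Psi))^2) / N
})
trigger <- power_trace > 5 * median(power_trace)         # illustrative threshold only
```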
IV. THE 32-POINT FFT ALGORITHM

The radio signal is spread over a time interval of the order of a couple of hundred nanoseconds; most registered samples give a time interval below 200 ns. The frequency window in the atmosphere where the signal suppression is at an acceptable level (the atmosphere is relatively transparent) is ca. 30-80 MHz. According to the Nyquist theorem, the sampling frequency should be at least twice the maximal frequency in the investigated spectrum; the anti-aliasing filter should have a cut-off frequency of ca. 85 MHz, with some allowance for the width of the filter's transition range (from pass-band to stop-band). The 32-point DFT of the ADC samples is

X_k = Σ_{n=0}^{31} x_n e^{−2πikn/32} = Σ_{n=0}^{31} x_n e^{−iπkn/16}   (6)

where the x_n, as samples from an ADC chip, are real. Formula (6) can be split into two or more parts by rearranging the sum and indices. The standard approaches to this simplification are the Radix-2 Decimation-in-Time (DiT) (Fig. 1) and Decimation-in-Frequency (DiF) (Fig. 2) algorithms. For Radix-2 DiT we get

X_k = Σ_{m=0}^{N/2−1} x_{2m} e^{−2πikm/(N/2)} + W_N^k Σ_{m=0}^{N/2−1} x_{2m+1} e^{−2πikm/(N/2)} = G[k] + W_N^k H[k]   (7)

i.e. the N-point DFT can easily be split into two N/2-point transforms. The outputs of the DFT procedures are complex, so the calculation of the final DFT coefficients with the DiT algorithm requires complex multiplications to merge the data from the lower-order parallel DFT procedures (i.e. multiplication of the twiddle factors W_N^k = e^{−2πik/N} by G[k] and H[k] in Fig. 1). Altera® provides a library routine for complex multiplication in the FPGA (Fig. 3); however, e.g. a 16x16-bit operation requires 6 embedded 9x9 DSP multipliers even in the most economical (canonical) mode. Generally, complex multiplication in the FPGA is rather resource-hungry and, if possible, should be replaced by multiplications of real variables.

The standard Radix-2 Decimation-in-Frequency algorithm rearranges the DFT equation (6) into two parts: computation of the even-numbered discrete-frequency indices X_k for k = 0,2,4,...,30 and computation of the odd-numbered indices k = 1,3,5,...,31. This corresponds to splitting the N-point DFT into two N/2-point routines. The first corresponding twiddle factor is e^{−i(2π/N)(N/2)} = −1, so the first operations are simple sums and subtractions of real variables. Each operation related to a consecutive twiddle factor is performed in a single clock cycle. For the 32-point DFT, Decimation in Frequency splits eq. (6) as follows:

X_{2k} = Σ_{n=0}^{15} A_n e^{−iπkn/8}   (10)
X_{2k+1} = Σ_{n=0}^{15} B_n e^{−iπkn/8}   (11)

where, for n = 0,1,...,15,

A_n = x_n + x_{n+16},   B_n = (x_n − x_{n+16}) e^{−iπn/16}   (12)

The next twiddle factors are built from the constants

γ = cos(π/4)   (18)
α = cos(π/8), β = sin(π/8)   (19)
ξ = cos(π/16), η = sin(π/16)   (20)
σ = cos(3π/16), ρ = sin(3π/16)   (21)

The scheme developed on the pure Radix-2 Decimation-in-Frequency algorithm is presented in Fig. 5. The algorithm takes into account only the FFT coefficients with indices k = 0,...,15: due to the real input data (x_0,...,31), the higher FFT coefficients obey the well-known symmetry Re X_{32−k} = Re X_k and Im X_{32−k} = −Im X_k (k > 0). The calculation of X_0,...,15 according to the pure Radix-2 DiF algorithm requires 8 pipeline stages. For X_0, X_4, X_8, X_12 and X_16, 2 pipeline stages are necessary only for synchronization.
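A small numerical check of this DiF split (illustrative R, not the HDL implementation) confirms that the A_n and B_n sequences of eqs. (10)-(12) reproduce the even- and odd-indexed coefficients of the full 32-point DFT:

```r
N <- 32
x <- rnorm(N)                                            # real ADC-like samples
A <- x[1:16] + x[17:32]                                  # A_n = x_n + x_{n+16}
B <- (x[1:16] - x[17:32]) * exp(-1i * pi * (0:15) / 16)  # twiddled differences B_n
X <- fft(x)                                              # reference: full 32-point DFT
max(Mod(fft(A) - X[seq(1, N, 2)]))                       # ~0: matches X_{2k}
max(Mod(fft(B) - X[seq(2, N, 2)]))                       # ~0: matches X_{2k+1}
```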
According to eq. (10), all coefficients X_{0,2,4,...,14} with even indices can be calculated by the algorithm presented in [6], with the variables x_n in Fig. 2 of [6] replaced by the variables A_n of eq. (12). This modified algorithm reduces the number of 9x9 multipliers from 12 to only 10 and shortens the pipeline chain (the last 2 stages are simple registers for synchronization).
Let us notice that, for the odd indices, stages B and C for k = 16,...,19 and k = 24,...,27 are pure delay lines, while for the neighboring indices k = 20,...,23 and k = 28,...,31 the mathematical operations are performed in a cascade. Let us multiply A_{16,...,19} and A_{24,...,27} by the factor λ = γ^{−1}; the variables in the C stage for the odd FFT coefficients (k = 20,21,22,23 and k = 28,29,30,31) can then be adjusted accordingly. By such a redefinition, the C stage for odd FFT indices becomes a pure pipeline stage. It can be removed together with one of the pipeline stages for the even FFT indices. To come back to the correct values, the coefficients in the F stage can simply be redefined, but for indices k = 16, 20, 24, 28 we have to use 4 additional multipliers. Nevertheless, at this cost we save one pipeline stage and, depending on the bus width of the final FFT coefficients, should save ca. 640 logic elements (32 registers of 20-bit width). In order not to lose accuracy in the calculation, the width of the variables (and registers) increases by one in successive pipeline stages. Assuming 12-bit ADC data, we get a 12-bit data bus in the shift registers x_k (Figs. 5-6); the bus width increases to 13, 14, ..., 20 in the A, B, ..., final routines, respectively. We can save a further pipeline stage and more logic elements, but again at the cost of additional multipliers. The algorithm used for indices k = 2, 6, 10, 14 is neither Decimation in Time nor Decimation in Frequency; eq. (11) can be rewritten accordingly, and developing the algorithm according to the resulting eq. (22) would allow the reduction of the next pipeline stage, but unfortunately at the cost of 16 additional ALTMULT_ADD routines (64 DSP blocks) (see Fig. 4).
If speed is not a factor, the sums of products in the E-bin routine can be performed in a single clock cycle instead of two, as shown in Fig. 6. The D_{16,20,24,28} shift registers are then not necessary and can be removed. A shorter chain for the odd indices also allows removing the last pipeline stage for the even indices, saving in total more than 1000 logic elements without the cost of additional multipliers. However, we should be aware that the registered performance then decreases significantly, from ca. 220 MHz to only 158 MHz for the EP3C120F780C7.

Fig. 4. The ALTMULT_ADD procedure provided by Altera®. For the calculation of |W_k|², dataa_0 = datab_0 and dataa_1 = datab_1. This routine requires 4 DSP 9x9 multipliers. It is used in the E-bin pipeline stage for odd FFT indices (Fig. 6). Inputs dataa_0,1 are used for C_k, and datab_0,1 for the constants α, β, ξ, η, σ and ρ. The routine requires two clock cycles: sub-products are registered in the MULT0 and MULT1 DSP blocks, respectively, and the sum in the next register stage.
V. WAVELET POWER CALCULATION AND MEASUREMENTS
The reference wavelets are real; however, their Fourier transforms are already complex. The elementary product in eq. (3) is a product of two complex numbers: a Fourier coefficient of the data and a Fourier coefficient of a reference wavelet. The simplest way is to use the Altera® routine from Fig. 3. However, because the wavelet Fourier coefficients are predefined constants, and because we ultimately calculate the modulus of the complex product, for which |W × Ψ|² = |W|² × |Ψ|², we can calculate only |W|² and then multiply it, as a real number, by the real constant |Ψ|².
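The modulus identity that motivates this replacement of one complex multiply by real ones can be checked in a few lines of R (with arbitrary example values):

```r
W   <- complex(real = 3, imaginary = -2)  # arbitrary data coefficient
Psi <- complex(real = 1, imaginary = 4)   # arbitrary wavelet coefficient
Mod(W * Psi)^2          # 221: modulus-squared of the complex product
Mod(W)^2 * Mod(Psi)^2   # 221: same value using only real multiplications
```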
The biggest FPGAs from the Cyclone® III (EP3C120F780C7) and Cyclone® IV (EP4CE115F29C7) families, with 576 and 532 DSP multipliers respectively, allow the implementation of the FFT32 routine (96 DSP blocks) + a "Module" block (60 DSP blocks) + 14 or 11 "engines" (30 DSP blocks each) for the simultaneous power estimation of 14 or 11 different reference wavelets, respectively. Table I shows results calculated and measured on the Altera® development kit DK-DSP-3C120N for various variants for the Cyclone III EP3C120F780C7 (the heart of this development kit). The results do not fully agree with our expectations.
The reduction of a single pipeline stage decreases the resource occupation by ca. 410 (not 640) logic elements. This may be due to optimization processes performed by the Quartus® II compiler to achieve the maximal registered performance. Nevertheless, for all comparisons the speed of the "optimized" design is higher than that of the "pure DiF" one. For the development of the wavelet engines, the "optimized" variant has been selected as potentially faster.
The Quartus® II compiler estimated the power consumption for the core, the static mode and the I/O sector. Where possible, the outputs of registers were multiplexed to reduce the number of output pins (all pins were routed to HSMC connectors on the development board). As expected, the I/O power increases roughly linearly with the number of used pins. The static power consumption is at a level of ca. 100 mW, which is reasonable; in comparison, the Stratix® III chips have a huge static power consumption of ca. 600 mW, which significantly limits their application in systems supplied from solar panels. The power consumption of the "optimized" variant is ca. 35 mW higher than that of the "pure DiF" solution; the additional 35 mW is not a factor if it allows an improvement of the safety margin of the registered performance. The EP3C120F780C7 allows the implementation of 14 wavelet engines; a design with 12 engines has been tested. The power consumption is at a level of ca. 100-110 mW per wavelet engine, giving ca. 2 W for 12 engines. This may be a challenge for an autonomous system supplied from solar panels.
Measurements of the power consumption for all considered variants show some discrepancies with the simulations. The measured core power consumption increases more slowly with additional wavelet engines than the simulations predict: almost 300 mW lower power taken by the FPGA (in comparison to simulations) for 12 engines gives optimistic predictions for future applications. The core power consumption seems to be ca. 15% overestimated in the simulations. On the other hand, the power consumption of the I/O section is unpredictably much higher than in the simulations, although the differences decrease with a larger number of active pins. This, actually, is not a problem: the I/O pins have been attached for tests only, and in real applications almost all variables are used as internal nodes. A power optimization is nevertheless highly recommended. The designs have also been implemented in the EP4CE115F29C7 from the Altera® Cyclone® IV family, used in the DE2-115 development kit (Terasic). According to the Altera® specification, the power consumption of the Cyclone® IV family is 30% less than that of the Cyclone® III; however, the Terasic development kit does not contain any circuitry allowing a measurement of the power consumption on the board.

Fig. 5. The internal structure of the FFT32 FPGA procedure. The algorithm uses 14 single clock-cycle multipliers (e.g. F_7 = γD_7; each utilizes two 9x9 DSP multipliers) and 16 two clock-cycle multipliers (e.g. N_7 = βG_7 − αH_7; each utilizes four 9x9 DSP multipliers). In total, the algorithm needs 92 9x9 DSP multipliers.
For the Cyclone® IV EP4CE115F29C7, the timing shows a pretty good safety margin.
VI. SPECTRAL LEAKAGE
For serial FFT processing, the input data have to be chopped into blocks to be processed by the FFT routine. If signal pulses are located close to the border of a block, aliasing occurs; it manifests as a spurious contribution at the opposite border of the block and in the neighboring block as well. This effect may cause spurious pulses and has to be eliminated. The problem can only be solved, without introducing dead time between the blocks, by using an overlapping routine; the FFT engines therefore have to be overclocked. In practice, for 1024-sample blocks, aliasing is reduced to a negligible level when two blocks overlap by 64 time bins [3]. For parallel data processing, when the full set of Fourier coefficients is available in each clock cycle, aliasing can be eliminated by selecting a subset of these coefficients that is not significantly affected.
If a reduced set of Fourier coefficients is taken for the data analysis, the number of wavelet engines can be increased, allowing the simultaneous analysis of more reference wavelets.
VII. DESIGN IMPROVEMENT
The new Altera® FPGA family, Cyclone® V, provides the industry's lowest system cost and power, along with performance levels that make the device family well suited for high-volume applications. The total power consumption compared with the previous generation (Cyclone® IV) is reduced by up to 40%.
The biggest FPGA from the Cyclone® V E family, the 5CEA9 (logic only, without the ARM-based hard processor system (HPS)), contains 684 18x18 DSP multipliers in 342 variable-precision DSP blocks (each DSP block can implement three 9x9, two 18x19, or one 27x27 multiplier). Assuming roughly that a single 18x18 multiplier is equivalent to two 9x9 ones, the 5CEA9 could implement FFT32 + 18 engines for 18 different reference wavelets. However, the 5CEA9 FPGA is not yet available even for compilation (latest Quartus® II version 12.0). An estimation for 12 wavelet engines in the 5CGXFC7 FPGA shows a scarcity of DSP blocks: fast multipliers are replaced by logic elements, which significantly reduces the registered performance for slow models, below our requirements. Nevertheless, if all multiplications are implemented in the fast DSP blocks (see Table II, Cyclone® V, for 4 wavelet engines only), the timing is perfect; this allows anticipating perfect timing also for the 5CEA9 chip. An expected total of 58% less power consumption (30% and then a further 40% reduction from Cyclone® III to Cyclone® V) gives an estimate of 840 mW for 12 and 1260 mW for 18 wavelet engines, respectively. This is an acceptable level of power consumption for the supply systems currently used in cosmic ray experiments.
VIII. CONCLUSION
The FFT32 routine has been successfully and cost-effectively implemented in the powerful FPGA EP3C120F780C7 from the Cyclone® III family, used in the DK-DSP-3C120N development kit (Altera®) [11], and in the EP4CE115F29C7 from the Cyclone® IV family, used in the DE2-115 development kit (Terasic) [12], [13].
Nevertheless, both FPGAs from the Cyclone® III and IV families were treated as an engineering test platform for the development of the algorithm and for timing verification. The prototype targeted at real detection of radio signals from air showers developing in the atmosphere will be built on the basis of the Cyclone® V family.
"year": 2012,
"sha1": "1d7dc8d33cf38d4859aa6b8213e1bc5bfc3f9bee",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s13538-014-0243-5.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "1d7dc8d33cf38d4859aa6b8213e1bc5bfc3f9bee",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Computer Science",
"Physics"
]
} |
Acceptance of insect foods among Danish children: Effects of information provision, food neophobia, disgust sensitivity, and species on willingness to try
The growing global population and rising demand for meat increasingly pressure the world's resources. Edible insects are a promising alternative protein source to unsustainable conventional meat. Despite this, disgust and neophobia are cited as significant barriers to the adoption of these novel foods in Western diets. The primary aim of this study was to assess the effects of providing three types of information (the taste, health, and sustainability benefits of entomophagy, i.e. the practice of eating insects) on the willingness to try and hedonic response to insect-based foods among children. In addition, the differences between insects (buffalo worms and crickets) in unprocessed form and in various food applications were examined. Food disgust sensitivity, food neophobia, willingness to try, familiarity, and hedonic response to insect foods were measured. The implications of the appropriateness (as a food ingredient and to be raised as livestock) of two different insect species on acceptance were also explored. The data were collected through an online questionnaire administered in school classrooms from a sample of Danish children (n = 181). Results showed that communicating information about the benefits of entomophagy did not increase the willingness to try insect foods, irrespective of the type of information. Food neophobia was found to be a strong predictor of willingness to try insect foods, whereas food disgust sensitivity had no effect. There was no correlation between food disgust and food neophobia scores. Furthermore, certain types of insect products were found to be better liked than others (e.g. cookies over falafel). There was a species effect on hedonic response when insects were presented whole, although not when they were presented as processed products made with insect flour.
Introduction
The global population is expected to increase by two billion people within the next 30 years, from 7.7 billion to 9.7 billion by 2050, and is projected to reach 11 billion by the end of the century (United Nations, Department of Economic and Social Affairs, 2019). Humanity will be confronted with a lack of nutritive resources as the demand for conventional proteins induces unsustainable pressure on our land, water, and energy supplies (Van Huis, 2013). There is an urgent need to find sustainable and innovative solutions for alternative proteins; edible insects may be among them. Although interest in edible insects in the West has been growing, the potential for these foods is still poorly understood, especially with respect to consumer acceptance (Collins et al., 2019; Mancini et al., 2019). The acceptance of insect foods among children, the next generation of consumers, is especially under-researched, as most studies have targeted adults.
Barriers to consumer acceptance
Coming to accept novel foods is a complicated process, and it would be no small task to achieve the successful integration of insects into the diets of Western consumers (Martins & Pliner, 2006; Rozin & Fallon, 1980). Human food preferences and aversions are formed by a complex multitude of experiential, cultural, and personal factors; insect foods are no exception. Acceptance of insect foods is often measured by "willingness to eat" and is informed significantly by food neophobia (Hartmann & Siegrist, 2016; Verbeke, 2015; Orsi et al., 2019), disgust sensitivity (La Barbera et al., 2018; Hamerman, 2016; Orsi et al., 2019), prior exposure (Hartmann et al., 2015; Hartmann & Siegrist, 2016; Verbeke, 2015), and a variety of sensory properties (Ruby & Rozin, 2019).
Insect foods are unfamiliar to most Western consumers, among whom there is no tradition of eating them, unlike in many other parts of the world. Food neophobia is the tendency to avoid unfamiliar or novel foods, a trait evolved to protect against the ingestion of potentially poisonous foods (Sogari et al., 2018; Dovey et al., 2008). Though neophobia is a generic attribute of omnivores, there are individual and measurable differences in the degree of food neophobia, which can impact food choice and acceptance (Hartmann & Siegrist, 2016; Hartmann et al., 2015; Verbeke, 2015; Chow et al., 2021).
Similarly, disgust protects individuals from potential sources of disease and is related to the fear of unfamiliar foods and the perception of danger. While disgust and neophobia are often related, they are not the same; not all unfamiliar foods lead to disgust. On the contrary, familiar foods may lead to disgust in some individuals, either due to their sensory properties (e.g. a slimy fish exterior; Højer et al., 2021) or due to the anticipation of negative effects, e.g. aversions based on prior experience connected to food poisoning (Rozin & Fallon, 1987). Disgust has long been recognized as a basic human emotion characterized by well-known facial expressions (such as a gaping mouth, tongue protrusion, wrinkled nose, and raised lip) and is believed to be integral to the behavioral immune system (Darwin, 1872; Rozin et al., 1999; Rozin & Fallon, 1987).
Not all disgust elicitors are related to pathogen avoidance; some are culture-specific and arbitrary (Looy et al., 2014). According to Rozin and Fallon (1987), ideational rejection of foods is based on the origin or nature of the food (the rejection of insects as food simply because they are insects). Rejection of foods based on ideational factors is a culturally constructed, learned process and may well pose a barrier to the adoption of nutritious and sustainable novel foods, such as insects in Western cultures (Rozin et al., 2009; Evans et al., 2015; Evans, Flore, & Frøst, 2017; Hartmann & Siegrist, 2018).
Strategies for gaining acceptance in the West
Strategies aimed at the adoption of novel insect foods may need to differ from those for novel foods in general due to their perceived cultural inedibility (Looy et al., 2014). There is empirical evidence that in Western cultures, insects are implicitly associated with other sources of primary disgust (e.g. feces, decaying matter). For some consumers, insects are not viewed as food; instead they are associated with vectors of disease and perceived as a health risk (La Barbera et al., 2018; Looy et al., 2014). Despite the unique challenge that novel insect foods present, a growing preoccupation with sustainability, especially in the younger generation, may provide an opening for their introduction to the Western market (Looy et al., 2014). It follows that a heightened concern for the environment is predictive of a greater readiness to accept insect foods (Verbeke, 2015). Indeed, emphasizing the benefits, including the sustainability benefits, is one strategy for encouraging entomophagy; often this entails educating consumers about the nutritional and environmental gains (Verneau et al., 2016; Verbeke, 2015; Lensvelt & Steenbekkers, 2014; Sogari, 2015). There are notable differences between types of information campaigns: to argue for entomophagy on the grounds of sustainability is to appeal to the collective good of humanity, whereas arguments based on health and taste benefits reflect individual interests. Sometimes information about edible insects is provided in conjunction with sensory learning and tastings; increasing familiarity through experimental tastings (i.e. bug banquets) seems to have good results (Looy et al., 2014; Lensvelt & Steenbekkers, 2014; Caparros Megido et al., 2014).
The formulation and presentation of insect products will also be important in gaining consumer acceptance (Evans, Flore, & Frøst, 2017).
Studies on the acceptance of insect foods in children are more limited than in adults, though much research has been done on the acceptance of other kinds of novel foods in children. In children, resistance to trying novel foods impacts diet diversity and is related to eating fewer fruits and vegetables (Krølner et al., 2011). Cognitive approaches such as the French "Les classes du goût" and similar sensory education methods have been shown to decrease neophobia and increase willingness to try novel foods (Mustonen & Tuorila, 2010; Reverdy et al., 2008). Other modes of education, such as cooking with novel foods, can also increase willingness to taste (Allirot et al., 2016; DeCosta et al., 2017), but will not necessarily lead to an increase in hedonic response (Chow et al., 2021). Houston-Price et al. (2009) found that simple exposure to novel fruits and vegetables by way of picture books could increase children's interest in the foods, followed by an increase in the willingness to try them. Repeated experiences with foods high in novelty increase familiarity and, together with social influences, are powerful tools to promote the acceptance of new foods in young children (Addessi et al., 2005).
The primary objective of the present study was to explore the effects of communicating three types of information (the taste, health, and sustainability benefits of eating insects) on the willingness to try insect foods among Danish children. The secondary objectives were: i) to analyze the interrelationship between food neophobia, disgust sensitivity, and insect species on the acceptance of insect foods, and ii) to determine which insect-based foods are most desirable. To reach this goal, an online learning activity and lecture as well as a survey were developed and implemented in schools across Denmark. These findings can provide useful insights to the emerging insect food industry and open new avenues for future research.
Participants
A total of 181 students from Danish schools were recruited through advertisements posted on Taste for Life's social media sites. The intervention was designed as a teaching activity, where the three versions of the provided information could be part of the curriculum in the mandatory class Food Knowledge. Recruitment for the study ran from May 7th to June 12th, 2020 and was open to teachers of 5th and 6th grade classes, all of which were taught in physical classrooms at the time of the study. Children were 9-13 years old (only one 9-year-old; the rest were 11-13 years old; mean 11.8, SD 0.7). Of the children, 96 were females, 83 were males, and two did not identify as one of those genders. Prior to beginning the survey, parental consent was obtained. Children were permitted to partake in the activity and survey along with the rest of their class with the consent of their teacher, as it was a teaching activity. Data from children whose parental consent was not obtained were deleted immediately as part of the data cleaning and processing. The experimental period was during partial COVID-19 lockdown in Denmark, so although the students were physically at school, they were allowed only very limited interaction outside their small peer groups; hence, access for experimenters was not allowed. Participants came from four different schools and a total of nine different classes. Due to the partial lockdown, at one school the group of participants was not in their normal class group but in different groups. It was not possible to identify which individuals came from which class; hence they are treated as only one class.
Study design
A link to the online survey and lecture was sent to teachers in advance for distribution to their students. Students partook in the study in the classroom but were instructed to work independently. The survey was structured in three parts: pre-exposure questionnaire, intervention exposure, and post-exposure questionnaire. Classes were randomly assigned to one of the three experimental groups with the goal of equally distributing 5th and 6th graders. Classes participated in a narrated online PowerPoint lecture and quiz on the benefits of eating insects in relation to either i) taste, ii) health, or iii) the environment, depending on which experimental group they were assigned to (see Appendix A). Each lecture lasted 15-18 min. The presented information was neutral regarding the three topics above and was based on peer-reviewed and validated information regarding insects as food in relation to each of the interventions; all delivered information that was not based on peer-reviewed sources (e.g. chef statements) was fact-checked. Insect images, images of people eating insects, insect dishes, etc. were displayed during the recorded lectures. The lecture contained attention checks via questions prompting participants to respond true/false to statements relating to its content. Appendices C to E contain the slide deck for each lecture, with Danish slides and with the speaker's full narrated manuscript in English. All participants received the same questionnaire regardless of experimental condition.
The pre-exposure questionnaire consisted of demographic questions, familiarity with insects as foods, willingness to try (WTT), the Food Neophobia Test Tool (FNTT; Damsbo-Svendsen et al., 2017), and the Food Disgust Scale (FDS-short; Hartmann & Siegrist, 2018). After the intervention, participants were presented with two images: a buffalo worm and a cricket, both of which were dried and frozen. Participants were prompted to give their hedonic response as well as the perceived appropriateness of these two insects as a food ingredient and their appropriateness for use in animal husbandry. Participants were next presented with images of nine commercially available insect-based products: protein bars (x2), crisp bread (x2), chocolate, a burger, falafel, chocolate chip cookies, and chips. These products were made with either buffalo worm flour or cricket flour and had no visible insect fragments (Table 1). For each image it was described that the food was produced using insect flour of one of the two insect species. The questionnaire ended with an assessment of perceptions of the appropriateness of these insect foods for different situations and overall willingness to taste insect foods in the future. Perceptions of appropriateness were measured for consumption in everyday situations (i.e. food eaten at home or at school) versus special occasions (i.e. food eaten at a party; see Appendix B for the full survey). The study was thus executed as a fully online digital experiment, where the respondents received all information, including sample presentations, and responded through a digital platform. This was the only feasible way of completing the study during the partial COVID-19 lockdown.
All evaluations of statements were done using 7-point scales with appropriate anchors; all anchors are indicated in parentheses in the following. All hedonic responses were provided on a 7-point emoji smiley scale. The emoji scale has been validated as an appropriate and modernized approach for measuring emotional response to verbal food and non-food stimuli in children between the ages of eight and eleven (Swaney-Stueve et al., 2018). Although not all children in this study fall into this age group, the authors consider this tool to be readily accessible to the older children while still inclusive of the younger children. WTT, appropriateness measures, and perceptions of everyday and occasion foods were all evaluated on the same 7-point scale (1 = not at all, 4 = neutral, and 7 = definitely). See Appendix B for the full survey. The questionnaire and lecture materials were developed in English and then translated into Danish by native speakers. They were tested prior to release in order to assure consistency and comprehension.
Ethical approval
The Research Ethics Committee for HEALTH and SCIENCE at Copenhagen University reviewed the authors' methods and approved this research project as complying with their ethical standards (ethical approval number: 514-0149/20-5000). The survey was administered between May 26th and June 12th, 2020 via the platform SurveyXact.
Data analysis
Data from all three sets of the questionnaire were exported from SurveyXact into Excel and thereafter compiled into one dataset. Answers based on the emoji scale were converted to continuous numerical values from 1 to 7. Incomplete surveys were eliminated from the dataset prior to analysis. Data analysis was then performed with R (R Core Team, 2021).
Linear mixed models (LMMs) were used throughout. In all LMMs, class was included as a random effect in order to account for within-class correlation and because the intervention was varied at class level; sex (two levels) and school (four levels) were included as fixed effects. School was included as a fixed effect in order to adjust for potential geographic variation, which is relevant because not all interventions were tested at all schools; there is thus a partial confounding in the design that we counteract by adding the effect of school. In general, hypothesis tests were carried out as F-tests with Satterthwaite's approximation for degrees of freedom (dfs), as implemented in the lmerTest package in R (Kuznetsova et al., 2017). Pairwise comparisons were carried out using the Kenward-Roger method for dfs and Tukey's adjustment for multiple testing, as implemented in the emmeans package in R (Lenth, 2021).
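A minimal sketch of this model specification in R, assuming a hypothetical data frame d with columns wtt_post, wtt_pre, intervention, sex, school and class (the column names are placeholders, not those of the study's actual data set):

```r
library(lmerTest)  # Satterthwaite F-tests (Kuznetsova et al., 2017)
library(emmeans)   # Kenward-Roger dfs and Tukey adjustment (Lenth, 2021)

fit <- lmer(wtt_post ~ intervention + wtt_pre + sex + school + (1 | class),
            data = d)
anova(fit)                                               # F-tests, Satterthwaite dfs
emmeans(fit, pairwise ~ intervention, adjust = "tukey")  # pairwise comparisons
```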
LMMs with FNTT and FDS as outcomes were used to study potential factors affecting the scores. Segments with different levels of food neophobia were created by separating the FNTT scores into quartiles (neophilic = lowest quartile = low; neutral = two central quartiles = medium; neophobic = highest quartile = high). Moreover, correlations between FNTT and FDS were computed and tested, both in an overall test and separately for each class. FNTT and FDS were included as covariates in an LMM for willingness to try (WTT) before the intervention in order to examine whether the scores are associated with WTT. The LMM for WTT after treatment furthermore included intervention group and baseline WTT, and the LMM for the change in WTT (after minus before) included intervention group. LMMs for the appropriateness of using insects as a food ingredient and as livestock were run separately for the two species, buffalo worm and cricket, including intervention group and controlling for FNTT and FDS.
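For illustration, the quartile segmentation and the correlation tests could look as follows in R, assuming hypothetical numeric vectors fntt, fds and wtt_pre of per-child scores:

```r
cuts <- quantile(fntt, probs = c(0, 0.25, 0.75, 1))   # quartile boundaries
segment <- cut(fntt, breaks = cuts, include.lowest = TRUE,
               labels = c("neophilic", "neutral", "neophobic"))
table(segment)           # segment sizes (roughly 25% / 50% / 25%)
cor.test(fntt, fds)      # overall FNTT-FDS correlation
cor.test(fntt, wtt_pre)  # association with baseline willingness to try
```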
Hedonic ratings of products were analyzed with an LMM which, apart from the above-mentioned fixed and random effects, also included subject as a random effect and product type as a fixed effect. Similarly, an LMM for expected/potential usage included subject as a random effect, as well as product type, frequency (everyday/occasional) and their interaction as fixed effects.
For robustness, ordinal mixed models were applied as a supplement to the LMMs, and p-values for intervention effects were also computed with simulation methods (parametric bootstrap). Results were comparable to those from the LMMs. In addition, we also ran analyses with school as a random effect and obtained comparable results with regard to the conclusions.
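A hedged sketch of these robustness checks, reusing `m_wtt` from the sketch above (again with hypothetical names; the exact bootstrap settings are our assumption):

```r
library(ordinal)   # clmm(): cumulative link (ordinal) mixed models
library(pbkrtest)  # PBmodcomp(): parametric-bootstrap model comparison

# Ordinal analogue of the WTT model, treating responses as ordered categories
m_ord <- clmm(factor(wtt_post, ordered = TRUE) ~ intervention + sex + school +
                (1 | class), data = dat)

# Parametric-bootstrap p-value for the intervention effect in the LMM
m_null <- update(m_wtt, . ~ . - intervention)
PBmodcomp(m_wtt, m_null, nsim = 1000)
```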
Results
A total of 181 questionnaires were included for data analysis. Responses from 26 participants were excluded (20 were omitted due to incomplete answers and 6 lacked parental consent for data processing). Four schools and nine classes participated. Sixty-two participants received the taste intervention, 71 the health intervention, and 48 the sustainability intervention. Almost all participants were familiar with the concept of insects as food, and about half (49.7 %) had also tasted insects before. Only two participants did not know insects could be food. The participants as a whole were moderately willing to try insect foods both before (3.9 ± 1.8) and after (4.0 ± 1.8) the intervention. Males and females were equally willing to try before (p = 0.79) and after (p = 0.13).
Effect of intervention on outcomes
The linear mixed model revealed no statistically significant differences in post-WTT between interventions after controlling for pre-WTT. Fig. 1 shows the post-minus-pre-test differences, with confidence intervals. As can be seen, there are only very minute differences for Taste (-0.02, p = 0.95) and Health (0.02, p = 0.97), whereas there is a larger, but non-significant, difference for Sustainability (0.78, p = 0.15). Hence, with respect to our first research objective, the interventions showed neither an overall effect on WTT nor any differences among themselves.
Following the interventions, questions regarding the appropriateness of using insects as a food ingredient and as livestock were asked. Overall, participants were almost neutral, though slightly negative, both about the concept of using buffalo worms as a food ingredient (3.7 ± 1.9) and about raising them as livestock (3.6 ± 1.8). Participants, on average, did not feel that crickets should be used as a food ingredient (2.7 ± 1.7) or raised as livestock (3.1 ± 1.8). For buffalo worm there was a significant difference across intervention groups with respect to raising them as livestock (bootstrap p = 0.009), while it was close to significant for cricket (bootstrap p = 0.06). In both cases, the sustainability intervention gave the highest scores for rating them as appropriate for livestock, and the health intervention the lowest. There was no significant intervention effect with respect to using insects as a food ingredient. A t-test showed significant differences between species for both appropriateness measures for the overall sample, indicating the buffalo worm was rated significantly higher on both questions (p < 0.001).
Effect of neophobia on WTT
FNTT scores ranged from 9 (neophilic) to 58 (neophobic). The total sample (n = 181) had a mean of 33.3 ± 10.4, and scores were normally distributed. Males scored significantly lower (p = 0.04) on the FNTT (31.6 ± 10.9) than females (34.8 ± 9.9), i.e. they were more neophilic. FNTT did not differ significantly between grades (p = 0.92) or between schools (p = 0.80). There were no significant differences in FNTT scores between the intervention groups (p = 0.47). In contrast, FNTT scores were strongly negatively correlated with WTT both before (r = -0.55, p < 0.001) and after (r = -0.54, p < 0.001) the intervention. As there was no main effect of the three interventions on WTT, results from the interventions were pooled. Paired t-tests between pre- and post-WTT scores for each neophobia segment revealed no significant differences for any of the segments.
Effect of disgust sensitivity on WTT
FDS scores ranged from 8 (low disgust sensitivity) to 56 (high disgust sensitivity). The total sample had a mean of 35.9 ± 12.5 (median = 39, IQR = 18), and scores were slightly left-skewed, indicating the sample had a tendency toward higher disgust sensitivity overall. There were no significant differences between males and females with respect to FDS (p = 0.74). There were also no significant differences in FDS scores between the intervention groups (p = 0.73). The correlation between FDS and WTT was close to zero and non-significant both before (r = -0.02, p = 0.80) and after (r = -0.04, p = 0.56) the intervention.
Relationship between food neophobia and disgust sensitivity
The overall correlation between FNTT and FDS is estimated at 0.11 and is not statistically significant (p = 0.11). Fig. 2 shows a scatter plot with colors indicating sex, which also illustrates the lack of correlation between the two measures. This test does not take geographical or other variation into consideration. Correlation tests carried out for each class and school separately gave positive and statistically significant correlations for two classes (Class 2: r = 0.59, p = 0.0024 and Class 4: r = 0.69, p = 0.0031), while the correlations were non-significant for the remaining classes and for the schools.
Hedonic ratings of whole insects and insect-based products
Fig. 3 shows mean hedonic ratings for all insects and insect products, with added confidence intervals. Overall, the image of the whole cricket received significantly lower hedonic ratings (3.0) than the buffalo worm (4.1). Of the processed foods, the cookies were the only food with an above-neutral hedonic rating (5.1). The protein bar (4.0), crispbread (4.4), burger (4.4), and chips (4.3) made from cricket flour all received approximately neutral average ratings. The falafel received the lowest average rating (2.8), followed by the whole cricket. The lower hedonic rating of whole crickets compared to whole buffalo worms is eliminated by incorporation into a product (bar), and even reversed when the insects are incorporated into bread (Fig. 3).
Segmentation into three neophobia groups based on quartiles resulted in the classification of 43 children as neophilic (FNTT score < 26), 89 as neutral (FNTT score between 26 and 42), and 47 as neophobic (FNTT score > 42). There was a significant main effect of neophobia segment on hedonic response (p < 0.001). Hedonic ratings were always higher for the neophilic segment than for the neophobic segment, indicating that the FNTT is a valid instrument for measuring food neophobia. Fig. 4 shows the mean hedonic rating for the three neophobia segments averaged over all nine insects/insect foods, including confidence intervals. It clearly shows the decrease in hedonic rating with increasing neophobia. Post-hoc pairwise testing confirmed significant differences in hedonic response between all three segments.
Everyday versus occasion foods
The participants rated their intention to eat foods with insects as an ingredient in the seven different categories of products they had been presented with. This we call their perceived appropriateness of insects as ingredients in the food categories. An omnibus test in the LMM revealed a significant main effect of the seven insect food categories with regard to appropriateness as everyday and occasion foods (p < 0.0001). There was no main effect of appropriateness for different situations (special occasion vs everyday, p = 0.50). Neither was there an interaction between situation and insect foods (p = 0.25). The mean perceived appropriateness ratings are listed in Table 2. The appropriateness of using insects in cookies was rated significantly higher than all others (4.9 on a 7-point scale), followed by insects in chips and burgers (both at 4.3). These were the three categories with above-neutral ratings with regard to perceived appropriateness. The remaining four food product categories received a below-neutral rating, i.e. were perceived to be inappropriate. The falafel was lowest (2.7), followed by the protein bar (3.4), with crispbread and chocolate bar both at 3.7.
Discussion
This study examined the effects of educating children about the taste, health, and sustainability benefits of entomophagy on their willingness to try and hedonic response to insect foods. Moreover, the relationship between the acceptance of insects as food and food neophobia, disgust sensitivity, species, and appropriateness was explored.
The results support extant literature by showing that, on the whole, willingness to taste (WTT) insect foods was moderate and that conveying information about the benefits of edible insects did not increase WTT insect foods. More significantly, this study makes several novel contributions to this stream of literature. Firstly, while neophobia was found to be a strong predictor of WTT insect foods, surprisingly, disgust sensitivity was not predictive at all. Secondly, no interrelationship between food disgust and food neophobia was found in this group of children. Thirdly, there were considerable differences in hedonic ratings between whole insect species (buffalo worms and crickets), yet this effect could be entirely eliminated by processing and incorporating these insects into products such as bars and bread. Fourthly, there were substantial differences in hedonic ratings between insect products, indicating that producers of insect foods should focus on those that were better liked. Lastly, differences in perceived appropriateness between food categories were found.
Fig. 4. Mean hedonic rating for the three neophobia segments averaged over all nine insects/insect foods, including 95% confidence intervals, extracted from the LMM.

Table 2

Estimated mean ratings of perceived appropriateness for different insect foods, ordered from least to most, based on the associated LMM. The SE for all estimated means is 0.22. Products were compared pairwise using Tukey's adjustment for multiple comparisons; products with different letters are significantly different at the 0.05 significance level.

While many researchers assert a growing interest in edible insects in Western countries as an alternative to conventional animal protein (Patel et al., 2019), it is unclear to what extent consumers are actually willing to incorporate them into their regular diets and what tactics might best promote their adoption. One option is to educate consumers about the benefits of entomophagy, but there are mixed results in the literature about the effectiveness of such communication strategies. Some studies indicate that information provision about the benefits of edible insects can increase acceptance (Collins et al., 2019; Verneau et al., 2016; Lombardi et al., 2019; Woolf et al., 2019) and that communicating societal benefits (e.g., sustainability) is more influential than communicating individual benefits (e.g., health and taste) in Western populations (Yen, 2009; Verneau et al., 2016). However, few studies look at the long-term impact of education on the regular incorporation of these foods into diets. The results from this study suggest that neither type of information treatment is an effective tactic for increasing acceptance of these foods, even in the short term, as indicated by the null effect of the information treatments on WTT across all conditions. There is evidence that educating consumers about edible insects can have the opposite effect than intended and potentially hinder acceptance further (Barsics et al., 2017). In one study, participants who tasted a bread product labeled as containing insects before an information session on benefits rated their overall liking higher than those who tasted the product afterwards (Barsics et al., 2017). The authors conjecture that seeing insects as part of the educational exercise conjures a state of disgust prior to the tasting experience (ibid). While extant literature indicates that adults will not be easily convinced to try insect foods through rational appeals, this research is the first to show that this is also the case in children. Food neophobia and disgust sensitivity have been found to impose independent effects on WTT insect foods. La Barbera et al. (2018) found that the explanatory power of disgust sensitivity is significantly greater than that of food neophobia. Likewise, participants with high core disgust and animal reminder disgust (but not contamination disgust) have demonstrated less interest in attending a Bug Banquet event where they would have the chance to eat insects (Hamerman, 2016). It is surprising then that, in the current study, disgust sensitivity was not significantly correlated with WTT insect foods, whereas neophobia was strongly correlated. Somewhat more consistent with the present findings, Hartmann & Siegrist (2016) found that while animal contaminant disgust was indeed correlated with willingness to eat insects, it was less so than food neophobia. The discrepancies between the current findings and those of La Barbera et al. (2018) and Hamerman (2016) may be due to the younger group of subjects. For instance, it may be that disgust does not prohibit willingness to try insect foods in children, unlike in adults. Further investigating the differences in how disgust affects willingness to try insects in children versus adults is an interesting area for future research.
The discrepancy found between this research and others draws attention to the importance of accurate measurement tools and yields insights into how best to measure disgust in future studies. Inconsistency in disgust findings may stem from differences in how disgust sensitivity is measured across studies. There is no unanimous consensus as to what domains an optimal disgust scale should measure, and there is likely a need for different disgust scales under different circumstances (i.e. the Food Disgust Scale in the context of food instead of a general Disgust Scale). In the present study, disgust sensitivity was measured using the eight items of the Food Disgust Scale (FDS-short) developed by Hartmann and Siegrist (2018). Among the various options proposed in the literature, this scale was chosen because it expressly focuses on food-related disgust. One limitation of the FDS is that it does not include disgust elicitors in the domain of moral violations, which may be relevant when addressing the acceptance of novel foods and the distinction between "appropriate" and "inappropriate" animal-based food products. Instead, the FDS focuses on disgust elicitors within the domain of pathogen avoidance (i.e., animal flesh, human contamination, poor hygiene, decaying fruit, decaying vegetables, mold, fish, and living contaminants).
There were considerable differences between species both in perceived appropriateness and in hedonic evaluation. Participants believed that buffalo worms were more appropriate than crickets both as i) a food ingredient for human consumption and ii) for use in animal husbandry (p < 0.001 for both measures). Likewise, the buffalo worm was rated significantly higher in hedonic response than the cricket (p < 0.001). Due to the great variability in physical characteristics and sensory properties across the many edible insect species, it follows that certain species will be better accepted by Western consumers than others (Ruby et al., 2015; Fischer & Steenbekkers, 2018). The cricket image in this study (with limbs, distinct features, and relatively large size) is arguably perceived as more animal-like, and thereby less appropriate and appetizing, than the buffalo worm, which lacked any of these features. Indeed, reminders of animalness have been found to be one of the major determinants of perceived disgust (Martins & Pliner, 2006). Chow et al. (2021) found that oatmeal balls made with ground mealworms were better liked than those made with ground grasshoppers. They explain their findings by arguing that the disgust-eliciting properties of different insect species vary due to differing degrees of animalness.
However, species may be less important in processed products. In this study, we see an increase in hedonic ratings for both insects when they are incorporated into food products. There were no differences between the processed foods made with flour from the two species (i.e., BUF-bar versus CR-bar and BUF-bread versus CR-bread). It appears that, from a consumer's point of view, the species effect was important when the insect was presented whole, but it was not a determining factor in hedonic evaluation when the insect was present as an indiscernible ingredient. This effect was found even in a context where the whole insect was quite salient, as images of the whole insect were presented alongside the processed product. Therefore, producers of processed products are not constrained to specific species. Rather, they are free to produce the most economically viable and sustainable insect species. This may be a good path forward because research indicates that consumers are more willing to eat foods which have no visible pieces of insects (Schäufele et al., 2019; Collins et al., 2019; Caparros Megido et al., 2016). Processed insects, in the form of flour, can be incorporated into familiar and well-liked foods. This appears to be an appropriate way to introduce edible insects to the market while familiarity is low. Eventually, as consumers become more familiar with insect foods, the degree of processing may become less important.
There were notable differences in hedonic evaluations between processed products, providing insight into which types of products will be better received on the market. Of the processed products, the falafel received the lowest hedonic ratings (2.8) whereas the cookies received the highest (5.1). The chocolate was the second least liked food (3.5). The chips (4.3), burger (4.4), crispbread (4.4), and protein bar (4.0), all made with cricket flour, received ratings at or just above the neutral point (4). In keeping with the literature, many of the processed products were liked more than the whole insects, though this was not always true. Notably, the whole buffalo worm received significantly higher mean hedonic ratings than the chocolate and the falafel. Processed insect foods may appeal to some consumers, while whole insects or foods made with visible insect pieces may be more appealing to others. One study found that this preference was predicted by previous exposure: individuals who had eaten insect-containing foods before were most willing to consume fried, grilled, or roasted whole insects, while those who had not were most willing to eat protein bars with insect protein isolate (Woolf et al., 2019).
Although there was a large span in hedonic ratings and there was no consistent pattern suggesting that any food category was preferred over another (e.g., sweets over savory foods, or snacks over meals), it is surprising that the chocolate received the second-lowest hedonic rating. Nor was there any difference in appropriateness between special occasion and everyday foods. Even so, these measures did provide a reflection of overall appropriateness, as there were significant differences between product types on these measures. Certain items, such as the cookies, were perceived as appropriate for eating on all occasions and were also among the highest hedonically rated products. Conversely, the falafel was rated significantly lower than all other products on both measures of appropriateness and was also the lowest hedonically rated product. It appears that participants put weight on visual appearance and expected texture, giving higher scores to products that look more appropriate and familiar, or overall more appetizing. Similarly, the low score given to the falafel may be explained by low familiarity with this food. Falafel is a traditional Middle Eastern food and may not be as familiar to Danish children as other products like cookies or crispbread. However, familiarity cannot explain the low scores given to the crispbread and chocolate, because both are common foods in Denmark. As Tan, van den Berg, and Stieger (2016) also suggest, acceptability of insect foods is not merely attained by pairing familiar and well-liked carrier foods with insect ingredients, although this does play a role. The relationship between insects as ingredients in foods and the resulting hedonic response is complex beyond what the current experiment could disentangle.
Correctly profiling a consumer segment ready to adopt insect-based foods in the Western market is crucial, as there are clearly many consumers who will not be persuaded to try insects. Indeed, Verbeke (2015) indicates that profiling these consumers may be the first step toward marketplace acceptance. Children were investigated in this study as they have previously been under-explored as potential entomophagists, an area in which the present study contributes to the current body of literature. They may be a good target for early adoption, as their food preferences are more adaptable than those of adults, albeit not through traditional educational methods (Collins et al., 2019). Moreover, disgust sensitivity may not pose an additional adoption barrier in children, as it does in adults.
The findings are pertinent to product development, marketing, and communication within the growing insect food industry in the West. Though neophobia poses a barrier to the acceptance of insect foods, young people appear at least moderately interested in trying them. As insect products become more readily available on the market, consumers will become more familiar with them, thus reducing the degree of neophobia towards insect foods. Thought should be given to the product type, degree of processing, and species of insect when positioning novel insect products on the market. Of course, insect-based products must also be appetizing and flavorful. Integrating insects into familiar and pleasing food items may facilitate the acceptance of insects as food. While consumers remain ambivalent towards consuming insect foods themselves, another avenue for future research is to investigate the acceptance of insects as feed for livestock rather than for human consumption. This may be another route towards increasing familiarity with edible insects.
This study was strengthened by a substantial sample size and the application of validated psychological instruments such as the Food Neophobia Test Tool (FNTT) and the Food Disgust Scale (FDS-short). While many studies on novel insect foods focus individually on either neophobia or disgust, this study examined the effects of both simultaneously. Moreover, there are few studies that explore novel insect foods in children, making the results from this study unique.
The present study has some limitations. The use of commercially available products, while making the results more applicable to the real world, also made it impossible to control for differences in appearance across items. This may have allowed individual preferences to confound hedonic response. Self-report measures and images of foods were used instead of behavioral measures, leaving room for an intention-behavior gap. Future studies should offer tastings of real insect foods whenever possible. However, the COVID-related partial lockdown at the time of the experiment did not allow us to present real foods to the participants. This is an important caveat to much of the literature on the acceptance of insect foods, including this study: WTT does not equate to adoption into regular diets. Long-term studies would better predict openness to dietary change.
Conclusion
This study provides consumer insights, specific to children, about edible insects as a future source of animal protein. An online educational activity and survey were used to examine the willingness to try and hedonic response to images of commercially available insect products among Danish school children. The implications of food neophobia, disgust sensitivity, species, and appropriateness were also explored. The results revealed that, overall, Danish children were moderately willing to try insect-based foods and that there is potential for edible insects in this population segment. It was demonstrated that communicating the benefits was not effective at increasing willingness to try insect products, irrespective of the type of benefit (taste, health, sustainability).
Food neophobia was found to be negatively correlated with willingness to try and with hedonic ratings of insect foods, whereas disgust sensitivity was found to have no correlation. In addition, we found no correlation between food disgust and food neophobia. Finally, species was a determining factor in hedonic response when insects were presented whole, but not when they were presented as a food ingredient.
Fig. 3. Estimated mean hedonic ratings for all insects and insect products with 95% confidence intervals, extracted from the LMM. Insect types are indicated with blue and red colors. BUF = buffalo worm, CR = cricket. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
Table 1
Overview of Insect-Based Products Shown as Images in the Study.
"year": 2022,
"sha1": "cc9c00a338e831cf142b4fad459b6d5e2608853e",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.foodqual.2022.104713",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "8b7401efeeea061e9ce98905c59436a10dc2523a",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
Molecular dynamics study on the deformation of void single crystal magnesium under uniaxial stress
The uniaxial tension and compression of a single crystal magnesium model with voids, loaded along the [0001] direction, was simulated using an embedded-atom potential and the molecular dynamics method, and the microscopic plastic deformation mechanisms around the voids under tension and compression were studied. The results show that the elastic modulus of the single crystal magnesium model under compression is greater than that under tension, indicating that compressive deformation is more difficult. During plastic deformation, dislocations, stacking faults and twins are produced in the model under both tension and compression, but the dislocation emission mechanisms differ: tension makes dislocations slip to the edge of the model along directions at 45° and produces four symmetrical slip bands, while compression produces an annular defect band near the void. In addition, the stacking fault areas and twin types produced are also different. This asymmetry is mainly caused by the different initial deformation mechanisms under the two loading conditions.
Introduction
Magnesium and magnesium alloys are typical metal structural materials. With a series of advantages such as low density and high specific strength, they are widely used in aerospace, automotive and consumer electronics applications, among other fields [1][2]. However, complex micro defects (such as point defects, micro holes and micro cracks) usually exist in the internal microstructure of materials, and the macroscopic failure of materials is caused by the initiation, propagation and coalescence of these micro defects [3][4]. Therefore, it is very important to study the deformation, propagation and failure mechanisms of internal micro defects under external load. In recent years, with the rapid increase in computer operating speed and storage space, computer simulation technology has developed rapidly. Molecular dynamics simulation can reveal micro-deformation details that are difficult to obtain in experiments, and these details help in understanding and mastering the laws governing changes in the macroscopic behavior of matter. Therefore, molecular dynamics methods are often used when studying the physical and chemical properties of substances [5].
So far, scholars at home and abroad have conducted a large number of studies on the effect of voids on the properties of different materials using molecular dynamics methods. Potirniche et al. [6] conducted a molecular dynamics study on the growth and coalescence of voids in single crystal nickel and analyzed the influence of specimen length on the failure process of single crystal nickel; the results showed that the length scale of the sample changed the dislocation mode. Tang et al. [7] studied void growth and void coalescence in single crystal magnesium, and the results showed that the model size and temperature have a significant effect on the deformation of the voids. Zhao [8] established a single crystal copper model with columnar voids using molecular dynamics methods to simulate the growth and evolution of the columnar voids; the results show that, for a given porosity, the cell size has a significant effect on the initial yield strength. Mi et al. [9] simulated aluminum alloy materials with multiple voids and found that distributing a large void into several small voids would increase the load-bearing capacity of the sample; no obvious dependence of void-fraction evolution on void coalescence was observed.
In summary, domestic and foreign researchers have used molecular dynamics methods to study metal materials containing voids and have achieved many valuable results. However, most researchers study materials only in tension or only in compression. There are few comparative studies on the effects of tension and compression on the plastic deformation of crystals, and there is no report on the tension-compression plastic deformation mechanisms of single crystal magnesium containing voids. In order to understand the microscopic plastic deformation mechanisms of metal materials more comprehensively, this paper takes hexagonal close-packed single crystal magnesium as the research object and studies the influence of tension and compression on its micro plastic deformation mechanisms. A cubic single crystal magnesium model with side length L = 20 nm was established for the uniaxial tension and compression simulations along the [0001] crystal direction. A sphere with a radius of 2.5 nm was deleted from the center of the cube to form a void defect, giving a single crystal magnesium model with a void. As shown in Fig 1, the total number of atoms in the system is about 350,000, and the two dark regions of the model correspond to the simplified spherical void. The x, y and z coordinate axes correspond to two orthogonal directions in the basal plane and to the [0001] direction, respectively. Considering the influence of boundary effects on the simulation, periodic (p p p) boundary conditions were adopted in the x, y and z directions of the model. The simulation time step was 1 fs. In order to make the simulation results closer to the actual situation, before the uniaxial loading the conjugate gradient method was used to minimize the system energy and the model was relaxed for 50,000 steps; uniaxial loading was then performed in the isothermal-isobaric (NPT) ensemble. The model was simulated with an average strain rate of 2×10^9 s^-1 and a temperature of 100 K.
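As a quick consistency check (our own arithmetic, not a value quoted by the authors), the initial void volume fraction implied by these dimensions is

$$f_v=\frac{\tfrac{4}{3}\pi r^{3}}{L^{3}}=\frac{\tfrac{4}{3}\pi\,(2.5\ \mathrm{nm})^{3}}{(20\ \mathrm{nm})^{3}}\approx\frac{65.4\ \mathrm{nm}^{3}}{8000\ \mathrm{nm}^{3}}\approx 0.8\%,$$

and the stated atom count is consistent with the remaining volume divided by the atomic volume of hcp magnesium (about 0.0233 nm³ per atom): (8000 − 65.4) nm³ / 0.0233 nm³ ≈ 3.4×10⁵ atoms.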
Calculation method
In order to study the tensile and compressive deformation mechanisms of the single crystal magnesium model with a void, the large-scale molecular dynamics simulation software LAMMPS was used. The interaction between the magnesium atoms was described by the embedded atom method (EAM) potential developed by Sun et al. [10]. The total potential energy $E$ of the entire system of atoms is expressed as

$$E=\sum_{i}F_{i}(\bar{\rho}_{i})+\frac{1}{2}\sum_{i}\sum_{j\neq i}U(r_{ij}),\qquad \bar{\rho}_{i}=\sum_{j\neq i}\rho(r_{ij}),$$

where $F_{i}$ is the embedding energy as a function of the local electron density $\bar{\rho}_{i}$; $\rho$ is the electron density contributed by a neighboring atom; $U$ is the pair potential; and $r_{ij}$ is the distance between atom $i$ and atom $j$. At atom $i$, the dipole force tensor is

$$\pi_{i}=\frac{1}{\Omega}\sum_{j=1}^{N}f_{ij}\otimes r_{ij},$$

where $f_{ij}$ is the force between atoms $i$ and $j$; $r_{ij}$ is the displacement between atom $i$ and atom $j$; $N$ is the number of nearest-neighbor atoms; and $\Omega$ is the atomic volume. The stress tensor is determined as the volume average over the entire material block:

$$\sigma_{\alpha\beta}=\frac{1}{N^{*}}\sum_{i=1}^{N^{*}}\pi_{i}^{\alpha\beta},$$

where $N^{*}$ is the number of active atoms and $\alpha$, $\beta$ indicate the directional components. Figure 1 shows the tensile and compressive stress-strain curves of the single crystal magnesium model along the [0001] crystal direction. The two curves differ in the elastic deformation stage: the slope of the compression curve along the z-axis is larger, which indicates that the compressive elastic modulus of the single crystal magnesium model is greater than its tensile elastic modulus, and hence that the model is more difficult to deform in compression along the z-axis than in tension.
Atomic configuration
The trend of the stress-strain curve mainly depends on the microstructural evolution of the crystal. In order to better understand the micro deformation mechanisms during the tension and compression of the void-containing model, the microstructural evolution of the crystal during uniaxial tension was analyzed in combination with the stress-strain curve. Fig 3 shows the dislocation analysis when the single crystal magnesium model is stretched along the z-axis. Fig 3(a) shows the microstructure when the crystal is in the elastic deformation stage, corresponding to a strain of ε = 0.052. Apart from structural changes of the atoms on the surface of the void, the atoms elsewhere in the crystal still belong to the perfect hexagonal close-packed structure, and there are no dislocations, slip or other defects; only the distances between atoms change slightly, and the model can still return to its original shape upon unloading. Fig 3(b)-(e) show that the model has undergone plastic deformation. In Fig 3(b), at strain ε = 0.054, the crystal reaches the yield strength. Due to stress concentration, the stress reaches the dislocation emission condition, and dislocations begin to nucleate near the void. As the strain increases, dislocations are observed to be emitted along directions at about 45° from the horizontal plane of the void. The emission of a large number of dislocations is the main reason for the drop in the stress-strain curve, and with the emission of dislocations, defect atoms (white atoms) continue to increase. In the stress-strain curve, a stress plateau can be seen in the a-b segment. This is because, at this stage, the slip bands expand to the crystal surface and the dislocation emission reaches the edge of the model, as shown in Fig 3(c). At this point, the strain is ε = 0.060; the resulting dislocation pile-up keeps the overall stress temporarily unchanged. After point b, the accumulated dislocations continue to be emitted, and the stress begins to decrease. Moreover, stacking faults are found to lead to an hcp (red atoms) → fcc (green atoms) phase transition, but the phase transition accounts for a small proportion relative to the dislocations, which indicates that dislocation slip occurs more easily than phase transition during the tension of single crystal magnesium along the [0001] direction. When the strain reaches ε = 0.068 in Fig 3(d), twins appear in the model and then continue to grow; at a strain of ε = 0.120 in Fig 3(e), most of the crystal is occupied by twins. Fig 4(a) (ε = 0.056) shows the microstructure of the crystal in the elastic deformation stage under compression. Similar to the elastic stage in tension, the distances between atoms change only slightly, and the model can still return to its original shape upon unloading. The difference between the two is that stretching the single crystal magnesium model elongates the void in the stretching direction, while compression elongates the void in the direction perpendicular to the compression. Similarly, Fig 4(b)-(h) show that the model has undergone plastic deformation. In Fig 4(b), at strain ε = 0.060, the model reaches the yield strength.
Due to stress concentration, the stress reaches the dislocation emission condition, and dislocations begin to nucleate near the void. However, dislocations do not extend to the edge of the model around the void; instead, a ring-shaped defect region (white atoms) appears around the void, as shown in Fig 4(e), observed along the z-axis. There is also a phase transition from hcp (red atoms) → fcc (green atoms) caused by atomic stacking faults. In Fig 4(c), at ε = 0.065, the ring of defect atoms has developed to the edge of the model; the single crystal magnesium model reaches its strength limit, and the stress begins to decrease. With increasing strain, the stacking fault area caused by compression increases gradually and is much larger than that caused by tension; the stacking fault region continues to expand, as shown in Fig 4(d) and Fig 4(f). Upon further compression, twins can be observed at the edge of the model, as shown in Fig 4(g). After that, the twins grow, but they do not extend to the void and only expand at the edge of the model, as shown in Fig 4(h). From the dislocation analyses at the end of tension and compression, Fig 3(e) and Fig 4(h) clearly show that the twin types produced by the single crystal magnesium model are different.

Conclusions

1) In the process of tension and compression of single crystal magnesium, the trends of the stress-strain curves are similar, but the elastic modulus and yield stress of the model in compression are greater than those in tension, indicating that compressive deformation is more difficult.

2) The yield stress of the material in compression is much greater than that in tension. This asymmetry is mainly caused by the different initial deformation mechanisms under the two loading conditions.
"year": 2021,
"sha1": "3d83358fc61d8ad17597471d65738a151a489ef1",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1980/1/012003",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "3d83358fc61d8ad17597471d65738a151a489ef1",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics"
]
} |
Kabuki syndrome: a Chinese case series and systematic review of the spectrum of mutations
Background Kabuki syndrome is a rare hereditary disease affecting multiple organs. The causative genes identified to date are KMT2D and KDM6A. The aim of this study is to evaluate the clinical manifestations and the spectrum of mutations of KMT2D. Methods We retrospectively retrieved a series of eight patients from two hospitals in China and conducted Sanger sequencing for all of the patients and, where available, their parents. We also reviewed the literature and plotted the mutation spectrum of KMT2D. Results The patients generally presented with typical clinical manifestations, as previously reported in other countries. Uncommon symptoms included spina bifida and Dandy-Walker malformation. With respect to the mutations, five mutations were found in five patients, including two frameshift indels, one nonsense mutation and two missense mutations. Conclusions This is the first case series on Kabuki syndrome in Mainland China. Unusual symptoms, such as spina bifida and Dandy-Walker malformation, suggest that neurological developmental defects may accompany Kabuki syndrome. This case series helps broaden the mutation spectrum of Kabuki syndrome and adds information regarding its manifestations.
Background
Kabuki syndrome, or Kabuki make-up syndrome, was originally described in 1981 following observations of five Japanese children whose conditions were characterized by mental retardation, dwarfism, peculiar faces and abnormal dermatoglyphics [1]. Historically, the diagnostic criteria for Kabuki syndrome were mainly based on the typical clinical manifestations, which were established by analyzing a group of sixty-two patients with Kabuki syndrome in 1988 [2]. Five cardinal manifestations were frequently observed in patients with Kabuki syndrome and could be used as diagnostic clues: peculiar facial appearance, mild-to-moderate mental retardation, dermatoglyphic abnormalities, skeletal anomalies, and postnatal growth deficiency. Kabuki syndrome was previously considered to be prevalent only in Japan, but it is now recognized all over the world. The estimated prevalence in Japan is approximately 1/32,000, whereas the estimated prevalence is at least 1/86,000 in Australia and New Zealand [2,3].
The underlying genetic mutation of Kabuki syndrome was not revealed until 2010, when exome sequencing identified MLL2 mutations in Kabuki syndrome patients (Kabuki syndrome 1, OMIM 147920) [4]. Bögershausen et al. proposed a new nomenclature for the MLL2 gene as KMT2D [5]. KMT2D consists of fifty-four coding regions and functions as a histone-lysine N-methyltransferase in various signalling pathways, including epigenetic modulation. However, KMT2D mutations alone were not able to account for all Kabuki syndrome cases. Later, mutations in the KDM6A gene, which encodes a histone demethylase that interacts with KMT2D, were identified (Kabuki syndrome 2, OMIM 300867) [6]. Therefore, analyzing mutations in the KMT2D and KDM6A genes helps confirm the diagnosis in patients who fulfill the clinical diagnostic criteria for Kabuki syndrome.
Medical centres and institutions in Europe, North America and South America have reported extensively on the KMT2D mutation spectrum. However, studies concerning the KMT2D mutation spectrum in China have rarely been reported. This study describes the first case series of eight Chinese patients with Kabuki syndrome, their clinical manifestations, and their atypical symptoms. We also analyzed the genetic changes in the KMT2D gene and conducted a literature review of the KMT2D mutation spectrum. The main aim of this study was to determine the mutation spectrum of the KMT2D gene in Chinese Kabuki patients and to review the mutation spectrum reported in the literature.
Patients
This study was reviewed and approved by the Peking Union Medical College Hospital Ethics Review Board. All of the participants' legal guardians provided written informed consent to participate on behalf of the children and/or for themselves. We obtained written informed consent to publish characteristic features of the disease and maximally concealed other non-disease-related features, thus protecting the privacy of each patient.
We retrospectively searched the clinical records of initial outpatient visits from January 2010 to February 2013 in the Department of Pediatrics, Peking Union Medical College Hospital (PUMCH), Beijing, China and the Children's Hospital of Chongqing Medical University (CH-CQMU), Chongqing, China. Patients with a clinical diagnosis of "Kabuki syndrome" were enrolled. Overall, eight patients were enrolled. Clinical manifestations were retrieved from the original clinical records. A telephone-based follow-up was conducted to ask for the patients' current height and weight as of February 2013.
Sanger sequencing of the KMT2D gene and identification of pathogenic mutations

KMT2D gene mutation status was obtained by Sanger sequencing after obtaining informed consent. First, peripheral blood samples from each patient were collected. Blood samples from the parents were also collected if available. Then, total genomic DNA was extracted by standard procedures, and the 54 exons and exon-intron junctions of the KMT2D gene (UCSC NM_003482) were amplified in three PCR fragments. Exons were amplified in a reaction containing 2 μL of 10x PCR buffer, 3 μL of dNTPs (2.5 mmol/L), 0.3 μL of rTaq polymerase (5 U/μL) (TAKARA, Dalian, China), 1 μL of genomic DNA (100 ng/μL), 1 μL of each primer (10 pmol/μL), and 11.7 μL of ddH2O. The thermal cycler conditions were as follows: 95°C for 5 min; then 35 cycles of 94°C for 30 s, 59°C for 30 s, and 72°C for 45 s; and a final elongation step at 72°C for 5 min. Direct sequencing was performed on a Genetic Analyzer (Biomed Corp, Beijing, China). As a reference, the A of the ATG translation initiation codon of the KMT2D coding sequence was designated nucleotide +1.
Results from Sanger sequencing were compared with the reference sequence of KMT2D to identify single-nucleotide substitutions, frameshift indels, and non-frameshift indels. Novel missense mutations were subjected to further analysis to explore their pathogenicity. First, the parents' KMT2D genes were sequenced. Second, the proband's missense mutation was searched for in the 1000 Genomes Database and in the Exome Variant Server [7]. Finally, the pathogenicity of the protein change due to the missense mutation was predicted using the in silico prediction models SIFT, PROVEAN and PolyPhen-2 [8,9].
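As an illustration of this triage logic in R (a sketch only; the actual predictions were obtained from the SIFT, PROVEAN and PolyPhen-2 web tools, and the labels below simply mirror the categorical calls reported in the Results):

```r
# "Pathogenicity confirmed" only when all three in silico models agree
preds <- data.frame(
  variant   = c("c.16295G>A", "c.8639T>C", "c.12199C>T"),
  sift      = c("damaging", "damaging", "damaging"),
  provean   = c("deleterious", "deleterious", "neutral"),
  polyphen2 = c("probably_damaging", "probably_damaging", "benign")
)
n_damaging <- rowSums(cbind(preds$sift == "damaging",
                            preds$provean == "deleterious",
                            preds$polyphen2 == "probably_damaging"))
preds$call <- ifelse(n_damaging == 3,
                     "pathogenicity confirmed", "pathogenicity undetermined")
preds
```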
Literature review of the KMT2D gene mutation spectrum

Two of the authors (CS and XH) independently searched the published literature in MEDLINE and EMBASE using the following search terms: ("Kabuki syndrome" OR "Kabuki make-up syndrome" OR "Niikawa-Kuroki syndrome") AND ("KMT2D" OR "MLL2"), without language restriction, for data published up to January 6, 2015. Another author (ZQ) supervised the literature review process and resolved any disagreement between the authors (CS and XH) regarding a study's inclusion. Articles were manually reviewed, and KMT2D mutation data were retrieved. Only studies with 10 or more patients were included. The KMT2D mutations were summarized and categorized as "missense," "frameshift indel" or "nonsense."
Clinical manifestations of eight Chinese patients with Kabuki syndrome
We identified eight patients with a clinical diagnosis of Kabuki syndrome (six from PUMCH and two from CH-CQMU). Six patients were male, and two were female. The ages at initial diagnosis ranged from 8 months to 9 years. All of the patients belonged to the Han ethnic group.
The general clinical manifestations were categorized according to the "five cardinal manifestations" pattern, which includes typical craniofacial abnormalities, skeletal abnormalities, dermatoglyphic abnormalities, cerebral abnormalities, and postnatal growth deficiencies (Table 1). The percentages of patients with each symptom were compared with previous reports [10]. The typical features of the aforementioned patients are illustrated in Figure 1. All of the patients in this case series presented with typical craniofacial abnormalities, including high/sparse eyebrows, long palpebral fissures with eversion of the lateral third of the lower eyelids, large ears, depressed nasal tips and blue sclerae. The percentages of patients presenting with strabismus and ptosis were lower than in previous reports.
With respect to skeletal abnormalities, four patients presented with clinodactyly of the 5th finger. Patient 1 was diagnosed with bilateral hip dislocation at one year of age and underwent corrective surgery. The same patient also presented with hyperlaxity and spina bifida occulta. Patient 8 also presented with hyperlaxity.
All of the patients presented with fingertip pads, typical of previously reported dermatoglyphic abnormalities. With respect to cerebral abnormalities, seven of the eight patients presented with mental retardation. Only Patient 5 and Patient 8 presented with microcephaly (two standard deviations below the median value) according to the standard for Chinese children [11]. With respect to seizures, Patient 5 and Patient 3 exhibited this manifestation. Patient 5 presented with an episode of seizure on the first day of life, of the infantile spasm type. MRI indicated corpus callosum hypoplasia and Dandy-Walker malformation, suggesting that congenital cerebral developmental defects might occur concomitantly with Kabuki syndrome.
With respect to postnatal growth, Patient 6 and Patient 7 were diagnosed with postnatal growth retardation, and Patient 1 experienced self-reported feeding difficulties.
In addition to these five cardinal manifestations, Patient 1 experienced fever or diarrhoea every other week from the age of two to three. Patient 2 experienced multiple upper respiratory symptoms and tonsillitis when he was 3 years old. Patients 5, 7 and 8 experienced frequent respiratory infections. With respect to cardiac abnormalities, Patient 5 was diagnosed with a patent foramen ovale and Patient 7 and Patient 8 presented with atrial septal defects. Patient 2 presented with pectus carinatum, and Patient 3 presented with bilateral knee joint stiffness.
Review of the KMT2D gene mutation spectrum in Kabuki syndrome
We performed a literature review of the published studies on KMT2D gene mutations based upon the method described above. The searches retrieved 48 and 182 records from MEDLINE and EMBASE, respectively. After merging duplicates, 201 published articles remained. Each abstract was manually read, and data were retrieved if available. The data from the included studies contributed to the further analysis.
Sanger sequencing to identify mutations in the KMT2D gene in this study
Our eight patients underwent genetic analysis of the KMT2D gene, and ten variants were detected (Table 2). Unlike the distribution in previous reports, only two frameshift indels and one nonsense mutation were identified: c.3095delT in Patient 4, c.4395dupC in Patient 6 and c.4140T > A in Patient 8. One in-frame indel, c.11718-11723delGCAACA, was also identified in Patient 8. Six missense variants were also identified. Patient 1 had two missense variants, c.12199C > T and c.16295G > A (Figure 3). Patients 2, 3, 5 and 7 had one missense variant each: c.4664C > T, c.8639T > C, c.96C > G (p.Asp32Glu) and c.11638C > A, respectively. We sequenced both parents of Patients 1, 2, 3 and 5 and found none of the previously described missense variants in the parents' KMT2D genes. The other four patients' parents were not available for genetic testing.
We did not find any of the missense variants identified in this series in the 1000 Genomes Database or the Exome Variant Server. With respect to the pathogenicity analysis, we used the in silico prediction models SIFT, PROVEAN and PolyPhen-2 to predict the effects of the protein changes due to the missense variants. Two missense variants (c.16295G > A and c.8639T > C) were predicted to be pathogenic by all three models; we therefore categorized these two variants as "pathogenicity confirmed." However, the other four missense variants (c.12199C > T, c.4664C > T, c.96C > G and c.11638C > A) were predicted to be benign, neutral or tolerable by at least two of the programs; we therefore categorized these four variants as "pathogenicity undetermined." The single in-frame indel in Patient 8 (c.11718-11723delGCAACA) was also categorized as "pathogenicity undetermined." In conclusion, five of the eight patients had pathogenic mutations in the KMT2D gene: Patient 1 (c.16295G > A), Patient 3 (c.8639T > C), Patient 4 (c.3095delT), Patient 6 (c.4395dupC), and Patient 8 (c.4140T > A). Three of the eight patients (Patients 2, 5 and 7) did not have confirmed pathogenic mutations in the KMT2D gene, and their clinical symptoms could not be ascribed to KMT2D gene mutations.
Nine of the ten variants were novel. One missense variant (c.16295G > A) was previously reported by Kokitsu-Nakata et al. [23].
Discussion
Kabuki syndrome is a rare congenital disease. Cases from different parts of the world have been extensively reported. However, reports on Chinese patients with Kabuki syndrome have been rare. To our knowledge, five cases in Chinese patients have been published in several case reports [24][25][26][27]. However, the KMT2D mutation status was not available for these cases. This is the first case series to include both the typical clinical manifestations and the KMT2D mutation status of patients from Mainland China. The clinical manifestations in this case series were consistent with the clinical diagnostic criteria.
With respect to atypical symptoms, Patient 5 presented with hypoplasia of the corpus callosum and Dandy-Walker malformation. Increasing evidence suggests that structural central nervous system (CNS) malformations, including Dandy-Walker malformation, can be present in Kabuki syndrome patients [28][29][30][31]. These congenital neurologic defects may partially account for the mental retardation commonly observed in Kabuki syndrome patients. Further analysis is needed to evaluate whether CNS structural defects are frequent in patients with Kabuki syndrome.
With respect to congenital heart defects, the major types associated with Kabuki syndrome are left-sided obstructions and aortic dilation, coarctation of the aorta (COA), atrial septal defects (ASDs), ventricular septal defects (VSDs), and tetralogy of Fallot (TOF), among others [32,33]. In our series, two patients presented with ASDs. Thus, we propose that patients should be meticulously screened for congenital heart defects once the diagnosis of Kabuki syndrome has been made. Gene sequencing of KMT2D and KDM6A can be used to detect mutations in patients with a clinical diagnosis of Kabuki syndrome. Studies with large cohorts of patients indicate that KMT2D gene mutations can be detected in half to three-fourths of patients with clinical diagnoses of Kabuki syndrome [13,16,17]. Pathogenic KDM6A gene mutations account for a small percentage of patients [14,16,19,[34][35][36].
KMT2D pathogenic mutations were detected in five of the eight patients in our case series. This case series identified six missense variants, two frameshift indels, one in-frame indel and one nonsense mutation. All of the frameshift indels and nonsense mutations were considered pathogenic because they significantly alter the protein structure. However, we would urge caution in interpreting the influence of missense mutations or in-frame indels on protein function, because some amino acid substitutions do not necessarily impair protein function. Therefore, we used in silico prediction models to analyze the missense mutations and non-frameshift indels.
We found two patients with two variants each in their KMT2D genes. Patient 1 had two missense mutations (c.12199C > T and c.16295G > A), both of which were absent from the parents' KMT2D genes, indicating that they were de novo mutations. Both mutations were absent from the 1000 Genomes Database and the Exome Variant Server, indicating that both are rare in the general population. One mutation, c.12199C > T, was predicted to be pathogenic by the SIFT program but not by PROVEAN or PolyPhen-2. The other mutation, c.16295G > A, was predicted to be pathogenic by all three in silico models and has been reported by Kokitsu-Nakata et al. in a Brazilian case of familial Kabuki syndrome [23]. Therefore, we considered c.16295G > A to be pathogenic, whereas the pathogenicity of c.12199C > T was inconclusive and could potentially be non-pathogenic. Patient 8 also had two variants. We could not determine their inheritance status because the parents of Patient 8 declined to give blood samples for sequencing of the KMT2D gene. One variant was a nonsense mutation (c.4140T > A), which was considered pathogenic. The other was an in-frame deletion (c.11718-11723delGCAACA), which was predicted to be non-pathogenic by the PROVEAN prediction model; its pathogenicity was therefore inconclusive and could potentially be non-pathogenic. To our knowledge, it is a rare event for a patient to carry two mutant variants of the KMT2D gene. A previous study by Hannibal et al. reported only three patients who carried two KMT2D gene variants in a cohort of 110 patients [22].
Conclusion
This report is the first case series of Kabuki syndrome patients with definitive genetic diagnosis in China. These data help broaden the mutation spectrum of Kabuki syndrome and add information regarding the manifestations of Kabuki syndrome.
"year": 2015,
"sha1": "4633a658ab8aa26c02f492d490db9e4f345a6a4c",
"oa_license": "CCBY",
"oa_url": "https://bmcmedgenet.biomedcentral.com/track/pdf/10.1186/s12881-015-0171-4",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5aadc37a5b1fac2f945cc34de7bc2d6293a6fd00",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Spiritual Intelligence During Catastrophe: The COVID-19 Pandemic Case
This study is an attempt to explain the significant role of spiritual intelligence in times of complexity and uncertainty, particularly during the tragedy of the COVID-19 pandemic. Spiritual intelligence is the capacity to behave with insight and humility while preserving inner and outer harmony irrespective of the circumstances. A person with high spiritual intelligence not only responds appropriately in a particular circumstance or state, but also reflects on how he or she came to be in that position. This study also aims to understand how the COVID-19 pandemic could lead someone to adjust to, embrace and change life, and turn the tragedy into a time of resilience, strength, knowledge and a new environment of mutual and communal living, sharing responsibilities and appreciating solidarity.
Introduction
Spirituality has for centuries been the realm of theology and philosophy. Only in the last century has it become a recognized subject of research within psychology, and it has become a focal point of a growing number of studies because of its effect on people's quality of life, especially during catastrophe (Mirghafourvand & Charandabi, 2016). Quality of life during a period of uncertainty, in this context, means the state of well-being in diverse communities, especially among university students during the lockdown or movement control order (MCO) imposed in response to the COVID-19 pandemic. University students belong to the young segment of the community, which is considered one of the most important stages of population growth, and low levels of spiritual fitness at this stage may affect well-being in adulthood (Mathew, 2018).

COVID-19 has pushed all of us in ways we have never been pushed and made us do things we have never done before. It also strains us in very strange ways; perhaps one of the most tiresome aspects is the lack of consistency everywhere. The situation is serious enough to raise worries about the economic impact of shifting working habits. Travel, restaurants, sports, and other businesses worry as people are motivated to avoid enclosed spaces with others, while hospitals and other health care services plan for a surge of patients. Yet during this pandemic we can also come to understand our inner strength better. There are opportunities here, chiefly an increased reliance upon spiritual practices through our spiritual intelligence, also known as spirituality. Despite the ongoing confusion, COVID-19 quietly gives us an opportunity to reflect on its metaphysical effect on social and individual lives. In this broad sense, the spiritual impact is not unmistakably positive; however, at the end of the day, society as a whole may make a spiritual leap forward. Viktor Frankl, survivor of the Nazi concentration camps, quoted Nietzsche when asked how he survived the horror: "He who has a 'why' to live can bear with almost any how." One way of getting to the "why" in our own lives is highlighting and defining our meaning, values and purpose. As Albert Camus realized, a pandemic is the time to ask yourself what life is for. Indeed, there are ways to make this uneasy period not only endurable but rewarding, and ways to endure with purpose and make this a period of emotional and moral transformation.
Research Methodology
This study employed the WHOQoL-BREF questionnaire, a short 26-item version of the World Health Organization Quality of Life assessment (WHOQoL-100). Only the items directly applicable to the research problem are discussed in this article. The cross-sectional analysis is restricted to students from different faculties taking compulsory university courses in the second semester of the academic year 2019/2020, with a total of 160 respondents. The analysis also uses a systematic approach in which the problem is evaluated objectively on the basis of information gathered from previous studies, so that a new concept emerges with a new conclusion; this applies to any research that attempts to include an overview of current work, such as a completion report or a literature review.
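As a concrete illustration of the descriptive analysis applied to the 160 responses, the sketch below tabulates per-item percentage distributions. The item name and 1-5 Likert coding are assumptions for illustration, not taken from the WHOQoL-BREF instrument itself.

```python
# A minimal sketch of the descriptive analysis: per-item percentage
# distributions over Likert responses. The item name and 1-5 coding are
# assumptions for illustration, not taken from the instrument itself.
from collections import Counter

def item_percentages(responses, item):
    """responses: list of dicts mapping item name -> Likert score (1-5)."""
    counts = Counter(r[item] for r in responses if item in r)
    total = sum(counts.values())
    return {score: 100.0 * n / total for score, n in sorted(counts.items())}

sample = [{"quality_of_life": 4}, {"quality_of_life": 2}, {"quality_of_life": 4}]
print(item_percentages(sample, "quality_of_life"))  # {2: 33.33..., 4: 66.66...}
```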
Findings & Discussions
This study used a descriptive approach to summarize the data as percentages, focusing on those items that are directly linked to facets of spirituality. Spirituality in this context is defined as having a specific meaning, purpose and value of life through a spiritual connection with the superpower and transcendence of self, others and nature, which results in a feeling of inner peace, harmony, satisfaction and joy. Another study found that some students resort to spirituality as a coping method in response to the various challenges they face (Castellanos and Gloria, 2008).

Human beings are born into three basic realms: physical, intellectual and spiritual. The physical realm is best described systemically as consisting of complex structures such as the intestinal, central nervous, renal, endocrine, reproductive and lymphoid organs. The second realm is the intellectual, where the brain plays its part. The third is the spiritual realm. In the Bible, the word "Lev" (heart) appears 1024 times, a frequency that affirms the multivalence of a concept common to the Hebrews and ancient Semites, for whom the heart, as an organ crucial to life, is the seat of all vital forces (Encyclopedia of Religion). According to Islam, this spiritual domain is located in the heart, called "qalb" in Arabic, in which resides the spirit, or soul. The successful functioning of the heart can be seen through akhlaq (moral deeds) capable of producing pure values or virtues.

The word "spiritual" can evoke images of the sacred experience of the soul and the question of the meaning, purpose and value of existence, whereas knowledge connotes the mind at work and is generally correlated with intellectual problem-solving and scientific comprehension. Spirituality is the foundation of human existence and has recently been getting more attention in global health. It is now widely examined in relation to many other manifestations, such as identity, the quest for the essence of life, coping techniques and well-being. In terms of unique skills, Sisk (2019) indicates that spirituality can also have properties or faculties that are better interpreted as special abilities. From an Islamic perspective, we believe these spiritual faculties reside within the spirit and encompass the heart, the soul and the aql (intellect). For this process to work, spirituality requires instruments that encourage or enable many of its activities, especially the quest for both general and personal meaning in life. Many philosophies, such as Buddhism and Confucianism, as well as parts of the psychological literature such as Danah Zohar and Ian Marshall (2000), stress both the transformation of consciousness and meaning as key factors in spiritual development. Spiritual intelligence is a set of spiritual abilities and resources that emerge from the highest degree of human intellectual growth, that is, transcendental achievement (Frankl, 1966; King, 2010; Danah, 2000). A professor of medicine at the George Washington School of Medicine and Health Sciences described spirituality as the aspect of humanity that refers to the way individuals seek and express meaning, purpose and value and experience their connectedness to the moment, to themselves, to others, to nature and to the significant or sacred (Christina Puchalski et al., 2009).

Life's purpose helps boost your motivation and keep it there. In fact, knowing life's purpose is so powerful that it could be considered the source of your actual motivation, no matter what you are doing (Rick, 2002). Not everyone agrees that spirituality can be classified as a type of intelligence, least of all Gardner (2000:27), who regards intelligence as belonging mostly to the cognitive domain, requiring information processing and mental action to acquire knowledge and understanding through thinking, experience and perception. However, in this analysis we will consider it an intelligence because it improves the ability to interpret others at the deepest level, to discern the 'true cause' of actions without judgement, and to meet the 'true needs' of others before learning to serve one's own needs.
In 1997, Danah Zohar, a management thought leader, physicist, philosopher and author whose best-selling books include Spiritual Capital: Wealth We Can Live By and SQ: Spiritual Intelligence, first presented the term spiritual intelligence (SI). According to her, spiritual intelligence is the force that a person can manifest on the basis of his or her deepest meaning, purpose and value (Danah, 2000). It is a set of spiritual abilities that emerge from the highest degree of transcendental achievement, originating from the spiritual domain described above from the Islamic point of view: the spirit, or soul, residing in the spiritual heart, the "qalb". According to Danah, the first step toward this spiritual intelligence is learning to focus on the inner life and the seat of the conscience. She also asserted that the indications of high spiritual intelligence are flexibility, self-awareness, the ability to face and use suffering, the ability to face and surpass pain, the quality of being inspired by purpose and values, unwillingness to cause unnecessary harm, the tendency to see the big picture and to connect different things, the tendency to ask why and what if and to seek answers, and the ability to work against convention. According to Vaughan, spiritual intelligence is a capacity for deep comprehension of philosophical issues and insight into various layers of consciousness. It links the self to the spirit, opens the heart, illuminates the mind and inspires the soul, linking the individual human psyche to the fundamental ground of being. Richard (2001) affirmed that every one of us has spiritual intelligence, and because of this we have the capacity to reason with our hearts. Thus, spiritual intelligence can be understood as a human capacity to question the meaning, purpose and value of life and to find a link between each one of us and the universe in which we reside, the intangible universe that deals with faith and the knowledge that tries to understand it. There are three ways of discovering meaning: by giving to the world in terms of creation, by relating to and appreciating the world in terms of encounter and experience, and by taking a stand toward unavoidable suffering (Frankl, 1959). By having a clear spiritual perspective full of meaning, purpose and value, one can approach catastrophe appropriately and sensibly. In the case of the COVID-19 pandemic, the tragedy needs to be overcome, and one way to do this is through the empowerment of spiritual wisdom, a faculty that all human beings possess but still under-utilize.
Indeed, the COVID-19 pandemic is also an opportunity to lead anyone to adapt, to embrace life, and to turn tragedy into a moment of resilience, strength and wisdom, and into a new environment of shared and collective living, shared responsibility and solidarity. In this situation, people need to believe in themselves and develop the sensitivity and cognitive regulation that allow them to cope more positively with catastrophe, momentous tragic events and uncertainty, so as to live as the best version of themselves. To develop spiritual intelligence, the intelligence that infuses the human capacity for meaning, purpose and value in life, it is important to establish relationships and interactions between individuals and their own selves, between individuals and the superpower, and between individuals and other people. Relationships can also lead to a sense of identity, a strong social network, active community interaction, an increased sense of proximity and encouragement, access to appropriate services and health knowledge, exposure to positive modelling and mentoring, and involvement in pro-social activities (Cohen & Wills).
Establishing a Relationship
In dealing with catastrophe and uncertainty, the first step each individual should take is to accept that not everything in life can be controlled. With this belief in mind, one becomes more open-minded and realistic and can more easily accept that catastrophe and uncertainty are bearable. We need to be reminded that there is much we can still do right now, and that makes us strong rather than weak. Things will unfold soon enough; in the meantime, we are in charge of the way we handle them (Hardy, 1979). Building relationships is also a way of showing how we give ourselves meaning and value, and it is a wise answer to the pandemic. Relationships can contribute to a greater sense of belonging, a healthy social network, constructive engagement with the group, a greater awareness of closeness and motivation, access to effective health resources and information, commitment to positive modelling and mentoring, and engagement in pro-social events.
Relationship with Superpower
A personal relationship with a higher being can be cultivated through meditation, forms of which are practiced in all religions. One way of meditating is prayer: in prayer we reflect on the silence around us and seek a deeper connection with God or the Universe. Whenever something happens that is beyond us, we come to accept that there is a power beyond the grasp of human beings. A relationship with the superpower (God), a higher force, is in principle a source of energy for humanity. It gives people confidence that all human suffering can be coped with, and through this relationship one gains a philosophical viewpoint that allows one to make sense of life and better frame life events (Danah, 2000).
From the survey data we find that during the catastrophe 60.7% of students discovered something relevant to the superpower. Even though one might wish the number were higher, it suffices to show that a considerable number of students have discovered a kind of relationship with a metaphysical element, that which is also known as the superpower or God.
Having a sense of connectedness with the superpower promotes a feeling of bonding. The Islamic purpose, deeply rooted in being aware of God and seeking His pleasure, gives us drive, motivation and success at work and at home. Many people believe that faith makes them part of something greater than themselves. This can happen through prayer or meditation, through engaging in religious activities, or simply through things like listening and brisk walking (Park, 2017). Spirituality can lessen the impact of catastrophe on people's well-being: research indicates that faith serves as a shield against the deleterious effects of a disaster and the psychological distress it brings. Based on their research on the 9/11 event, Ai et al. (2005) postulated that stronger trust, optimism and spirituality are inversely linked with the depression and anxiety associated with direct and indirect exposure to the 9/11 trauma.
Relationship with Own Self
Being spiritual includes thinking, acting and connecting with one's self-awareness as mind and spirit, not merely body. Self-love conquers our darkness in many ways. It helps us cultivate a relationship with our spiritual core so that we can love ourselves, value ourselves, have self-confidence, and give the best of ourselves, with the courage to be who we are at a much deeper level than what we see in the mirror. Self-compassion concerns how we respond to ourselves during life's challenges and painful experiences: when we help and respect ourselves, we react in a positive way to struggles and errors. Returning to the superpower, to God (the One to whom we belong), is one of the safest ways to react. From an Islamic perspective, for example, people need to understand that life circumstances are not us; they are events that happen to us and they do not define us. For this we need to be in a state of mindfulness. This mindfulness is not only about mental and emotional well-being but, more specifically, about spiritual well-being, which we believe can sustain our emotional well-being and is profoundly rooted in our knowledge of and relationship with God. Spiritual mindfulness does not judge any situation as negative but always embraces it. Fromm (2008) claimed that self-love requires confidence and the bravery to take chances and to conquer life's setbacks and sorrows. Trust in ourselves allows us to be comforted and to face obstacles and defeats without lapsing into worry or self-judgement. We need to develop the ability to see objectively and to realize that we are going to thrive despite the feelings we are experiencing. Mindfulness is one of the techniques for exercising self-love. It helps us focus on the present moment without being swept away by all the stimuli, emotions, thoughts and feelings that we encounter. Mindfulness is also the very basis of emotional intelligence, since awareness of ourselves, of the effect of our feelings, emotions and behaviors on our personal success, our teams and our organizations, is the core of emotional intelligence. According to numerous studies in neuroscience, mindfulness has a strong relationship with emotional regulation and well-being.
Just over half of the students (53.1%) felt their quality of life during the lockdown to be fair or very good, while the rest felt otherwise. Whether they stayed at home, at the university or in private accommodation, all of them were under lockdown. From this result we may infer that some students manage the obstacles well while others do not. It would be normal for students caught in an event such as this pandemic to feel anxiety, panic, depression or disconnectedness; it is the confusion that scares them. They cannot gather together and simply keep on studying, and their lives have also changed through loss of earnings or of parental income and through the deterioration of their psychological state caused by bad news and by fears of becoming sick or spreading the disease to their families.
Relationship with Other Human Beings
During this time, people need to speak and interact with others rather than withdraw into isolation; they should talk to someone about their sorrow, anger and the like, although it can be difficult to get started. Even if we must keep a physical distance, in terms of social cohesion this is the moment to get closer. Being physically isolated does not mean that we need to isolate socially. Identifying a friend who knows your feelings and supports them, or a trusted teacher or counsellor, is the kind of care people need (Coles, 1990). The sense of community and core human values such as love, kindness, caring, justice and integrity can only be externalized through relationships with others, through social life. Islam genuinely upholds this by declaring that the best of you is the one who does the best for a fellow human being. That is why giving, caring for and helping the less fortunate, and sharing emotional burdens such as sadness and the pain of suffering, are great ways to establish relationships with fellow human beings and are also part of the process of attaining spirituality. Social connections can help us cope, whereas isolation may make us feel more depressed and anxious, especially if we ruminate about the disaster. While you may want to hold in your feelings and your stress, sharing these feelings with people who care about you will help you move forward. Even so, because limiting the spread of illness depends primarily on a coordinated response from the general population, it is impossible for people to be fully "social": travel, tours, social events and public functions have been restricted, and people are being asked to practice basic hygiene, avoid gatherings, and maintain a safe distance when they do meet. Nonetheless, the current crisis brings people together on an emotional and spiritual level by allowing them to care for one another. In many countries, as we could see on the internet, societies are pulling together, singing out of their balconies and windows, united in facing this common catastrophe.
Our relationship with other people is an important aspect of spirituality, and 59.4% of the students felt compelled to assist those affected by the outbreak. A sense of wanting to help others is a strong sign that students have spiritual capacity, because they want to contribute good deeds, whatever kind of support it may be, tangible or intangible; acts of kindness reach the heart and help people recover in happiness. It is important to see helping others as an equally essential spiritual practice, alongside familiar ones such as prayer, meditation, study of the Scriptures or fasting. As Addiss (2016) asserted, "Public health embodies a spirit of interconnectedness and acknowledges the need for global collaboration to solve these problems." A popular way for individuals to be in contact with their spiritual nature is to do something that helps others without expecting anything back. COVID-19 demands physical distancing but requires unified societal action. Social distancing may raise concerns over the cohesiveness of our society, community or family, yet it is crucial to stopping the spread; solidarity is the key to defeating COVID-19. The young and old need to care for each other, people in good health should care about people with health conditions, and countries should likewise care for each other in what is now a global pandemic. In other words, we need intergenerational and cross-national solidarity.
Relationship with Nature
Another relationship that we need to build in order to acquire spiritual intelligence is a relationship with nature. The COVID-19 pandemic is changing human interactions with nature at various levels and in a wide variety of contexts, and new information is required to understand the impacts of the global pandemic, given its large spatial dimensions and the social distancing measures used to contain it. Around the world, people have recognized that the pandemic may have produced beneficial environmental externalities, such as decreased air pollution. Furthermore, with changed mobility patterns and limited opportunities for other types of sociability, there has been widespread public use of, and interaction with, local green spaces, especially in densely populated areas. At the individual level, however, the more urgent issues of health and hygiene have relegated conservation practices to a lower priority.
While the MCO has restricted our movement, we can still connect with nature by planting trees or farming at home, or simply by taking a brisk walk in a park or nature reserve and recognizing that all the trees, plants and animals are creatures of the Creator. Instead of just looking at them, we can strive to feel the presence of the Creator, who created them for the sake of human beings. Human beings once roamed the globe in close contact with nature, which helped to define us and became an integral part of who we were. To help us make sense of this world, we gave it human qualities we could understand, spirits we considered our equals; we did not think ourselves superior. We need to understand and respect the beauty of nature; the spirits have never left us. Being between land and sea settles us down and gives us a sense of our greater selves. In a hectic, hyper-rational world, when we barely have time to think, let alone feel, connecting with the spirits in nature, whether or not we believe they actually exist, can help us find who we are, remind us of our place and priorities, and be a powerful and vital tonic. There is so much of life to embrace, explore and relish, and connecting with the spirit of nature can be our guide, and doorway, to a better life. The lockdown or MCO period is a good time to show how much we appreciate and value nature by planting trees and caring for animals. It is the best time to look around and notice how the earth grows and nurtures so much of all that becomes part of it: to feel the presence of the rocks, earth, mountains, streams and lakes, to sense what they are like in their parts and as a whole, and to feel the spirit of kinship we share with each other and with the whole of creation.
Conclusion
From the above discussion we can see that Coronavirus Disease 2019 (COVID-19), which has posed unparalleled health threats to all strata of society across the world, has also urged us to look at life from a new viewpoint, one that involves spiritual care in terms of quality of life, health and well-being and, similarly, the end of life. Its imprint is likely to be left on every human being, in all walks of life, including students. With spiritual intelligence, which can be formed through relationships and experiences between individuals and their own selves, between the person and the Superpower, and between individuals and others, the human capacity can be infused to reach the meaning, purpose and value of life and to find a connection between each one of us and the universe in which we reside. The relationship with the Superpower, or God, has a direct effect on people's beliefs, attitudes, feelings and behavior, and students and other individuals need to draw on it through their moral beliefs in times of personal hardship and widespread fear or disaster. The need to introduce spiritual care for all walks of life during the pandemic and the post-pandemic era is therefore essential. The findings of this study provide a broad view of spirituality, or spiritual intelligence, during the COVID-19 pandemic. Spiritual intelligence is better understood by integrating a spiritual perspective that includes the defining aspects of faith and individual life. Indeed, COVID-19 is only one example of instability, catastrophe, uncertainty and confusion; what matters is that we learn the lesson from it and how we treat it. In this case, we have been able to gauge how much spiritual intelligence people, particularly students, possess. As a way forward, more effort needs to be made to ensure that, during their journey as university students, they can equip themselves equally in the physical, emotional, spiritual and intellectual aspects so that they become balanced and healthy personalities. This spiritual intelligence is a grace, a noble quality that helps students adapt to and integrate with others, whatever their lives may bring. This paper also suggests that spiritual intelligence is the road to attaining and cultivating our own faith, and that it is a technique for dealing with instability such as the COVID-19 pandemic. It is at times like this that the words of Viktor Frankl become most relevant: "It is we ourselves who must answer the questions that life asks of us, and to those questions we can respond only by being responsible for our existence" (Frankl, 1959). Indeed, this is an extraordinary moment in history, full of opportunity to learn more about ourselves and those around us.
"year": 2021,
"sha1": "73ce526524a4f3ef344c449c925608bac9e72e08",
"oa_license": "CCBY",
"oa_url": "https://hrmars.com/papers_submitted/10672/spiritual-intelligence-during-catastrophe-the-covid-19-pandemic-case.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0b94c85059a64e23746384678cb8effa92d895db",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
} |
A zero-agnostic model for copy number evolution in cancer
Motivation: New low-coverage single-cell DNA sequencing technologies enable the measurement of copy number profiles from thousands of individual cells within tumors. From this data, one can infer the evolutionary history of the tumor by modeling transformations of the genome via copy number aberrations. A widely used model to infer such copy number phylogenies is the copy number transformation (CNT) model in which a genome is represented by an integer vector and a copy number aberration is an event that either increases or decreases the number of copies of a contiguous segment of the genome. The CNT distance between a pair of copy number profiles is the minimum number of events required to transform one profile to another. While this distance can be computed efficiently, no efficient algorithm has been developed to find the most parsimonious phylogeny under the CNT model. Results: We introduce the zero-agnostic copy number transformation (ZCNT) model, a simplification of the CNT model that allows the amplification or deletion of regions with zero copies. We derive a closed form expression for the ZCNT distance between two copy number profiles and show that, unlike the CNT distance, the ZCNT distance forms a metric. We leverage the closed-form expression for the ZCNT distance and an alternative characterization of copy number profiles to derive polynomial time algorithms for two natural relaxations of the small parsimony problem on copy number profiles. While the alteration of zero copy number regions allowed under the ZCNT model is not biologically realistic, we show on both simulated and real datasets that the ZCNT distance is a close approximation to the CNT distance. Extending our polynomial time algorithm for the ZCNT small parsimony problem, we develop an algorithm, Lazac, for solving the large parsimony problem on copy number profiles. We demonstrate that Lazac outperforms existing methods for inferring copy number phylogenies on both simulated and real data.
Introduction
Tumor evolution is characterized by both small and large genomic alterations that alter the fitness of cancer cells [31]. Copy number aberrations, i.e. modifications to the number of copies of a genomic segment, are an important and frequent sub-class of such alterations that drive prognostic and metastatic outcomes [1]. Deriving the evolutionary history of copy number aberrations, herein referred to as copy number phylogenies, is thus important for understanding the emergence of primary tumors and the development of subpopulations of cells that evade treatment and/or metastasize to other anatomical sites.
Recent technological and computational improvements in single-cell sequencing have enabled the mapping of high resolution copy number profiles in single cells. For example, the high-throughput 10x Genomics Single-cell Copy Number Variation solution [2,48] produces ultra-low coverage (< 0.05×) whole genome sequencing data from ≈ 2000 individual cells. Other recent technologies, including DLP/DLP+ [49,23,15] and ACT [28], produce similar data. Multiple computational methods [48,45,44,10,20,24] have been introduced to infer high resolution copy number profiles, integer vectors that contain the number of copies of each genomic segment, from this type of data. Other recent methods can infer copy number profiles from thousands of cells or spatial locations from single-cell RNA sequencing (scRNA-seq) [16], scATAC-seq [47], or spatial transcriptomics data [12].
The increasing availability of technologies to measure genomic copy number in thousands of cells motivates the development of methods to infer cellular phylogenies from copy number profiles. However, there are multiple challenges in inferring phylogenies from copy number profiles. First, copy number aberrations are diverse, ranging from small duplications and deletions [3] to whole chromosome shattering and reconstruction events [41]. Second, a single copy number aberration can alter the number of copies of a large section of the genome simultaneously. This means that loci on the genome cannot be treated as independent phylogenetic characters, a widely-used assumption in phylogenetics [18,19,39,46]. Finally, the increasing size (>10,000 cells) and resolution (<5Kb bins) of copy number profiles require increasingly scalable algorithms. One widely used model of copy number evolution is the copy number transformation (CNT) model [38]. In the CNT model, a genome is represented as a vector of non-negative integers and copy number aberrations correspond to the increase or decrease of the entries in a contiguous interval of coordinates in the vector, explicitly modeling the non-independence of copy number amplifications and deletions. The CNT distance is the minimum number of copy number events needed to transform one profile to another. The CNT distance is computable in linear time [51] and has been used to define an evolutionary distance between profiles. Since the CNT distance is not symmetric, a variety of symmetrized CNT distances have also been used to construct copy number phylogenies using distance-based phylogenetic methods [38,11,50,22]. Further, owing to its effectiveness, the CNT model has become the basis of a variety of distinct models [50,22,8] for copy number evolution. While the CNT model is described by specific events, or mutations, there has been little work on constructing phylogenetic trees under the CNT model using the method of maximum parsimony. Even the small parsimony problem, where the topology of the tree is given and one aims to infer the ancestral profiles that minimize the total number of copy number events on the tree, has no known efficient solution. For example, even for the special case of a two-leaf tree, the best algorithm for the CNT small parsimony problem [51] runs in $O(nB^7)$ time, where $B$ is the largest allowed copy number and $n$ is the number of loci [11]. Without an efficient algorithm for the small parsimony problem under the CNT model, one cannot hope to solve the large parsimony problem, where the topology of the tree is unknown.

Figure 1: (a) The results of applying a copy number transformation $c_{2,6,+1}$ under both the CNT model and the ZCNT model. Under the CNT model a zero cannot be increased via the amplification, but the zero-agnostic CNT model allows the zero to increase to one copy. (b) An instance of the ZCNT small parsimony problem: given a tree with copy number profiles labeling the leaves, the goal is to infer the ancestral copy number profiles that minimize the total ZCNT distance across all edges.
In this paper we introduce the zero-agnostic copy number transformation (ZCNT) model, a simplification of the CNT model in which the amplification of zero copy number regions is allowed. While such an operation is not biologically realistic, we show that this relaxation makes the ZCNT distance a metric, in contrast to the CNT distance. Moreover, we derive a closed form expression for the ZCNT distance between two profiles. We use this closed form expression, as well as an alternative characterization of copy number profiles, to solve two relaxations of the small parsimony problem in polynomial time. To our knowledge, this is the first attempt to solve the small parsimony problem for a segment-based (i.e. non-independent) model of copy number evolution. We then use our efficient algorithm for the (relaxed) small parsimony problem to design an algorithm, Lazac (Large-scale Analysis of Zero Agnostic Copy number), for inferring copy number phylogenies by solving the large parsimony problem. We show on simulated data that Lazac is >100× faster than other phylogenetic methods and also more accurate in recovering the ground truth phylogeny. On single-cell whole-genome sequencing data from human breast and ovarian tumors, Lazac finds phylogenies that are more consistent with both copy number clones and single-nucleotide variants (SNVs).
Copy number transformations
A copy number profile $c = [c_1, \ldots, c_n]$ is a vector of non-negative integers where $c_i \in \{0, \ldots, B\}$ is the number of copies of locus $i$. Suppose we measure the copy number profiles of $m$ cells of a tumor across $n$ loci in a single-cell DNA sequencing experiment. We encode the copy number profiles in an $m \times n$ copy number matrix $C = [c_{i,j}]$, where $c_{i,j} \in \{0, \ldots, B\}$ is the copy number of cell $i$ at locus $j$. The copy number profile of cell $i$ is then the $i$th row of this matrix, denoted $c_i$. Simple measures, such as the $\ell_1$ distance between copy number profiles, have been used as evolutionary distances; however, these distances do not account for dependencies between loci caused by long CNAs spanning contiguous segments of the genome, leading to inaccurate phylogenetic reconstruction [38,22].
In this section, we describe and investigate the copy number transformation (CNT) model, one of the most well-known and successful evolutionary models for copy number evolution in cancer. The CNT model was originally introduced in MEDICC [38] and extended in subsequent studies [51,11,50,8,22]. Since the CNT model only allows intrachromosomal copy number events, it is sufficient to consider the case of a single chromosome, and thus for ease of exposition we will describe the model using a single chromosome.
The fundamental operation in the CNT model is a copy number event which increases or decreases (by one) the entries in a contiguous interval of a copy number profile, defined formally as follows.
Definition 1 (Copy number event). A copy number event $c_{s,t,b} : \mathbb{Z}_+^n \to \mathbb{Z}_+^n$ is a function that maps a copy number profile $u \in \mathbb{Z}_+^n$ to a profile $c_{s,t,b}(u)$ described by its entries as
$$c_{s,t,b}(u)_i = \begin{cases} u_i + b, & \text{if } s \le i \le t \text{ and } u_i > 0, \\ u_i, & \text{otherwise,} \end{cases}$$
where $s \le t$ and $b \in \{+1, -1\}$. We denote such a function as $c$ when clear by context.
That is, an amplification (resp. deletion) increases (resp. decreases) the copy number of all non-zero entries in the interval between positions $s$ and $t$; in other words, a copy number event skips the zero entries (Figure 1). Thus, once a locus is lost (i.e. $u_i = 0$), the locus cannot be regained or deleted further. A copy number transformation is the composition of multiple copy number events; we denote this function as $C = (c_1, \ldots, c_k)$ where $C(u) = c_k(\cdots(c_2(c_1(u))))$.
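To make the event semantics concrete, the following is a minimal sketch of a CNT event and the composition of events into a transformation, assuming 0-indexed loci; it illustrates Definition 1 and is not code from any released implementation.

```python
# A minimal sketch of a CNT event c_{s,t,b} (Definition 1) and the
# composition of events into a transformation, assuming 0-indexed loci.
# Illustration only, not code from any released implementation.

def cnt_event(profile, s, t, b):
    assert b in (+1, -1) and s <= t
    # Zero entries are skipped: a lost locus is never regained or deleted.
    return [x + b if s <= i <= t and x > 0 else x
            for i, x in enumerate(profile)]

def cnt_transform(profile, events):
    for (s, t, b) in events:              # apply c_k(...c_2(c_1(u)))
        profile = cnt_event(profile, s, t, b)
    return profile

print(cnt_event([2, 0, 1, 3], 0, 3, +1))  # [3, 0, 2, 4]: the zero is skipped
```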
Several copy number problems have been previously studied to compute evolutionary distances under the CNT model. The first, and simplest, is the copy number transformation problem, originally introduced in [38], which defines a distance $d_{CNT}(u, v)$ between two copy number profiles. Put simply, the distance between two profiles is the length of the shortest copy number transformation needed to transform one profile to another.
Definition 2 (Copy number transformation distance). Given two copy number profiles $u$ and $v$, the copy number transformation distance $d_{CNT}(u, v)$ is the minimum number $k$ of copy number events $c_1, \ldots, c_k$ such that $c_k(\cdots(c_1(u))) = v$.

[51,43] show there is a (non-trivial) strongly linear time algorithm (i.e. time complexity $O(|u| + |v|)$) for computing the CNT distance $d_{CNT}(u, v)$. Unfortunately, the CNT distance $d_{CNT}(u, v)$ is not symmetric (i.e. $d_{CNT}(u, v) \neq d_{CNT}(v, u)$ in general), which makes it difficult to use in distance-based phylogenetic methods such as neighbor joining [34].
In order to apply distance-based phylogenetic methods, multiple approaches to symmetrize the distance $d_{CNT}(u, v)$ have been introduced.
[51] use a mean correction, replacing the asymmetric $d_{CNT}(u, v)$ with a symmetric distance $d'_{CNT}(u, v)$ defined as $d'_{CNT}(u, v) = \frac{1}{2}\left(d_{CNT}(u, v) + d_{CNT}(v, u)\right)$. Another symmetrization is the median distance $\min_w \left(d_{CNT}(w, u) + d_{CNT}(w, v)\right)$ over all profiles $w$; computing this median distance is called the copy number triplet problem in [11]. Unfortunately, no efficient algorithm is known for the copy number triplet problem. The fastest algorithm uses $O(nB^7)$ time and $O(nB^4)$ space, where $B$ is the maximum allowed copy number and $n$ is the number of loci [11].
Small and large copy number parsimony
The small parsimony problem for copy number profiles is the following: given a tree T whose leaves are labeled by copy number profiles, infer ancestral copy number profiles that minimize the total dissimilarity between profiles across all edges ( Figure 1). For evolutionary models in which each character evolves independently and has finitely many states (e.g. single nucleotide substitution models), the small parsimony problem is solved in polynomial time via Sankoff's algorithm, a dynamic programming algorithm [36].
Unfortunately, the CNT model presents two major challenges in solving the small parsimony problem. First, since copy number events affect multiple loci simultaneously, the loci cannot be analyzed independently, in contrast to most phylogenetic characters. Second, the space of possible copy number profiles is a priori unbounded, since the maximum copy number of a segment in a genome is unknown. Thus, it is not surprising that there is no published solution to the small parsimony problem for CNT dissimilarity, with the exception of the special case of two-leaf trees [11]. Here, we formalize both the CNT small parsimony problem and the corresponding large parsimony problem, the latter of which was previously described in [11].
A copy number phylogeny $(T, \ell)$ is a rooted tree $T$ with a leaf labeling $\ell$. Let $E(T)$, $V(T)$, and $L(T)$ denote the edges, vertices, and leaves of $T$, respectively. In our applications below, each leaf of $T$ represents one of the cells (or bulk samples) from a tumor. An ancestral labeling $\hat{\ell}$ of a copy number phylogeny is a vertex labeling of $T$ that agrees with $\ell$ on the leaves of $T$, i.e. $\ell(v) = \hat{\ell}(v)$ when $v \in L(T)$. We say that $(T, \ell)$ is a copy number phylogeny for a copy number matrix $C$ if $T$ has $m$ leaves such that $\ell$ labels each leaf by a row of $C$. Formally, if $(T, \ell)$ is a copy number phylogeny for a copy number matrix $C$, then there exists a cell assignment $\sigma : [m] \to L(T)$ that assigns each cell to a leaf such that $\ell(\sigma(i)) = c_i$.
We define the cost $\mathrm{cost}(T, \hat{\ell})$ of a vertex-labeled copy number phylogeny as the total number of copy number events required to explain the phylogeny:
$$\mathrm{cost}(T, \hat{\ell}) := \sum_{(u,v) \in E(T)} d\left(\hat{\ell}(u), \hat{\ell}(v)\right).$$
We now introduce the small parsimony problem [14] under the copy number transformation model. The parsimony score is defined as the cost $\mathrm{cost}(T, \hat{\ell})$ of the solution $\hat{\ell}$ to the CNT small parsimony problem.
To the best of our knowledge, the CNT small parsimony problem (Problem 1) has not been analyzed in the literature. We believe this is due to the difficulty of solving it: even for the special case of two-leaf trees, referred to as the copy number triplet problem [11], no strongly polynomial time algorithm is known (Section 2).
The CNT large parsimony problem, defined in [11], aims to find a vertex-labeled copy number phylogeny $(T, \ell)$ for a matrix $C$ with minimum cost.
Existing integer linear programming formulations of the CNT large parsimony problem require a number of variables on the order of $n^2$ (with additional logarithmic factors) and do not scale to the size of current real data sets with thousands of cells.
The zero-agnostic CNT model
The copy number transformation (CNT) model imposes the constraint that once a locus is lost (has zero copy number), the locus remains with zero copies for all time. While this constraint is biologically realistic, the constraint also makes the inference problems -including the CNT small (and large) parsimony problems -computationally hard to solve. Here, we show that relaxing the constraint that copy number events do not alter zero entries leads to a simpler model with favourable mathematical properties. We call this the zero-agnostic copy number model ( Figure 1) to indicate that the model allows the amplification and deletion of loci with zero copies. Formally, we define a zero-agnostic copy number event as follows.
Definition 3 (Zero-agnostic copy number event). A zero-agnostic copy number event $z_{s,t,b} : \mathbb{Z}^n \to \mathbb{Z}^n$ is a function that maps a profile $u \in \mathbb{Z}^n$ to a profile $z_{s,t,b}(u)$ described by its entries as
$$z_{s,t,b}(u)_i = \begin{cases} u_i + b, & \text{if } s \le i \le t, \\ u_i, & \text{otherwise,} \end{cases}$$
where $s \le t$ and $b \in \{+1, -1\}$. We denote such a function as $z$ when clear by context.
Thus, a zero-agnostic copy number event either increases or decreases the number of copies of all loci in the interval $[s, t]$, regardless of whether the loci have zero copies. While our formulation allows the number of copies of a locus to decrease below zero, one can show that given two profiles with non-negative entries, it is always possible to find an optimal ZCNT transformation such that no intermediate profile has negative entries. Specifically, as a corollary to the commutativity of zero-agnostic copy number transformations (Proposition 2), one can re-order events so that amplifications always occur first.
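A matching sketch of a zero-agnostic event, again assuming 0-indexed loci, makes the contrast with the CNT event explicit: every entry in the interval changes, zeros included.

```python
# A matching sketch of a zero-agnostic event z_{s,t,b} (Definition 3),
# again assuming 0-indexed loci: all entries in [s, t] change by b,
# zeros included. Illustration only.

def zcnt_event(profile, s, t, b):
    assert b in (+1, -1) and s <= t
    return [x + b if s <= i <= t else x for i, x in enumerate(profile)]

# The same amplification as the CNT sketch above, but the zero is raised:
print(zcnt_event([2, 0, 1, 3], 0, 3, +1))  # [3, 1, 2, 4]
```

Comparing this with the CNT sketch on the same input shows exactly the behavior of Figure 1(a): the zero entry is amplified under ZCNT but skipped under CNT.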
Due to space constraints, we do not include all proofs in the main text. Any proof not present in the main text can be found in Supplementary Proofs C.
Delta profiles
We simplify our analysis of zero-agnostic copy number events by examining their effect on the differences between the copy numbers of adjacent loci. In particular, while a zero-agnostic copy number event $z_{s,t,b}$ increments (or decrements) all entries $u_i$ with $i \in \{s, \ldots, t\}$, it alters only two differences between adjacent loci, namely the difference $u_s - u_{s-1}$ and the difference $u_{t+1} - u_t$. To formalize this idea, we first define the delta profile, a vector obtained by taking the differences in copy number between adjacent loci.
Definition 4 (Delta profile). A delta profile is any vector $v \in \mathbb{Z}^n$ that satisfies the balancing condition
$$\sum_{i=1}^{n} v_i = 0, \tag{1}$$
or equivalently, $\sum_{i : v_i > 0} |v_i| = \sum_{i : v_i < 0} |v_i|$. We denote the set of delta profiles in $\mathbb{Z}^n$ as $\mathcal{D}^n$.

The above definition provides us with a convenient (and useful) description of the image of the following difference transformation, which we call the delta map.

Definition 5 (Delta map). The delta map $\Delta : \mathbb{Z}^n \to \mathcal{D}^{n+1}$ maps a profile $u$ to the vector of differences between adjacent entries, $\Delta(u)_i = u_i - u_{i-1}$ for $i = 1, \ldots, n+1$, under the padding convention $u_0 = u_{n+1} = 0$.
A basic property of the delta map $\Delta : \mathbb{Z}^n \to \mathcal{D}^{n+1}$ is that it is invertible.
Since $\Delta$ is one-to-one and onto with respect to $\mathcal{D}^{n+1}$, each delta profile $v'$ corresponds to a unique copy number profile $u = \Delta^{-1}(v')$.
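A short sketch of the delta map and its inverse, under the padding convention $u_0 = u_{n+1} = 0$ assumed above, illustrates both the balancing condition and invertibility.

```python
# A minimal sketch of the delta map and its inverse, assuming the padding
# convention u_0 = u_{n+1} = 0, so Delta(u) has n+1 entries summing to 0.
from itertools import accumulate

def delta(u):
    padded = [0] + list(u) + [0]
    return [padded[i] - padded[i - 1] for i in range(1, len(padded))]

def delta_inverse(d):
    # Prefix sums recover the profile; the last entry of d is redundant.
    return list(accumulate(d[:-1]))

u = [2, 2, 3, 1]
print(delta(u))                  # [2, 0, 1, -2, -1]; entries sum to 0
print(delta_inverse(delta(u)))   # [2, 2, 3, 1]
```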
Interestingly, a copy number event $z_{s,t,b}$ applied to a copy number profile $u$ affects only two entries of the delta profile $\Delta(u)$, meaning that the loci of the corresponding delta profile are (nearly) independent. We formalize this in the following definition of a delta event.
Definition 6 (Delta event). A delta event $\delta_{s,t,b} : \mathcal{D}^n \to \mathcal{D}^n$ is a function that maps a delta profile $v \in \mathcal{D}^n$ to a delta profile $\delta_{s,t,b}(v)$ described by its entries as
$$\delta_{s,t,b}(v)_i = \begin{cases} v_i + b, & \text{if } i = s, \\ v_i - b, & \text{if } i = t+1, \\ v_i, & \text{otherwise,} \end{cases}$$
where $s \le t$ and $b \in \{+1, -1\}$. We denote such a function as $\delta$ when clear by context.
A delta transformation $D = (\delta_1, \ldots, \delta_k)$ is the composition of multiple delta events, where $D(v) = \delta_k(\cdots(\delta_2(\delta_1(v))))$. We now state the connection between delta events and zero-agnostic copy number (ZCNT) events in the following theorem and corollary.

Theorem 1. For any profile $u \in \mathbb{Z}^n$ and ZCNT event $z_{s,t,b}$, we have $\Delta(z_{s,t,b}(u)) = \delta_{s,t,b}(\Delta(u))$.

Corollary 1. For any ZCNT transformation $Z = (z_1, \ldots, z_k)$ and corresponding delta transformation $D = (\delta_1, \ldots, \delta_k)$, we have $\Delta(Z(u)) = D(\Delta(u))$.

Proof. The corollary follows by induction on $|Z|$ and repeated application of Theorem 1. ■
Computing the ZCNT distance
Let $d_{ZCNT}(u, v)$ be the minimum number of zero-agnostic copy number events needed to transform the copy number profile $u$ to $v$. In this section we derive a closed form expression for $d_{ZCNT}(u, v)$.
We begin by noting that $d_{ZCNT}(u, v)$ is equal to the minimum number $d'(\Delta(u), \Delta(v))$ of delta events needed to transform the delta profile $\Delta(u)$ to $\Delta(v)$. This follows from the equivalence between copy number transformations and the corresponding delta transformations (Corollary 1). Thus, it suffices to consider only delta profiles and delta events; for the rest of the section all profiles $u$ and $v$ are delta profiles unless otherwise specified.
We start by observing two basic facts: delta transformations are commutative (Proposition 2) and $d'(u, v)$ forms a metric (Proposition 3). Note that this also implies that zero-agnostic copy number transformations are commutative and that $d_{ZCNT}(\cdot, \cdot)$ is a distance metric. To see this, let $Z = (z_{s_1,t_1,b_1}, \ldots, z_{s_k,t_k,b_k})$ be a zero-agnostic copy number transformation and $D = (\delta_{s_1,t_1,b_1}, \ldots, \delta_{s_k,t_k,b_k})$ be the corresponding delta transformation. Then for any vector $u$ and permutation $\pi$,
$$\Delta(Z(u)) = D(\Delta(u)) = D_\pi(\Delta(u)) = \Delta(Z_\pi(u)),$$
where $Z_\pi$ and $D_\pi$ denote applying the events in the order given by $\pi$, and where the first equivalence follows from Corollary 1, the second from Proposition 2, and the third from Corollary 1. This implies that $Z(u) = Z_\pi(u)$, which proves that zero-agnostic copy number transformations are commutative. To see that $d_{ZCNT}(\cdot, \cdot)$ is a distance metric, it suffices to observe that $d_{ZCNT}(u, v) = d'(\Delta(u), \Delta(v))$ implies symmetry and reflexivity. The triangle inequality is then satisfied since the composition of a zero-agnostic copy number transformation from $u$ to $w$ and one from $w$ to $v$ yields a copy number transformation from $u$ to $v$.
From our characterization of delta profiles, we derive our expression for the distance between delta profiles.

Theorem 2. For delta profiles $v$ and $w$: (i) $d'(v, \mathbf{0}) = \frac{1}{2}\lVert v \rVert_1$, and (ii) $d'(v, w) = \frac{1}{2}\lVert v - w \rVert_1$.

Proof. Since each delta event decreases the total magnitude $\lVert v \rVert_1$ by at most two, transforming $v$ to the $\mathbf{0}$ profile requires at least $\frac{1}{2}\lVert v \rVert_1$ events. We prove the other direction by induction on $\sum_{i : v_i > 0} |v_i|$. Clearly, if the sum is zero, the claim holds. Otherwise, by Proposition 1, we can choose $\delta$ to be any event that decrements an entry in $\{i : v_i > 0\}$ and increments an entry in $\{j : v_j < 0\}$; such a pair of entries exists by the balancing condition. Invoking the induction hypothesis then yields a sequence of $\frac{1}{2}\lVert v \rVert_1$ events that transforms $v$ to the $\mathbf{0}$ profile.
The second statement follows from Proposition 3. ■

As a corollary to the above theorem and the equivalence between zero-agnostic copy number transformations and delta transformations (Corollary 1), we have our closed form expression for the ZCNT distance between copy number profiles.
Corollary 2. For copy number profiles $u$ and $u'$,
$$d_{ZCNT}(u, u') = \frac{1}{2}\lVert \Delta(u) - \Delta(u') \rVert_1.$$
Further, as a corollary to the fact that $d_{ZCNT}(u, v)$ is a distance metric, the following median distance is trivially computed in linear time:
$$\min_{w} \left( d_{ZCNT}(w, u) + d_{ZCNT}(w, v) \right) = d_{ZCNT}(u, v).$$
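Corollary 2 makes the ZCNT distance a one-liner; below is a minimal self-contained sketch (the example profiles are illustrative).

```python
# A self-contained sketch of Corollary 2: the ZCNT distance is half the
# l1 distance between delta profiles, computable in linear time.

def delta(u):
    padded = [0] + list(u) + [0]          # convention u_0 = u_{n+1} = 0
    return [padded[i] - padded[i - 1] for i in range(1, len(padded))]

def zcnt_distance(u, v):
    du, dv = delta(u), delta(v)
    # The l1 norm of a difference of delta profiles is always even, since
    # both profiles satisfy the balancing condition.
    return sum(abs(a - b) for a, b in zip(du, dv)) // 2

print(zcnt_distance([2, 2, 3, 1], [1, 2, 2, 2]))  # 3
```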
ZCNT small parsimony
We show below that the special form of the ZCNT model enables us to solve two natural relaxations of the small parsimony problem in polynomial time. First, using the equivalence between copy number profiles and delta profiles described above, we formulate the small parsimony problem (Problem 1) using the ZCNT model as follows.
Problem 3 (ZCNT Small Parsimony). Given a copy number matrix $C$, a tree $T$ and a cell assignment $\sigma : [m] \to L(T)$, find an ancestral labeling $\hat{\ell}$ of minimum cost such that the following two conditions are satisfied: (i) $\hat{\ell}$ agrees with the delta profiles of the observed cells at the leaves, i.e. $\hat{\ell}(\sigma(i)) = \Delta(c_i)$ for all cells $i$; and (ii) $\hat{\ell}(v)$ satisfies the balancing condition (1) for all vertices $v \in V(T)$.
To solve the above problem, we recall the general form of the Sankoff-Rousseau recurrence [9,36] for solving the small parsimony problem. Let $f(T_v; x)$ be the cost of the optimal labeling $\hat{\ell}$ of the subtree $T_v$ rooted at $v$ that agrees with the copy number matrix $C$ and has label $x$ at the root $v$. Suppose that $v$ has children $w_1$ and $w_2$. Then, by condition (ii) and the requirement that the ancestral labeling lies in $\mathbb{Z}^n$, we have the following recurrence relation [9]:
$$f(T_v; x) = \min_{y \in \mathcal{D}^n} \left[ d(x, y) + f(T_{w_1}; y) \right] + \min_{z \in \mathcal{D}^n} \left[ d(x, z) + f(T_{w_2}; z) \right]. \tag{2}$$
This recurrence has several difficulties. First, $x$, $y$ and $z$ are unbounded and can take on any value in $\mathbb{Z}^n$. Thus, it is impossible to store a dynamic programming table for $f(T_v; x)$ without imposing bounds on the maximum copy number. Further, even when the entries are constrained to a bounded interval $\{0, \ldots, B\}^n$, the dynamic programming table has size $(B+1)^n$, which is exponentially large. Second, because of the balancing condition (1), $\sum_{i=1}^n x_i = \sum_{i=1}^n y_i = 0$, one cannot analyze the loci independently.
Despite these challenges, the recurrence (2) is a substantial improvement over the analogous recurrence under the CNT model. In fact, if we remove either the balancing condition (1) or the integrality condition, we can solve this recurrence in (respectively, strongly or weakly) polynomial time. Both of these facts derive from our closed form expression for the ZCNT distance between two copy number profiles in terms of the $\ell_1$ norm. We sketch the ideas here, and refer to Supplementary Results B.2, B.1 for proofs of these claims.
For the first case, when we drop the balancing condition, we can analyze the loci independently since there is no longer any constraint coupling the entries of the ancestral profiles. It then suffices to observe that, since the per-locus distance is an absolute difference, the function $f(T_v; x)$ has a nice (piecewise-linear, convex) structure, so we do not have to store an infinitely large dynamic programming table. When integrality is removed, then, since both (i) and (ii) are linear constraints on the profiles and because $\ell_1$ norm minimization can be written as a linear program (LP), there is an LP formulation of the ZCNT small parsimony problem. Since LPs can be solved in (weakly) polynomial time, this concludes the second case.
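To illustrate the structure of the first relaxation, the sketch below computes the per-coordinate optimum with a classic interval method for $\ell_1$ (Wagner) parsimony, which is exact on binary trees; the ZCNT parsimony score of the relaxation is then half the sum of the per-coordinate costs. This is an illustration of the relaxation's structure under the stated assumptions, not the Lazac implementation.

```python
# A minimal sketch of ZCNT small parsimony without the balancing condition.
# Each delta-profile coordinate is solved independently as l1 (Wagner)
# parsimony on a binary tree, using the classic interval method: bottom-up,
# keep the interval of optimal root labels; disjoint child intervals pay
# their gap. Trees are nested tuples; leaves are integers (one coordinate
# of a leaf's delta profile). Illustration only, not the Lazac code.

def wagner_cost(node):
    """Return (interval of optimal labels, minimum cost) for a subtree."""
    if isinstance(node, int):              # leaf: fixed observed value
        return (node, node), 0
    left, right = node
    (lo1, hi1), c1 = wagner_cost(left)
    (lo2, hi2), c2 = wagner_cost(right)
    lo, hi = max(lo1, lo2), min(hi1, hi2)
    if lo <= hi:                           # intervals overlap: free merge
        return (lo, hi), c1 + c2
    return (hi, lo), c1 + c2 + (lo - hi)   # disjoint: pay the gap

def relaxed_zcnt_parsimony(coordinate_trees):
    # ZCNT distance is half the l1 distance between delta profiles, so the
    # relaxed score is half the summed per-coordinate Wagner costs.
    return sum(wagner_cost(t)[1] for t in coordinate_trees) / 2

# One coordinate across a 3-leaf tree ((1, 5), 2): gap of 4 between 1 and 5.
print(wagner_cost(((1, 5), 2)))  # ((2, 2), 4)
```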
Lazac algorithm for ZCNT large parsimony
We develop a tree-search algorithm, Lazac, to find approximate solutions to the ZCNT large parsimony problem (Problem 2). Our procedure searches the space of copy number trees for a given copy number matrix using sub-tree interchange operations [27] and relies heavily on the efficient algorithm we developed for the small parsimony problem (Problem 3) when the balancing condition (1) is dropped. The procedure is similar to the tree search procedure we developed for lineage tracing data [37]. Complete details on our tree search procedure are in Supplementary Methods A.1. Lazac is implemented in C++17 and is freely available at: github.com/raphael-group/lazac-copy-number.
Comparison of copy number distances and phylogenies on prostate cancer data
We first investigated the differences between the CNT and ZCNT distances on copy number profiles inferred from bulk whole-genome sequencing data from ten metastatic prostate cancer patients [17]. We analyzed the copy number profiles for these patients published in [22]. For each pair of copy number profiles from distinct samples (e.g. anatomical sites) from the same patient, we computed the CNT distance $d_{CNT}$ and the ZCNT distance $d_{ZCNT}$. We found that for all ten patients, the median relative difference $|d_{CNT}/d_{ZCNT} - 1|$ over all pairs of samples was less than 5%, and for most patients the relative difference was even smaller (Supplementary Figures 5, 6).
Evaluation on simulated data
We compared Lazac to several state-of-the-art methods for inferring copy number phylogenies, namely MEDICC2 [22], MEDALT [43], Sitka [35], and WCND [50], on simulated data. Lazac scaled well on every instance, taking less than ∼250 seconds to run on the largest simulated dataset containing 600 cells. Further, it was ∼100 times faster than the other top performing methods, Sitka and MEDICC2 (Figure 2b).

Figure 3: The copy number phylogenies inferred by Lazac (a) and Sitka (b) on sample SA1184, with the leaves colored by the corresponding clone labels, as visualized using Iroki [29]. The normalized RF distance between the two trees is 0.9869.
As a further evaluation of the differences between the ZCNT and CNT distances, we compared the trees obtained using distance-based phylogenetic methods with the ZCNT and CNT distances. Specifically, we compared the performance of applying neighbor joining to the ZCNT distances, referred to as Lazac-NJ, to three distance-based methods for reconstructing copy number phylogenies: MEDICC2 [22], WCND [50], and MEDALT [43], on simulated data. MEDICC2 and WCND compute distances based on extensions of the CNT model and then apply neighbor joining to infer phylogenies; as such, they provide a natural benchmark against which to compare our simpler ZCNT distance. Lazac-NJ had nearly identical (within 1%) median RF and quartet distances compared to the other distance-based methods (Supplementary Figures 2, 3) and was often the top performer. This provides evidence that, even by itself, the ZCNT distance is useful for phylogenetic reconstruction.
Single-cell DNA sequencing data
We used Lazac to analyze single-cell whole-genome sequencing (WGS) data from 25 human breast and ovarian tumor samples [15]. This dataset was generated using the DLP+ [23] single-cell whole-genome sequencing technology, which produces ≈0.04× coverage from a median (resp. mean) of 636 (resp. 1457) cells per sample. The original study used Sitka [35], a method that uses the breakpoints between copy number segments as phylogenetic markers, to construct copy number phylogenies from this data.
We found that the phylogenies inferred by Lazac are substantially different from the phylogenies constructed by Sitka. Specifically, the normalized RF distance between pairs of phylogenies was greater than 0.90 in all cases (Supplementary Figure 9); in many cases, the normalized RF distance was 1, indicating that the phylogenies completely disagree. To investigate these differences, we analyzed the concordance between the phylogenies and the assignments of cells to copy number clones reported in the original publication [15]. Specifically, we defined the clonal discordance score as the parsimony score of the clonal labeling, i.e. the minimum number of changes in clone label required to label the leaves with the published clone labels. Thus, for a dataset with $k$ clone labels, the minimum possible clonal discordance score for a tree is $k - 1$, corresponding to the case where each clone label forms a clade in the tree. We find that on 18/25 of the samples, the Lazac phylogenies had substantially lower clonal discordance scores than the Sitka phylogenies (Supplementary Figures 4, 10), showing that the Lazac phylogenies were more concordant with the copy number clones than the published phylogenies. As a qualitative example, the phylogeny constructed for sample SA1184 by Lazac also appears more concordant with the copy number clones than that inferred by Sitka (Figure 3). Further details on the clonal discordance analysis are in Supplementary Methods A.3.
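The clonal discordance score is itself a small parsimony score over categorical labels and can be computed with Fitch's algorithm; the sketch below assumes a binary tree encoded as nested tuples with leaves holding clone labels, and is an illustration rather than the code used in our analysis.

```python
# A minimal sketch of the clonal discordance score: the small parsimony
# score of categorical clone labels on the tree, via Fitch's algorithm.
# Binary trees are nested tuples; leaves hold published clone labels.
# Illustration only, not the code used in the analysis.

def fitch(node):
    """Return (candidate label set, minimum number of label changes)."""
    if isinstance(node, str):            # leaf: its published clone label
        return {node}, 0
    left, right = node
    s1, c1 = fitch(left)
    s2, c2 = fitch(right)
    if s1 & s2:                          # children can agree on a label
        return s1 & s2, c1 + c2
    return s1 | s2, c1 + c2 + 1          # disagreement costs one change

tree = ((("A", "A"), "B"), ("B", "B"))
print(fitch(tree)[1])  # 1: one clone-label change explains this tree
```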
As a further evaluation of the Lazac and Sitka phylogenies, we examined whether somatic singlenucleotide variants (SNVs) supported the splits in each phylogeny, following the approach of [48]. Note that these SNVs were not used in the inference of either phylogeny, and thus they provide independent validation of the phylogeny. Given the extremely low sequence coverage (0.04× per cell), it is not possible to reliably measure SNVs of individual cells. Thus, we performed this analysis on the three samples (SA039, SA604, SA1035) with the largest number of cells. We identified subtrees in the phylogeny with at least 5% and at most 15% of the cells and identified SNVs present in the subpopulation of cells in these subtrees. Following the approach in [48], we perform a permutation test to determine whether the subtree is supported by more SNVs than expected (Supplementary Methods A.4). For all three samples, we found that the Lazac phylogenies had a greater fraction of supported subtrees ( < 0.05) than the Sitka phylogenies (Supplementary Figure 11). On the largest sample, SA1035, we identified five out of six supported subtrees (supported by 3175, 3334, 3799, 3435, and 3402 SNVs) for the Lazac phylogeny compared to only three of eight statistically significant subtrees (supported by 3426, 3129, and 3362 SNVs) for the Sitka phylogeny.
Approximation error of ZCNT small parsimony relaxations
We investigated the approximation error produced by the relaxations (Section 5.1) used in our two polynomial time algorithm's for the ZCNT small parsimony problem. To perform this investigation, we first generated a set of 200 copy number phylogenies by stochastically perturbing a phylogeny inferred by Sitka [35] from single-cell whole genome sequencing data (Section 6.3). Then, for each phylogeny, we computed the optimal solution to the ZCNT small parsimony problem and its two relaxations using (integer) linear programming.
Importantly, we found that the exact solution to the ZCNT small parsimony problem and the solution obtained by relaxing the integrality condition were identical in every case. This leads us to believe that the relaxed linear program has a special structure, which we state as the following conjecture: Conjecture 1. The constraint matrix of the linear program obtained by relaxing the integrality condition of the ZCNT small parsimony problem is totally unimodular.
If true, this would imply that ZCNT small parsimony can be solved exactly in polynomial time using linear programming.
In contrast, dropping the balancing condition resulted in solutions with a lower score, implying that the balancing condition does meaningfully constrain the solution space. Specifically, the Pearson correlation between the score produced by dropping the balancing condition and the exact ZCNT small parsimony score was 2 = 0.972 ( < 10 −100 ) across the 200 copy number phylogenies (Supplementary Figure 7).
12
. CC-BY-NC 4.0 International license available under a was not certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made The copyright holder for this preprint (which this version posted April 12, 2023. ; https://doi.org/10.1101/2023.04.10.536302 doi: bioRxiv preprint Further, as the number of stochastic perturbations increased, both the parsimony score of the relaxed and the exact solutions increased (Supplementary Figure 8). This serves as a sanity check for the ZCNT small parsimony score since the phylogeny generally becomes further from ground truth as the number of stochastic perturbations increase.
Discussion
We introduced the zero-agnostic copy number transformation (ZCNT) model, a relaxation of the CNT model that allows for modification of zero copy number regions. We derived a closed-form expression for the ZCNT distance and presented polynomial time algorithms to solve two natural approximations of the small parsimony problem for copy number profiles. We used our efficient algorithm for the small parsimony problem to derive a method Lazac, to solve the large parsimony problem for copy number profiles. We demonstrated that on both real and simulated data, Lazac found better copy number phylogenies than existing methods.
There are multiple directions for future work. First, the complexity of the small parsimony problems for both the CNT and ZCNT models remains open, though we conjecture, and provide empirical evidence, that the latter is polynomial. Second, the algorithm we developed for the ZCNT large parsimony problem relies on a simple, hill climbing search using nearest-neighbor interchange operations. We expect that a more advanced approach that uses state-of-the-art techniques from phylogenetics [27,32] could substantially improve both inference speed and accuracy. Finally, is to apply Lazac to other large single-cell WGS datasets [48,28]. We anticipate that the scalability and accuracy of Lazac will be useful in analyzing the increasing amount of single-cell WGS cancer sequencing data.
Acknowledgements
We thank Uthsav Chitra and Gillian Chu for their helpful comments and review of this manuscript during several stages of its preparation. This work was supported by National Cancer Institute (NCI) grant U24CA248453 to B.J.R.
13
. CC-BY-NC 4.0 International license available under a was not certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made The copyright holder for this preprint (which this version posted April 12, 2023. ; https://doi.org/10.1101/2023.04.10.536302 doi: bioRxiv preprint . CC-BY-NC 4.0 International license available under a was not certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made The copyright holder for this preprint (which this version posted April 12, 2023. ; https://doi.org/10.1101/2023.04.10.536302 doi: bioRxiv preprint . CC-BY-NC 4.0 International license available under a was not certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made The copyright holder for this preprint (which this version posted April 12, 2023. ; https://doi.org/10.1101/2023.04.10.536302 doi: bioRxiv preprint . CC-BY-NC 4.0 International license available under a was not certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made The copyright holder for this preprint (which this version posted April 12, 2023. ; https://doi.org/10.1101/2023.04.10.536302 doi: bioRxiv preprint
A.1 Large parsimony details
Our tree-search algorithm starts with an initial set of candidate trees = (T 1 , . . . , T ) and iteratively improves upon the trees by stochastic perturbations followed by a hill-climbing procedure . Specifically, at each iteration we select a candidate tree uniformly at random and perturb the tree using a random number of nearest neighbor interchange (NNI) operations. With our perturbed candidate tree, we then perform local optimization using hill climbing to minimize the cost of the tree, where we use our small parsimony algorithm to efficiently evaluate the cost of each candidate topology. Once the hill-climbing procedure reaches a local minimum, we complete the iteration and update the candidate tree set if an improvement was found. The algorithm terminates after no improvement is found for a fixed number of iterations.
For all experiments and analysis in this paper, the number of iterations prior to termination was set to = 150. The number of random NNIs to perturb the candidate tree is selected uniformly at random from the discrete interval {0, 1, . . . , ⌊2.5 ⌋} at every iteration. Our candidate tree set was generated by performing neighbor joining on the boundary insensitive distances and then randomly perturbing the the neighbor joining tree.
A.2 Simulation details
We used a modified version of CONET's [25] copy number phylogeny simulator. Specifically, we found that CONET's simulation of tree structure was non-standard and opted to use a forward-birth death model [30] to simulate our topology. Once the tree structure was generated, we then used CONET's simulator to sample copy number events on each vertex. We then took our event labeled copy number phylogeny and sampled the ground truth copy number states on the leaves of the phylogeny to obtain our copy number profiles.
To generate the tree topology, we used Cassiopeia's [21] implementation of a forward-birth death model. We performed simulations for = 100, 150, 200, 250, 600 leaves with a fitness parameter of 1.3 and an initial birth scale of 0.5. We drew the birth-death waiting times from an exponential distribution. With the topology, we randomly sampled events on each vertex using CONET with = 1000, 2000, 3000, 4000 loci. We performed each simulation with parameters ( , ) a total of 7 times with unique random seeds = 0, 1, 2, 3, 4, 5, 6. In total, there were 140 randomly simulated instances.
A.3 Clonal concordance analysis
To analyze the concordance of the inferred phylogenetic trees with clonal information, we measured the minimum number of evolutionary events required to explain the clones. Specifically, for each sample clones were identified by clustering the GC-corrected read count profiles embedded using UMAP [26,15]. The clone labels were then attached to the leaves of the inferred phylogenetic trees. With this clone labeled phylogenetic tree, we solved the small parsimony problem under the Wagner [42] model to obtain a parsimony score, , which we call the clonal discordance score. This clonal discordance score is the minimum number of clonal transitions required to explain the cells of the phylogeny.
To compare across different phylogeny sizes, we computed the relative clonal discordance score between the Lazac and Sitka phylogenies as where 1 , 2 are the clonal discordance scores of the Lazac and Sitka phylogenies respectively. In particular, a positive score indicates that the Lazac phylogeny is more concordant with the clones while a negative score indicates that the Sitka phylogeny is more concordant with the clones.
18
. CC-BY-NC 4.0 International license available under a was not certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made
A.4 Permutation test for analysis of SNV support
In this section we provide details about how subtrees of the phylogenies are identified for the analysis and the permutation test used for investigate if the subtrees are supported by the SNVs.
First, we describe how we identify subtrees in the phylogeny to analyze. Our goal is to identify subtrees that have enough cells so that pooling the reads from the cells and finding SNVs is feasible. However, if the number of cells is too large, the permutation test will not yield a significantly low p-value. To that end, we perform a breadth-first traversal of the nodes of the tree (starting from the root) to identify the desired subtrees. At each iteration, we compute the number of cells in the subtree rooted at the node (i.e. the number of leaves in the subtree). We select the subtree if (1) the number of cells in the subtree is more than 10% of the total number of cells (2) the number of cells in the subtree is less than 25% of the total number of cells and (3) the subtree is not contained in any of the subtrees selected in previous iterations. Now, we provide details about the permutation test for a given subtree. We say that an SNV supports a subtree of a phylogeny if all the cells that yield a read harboring the SNV are contained in the subtree. We randomly permute the cell labels 500 times and count the number of SNVs supporting a given subtree. The p-value is empirically estimated by the ratio of the number of instances in which more SNVs support the subtree than with the original unpermuted cell labels with the total number of permutations tested (which is 500 in our study).
A.5 Comparison to simulated trees
We assess the accuracy of the inferred trees compared to the ground truth simulated trees by employing two distinct tree dissimilarity metrics. These metrics are implemented in the TreeCmp tool [5] and the comparisons are done in a similar manner to the comparisons in our Startle [37] paper. Our metrics take a ground truth tree, T * , and an inferred tree, T , both in Newick format [6].
The Robinson-Foulds (RF) distance, RF (T , T * ), is a tree distance metric based on the induced bi-partitions in the input trees [33,4]. Each edge ∈ T is associated with a bi-partition := ( ,¯) of its leaves, using the equivalence relation ∼ if is connected to in T − , the forest formed by removing edge . The set of bi-partitions for a tree T is Bip(T ) = { : ∈ (T )}. The RF distance is then: Similarly, the quartet distance, Q (T , T * ), is a tree distance metric based on the induced quartets in the input trees [33,4]. We define the set of quartets (T ) as the set of all consistent 4-leaf sub-trees with the unrooted topology of T . Then, Finally, we used normalized versions of both RF and Q to enable comparison across different parameter settings. This normalization is implemented in TreeCmp [5] and described in their paper.
B.1 ZCNT small parsimony: dropping the integrality condition
Let be the delta matrix obtained by applying the delta transformation to each row of the copy number matrix (i.e. = Δ( ) ). Using the formulation of the ZCNT small parsimony problem as stated in (Problem 3), we can write the objective as the following mathematical program. Notice that we can rewrite the optimization objective as a linear function subject to additional constraints. Specifically, ∥ℓ ( )−ℓ ( )∥ 1 = =1 ( + − − ) when + = max{ℓ ( ) , ℓ ( ) } and − = min{ℓ ( ) , ℓ ( ) }.
And we can set + using the two linear constraints + ≥ ℓ ( ) and + ≥ ℓ ( ) ; a similar procedure works for − . Then, by dropping the integrality condition ℓ ( ) ∈ ℤ , we obtain the following equivalent linear program. Since we can solve linear programs in (weakly) polynomial time, this proves the following theorem.
Theorem 4. The ZCNT small parsimony problem can be solved in (weakly) polynomial time when the constraint that ℓ ( ) ∈ ℤ is relaxed to ℓ ( ) ∈ ℝ using a linear program with O ( ) variables and O ( ) constraints.
B.2 ZCNT small parsimony: dropping the balancing condition
When we drop the balancing condition, our problem becomes equivalent to the fixed topology rectilinear Steiner tree problem [36] on the delta profiles where the ancestral nodes lie in ℤ . While there are several algorithms for the unrooted variant of this problem when Steiner vertices are in ℝ [13,36,9], our problem is different in that i) it assumes a rooted topology ii) the Steiner vertices are required to lie in ℤ . Further, this problem has not been recently studied and we believe deserves a modern treatment. In this section, we present and prove the correctness of a linear time dynamic programming algorithm that solves the ZCNT small parsimony problem without the balancing condition.
We first observe that it suffices to analyze each locus independently. Let be the delta matrix obtained by applying the delta transformation to each row of the copy number matrix (i.e. = Δ( ) ). Let ℓ minimize the quantity ( , ) ∈ ( T ) |ℓ ( ) − ℓ ( )| and agree with the delta matrix on the leaves; that is, 20 . CC-BY-NC 4.0 International license available under a was not certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made The copyright holder for this preprint (which this version posted April 12, 2023. ; https://doi.org/10.1101/2023.04.10.536302 doi: bioRxiv preprint Algorithm 1 ZCNT small parsimony without the balancing condition Require: A binary tree T rooted at vertex , a delta matrix , an assignment of cells to leaves, and a locus . Output: A minimizer of the cost (ℓ , T ) for locus that agrees with on the leaves of T . 1: if is a leaf in T then Thus, we can compute the cost of the optimal labeling ℓ for each locus independently and sum them together to obtain the entire cost. We also introduce the quantity (T ; ) as the cost of the optimal labelinĝ ℓ of T that agrees with ℓ on the leaves of T and has root label . Our algorithm relies on the following easy to compute function on discrete intervals denoted [ , ] = { , + 1, . . . , − 1, }: Our algorithm then applies the Sank(·, ·) function in a top-down recursive fashion, to compute the interval of optimal root labelings for each node. Though this procedure is quite natural, its proof of correctness is not immediately obvious and relies on a technical (Lemma 3). was not certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made The copyright holder for this preprint (which this version posted April 12, 2023. ; The correctness of our algorithm relies on two technical lemmas whose proofs are in (Section C). Then, the correctness of our algorithm follows from induction using the following lemma.
Lemma 3. Let T be a tree whose root vertex has two children 1 and 2 . Let T 1 and T 2 denote the sub-trees rooted at 1 and 2 respectively. Then, suppose that where the first equality follows from the definition of (·; ·) and the fact that the distance is ℓ 1 . And the first inequality follows from our assumptions about (T 1 ; ) and (T 2 ; ).
We now consider the two cases of Sank ( This will prove the desired inequality of the theorem. Then, to see that [ , ] is the optimal labeling of the root it is enough to observe that the inequality is realized when ∈ [ , ]. As the proof of this inequality is rather technical and unenlightening, it is summarized in in (Lemma 1) and proven in Supplementary Proofs.
22
. CC-BY-NC 4.0 International license available under a was not certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made The copyright holder for this preprint (which this version posted April 12, 2023. ; https://doi.org/10.1101/2023.04.10.536302 doi: bioRxiv preprint Which will again prove the desired inequality of the theorem. Then, to see that [ , ] is the optimal labeling of the root it is enough to observe that the inequality is realized when ∈ [ , ]. The proof of this inequality is given in (Lemma 2). ■
23
. CC-BY-NC 4.0 International license available under a was not certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made The copyright holder for this preprint (which this version posted April 12, 2023.
The remaining two cases occur when either = or − 1 = .
is the result of applying the delta event to the profile Δ( ).
(⇐) This case is handled symmetrically. ■ Proposition 3. ′ ( , ) is a distance metric. Further, Proof. To see that ′ (·, ·) satisfies the triangle inequality, it suffices to observe that the composition of delta sequences taking to and to transforms to . It is clearly reflexive since no delta event needs to be applied to map to itself.
To see symmetry and the above equality, we observe that something stronger holds. Let Proof. We first notice that +1 =0 Δ( ) expands to a telescoping sum after applying our definition of a delta profile. Specifically, 24 . CC-BY-NC 4.0 International license available under a was not certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made where the last equality follows from the definition of absolute value. ■ Proposition 1. The delta map Δ : ℤ → D +1 is invertible.
Proof. By (Proposition 4) the range of the map Δ lies in the space of vectors in ℤ +1 satisfying the balance condition. Thus, it suffices to show that Δ is injective and can reach all vectors (i.e. is surjective) in this space.
To see that the map is injective, let , ∈ ℤ be two distinct vectors. Then, define as the first coordinate in which these vectors differ. Since is the first coordinate in which these vectors differ, −1 = −1 but ≠ . Thus, proving that Δ is injective.
To see the map is surjective, let ∈ ℤ +1 be a vector satisfying the balance condition. Define a vector ∈ ℤ such that 0 = 0 + 2, and = + −1 for ∈ {1, . . . , }. Then, Δ( ) agrees with on the first coordinates, but since and Δ( ) both satisfy the balance condition by (Proposition 4), their last coordinate is also determined, proving that Δ( ) = . ■ To see this, we perform a case analysis on . If
25
. CC-BY-NC 4.0 International license available under a was not certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made The copyright holder for this preprint (which this version posted April 12, 2023. ; . CC-BY-NC 4.0 International license available under a was not certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made The copyright holder for this preprint (which this version posted April 12, 2023. ; . CC-BY-NC 4.0 International license available under a was not certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made Proof. The proof again follows by a case analysis, but it is much simpler than in (Lemma 1).
Case 1: ∈ [ , ]
This case is trivial since dist( , [ , ]) = 0 and the left hand side of the inequality is always non-negative.
since is to the right of while is to the left of . This completes the proof. ■
28
. CC-BY-NC 4.0 International license available under a was not certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made The copyright holder for this preprint (which this version posted April 12, 2023. Figure 9) shows the distribution of normalized RF distance between trees inferred by Lazac and trees inferred by Sitka on human breast and ovarian tumour data [15]. • (Supplementary Figure 10) a table displaying the results of our concordance analysis on trees inferred by Lazac and trees inferred by Sitka on human breast and ovarian tumour data [15]. • (Supplementary Figure 11) Results of somatic SNV analysis on subset of samples from an in vitro cell line system modeling instability in human breast and ovarian tumours [15].
29
. CC-BY-NC 4.0 International license available under a was not certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made The copyright holder for this preprint (which this version posted April 12, 2023.
30
. CC-BY-NC 4.0 International license available under a was not certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made The copyright holder for this preprint (which this version posted April 12, 2023. ; https://doi.org/10.1101/2023.04.10.536302 doi: bioRxiv preprint Supplementary Figure 3: Comparison of reconstruction accuracy (left: RF distance; right: Quartet distance) CONET simulated data across distance based methods for copy number tree reconstruction with varying number of cells = 100, 150, 200, 250, 600 across four sets of loci = 1000, 2000, 3000, 4000 and seven random seeds = 0, 1, 2, 3, 4, 5, 6. As MEDICC2 was too slow to run on more than 150 cells, we exclude it from comparisons where the number of cells > 150.
31
. CC-BY-NC 4.0 International license available under a was not certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made The copyright holder for this preprint (which this version posted April 12, 2023. Figure 6: Relative difference between the ZCNT distance ( , ′ ) and the CNT distance for patient 12 from a metastatic prostate cancer tumor sample [17].
32
. CC-BY-NC 4.0 International license available under a was not certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made The copyright holder for this preprint (which this version posted April 12, 2023. ; https://doi.org/10.1101/2023.04.10.536302 doi: bioRxiv preprint . CC-BY-NC 4.0 International license available under a was not certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made The copyright holder for this preprint (which this version posted April 12, 2023. ; https://doi.org/10.1101/2023.04.10.536302 doi: bioRxiv preprint . CC-BY-NC 4.0 International license available under a was not certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made The copyright holder for this preprint (which this version posted April 12, 2023. ; https://doi.org/10.1101/2023.04.10.536302 doi: bioRxiv preprint | 2023-04-16T13:15:02.346Z | 2023-04-12T00:00:00.000 | {
"year": 2023,
"sha1": "31c25b4da78af04227f8b6bb4777faee20c65882",
"oa_license": "CCBYNC",
"oa_url": "https://www.biorxiv.org/content/biorxiv/early/2023/04/12/2023.04.10.536302.full.pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "606c516b1f5054f3794b51eebaa0b82c1981af78",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
257532950 | pes2o/s2orc | v3-fos-license | The effects of dydrogesterone treatment on first-trimester aneuploidy screening markers and nuchal translucency in women with threatened miscarriage
Objective: To evaluate the effects of dydrogesterone treatment on first-trimester aneuploidy screening markers and nuchal translucency (NT) in women with threatened miscarriage. Materials and Methods: This study is an prospective case-control study. One hundred seven pregnant women who applied for the first-trimester screening test at 11-14th weeks of gestation were included in the study. The study group consisted of 53 pregnant women using oral dydrogesterone due to the threat of miscarriage for at least 2 weeks and without vaginal bleeding for the last 72 h at the time of enrollment. The control group was composed of 54 healty pregnant women. Fetal Crown-rump length (CRL), NT, pregnancy-associated plasma protein-A (PAP-A) level, and free beta-human chorionic gonadotropin (free B-hCG) levels of the patients were measured. Results: One hundred seven patients included in the study, 54 (50.46%) were in the control group, and 53 (49.54%) were in the study group using dydrogesterone. Age, body mass index, gravida, parity and abortion numbers, gestational weeks, and CRL values of the two groups were congruent. In the comparison-free B-hCG, PAPP-A and NT values of both groups, no statistically significant difference was found between the two groups in terms of first-trimester test results and NT (p<0.05). Conclusion: The use of dydrogesterone in first-trimester pregnancies does not affect first-trimester screening tests and nuchal translucency.
Introduction
Threatened miscarriage is the most common complication of pregnancy, manifested by vaginal bleeding, while the cervix are closed before 20 weeks of gestation, occurring in approximately one-fifth of pregnant women (1)(2)(3) . In addition to its negative social and economic effects, it has an important effect on physical and psychological well-being. Research showed that the level of distress associated with miscarriage can be equivalent to the stillbirth of a full-term baby and causes post-traumatic stress disorder (4) . This situation forces physclinicians to take more stringent prevention measures. Empirically, it is attempted to be treated and prevented using progesterone supplementation, anticoagulation, and/or immunomodulatory therapies (5) . The most common practice is to prescribe progesterone (6,7) . It is now understood that progesterone is necessary for the initiation and maintenance of pregnancy at all stages (8,9) . It has been reported that progesterone deficiency causes miscarriage (10) . From the late 1900s to the present, first-trimester screening tests remain important for fetal chromosomal anomaly evaluation (11,12) . Nuchal translucency (NT) is a hypoechoic region located between the skin and soft tissues behind the cervical spine. This hypoechoic area is presumed to represent mesenchymal edema and is frequently associated with distended jugular lymphatics. In some studies, it has been stated that progesterone may cause abnormal blood flow patterns and therefore an increase in NT, but it does not change the result of the screening test (13,14) . It has been demonstrated that progesterone could cause both rapid dose-dependent relaxation of the placental vascular smooth muscle and the proliferation of cultured human vascular smooth muscle cells of the umbilical vein. Incorrect evaluation of NT, which is a sensitive marker in Down syndrome screening and is accepted as a component of firsttrimester screening tests, may lead to false-positive results. Therefore, the parameters affecting NT measurements should be thoroughly investigated. Although there are many publications on dydrogesterone, which is structurally and pharmacologically similar to microgenized progesterone, in the prophylaxis of threatened miscarriage and its use in early pregnancy (15) , as far as we know, there is no study on its effect on first-trimester screening test components. In our study, we evaluated the effect of dydrogesterone use in the first-trimester on NT, pregnancy-associated plasma protein-A (PAPP-A), and free beta-human chorionic gonadotropin (B-hCG) values, which are the components of the first-trimester aneuploidy screening test (16) .
Materials and Methods
Our study is a prospective observational case-control study conducted at University Health Sciences Turkey, Gaziosmanpaşa Training and Research Hospital on firsttrimester pregnant women who presented to our pregnant outpatient clinic between November 2022 and January 2023 for first-trimester screening tests. Participants were included in the study by invitation. Written informed consent was obtained from all pants who agreed to participate. The inclusion criteria were age 18-39 years, between 11-13.6 weeks of gestation, and confirmation of a crown-rump length (CRL) of 45 mm to 84 mm in ultrasonographic evaluation. Pregnant women with multiple pregnancies, high blood pressure, gestational diabetes, metabolic disease with vascular involvement, renal failure, and chronic drug use during pregnancy were excluded from the study. In the gynecological evaluation, pregnancies with closed cervix and vaginal bleeding were diagnosed as a threatened miscarriage. All participating pregnant women were questioned about whether they experienced the threat of miscarriage and vaginal bleeding and therefore used oral dydrogesterone. Participants who used oral dydrogesterone for at least 2 weeks and no-bleeding for the last 72 h were included in the case group. Pregnant women who used dydrogesterone for less than two weeks or whose bleeding still continued after treatment were excluded from the study. The control group was composed of 54 healthy pregnant women with the same criteria as the study group but who did not take exogenous dydrogesterone and without bleeding. The reason for taking the 2-week period as a criterion is in almost all the studies that can be compared in the literature, oral microgenized progesterone was used and the duration of use was taken as 2 weeks. To make an accurate comparison with other studies in the literature, it was sought to use at least two weeks to ensure that the duration was constant. The reason for seeking a condition of 72 h without bleeding after treatment was to eliminate the possible effect of bleeding on the test. Pregnant women who used any progesterone other than dydrogesterone were excluded from the study. In the group using dydrogesterone, it was required to have received treatment at a standard dose for at least 2 weeks. The dose recommended by the pharmaceutical company for dydrogesterone treatment was accepted as the standard dose. To standardize the study, daily use of 10 mg orally every 8 h up to 30 mg was accepted as the standard; doses higher than this were considered high doses and excluded from the study. Only oral preparations were included in the study. The pregnancy histories, medical histories, smoking habits, age, and height information of the patients were questioned, and the information was recorded. Then, measurements of all participants were made using the same ultrasonography (USG) scanner by the same single physician. Gestational age was calculated using the fetal CRL. All ultrasound examinations Sonuç: İlk trimester gebelerde didrogesteron kulanılması ilk trimester tarama testi parametrelerini ve ense saydamlığını etkilememektedir.
Anahtar Kelimeler: Didrogesteron, ense saydamlığı, düşük tehdidi, doğum öncesi tarama testi were performed with a 4.5-16.5 MHz transabdominal probe or with a 5-9 MHz transvaginal trans-ducer (Mindray DC8 Expert, Wauwatosa). In cases where it was difficult to visualize the fetus [such as with high body mass index (BMI)], the was examined 2-dimensionally (5-9 MHz). Scans were performed transvaginally using a transvaginal probe. Patients with a CRL less than 45 mm and more than 84 mm, the presence of a non-viable fetus, multiple pregnancies, the presence of major serious fetal anomalies such as anencephaly, the presence of spina bifida, and cardiac anomalies were excluded from the study. NT measurements were performed thrice for each patient and the highest value was recorded (15) . Measurements were made in the sagittal plane, whereas the fetus was in a neutral position, with clear separation of the amnion membrane, and after magnification on the USG screen to cover the fetal head and upper thorax. Patients NT values greater than 2.5 mm were excluded (which is roughly equivalent to the 95 th percentile (15) . Immediately after USG evaluations, blood samples for PAPP-A and free B-hCG were taken from the patients. Ethical approval was obtained from the Ethics Committee of University Health Sciences Turkey, Gaziosmanpaşa Training and Research Hospital before starting the study (date: 23/11/2022, no: 145) and was carried out in accordance with the Declaration of Helsinki.
Statistical Analysis
In the power analysis performed before starting the study, it was found appropriate to detect a difference of at least 0.25 (medium level) effect size between the groups, with 80% power and 5% type error, with a total of 106 people, and a minimum of 53 people in each group. The calculation was made using the G*Power 3.1.9.7 program. It was aimed to reach the numbers specified as the study endpoint. The normality of the distribution of continuous variables was evaluated using the Shapiro-Wilk test. Student's t-test was used to compare normally distributed variables, and the Mann-Whitney U test was used for non-normally distributed variables. Spearman Rho correlation coefficients were calculated to examine the linear relationship between continuous variables. Fisher's Exact test was used in the analysis of categorical data. The analysis of the data was performed using the IBM SPSS 21 program.
Results
Of the 107 patients included in the study, 54 (50.46) were in the control group and 53 (49.54%) were in the dydrogesterone group. The demographic characteristics of the participants in the groups are presented in Table 1. There was no statistically significant difference between the two groups in terms of Table 2. There were no significant differences in NT MoM levels, maternal serum PAPP-A, and free B-hCG MoM levels between the study and control groups. Both groups were divided into four subgroups to determine the relationship between NT and dydrogesterone use for specific CRL measurements. CRL in the first, second, third, and fourth groups were 45-54 mm, 55-64 mm, 65-74 mm, and 75-84 mm, respectively. The NT, PAPP-A, and free B-hCG MoM levels of the groups are shown in Table 3. There were no significant differences in NT levels, maternal serum PAPP-A, and free B-hCG levels between the study and control groups for all subgroups of CRL measurements. When examining whether fetal NT measurements were related to maternal and fetal parameters, no significant relationship was found between these parameters and NT MoM values. A negative significant relationship was found between NT MoM and CRL mm only in drug users and non-users (p<0.05). This means that as CRL measurements increase, regardless of drug use, the NT MoM value decreases (Table 4).
Discussion
In our study, we investigated the effects of dydrogesterone treatment on first-trimester aneuploidy screening markers and NT in women with threatened miscarriage. We found similar NT levels in patients who used dydrogesterone because of the threat of miscarriage and in the control group that did not. We also found similar levels of PAPP-A and free B-hCG between the groups. The use of dydrogesterone in first-trimester pregnancies does not affect first-trimester screening tests and nuchal translucency. The use of first-trimester screening tests is very common because it is non-invasive. It does not require special training other than NT measurement, it can be performed as soon as the patient is seen in outpatient clinics, and it is still one of our powerful weapons as a screening test. It is of great importance that it is performed meticulously and its accuracy has increased. Therefore, the factors that may affect the parameters should be investigated thoroughly. In the study that was the starting point of our study, Giorlandino et al. (13) were the first to examine the relationship between first-trimester progesterone therapy and the fetus in a total of 3.716 pregnant women and they found that the use of exogenous progesterone increased NT. Moreover, they showed that this increase was independent of maternal age, BMI, smoking, and gestational age. They stated that this increase did not change the results of first-trimester screening tests. However, although they evaluated many progesterone formulations in their studies, dydrogesterone was excluded. Additionally, in the correspondence with Bellver et al. (17) on the subject after these studies, it was stated that the NT increase was only in the 11 th week, and it did not occur in the following weeks. Again, in the same correspondence, it was emphasized that different formulations might change the fetal circulation differently and thus affect NT. Based on this criticism, we included pregnant women who used a single preparation for the same period in our study.
In the study by Namlı Kalem et al. (18) in 2018, 121 pregnant women using intravaginal progesterone for assisted reproduction treatment (ART) and 124 healthy pregnant women who did not use progesterone were compared and it was found that NT increased in the progesterone group and this increase was statistically significant. However, in this study, the women who became pregnant spontaneously and who did not use drugs were chosen as the control group and compared with the case group, which became pregnant with assisted reproduction. Whether assisted reproduction pregnancy itself affects NT is unknown.
To exclude this factor from our study, we included pregnant women who became pregnant spontaneously and excluded the ART group from the study. In a study by Keçecioğlu et al. (14) that was designed retrospectively in 2016, groups that did and did not receive micronized progesterone were compared and it was shown that oral progesterone treatment increased only NT and free B-hCG values without causing abnormal results in the test result. In their ROC analysis, the area under the curve for NT was found as 0.634, which was distinctive, and a correlation was found between treatment time and NT (14) .
In a study conducted in 2021, Karadağ et al. (19) divided case and control groups by dividing women who did and did not use progesterone into subgroups according to their CR, not according to their gestational week. As a result, they found no differences in MoM levels of NT, PAPP-A, and free B-hCG in all CRL groups (19) . In the study of Karaca et al. (20) in 2022, which we found as the most recent in the literature, the participants were divided into three groups according to bleeding and progesterone use, and they found that free B-hCG increased in the group that used progesterone regardless of whether the women had bled, compared with the group that did not use progesterone.
Study Limitations
The main limitation of our study is The small number of patients and not being compared with other progesterone types. New works with other types of progesterone can be detailed. There are no detailed studies on the duration of progesterone use in the literature. New studies can be performed by comparing the duration of use. The other limitation is the dose regimen of dydrogesterone; we gaved all patients orally 30 mg/day dydrogesterone, but the patient weight was different. Although we excluded in the study the patients >30 kg/m 2 BMI, different weights of patients could obtain different dose regimens.
Conclusion
The strength of our study is that the speculative hypothesis that the use of progesterone affects fetal circulation and increases nuchal translucency, thus increasing the risk in first-trimester screening tests, is not true.However, all of these studies were performed either with micronized progesterone or with other progesterone preparations that excluded dydrogesterone. There is no other study in the literature comparing NT, PAPP-A, and free B-hCG levels with the use of dydrogesterone. Oral use is very convenient and preferred compared with vaginal use in women with bleeding. Additionally, because it did not change any parameters, it had no negative effect on the reliability of the first-trimester screening test and the objectivity of the NT value, which provides an advantage in terms of use in first-trimester pregnancies. If our study is supported by studies involving larger numbers of participants, we think that dydrogesterone will come to the forefront during pregnancy because it does not affect first-trimester tests. | 2023-03-12T15:18:28.926Z | 2023-03-01T00:00:00.000 | {
"year": 2023,
"sha1": "2fff5908fcb978ec0cd7bc5153477402d752cbd2",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "30e8f28c57a2b3ab9050735aaef1cd99b7a0e634",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
234553367 | pes2o/s2orc | v3-fos-license | The Aesthetic Education Methods of Fashion Design Majors in Colleges and Universities—Taking Clothing and Design Faculty of Minjiang University as an Example
Chinese colleges and universities attach much importance to aesthetic education in developing students’ quality education, especially for those students majoring in fashion design. The objective of this study is to explore the aesthetic education methods of fashion design majors in colleges and universities. Taking Clothing and Design Faculty of Minjiang University as an example, this article promotes how to enhance aesthetic education, optimize the construction of its courses, enrich students’ extracurricular life and strengthen teaching staff construction, then improve integration of regional traditional culture and aesthetic education, and finally reinforce the social service ability of aesthetic education. By this means, the college both practices students’ artistic innovation and aesthetic ability, and contribute to the growth of teaching staff, which enhance the influence of the college and hopefully provide a reference for others.
INTRODUCTION
Aesthetic education is an education that cultivates students' ability to recognize, love, and create beauty [1]. In recent years, the country has accelerated its pace in promoting aesthetic education. In August 2018, General Secretary Xi Jinping replied to the eight old professors of the Central Academy of Fine Arts, and emphasized that we need to do a good job in aesthetic education and promote the spirit of Chinese aesthetic education. In March 2019, the document, Opinions on Effectively Strengthening the Work of Aesthetic Education in Institutions of Higher Learning in the New Era, issued by the Ministry of Education, pointed out that it was an important task for higher education, currently and also in the coming future, to cultivate students' aesthetic and humanistic qualities, and to improve aesthetic education quality in every aspects. The document also emphasized that aesthetic education of colleges should focus on three important areas: universal art education, professional art education, art teacher education in colleges. This means that professional art education occupies a pivotal position in the reform of aesthetic education. Fashion design refers to artistic expression and structural modeling of clothing line, color, tone, texture, light, space, etc. It is an art form that combines practicality and artistry, and it carries people's pursuit of beauty. With the continuous need of people's toward quality of life, people's aesthetics of clothing are gradually changing. The quality of the fashion design directly affects the sales of the clothing. On the other hand, good clothing design can indirectly enhance the aesthetic taste of customers, which puts forward higher requirements on fashion design. At the forefront of aesthetic education, fashion design majors in colleges and universities need to develop students' aesthetic quality comprehensively while improving students' professional skills. Through classroom teaching, campus life experience, social practice and other ways to carry out aesthetic education, students can comprehend the "truly beautiful things" in the elegant works of art, understand the true connotation of aesthetic education, and enhance their abilities to appreciate beauty. Students also can expand their fashion horizons, improve their creativity, and eventually form their unique design ideas and styles. Moreover, students can cultivate temperament, establish correct values [2], and understand beauty is not only about the choice of material materials, but also the enrichment of the spiritual world and dedication to society. Currently, fashion design has made significant achievements in aesthetic education. But there are still some problems, including the unreasonable curriculum design of aesthetic education, insufficient construction of aesthetic education teachers, lack of aesthetic education activities and social service capacity, etc. [3]. This article took the Clothing and Design Faculty of Minjiang University as the research object, discussed the method of aesthetic education in fashion design, and had achieved certain results, hoping to provide a reference for the implementation of aesthetic education in fashion design majors in colleges and universities.
Paying more attention to aesthetic education
The Clothing and Design Faculty of Minjiang University fully implemented the spirit of guided education policy of the Chinese Communist Party and General Secretary Xi Jinping's speech on aesthetic education. The college paid more attention to aesthetic education based on the current status of the aesthetic education work of the fashion design majors in colleges and universities. The primary goal of college education was taking the improvement of students' aesthetics and humanistic quality. The college highly valued the role of aesthetic education in talents cultivation, and paid attention to the construction of the college's connotation, highlighted the characteristics of fashion design teaching, insisted on educating people through aesthetics and culture, and integrated the cultivation and practice of socialist core values into the entire process of college aesthetic education. Art activities with diverse forms have been carried out frequently, such as the annual joint exhibition of cross-strait college students' clothing design works, clothing matching contests, Beijing College Student Fashion Week and other competitions, attracting students' high engagement, and eventually creating a lively artistic atmosphere in campus. The college had increased its investment in aesthetic education and paid attention to the renovation of the interior decoration, including the construction of aesthetic education teaching practice areas and the display of artworks in public places to create a good aesthetic education atmosphere. There are the most complete teaching and experimental conditions for the textile and garment subject on the west side of the Straits, including textile and clothing materials and textile testing laboratories, clothing technology laboratories, image design laboratories, clothing CAD laboratories, physical training laboratories and clothing ergonomic laboratories, etc.. These laboratories provide teachers and students with valuable teaching services integrating modern teaching, scientific research, development and training, which play an essential role in the process of cultivating students' aesthetic education ability and innovation awareness.
Optimizing the construction of aesthetic education courses
The introduction of aesthetic education into the classroom is one of the important signs of the implementation of school aesthetic education, and curriculum construction is directly related to the quality of talent training [4]. The college paid great attention to the formulation of undergraduate talent cultivation programs. Based on the investigation of the talent needs of the textile and apparel industry in Fujian Province, it had established a collaborative education model and school running mechanism with all-round participation of the industry and enterprises. The talent cultivation program was formulated according to the three steps of investigation, demonstration and practice, and puts aesthetic education into the focus. The college optimized the construction of aesthetic education courses of the two art majors, which called the Fashion Design and Engineering and Fashion Design, added the aesthetic content and offered the new course Fashion Aesthetics to the 2019 undergraduate talent cultivation plan. By increasing the elective courses related to aesthetic education, the college provided students with a variety of learning materials and created a strong learning atmosphere. Public elective courses such as Clothing Color Science, Appreciation of Fine Arts, and Appreciation of Taiwan Minority Costumes were offered to all students in the school, creating a good aesthetic education environment and humanistic atmosphere. Teachers delved into the content of the course. According to the actual situation of each class of students, they comprehensively considered the actual needs of the society, then chose the contemporary, representative and classic knowledge to enrich the aesthetic education content, so that each course had its inherent commonality and forms an organic whole. In teaching practice, teachers infiltrated knowledge related to aesthetic education, such as explaining the cases of fashion design, guiding students to read fashion magazines and watching fashion shows, etc. to create aesthetic situations and cultivate students to experience and appreciate beauty [5]. At the same time, teachers fully excavated the aesthetic education factors of various professional courses and incorporated the essence of aesthetic education spirit into the teaching process of professional core courses such as Introduction to Art, Costume Material Science, and Design Performance Techniques. While improving their professional skills, students could feel the profound heritage of Chinese culture through aesthetic education, thereby enhancing their aesthetic perception and creativity. The college implemented policies and measures to encourage teachers to actively participate in the reform of teaching methods, and actively promoted the construction of aesthetic education courses and the reform and innovation of teaching methods. Teachers had improved the level of teaching by conducting various teaching seminars to discuss and share good teaching methods. Through active exploration of heuristic, inquiry, discussion, and participatory teaching, teachers could fully mobilize students' learning enthusiasm and encourage students to learn independently.
Enriching students' extracurricular life
Aesthetic education is ubiquitous. It is not only reflected in classroom teaching in schools but still has its meaning in after-school life. The college took advantage of the professional benefits of fashion design to implement the plan of college aesthetic education infiltration action. To enrich students' after-school experience, the college organized several activities, such as actively Advances in Social Science, Education and Humanities Research, volume 505 carrying out cultural and sports activities like DIY, Turning Waste into Treasure, Wonderful Room Dormitory Decoration Contest. Students used their professional expertise in these activities to process and transform waste materials in life into exquisite handicrafts, calligraphy, and painting works. These works with rich cultural connotations and artistic flavour would also subtly influence students' aesthetic consciousness and enhance aesthetic taste. With the goal of cultivating students' innovative spirit and practical ability, the college strengthened the construction of undergraduate aesthetic education. The college actively encouraged students to participate in various discipline competitions after class, and had held competitions in related disciplines such as new clothing reforms and clothing knowledge competitions for three consecutive years. In addition, the college formulated related management measures, allocated funds, combined subject competitions with curriculum reforms, and used courses to drive subject competitions to provide a guarantee for cultivating college students' aesthetic quality, awareness of independent innovation, and practical ability. These valuable experiences could significantly stimulate students' sense of innovation and the students would appreciate the beauty of learning and creation from the competition process. The teachers often organized or encouraged students to go to parks or mountains to breathe fresh air and capture the beauty of nature. In these after-school activities, students could extract creative and design elements from nature, so that they could unconsciously get creative inspiration and enrich their own life experience.
Strengthening the construction of aesthetic education teachers
The teaching of aesthetic education courses in colleges and universities should pay attention to the improvement of students' artistic accomplishment and aesthetic ability. It is necessary to cultivate students' appreciation and artistic practice ability from multiple angles and channels in subject teaching and practice. This determines that the team of aesthetic education teachers must have a higher comprehension of the basic qualities of qualities and various arts [6]. In strengthening the construction of aesthetic education teachers, the college strengthened the training of young teachers and organized teachers to participate in aesthetic education theory training. By implementing a training system for young teacher tutors, standardizing the teaching process and strengthening quality control and establishing a good environment, teachers could better engage in aesthetic education teaching. On the other hand, the college continuously optimised the structure of the teacher team, systematically organized teachers to visit first-class universities at home and abroad for further study and training. Moreover, through various forms such as encouraging teachers to work on the job in the company, participating in the research and development of corporate topics, it could effectively improve the teachers' teaching level, practical ability and aesthetic education research ability. It was more able to guide the inter-disciplinary talent team with the ability of students to carry out aesthetic practice activities.
Promoting the integration of regional traditional culture and aesthetic education
The document, Opinions on Effectively Strengthening the Work of Aesthetic Education in Colleges and Universities in the New Era, clearly pointed out that Chinese excellent traditional cultural education was the foundation of the school's aesthetic education. The hardworking on the extraction, transformation and integration of traditional culture and art plays the key role in promoting the spirit of Chinese aesthetic education. The college gave full play to the professional advantages of fashion design, and dug deep into the cultural heritage and aesthetic concepts in the distinctive clothing of Fujian Province. For example, the college protected, inherited and promoted the spinning, weaving, dyeing, and embroidery skills of She nationality costumes, Huian women's costumes, Hakka costumes, Meizhou women's costumes, Fujian local opera costumes, and other textile and garment intangible cultural heritage. In addition, the college also actively organized activities such as national costume exhibitions and intangible cultural heritage costume cultural protection to continuously improve the aesthetic ability of college students, and enhanced the national cultural identity and cultural confidence of students. In September 2019, the School of Fashion and Art Engineering held an exhibition of intangible cultural heritage and traditional costumes in Yongtai, Fuzhou. There were a total of 102 sets of works in this exhibition, mainly including intangible cultural heritage costumes (dresses) of ethnic minorities such as the She and Miao nationalities, and the works of outstanding graduates such as Hui'an women and Meizhou women's costumes (dress) with Fujian regional characteristics. This clothing and apparel exhibition not only fully demonstrated the charm of Fujian intangible heritage and traditional clothing, but also demonstrated the efforts of the students of the school of clothing and design faculty in promoting the integration of regional traditional culture and aesthetic education. The college strived to build a unique exhibition hall and completed the upgrading and transformation of the crossstrait clothing cultural exchange experience centre in 2019. The exhibition mainly displays clothing textiles and clothing products on both sides of the Taiwan Strait and has the function of publicizing and exchanging clothing culture. At present, the exhibits on display at the centre include four themes, traditional clothing in Fujian Province, traditional clothing in Taiwan, some ethnic minority clothing, and creative design works of modern clothing. Specifically, the exhibits include the She nationality costumes, Hakka costumes, Minnan costumes and Hui'an women's costumes with regional characteristics in Fujian
Province, and the aboriginal costumes of Taiwan. The pavilion is open to schools and society in an orderly manner, providing students with a place for internships and assisting teachers in scientific research. At the same time, the display of exhibits promotes regional traditional clothing culture and contributes to cross-strait cultural exchanges.
Enhancing social service capabilities
The college translated its school aesthetic education plan into public action. Relying on activities such as the Youth Red Dream Building Journey-She Nationality Skills and the National Costume Design Competition, the college actively developed voluntary aesthetic education services and the promotion of traditional culture, encouraging college students to participate in services for communities, rural areas and schools. Exchange and cooperation activities between college teachers, students and folk craftsmen were also carried out regularly. Fashion design majors and high-level student art clubs took advantage of the collision and integration of modern college education and folk traditional arts, playing an important role in and actively participating in joint educational activities for the Maritime Silk Road and the Belt and Road. In 2018, the college signed agreements with the Minhou County Cultural Center, Tanhou Village of Linglu Township in Yongtai County, and the Nanyang Village Committee of Baita Township in Luoyuan County, among others. The college then conducted She ethnic skills training for the local She people and launched training courses that integrate blue dyeing, embroidery, weaving and other skills with cultural and creative products. By guiding the people of She villages to make handicrafts with ethnic characteristics, the college inherited She culture, cultivated inheritors of She ethnic skills, helped village revitalization, and gave full play to the social service function of aesthetic education. Students used social practice and professional teaching practice during winter and summer vacations as a platform to participate in various social practices and give full play to their artistic expertise. To better carry out social practice, the college organizes the writing of voluntary service and social practice summaries every academic year and incorporates the results of these activities into the comprehensive examination system for college students. The college also established and improved its social service organizational structure: each team included at least three professional teachers, and each member of the practice group served no fewer than 48 hours per year. In addition, the college created the "Colorful Neon Clothes" aesthetic education charity volunteer brand and organized an explanation team for the cross-strait clothing culture exchange and experience centre to regularly carry out clothing culture experience activities for all students, children of faculty and staff, and local primary and secondary schools, kindergartens and neighborhood communities. These activities played a significant role in promoting excellent Chinese traditions and intangible cultural heritage costumes. For example, in 2019 teachers and students went to Yangyuan Township in Nanping to carry out supportive education and social practice, bringing culture and art courses to more than 100 children at Yangyuan Central Primary School. Through the integration of aesthetic education into practice, teachers and students were actively guided to strengthen their awareness of serving society and to enhance their social service capabilities.
The effect on students
Through the college's series of aesthetic education activities, students not only improved their professional skills in fashion design, but also enhanced their artistic innovation and aesthetic abilities. Students repeatedly achieved good results in many national, provincial and ministerial-level clothing design competitions, such as the Hanbo Cup China International Young Designers Competition, the China Fashion Designer Newcomer Award, and the Jeanswest Cup. In the final course works and graduation design works of the last two cohorts of students, the students' creativity, imagination and ability to complete costume designs had improved significantly. Graduate employment statistics showed that the college's overall employment situation was good, with an employment rate of over 97% and an overall upward trend. Some students set up their own studios after graduation, and some graduates successfully entered large clothing companies such as Anta, Septwolves and Ports, where they performed well and were well received by the enterprises. Through aesthetic education, the college has delivered outstanding fashion design professionals to Fujian Province.
The effect on teachers
The teachers' artistic accomplishment, ability to guide students, and project development ability improved continuously. Many teachers were approved for provincial-level projects and transformed their outstanding achievements to further promote aesthetic education. Among them, the She Ethnic Costume Culture inheritance project was approved as a base for the inheritance of China's outstanding traditional culture in universities in Fujian Province. In addition, the college's aesthetic education faculty was enriched: at present, 4 teachers are studying for a doctoral degree and 2 young teachers are visiting abroad, and the subject team has been further strengthened.
The effect on the college
The college formed a characteristic artistic brand project and shaped a campus cultural brand. Its continuous research on the national intangible cultural heritage of Siping Opera costumes was widely reported by media and websites such as People's Daily Online, China News Network, Xinhua Net, International Online and Fujian Daily, which increased the college's influence in the apparel industry. At the same time, this will further promote the development of the school's artistic characteristics and provide a reference for the development of other colleges and universities.
CONCLUSION
Comprehensively strengthening and improving aesthetic education remains an essential task of higher education at present and in the future. The nature of fashion design determines that it must undertake greater missions and responsibilities for aesthetic education in the new era, so promoting aesthetic education in all its aspects is a must if the fashion design majors of colleges and universities are to deliver qualified artistic talents to society. We must thoroughly understand the importance of aesthetic education, continually summarize experience in the process of aesthetic education for fashion design majors in colleges and universities, and cultivate socialist builders and successors who are comprehensively developed in moral, intellectual, physical and artistic aspects. | 2020-12-24T09:12:37.790Z | 2020-12-16T00:00:00.000 | {
"year": 2020,
"sha1": "8782331d3c8465c116f3af62b622a2516ca11016",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.2991/assehr.k.201214.145",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "6c4046368a778101389ec2a55a2e133e845f13aa",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Engineering"
]
} |
245923553 | pes2o/s2orc | v3-fos-license | Ghrelin Promotes Ovarian Cancer Cell Autophagy via LncRNA linc00598
Ruixia Bai (Inner Mongolia People's Hospital); Wanying Song (Xinganmeng People's Hospital); Yan Cui, Haining Gao, Yuxing Zhao, Jingkun Lu, Xuan Zhang, Mei Hong, Peng Sun, Mingyu Zhang, Jiaxian Cui, Xiumei Wang (Inner Mongolia Medical University); Pengwei Zhao, corresponding author (pengwzhao@126.com), Inner Mongolia Medical University, https://orcid.org/0000-0002-2526-5705
Introduction
Ovarian cancer is a common cause of gynecologic cancer death worldwide and a major threat to women's health (Wu, Gao et al. 2021; Xie, Wang et al. 2021), with a large number of women dying from the disease each year. The overall 5-year survival rate of ovarian cancer is about 47%. However, patients typically lack symptoms in the early stage (Wang, Chen et al. 2021; Wei, Lv et al. 2021), so more than two thirds of cases are diagnosed at an advanced stage, at which the 5-year survival rate is less than 25%. Hence, the diagnosis and treatment of ovarian cancer remain a major challenge. Ghrelin is one of the endogenous ligands of the growth hormone secretagogue receptor; it promotes the secretion of growth hormone and has proven orexigenic and adipogenic effects (Leng, Zhao et al. 2021). A positive effect of ghrelin on bone metabolism has also been observed, as has its regulation of ovarian cancer cell proliferation. Autophagy clears damaged organelles and proteins in normal cells (Sun, Hu et al. 2020), while in apoptotic cells it can prevent necrosis and local inflammation (Meng, Zhou et al. 2020; Cai, An et al. 2021). Hence, in this study, the effect of ghrelin on autophagy in the ovarian cancer cell line SK-OV-3 was explored, and the lncRNA that regulates this effect was identified.
RNA extraction and sequencing
Sequencing libraries were generated using the TruSeq RNA Sample Preparation Kit (Illumina, San Diego, CA, USA). Briefly, mRNA was purified from total RNA using poly-T oligo-attached magnetic beads. Fragmentation was carried out using divalent cations at elevated temperature in an Illumina proprietary fragmentation buffer. First-strand cDNA was synthesized using random oligonucleotides and SuperScript II. Second-strand cDNA synthesis was subsequently performed using DNA Polymerase I and RNase H. Remaining overhangs were converted into blunt ends via exonuclease/polymerase activities and the enzymes were removed. After adenylation of the 3′ ends of the DNA fragments, Illumina PE adapter oligonucleotides were ligated to prepare for hybridization. The library fragments were purified using the AMPure XP system (Beckman Coulter, Beverly, CA, USA). DNA fragments with ligated adaptor molecules on both ends were selectively enriched using the Illumina PCR Primer Cocktail in a 15-cycle PCR reaction. Products were purified (AMPure XP system) and quantified using the Agilent High Sensitivity DNA assay on a Bioanalyzer 2100 system (Agilent). The sequencing library was then sequenced on a HiSeq platform (Illumina) by Seqhealth Technology Co., Ltd., Wuhan, China. Differentially expressed genes were then identified by applying an FDR cutoff of 0.05.
Gene function annotation and pathway analysis
Identification of enriched KEGG pathways in the up-regulated and down-regulated gene lists was performed with DAVID (Database for Annotation, Visualization, and Integrated Discovery) v6.8. Pathway analysis was performed using Ingenuity Pathway Analysis (IPA) to infer the functional roles and relationships of the differentially expressed genes based on the log2 fold-change value of each gene.
Cell transfection
The mimic of miR-378b as well as the negative control (NC mimic) were purchased from Invitrogen (Carlsbad, CA, USA). The full length of SNHG10 ligated into the pcDNA3.1 vector (pcDNA3.1-SNHG10) was purchased from GenePharma (Shanghai, China), with the empty plasmid as negative control (pcDNA3.1). Cell transfection was conducted using Lipofectamine 2000 (Thermo Fisher Scientific, Waltham, MA, USA).
After 48 h, cells were harvested for the subsequent experiments.
Real-Time qPCR analysis
Total RNA was extracted from SK-OV-3 cells using the commercial TRIzol reagent (Invitrogen, USA) following the manufacturer's instructions. The quality of the total RNA was checked by agarose electrophoresis, and the RNA was reverse transcribed into complementary DNA (cDNA) using the TIANScript RT kit (Tiangen Biotech, China). The SYBR Premix Ex Taq II Kit (Takara, Japan) was then used to determine relative gene expression levels. The forward primer sequence for linc00598 is CCTCCCCTACTATCAACATCCC and the reverse primer is TGCCAAGAACGAGCCCTA. Expression levels were normalized to GAPDH (forward primer CAATGCCTCCTGCACCACCAACTGC and reverse primer GCAGTTGGTGGTGCAGGAGGCATTG).
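The paper reports relative expression normalized to GAPDH but does not state the quantification formula; the 2^-ΔΔCt method is the standard choice for SYBR-based relative qPCR. The sketch below is a minimal Python illustration of that calculation under this assumption, with all Ct values hypothetical.

```python
import numpy as np

def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt method (assumed, not stated in the paper)."""
    d_ct_treated = np.asarray(ct_target) - np.asarray(ct_ref)        # normalize to GAPDH
    d_ct_control = np.asarray(ct_target_ctrl) - np.asarray(ct_ref_ctrl)
    dd_ct = d_ct_treated - d_ct_control.mean()                       # calibrate to control mean
    return 2.0 ** (-dd_ct)                                           # fold change vs control

# Hypothetical triplicate Ct values for linc00598 and GAPDH
fold = relative_expression([24.1, 24.3, 24.0], [18.2, 18.1, 18.3],
                           [26.5, 26.4, 26.6], [18.0, 18.2, 18.1])
print(fold.round(2))  # e.g., roughly 5- to 6-fold up-regulation vs control
```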
Statistical analysis
Statistical analyses were performed using GraphPad Prism 8.0 (GraphPad Software, San Diego, CA, USA). Measurement data are expressed as mean ± standard deviation. Independent-samples t tests were used for comparisons between two groups, and one-way analysis of variance (ANOVA) followed by Tukey's post hoc tests was used for comparisons among multiple groups. Comparison of data between groups at different time points was performed using two-way ANOVA. p < 0.05 indicated statistical significance.
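As a minimal sketch of this workflow in Python (SciPy and statsmodels standing in for GraphPad Prism), the example below runs an independent-samples t test for two groups and a one-way ANOVA with Tukey's post hoc test for three groups; all group values are hypothetical.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
control = rng.normal(1.00, 0.08, 3)   # hypothetical normalized protein levels, n = 3
ghrelin = rng.normal(1.85, 0.10, 3)
antag   = rng.normal(1.20, 0.09, 3)   # ghrelin + D-Lys3-GHRP6

# Two groups: independent-samples t test
t, p = stats.ttest_ind(control, ghrelin)
print(f"t = {t:.2f}, p = {p:.4f}")

# Multiple groups: one-way ANOVA followed by Tukey's post hoc test
f, p_anova = stats.f_oneway(control, ghrelin, antag)
print(f"F = {f:.2f}, p = {p_anova:.4f}")
values = np.concatenate([control, ghrelin, antag])
groups = ["control"] * 3 + ["ghrelin"] * 3 + ["ghrelin+GHRP6"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```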
Ghrelin is expressed at low levels in ovarian cancer tissues
To assess ghrelin expression in ovarian cancer tissues, the GEPIA database was analysed. Ghrelin expression was lower in ovarian cancer tissues than in normal tissues (Fig. 1A). Likewise, in the HPA database ghrelin was lowly expressed in ovarian cancer tissues compared with normal tissues (P < 0.05, Fig. 1B). Hence, ghrelin is expressed at low levels in ovarian cancer tissues.
Ghrelin promoted autophagy and inhibited viability in SK-OV-3 cells
To determine the optimal concentration of ghrelin for SK-OV-3 cells, the cells were treated with ghrelin at concentrations of 400, 500, 600, 700 and 800 ng/ml. Compared with the blank control group, the cell survival rates of the 400, 500, 600, 700 and 800 ng/ml groups decreased at 24 h, 48 h and 72 h (P < 0.01). When SK-OV-3 cells were treated with 600 ng/ml ghrelin for 24 h, 48 h or 72 h, the cell survival rates were around 50% (Fig. 2A). Hence, 600 ng/ml was taken as the optimal ghrelin concentration and 24 h as the optimal treatment time.
The influence of ghrelin on SK-OV-3 cell autophagy was assessed by detecting the expression of Beclin-1 and LC3 (Fig. 2B). The expression of Beclin-1 and LC3 in SK-OV-3 cells treated with 600 ng/ml ghrelin for 24 h was higher than in untreated cells (Fig. 2C&D). To further explore the function of ghrelin in SK-OV-3 cell autophagy, D-Lys3-GHRP6, a ghrelin receptor antagonist, was used. The expression of Beclin-1 and LC3 in SK-OV-3 cells treated with ghrelin alone was higher than with ghrelin + D-Lys3-GHRP6 treatment (Fig. 2E-G).
Linc00598 identified by RNA-Seq as mediating the effect of ghrelin on SK-OV-3 cell autophagy

To identify the lncRNAs mediating the effect of ghrelin on SK-OV-3 cell autophagy, SK-OV-3 cells treated with ghrelin were analysed by RNA-Seq. Differential expression of lncRNAs was analysed with DESeq. The screening criteria for differentially expressed genes were |log2 fold change| > 1 and P < 0.05. Compared with the control group, 236 lncRNAs were differentially expressed in the ghrelin treatment group (130 up-regulated and 106 down-regulated). The volcano plot shows the distribution of the lncRNAs together with the fold change and significance of their differential expression (Fig. 3A); red dots represent significantly up-regulated lncRNAs, blue dots significantly down-regulated lncRNAs, and gray dots lncRNAs without significant differential expression.
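A minimal Python/pandas sketch of this screening step is shown below; the input file and column names (gene_id, log2FoldChange, pvalue) are assumptions, since the paper specifies only the DESeq thresholds.

```python
import pandas as pd

# Hypothetical DESeq output: one row per lncRNA
res = pd.read_csv("deseq_lncRNA_results.csv")

# Screening criteria from the text: |log2 fold change| > 1 and P < 0.05
sig = res[(res["log2FoldChange"].abs() > 1) & (res["pvalue"] < 0.05)]
up = sig[sig["log2FoldChange"] > 0]
down = sig[sig["log2FoldChange"] < 0]
print(f"{len(sig)} differential lncRNAs: {len(up)} up, {len(down)} down")

# Flag the candidate reported in the paper
print(sig[sig["gene_id"].str.contains("linc00598", case=False, na=False)])
```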
The GO enrichment analysis results for the differentially expressed lncRNAs were classified according to molecular function (MF), biological process (BP) and cell component (CC). The top 10 GO terms with the smallest p-values, i.e., the most significant enrichment, were selected for display in each GO category (Fig. 3B): the orange histogram represents the 10 most significantly enriched cell components, the green histogram the 10 most significantly enriched molecular functions, and the blue histogram the 10 most significantly enriched biological processes. In the GO enrichment results, the degree of enrichment is measured by the rich factor, the FDR value and the number of lncRNAs enriched on each GO term. The larger the rich factor, the greater the degree of enrichment; the FDR generally ranges from 0 to 1, and the closer it is to zero, the more significant the enrichment. The first 20 GO terms with the lowest FDR values were selected for display (Fig. 3C). GO enrichment analysis showed that the differentially expressed lncRNAs were mainly related to arginine and lysine transmembrane transport, the oxidative stress response and developmental processes.
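The rich factor described above is simply the fraction of genes annotated to a term that are differentially expressed; the sketch below computes it and ranks terms by FDR. The input table and its column names are hypothetical.

```python
import pandas as pd

# Hypothetical GO enrichment table: term, category (MF/BP/CC), de_count, bg_count, FDR
go = pd.read_csv("go_enrichment.csv")

# Rich factor = DE genes annotated to the term / all genes annotated to it
go["rich_factor"] = go["de_count"] / go["bg_count"]   # larger = stronger enrichment

# Terms with FDR closest to zero are the most significantly enriched
top20 = go.nsmallest(20, "FDR")
print(top20[["term", "category", "rich_factor", "FDR"]])
```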
The top 20 pathways with the lowest p-values from the KEGG enrichment analysis of the differentially expressed lncRNAs are displayed in Fig. 3D. The pathways mainly involve four aspects: environmental information processing (orange), human diseases (green), metabolism (blue) and organismal systems (purple). From the KEGG enrichment results, the top 20 KEGG pathways with the lowest FDR values were selected (Fig. 3E). KEGG enrichment analysis showed that the differentially expressed lncRNAs were mainly enriched in cytokine receptor signaling pathways, glucagon and insulin signaling pathways and cancer-related signaling pathways.
Based on target gene prediction and the GO and KEGG pathway enrichment results, linc00598 was selected as a candidate related to autophagy; it was differentially up-regulated with a fold change > 2. The expression of linc00598 in SK-OV-3 cells was verified by qPCR and was higher in the ghrelin-treated group than in the blank group.
Linc00598 promotes ghrelin-induced autophagy in SK-OV-3 cells
To show the function of linc00598 in SK-OV-3 cell autophagy, linc00598 was silenced or overexpressed (Fig. 4A&B). The expression of Beclin1 and LC3 in the linc00598-silenced group (si-linc00598) was lower than in the control and si-NC groups (blank plasmid; Fig. 4C,D,F). In the linc00598-overexpressing group (h-linc00598), however, the expression of Beclin1 and LC3 was high (Fig. 4C,E,G). The effect of linc00598 silencing or overexpression on ghrelin-induced autophagy in SK-OV-3 cells was also explored. The expression of Beclin1 and LC3 in the ghrelin + si-linc00598 group was lower than in the ghrelin group (Fig. 4A,B,D), and in the ghrelin + h-linc00598 group it was higher than in the ghrelin group (Fig. 4A,C,E).
Discussion
Ovarian cancer has a high mortality rate. In this study, both ghrelin and linc00598 promoted SK-OV-3 cell autophagy. Linc00598 was silenced or overexpressed in SK-OV-3 cells, which were then treated with ghrelin. We found that ghrelin promotes SK-OV-3 cell autophagy mainly through linc00598: when linc00598 was silenced, the ghrelin-induced promotion of autophagy was inhibited, and when linc00598 was overexpressed, it was enhanced.
Declarations
Ruixia Bai, Wanying Song and Pengwei Zhao wrote this article. Xiumei Wang and Pengwei Zhao designed, organized and reviewed this article. All authors have read and agreed to the published version of the manuscript.
Availability of data and materials
All data generated or analyzed during this study are included in this published article.
Ethics approval and consent to participate: Not applicable.
Consent for publication
Not applicable.
Declaration of Interest Statement
The authors declare that they have no competing interests.

Figure 1. Expression of ghrelin in different ovarian cancer tissues. A. Expression levels of ghrelin in different cancer tissues from the GEPIA database. B. Expression levels of ghrelin in ovarian cancer tissues from the HPA database. * P < 0.05 was regarded as statistically significant.
Figure 2
Influence of ghrelin on SK-OV-3 cell viability and the autophagy pathway. A. Effect of ghrelin at 400, 500, 600, 700 and 800 ng/mL on SK-OV-3 cell viability at 24 h, 48 h and 72 h. B-D. Western blot analysis of the protein levels of Beclin-1 and LC3 in SK-OV-3 cells after ghrelin addition. E-G. Western blot analysis of the protein levels of Beclin-1 and LC3 in SK-OV-3 cells after addition of ghrelin and D-Lys3-GHRP6. Each experiment had 3 repetitions; * P < 0.05 was regarded as statistically significant.

Figure 4

Effect of linc00598 on the SK-OV-3 cell autophagy pathway. A&B. Expression of linc00598 in SK-OV-3 cells after linc00598 silencing or overexpression. C-F. Western blot analysis of the protein levels of Beclin-1 and LC3 in SK-OV-3 cells after linc00598 silencing or overexpression. Each experiment had 3 repetitions; * P < 0.05 was regarded as statistically significant. | 2022-01-14T16:32:47.300Z | 2022-01-12T00:00:00.000 | {
"year": 2022,
"sha1": "d53d38493e39a62ac4430c666f03782893be4f4a",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-1235434/v1.pdf?c=1642025752000",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "a2ccf1837f43c3bacf4b12710b9ef1167b2c36b5",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
232382639 | pes2o/s2orc | v3-fos-license | Inhibiting the Growth of 3D Brain Cancer Models with Bio-Coronated Liposomal Temozolomide
Nanoparticles (NPs) have emerged as an effective means to deliver anticancer drugs into the brain. Among various forms of NPs, liposomal temozolomide (TMZ) is the drug-of-choice for the treatment and management of brain tumours, but its therapeutic benefit is suboptimal. Although many possible reasons may account for the compromised therapeutic efficacy, the inefficient tumour penetration of liposomal TMZ can be a vital obstacle. Recently, the protein corona, i.e., the layer of plasma proteins that surround NPs after exposure to human plasma, has emerged as an endogenous trigger that mostly controls their anticancer efficacy. Exposition of particular biomolecules from the corona referred to as protein corona fingerprints (PCFs) may facilitate interactions with specific receptors of target cells, thus, promoting efficient internalization. In this work, we have synthesized a set of four TMZ-encapsulating nanomedicines made of four cationic liposome (CL) formulations with systematic changes in lipid composition and physical−chemical properties. We have demonstrated that precoating liposomal TMZ with a protein corona made of human plasma proteins can increase drug penetration in a 3D brain cancer model derived from U87 human glioblastoma multiforme cell line leading to marked inhibition of tumour growth. On the other side, by fine-tuning corona composition we have also provided experimental evidence of a non-unique effect of the corona on the tumour growth for all the complexes investigated, thus, clarifying that certain PCFs (i.e., APO-B and APO-E) enable favoured interactions with specific receptors of brain cancer cells. Reported results open new perspectives into the development of corona-coated liposomal drugs with enhanced tumour penetration and antitumour efficacy.
Introduction
Brain tumors are a health and social issue of considerable importance. They cause about 7% of cancer-related deaths in those under the age of 70 and are the second most common form of cancer (after leukemia) in children and teens [1]. Among primary brain cancers, glioblastoma (GBM) is the most common and lethal form [2]. Despite aggressive therapy, this tumor is characterized by frequent relapse [3]. Surgical resection, followed by radiation with simultaneous chemotherapy as part of a combined modality approach, is among the most frequent current treatments. Yet, despite recent advances in handling many solid tumors, the treatment of GBM remains poor, with a median survival of 12-15 months [4]. Treatment limits derive from radiotherapy and chemotherapy resistance, side effects that limit treatment and, mostly, low drug concentration in the brain [5,6]. In fact, an arduous challenge in the management of brain tumors by drug administration is drug targeting to the tumor. To reach the brain, the drug needs to overcome the blood-brain barrier (BBB), a physical and electrostatic barrier that limits brain permeation of therapeutics [7,8]. A special goal is therefore to increase the bioavailability of traditional brain tumor chemotherapeutic drugs such as temozolomide (TMZ), doxorubicin hydrochloride, irinotecan hydrochloride and vincristine sulfate [9][10][11][12][13]. Temozolomide is a chemotherapy drug that has been shown to improve the average survival rate for people with some high-grade brain tumors. In clinical studies, temozolomide consistently demonstrates reproducible linear pharmacokinetics with approximately 100% p.o. bioavailability, noncumulative minimal myelosuppression that is rapidly reversible, and activity against a variety of solid tumors in both children and adults [14]. However, in aggressive tumors, TMZ treatment is rarely curative, because of the tumor's high resistance to therapy and the low drug bioavailability in tumor tissue [15]. Thus, new alternatives are necessary, including tumor cell-targeted approaches using directed vectors that can be transported across the BBB and accumulate in the target tissue. One approach towards increasing drug accumulation in cells is to escape their efflux mechanisms by encapsulating drugs into nano-scaled vehicles [16], e.g., poly(D,L-lactide-co-glycolide) (PLGA) [17], chitosan nanogels [18], iron oxide nanoparticles [19,20] and lipid nanoparticles [21]. Indeed, one of the most promising and versatile strategies makes use of nanoparticles (NPs) acting as drug delivery systems, since they provide both protection to therapeutic agents and efficient delivery across the BBB [22]. Among different nano-vehicles, liposomes are considered potent nanocarrier systems for controlled drug release [22,23], mainly owing to their non-toxic nature, biocompatibility, size, specificity and membrane permeability. For instance, Patel and Parikh compared the anti-cancer efficiency of free TMZ and TMZ loaded in hydrogenated soy L-α-phosphatidylcholine (HSPC) liposomes for GBM management in cell culture [22]. Their results suggested that the liposomal formulation enhances the availability of the drug at the specific site and reduces the dose-related toxicity of chemotherapy. However, despite the success of liposomal formulations in vivo, their translation into the clinic has progressed more slowly.
Indeed, these delivery systems are prone to elimination from the bloodstream, limiting their therapeutic efficacy. This clearance may be due to several factors, including opsonization by plasma proteins or uptake by fixed macrophages. To overcome these limitations, newer generations of liposomes have employed a combination of sterically stabilized and ligand-targeted liposomes to enhance their circulation times in blood and ensure site-specific release. For example, a recent study demonstrated that a multi-targeting liposome based on glucose and biotin showed more consistent cellular uptake by glioma cells than uncoated liposomes [24]. Yet, in the clinic, most of these data still remain unavailable for many nanomedicines. This is principally owed to the wrong assumption that liposomes will accumulate at the target site once administered [25]. In the last decade, on average, less than 1% of the dispensed dosage of nanomaterials has been found within a solid tumor [26]. This poor accumulation has a harmful influence on the clinical translation of nanomaterials for human use with respect to cost, toxicity and therapeutic outcome. After ten years of intense investigation, we now know that the poor translation of liposomes from benchtop to patient's bedside is mainly due to our lack of knowledge of bio-nano interactions, i.e., the interactions between nanosized liposomes and biological systems [27,28]. In fact, a factor that is often ignored in the synthesis of drug delivery systems is testing their behavior in biological fluids [29]. Although efficient translation of a drug delivery system relies on rigorous control over its physicochemical characteristics such as size, charge and morphology, these parameters are often altered upon contact with biological media. Following intravenous administration, liposomes interact with biological molecules present in blood, which adsorb on their surface forming a "corona" [30][31][32]. This biomolecular corona (BC) alters the synthetic identity of the nanosystems, conferring a new identity that mostly controls their biological activity (e.g., blood residency, biodistribution, immune system recognition, cell binding and intracellular fate) [27,33,34]. Furthermore, the exposition of particular biomolecules from the corona may facilitate interactions with specific receptors [35][36][37]. Currently, it is known that liposomes interact with and bind to receptors at the BBB level [38]. Thus, the idea is maturing that BC-based nano-delivery systems could be suitable for innovative treatments of brain-related diseases [39,40]. Consequently, to make the most of these novel aspects and generate an efficient drug delivery system for brain targeting, it is essential to investigate whether the BC supports liposome-based brain targeting [41]. As a step to clarify this matter, this work aimed to explore the effect of the BC on the drug penetration and anticancer activity of liposomal TMZ. We tackled this issue by employing four TMZ-encapsulating cationic liposome (CL) formulations made of binary combinations of the cationic lipids 1,2-dioleoyl-3-trimethylammonium-propane (DOTAP) and 3β-[N-(N′,N′-dimethylaminoethane)-carbamoyl]-cholesterol (DC-Chol) and the neutral lipids dioleoylphosphatidylethanolamine (DOPE) and cholesterol (Table 1).
These formulations were chosen according to previous findings, as they exhibit unusual endosomal escape that results in high performance and potential applicability in difficult-to-transfect cells [42][43][44][45]. Table 1. Lipid compositions of TMZ-loaded CLs.
Sample Name DOTAP (mol %) DC-Chol (mol %) DOPE (mol %) Chol (mol %)
Research on brain cancer drug response has historically been performed using commercially available 2D cell cultures, which poorly predict in vivo cellular responses. In recent years, 3D cell culture techniques have been widely investigated and have become a suitable alternative to traditional cell culture methods [1,46,47]. GBM tumor growth develops around three different areas: a proliferative outer region, a hypoxic core and a permeable vasculature. It has become apparent that 3D cultures reproduce GBM features more faithfully, from its spatial distribution to its interaction with the surrounding microenvironment [48]. Furthermore, 3D cultures are generally more resistant to chemotherapy [49]. Therefore, here we tested TMZ-loaded CLs on a 3D brain cancer model derived from the U87 human glioblastoma multiforme cell line.
Preparation of TMZ-Loaded CLs
Cationic and neutral lipids, including dioleoylphosphatidylcholine (DOPC) and dioleoylphosphatidylethanolamine (DOPE), were purchased from Avanti Polar Lipids (Alabaster, AL, USA). Cationic lipids were used in accordance with standard procedures [50] by dissolving appropriate amounts of lipids at φ = neutral lipid/total lipid (mol/mol) = 0.5. Encapsulation of temozolomide (TMZ) into cationic liposomes (CLs) was performed through the dehydration-rehydration method by adding 3 mg TMZ to the lipids at a 1:1 molar ratio and dissolving the whole mixture in 1 mL of chloroform and 0.2 mL of methanol [51]. The mixture was placed on a rotary evaporator for 4 h at 65 °C to produce the lipid film. After rehydration with 2.5 mL of PBS, the solution was extruded 20 times through a 0.1 µm polycarbonate filter with the Avanti Mini-Extruder (Avanti Polar Lipids, Alabaster, AL, USA), then subjected to centrifugal filtration with Amicon Ultra-2 mL centrifugal filters (Merck Millipore, Darmstadt, Germany). Finally, the obtained TMZ-loaded CLs were incubated with human plasma (HP) for 1 h at 37 °C to form the TMZ-loaded CL biomolecular corona (BC) complexes.
UV-Vis Spectra Measurements
The TMZ encapsulation efficiency (EE) and drug loading content (DLC) were measured by separating free TMZ from TMZ-loaded CLs using Vivaspin 500 centrifugal concentrators (5 kDa MWCO, GE Healthcare) and performing absorbance measurements of both free and encapsulated TMZ with a Jasco V-630 spectrophotometer. The TMZ concentration was obtained by applying the Lambert-Beer law to the 330 nm absorption peak, and the measured concentration was related to the sample volume to obtain the absolute amounts of free and encapsulated TMZ. EE and DLC were then calculated through Equations (1) and (2):

EE (%) = (mass of encapsulated TMZ / initial mass of TMZ used) × 100 (1)

DLC (%) = (mass of encapsulated TMZ / mass of liposome) × 100 (2)
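As an illustration of this calculation, the sketch below converts a measured absorbance at 330 nm to a TMZ concentration via the Lambert-Beer law and then applies Equations (1) and (2). The molar absorptivity, absorbance reading, dilution factor and lipid mass are assumed values for demonstration only; the 3 mg initial TMZ dose and the 2.5 mL rehydration volume come from the preparation protocol above.

```python
def molar_concentration(absorbance, epsilon, path_cm=1.0):
    """Lambert-Beer law at 330 nm: A = epsilon * c * l  ->  c = A / (epsilon * l)."""
    return absorbance / (epsilon * path_cm)

EPSILON_TMZ = 9200.0     # L mol^-1 cm^-1 -- assumed value, not given in the paper
MW_TMZ = 194.15          # g/mol
m_initial_mg = 3.0       # TMZ added during film preparation (from the Methods)

A_enc, dilution = 0.40, 100          # hypothetical absorbance of a 1:100 diluted liposomal fraction
c_enc = molar_concentration(A_enc, EPSILON_TMZ) * dilution   # mol/L in the undiluted sample
v_enc_L = 2.5e-3                     # rehydration volume, litres
m_enc_mg = c_enc * v_enc_L * MW_TMZ * 1e3                    # mg of encapsulated TMZ

EE = 100 * m_enc_mg / m_initial_mg    # Equation (1)
lipid_mass_mg = 4.0                   # assumed liposome (lipid) mass
DLC = 100 * m_enc_mg / lipid_mass_mg  # Equation (2)
print(f"EE = {EE:.1f}%  DLC = {DLC:.1f}%")  # EE around 70% with these example numbers
```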
Size and Zeta Potential Experiments
Size and zeta potential measurements of TMZ-loaded CLs and TMZ-loaded CL-BC complexes were performed with a Zetasizer Nano ZS90 (Malvern Panalytical, Malvern, UK) at 25 °C. All samples were first diluted 1:100 with distilled water, and data are expressed as mean ± standard deviation of three replicates. As excess HP was not removed from the suspension of bio-coronated CLs, size and zeta potential measurements refer to the mixture of BC liposomes and HP.
Spheroid Preparation and Drugs Administration
U87 human glioblastoma cells were seeded in 96-well, round-bottom, ultra-low-attachment plates (Corning, Corning, NY, USA) at a density of 0.5 × 10^5 cells/mL. The multiwell plate was centrifuged at 300 × g for 3 min to gather the cells at the centre of the wells. The single spheroids so formed were incubated at 37 °C in 5% CO2 for 3 days before further treatment. CLs containing TMZ were administered to spheroids at a final concentration of 0.5 mg/mL under two different conditions: pre-incubated with human plasma (Sigma; 1 h at 37 °C) to form the protein corona, or without incubation. TMZ alone was administered to spheroids at 0.5 and 1 mg/mL. Control spheroids, treated with human plasma or PBS respectively, were used to compare the results for both conditions.
Spheroid Size and Cell Viability Measurements
After administration, spheroids were imaged regularly for 14 days at 4× magnification with a Cytation3 Cell Imaging Multi-Mode Reader (BioTek, Winooski, VT, USA), fixing the focal height at 2455 µm and performing auto-correction of the white balance for each well. Size analysis was carried out with ImageJ software [46]. Briefly, spheroid images were converted to 8-bit, a mask was created, and the area of each spheroid was measured. Data were normalized by the initial size (day 0) of each spheroid. After the time-course experiment, cell viability was assessed by the CellTiter-Glo Luminescent Cell Viability Assay (Promega, Madison, WI, USA). Results were normalized by the respective control spheroids. Pristine liposomes (i.e., in the absence of the biomolecular corona) did not show any toxicity, and cell viability was around 100% for all four formulations.
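A Python/scikit-image sketch of the described area measurement (grayscale conversion, mask, area, normalization to day 0) is given below; the file names are hypothetical and the segmentation assumes the spheroid appears darker than the background in bright-field images.

```python
from skimage import io, filters, measure

def spheroid_area(path):
    """Segment a bright-field spheroid image and return the area (pixels) of the
    largest object, mirroring the ImageJ workflow (8-bit conversion, mask, measure)."""
    img = io.imread(path, as_gray=True)          # grayscale in [0, 1]
    mask = img < filters.threshold_otsu(img)     # spheroid darker than background
    labels = measure.label(mask)
    return max(r.area for r in measure.regionprops(labels))

# Normalize each time point by day 0, as in the paper
areas = {day: spheroid_area(f"spheroid_day{day:02d}.tif") for day in (0, 7, 14)}
normalized = {day: a / areas[0] for day, a in areas.items()}
print(normalized)
```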
Results and Discussion
TMZ encapsulation efficiency (defined as the mass of drug in the liposome divided by the initial mass of drug used) and the corresponding drug loading content (defined as the mass of encapsulated drug divided by the mass of liposome) of the CLs were obtained by UV-Vis spectral analyses, and the results are reported in Table 1. As shown, three out of four formulations exhibited EE values higher than 60%, with CL2 reaching the highest value of 77.7 ± 5.1%. The TMZ-loaded CLs were then incubated with human plasma (HP) for 1 h at 37 °C to form the TMZ-loaded CL-BC complexes.
A thorough characterization of CLs and CL-BC complexes is summarized in Table 2 and Figure 1. Dynamic light scattering and electrophoretic light scattering measurements provided the size and zeta potential of the investigated systems. TMZ-CLs were small in size (hydrodynamic diameter between 100 and 200 nm) and positively charged (zeta potential between 50 and 90 mV). Statistical differences between the four formulations with regard to all parameters mentioned in Table 2 were evaluated (Supporting Information). After 1 h of incubation with HP, TMZ-CLs with BC were bigger in size than their counterparts (Figure 1a). The corresponding polydispersity indexes (PdI in Table 2) indicate that bare TMZ-CLs were homogeneous in size, but TMZ-CLs with BC exhibited wider size distributions. These findings are most likely due to the formation of a thick protein layer at the particle surface leading to aggregation [41]. The inversion of the zeta potential is also caused by protein binding, as most plasma proteins exhibit negative charges at physiological pH. After preparation and chemical-physical characterization, TMZ-CLs with and without BC were diluted to a final TMZ concentration of 0.5 mg/mL and administered to 3D spheroids [52]. Anticancer activity against GBM spheroids was evaluated by the spheroid size distribution. The area of the spheroids was monitored using digital microscopy to assess changes in spheroid size due to cell death and destruction of the spheroid architecture. Figure 2a reports the time course (from 0 to 14 days) of changes in tumor size, expressed as normalized average area, induced by treatment with TMZ-loaded CLs and free TMZ at two different concentrations (0.5 and 1.0 mg/mL), with untreated spheroids as controls. [Figure 2 caption, fragment: ...(DC-Chol/cholesterol). Statistical significance was evaluated by Student's t-test with respect to CLs incubated with HP; ** p < 0.05, * p < 0.001; no asterisk means not significant.]
As depicted, after two weeks the normalized average areas of spheroids treated with TMZ-loaded CL1 and CL2 were strongly reduced, by up to 75% and 69% of their initial size, respectively. By contrast, treatment with TMZ-loaded CL3 and CL4 did not seem to affect cell proliferation. The different trends of the four CL complexes could be attributed to two possible factors: non-specific contributions related to the different physical-chemical properties of each complex, or specific mechanisms associated with the different chemical compositions of the complexes, on which molecular recognition at the cell membrane level mainly depends [53,54]. The first factor can be excluded, since all complexes exhibited roughly the same diameter and surface charge (Table 2). Consequently, lipid composition is expected to play a role in activating specific endocytic pathways [54]. Indeed, CL1 and CL2 are both composed of 50% of the cationic lipid DOTAP, while CL3 and CL4 contain 50% of the cationic lipid DC-Chol. The different composition seems to be the main factor regulating cell recognition, with the result that the presence of DOTAP in TMZ-CL1 and TMZ-CL2 led to higher internalization and a resulting inhibition of cell growth. Next, since the purpose of the present study was to evaluate the impact of the BC on tumor growth, we administered CLs with BC to the GBM spheroids and measured the corresponding size changes over time (Figure 2b). It is evident that spheroids treated with TMZ-loaded CL2 with BC abruptly reduced in size already after two days; the presence of the BC would thus seem to promote the complex's cellular uptake. After two weeks, the CL2 complexes with BC had reduced spheroid growth with an effect similar to that of free TMZ at twice the concentration (1 mg/mL). On the contrary, the effect of the BC on the other formulations was negligible, despite the BC contributions in terms of size and surface charge being similar for all the complexes. According to recent literature [55], this inequality could be ascribed to a different BC composition that can either promote or inhibit cellular recognition. It was previously demonstrated that CL2 complexes are enriched in typical protein corona fingerprints (PCFs) (i.e., vitronectin, APOA1, APOA2, APOB, APOC2, Ig heavy chain V-III region BRO, vitamin K-dependent protein and integrin beta-3) that trigger selective association with cancer cells, leading to efficient internalization [56,57]. Among the identified PCFs, vitronectin binds to αvβ3 integrins, also known as the vitronectin receptor, and has been exploited to target cancer cells over-expressing αvβ3 integrins [37]. This is a point of great general interest, as αvβ3 integrins are overexpressed on the U-87 cell line [58]. Apolipoproteins bind specific lipoprotein receptors, including the low-density lipoprotein receptor (LDLR) and scavenger receptor class B type I (SR-BI). LDLR mediates the endocytosis of cholesterol-rich LDL, whereas SR-BI is a high-density lipoprotein (HDL) receptor that promotes cell internalization of cholesterol esters from circulating lipoproteins. Notably, both LDLR and SR-BI are upregulated in human gliomas and play an important role in the delivery and accumulation of cargos into human U-87 cells [59,60]. Importantly, it was not possible to measure spheroids treated with human plasma alone, since it caused spheroid disaggregation and invasion, as has already been reported in the literature [61].
For a complete view, Figure 2c reports microscopy images of GBM spheroid size evolution at 0, 7 and 14 days after administration of TMZ-loaded CL2 with BC, in comparison with free TMZ and untreated spheroids. Confirming the previous findings, the images show the similarity between the size of spheroids treated with BC-CL2 complexes and those treated with free TMZ (1 mg/mL). According to these results, it is clear that the presence of the BC does not have a uniform effect on tumor growth. Thus, to reach an exhaustive understanding of its influence on tumor activity, after two weeks we performed cell viability measurements on spheroids treated with TMZ-CLs with and without BC (Figure 2d). Except for TMZ-CL1, where the presence of the BC did not significantly affect viability compared with the counterpart without BC (0.62 ± 0.09 and 0.5 ± 0.01, respectively), for the other three complexes the reduction of cell viability was strongly accentuated by the presence of the BC. In particular, CL2, both with and without previous incubation with HP, exerted a significant reduction, down to 0.13 ± 0.01 and 0.40 ± 0.03 respectively, similarly to free TMZ at 1 mg/mL (0.15 ± 0.003). By comparing the tumor growth analysis and the viability assay, some conclusions can be drawn. Notably, except for the CL2 complexes, where the BC displayed a remarkable impact both on tumor size decrease and on cell viability, in the other cases there is a discrepancy between the tumor growth trends and the viability results. Specifically, in most cases the BC did not exhibit an inhibitory effect on spheroid growth but had a significant impact on decreasing cell viability. A striking example is TMZ-CL4, where the presence of the BC had an irrelevant effect on spheroid size but not on cell viability, which was reduced to 0.43 ± 0.14. This divergence could depend on different aspects specifically related to the behavior of 3D cultures. In the literature, several works correlate viability assays with the size growth trends of 3D cultures [49,62]. When analyzing 3D culture growth, factors such as the cell density of the spheroid and cell cohesion must be considered; these two factors together are usually used to obtain a better estimate of the effective spheroid volume [63]. A decrease in cell adhesion corresponds to a decrease in density and an increase in spheroid volume. How does this relate to cytotoxicity? When cells begin to die, those composing the peripheral region of the spheroid adhere less to each other, a phenomenon that leads to an initial increase in spheroid volume; subsequently, dissociation of the peripheral cells causes a volume decrease. Thus, a decrease in cell viability does not always correspond to a decrease in spheroid area. To obtain a more reliable correlation between viability and spheroid growth, cell cohesion assays and density measurements should be performed in parallel. In conclusion, the effect of the BC on the anticancer activity of TMZ-loaded CLs in a GBM spheroid culture was evaluated by means of spheroid size trends and a cell viability assay. The results indicated a non-unique effect of the corona for all the complexes, especially regarding the tumor growth trends. An outstanding result was obtained with the TMZ-CL2 formulation with BC, which caused a notable reduction of tumor size in line with a considerable decrease in cell viability.
This finding demonstrated that exploiting the BC could be a helpful strategy to produce targeted nanodevices able to overcome the BBB and improve anticancer efficacy. However, further investigations are needed to better understand the mechanism behind cellular uptake in GBM spheroid cultures.
Conclusions
The effect of the BC on the anticancer activity of TMZ-loaded CLs in a GBM spheroid culture was evaluated by means of spheroid size trends and a cell viability assay. The results indicated a non-unique effect of the corona for all the complexes, especially regarding the tumor growth trends. An outstanding result was obtained with the TMZ-CL2 formulation with BC, which caused a notable reduction of tumor size in line with a considerable decrease in cell viability. This finding demonstrated that exploiting the BC could be a helpful strategy to produce targeted nanodevices able to target brain cancer cells and improve anticancer efficacy. Data Availability Statement: The datasets generated and/or analyzed during the current study are available from the corresponding authors (M.P. and G.C.) on reasonable request.
Conflicts of Interest:
The authors declare no conflict of interest. | 2021-03-29T05:26:45.894Z | 2021-03-01T00:00:00.000 | {
"year": 2021,
"sha1": "08f9c66694fa4450974065a86869f9703ceefb25",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4923/13/3/378/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "08f9c66694fa4450974065a86869f9703ceefb25",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
13407461 | pes2o/s2orc | v3-fos-license | Patient and Provider Reported Reasons for Lost to Follow Up in MDRTB Treatment: A Qualitative Study from a Drug Resistant TB Centre in India
Introduction: Multidrug-resistant Tuberculosis (MDR TB) is an emerging public health concern globally. Lost to follow-up (LTFU) is one of the key challenges in MDR TB treatment. In 2013, 18% of MDR TB patients were reported LTFU in India. A qualitative study was conducted to obtain a better understanding of both patient- and provider-related factors for LTFU during MDR TB treatment. Methods: Qualitative semi-structured personal interviews were conducted with 20 MDR TB patients reported as LTFU and 10 treatment providers in seven districts linked to the Nagpur Drug Resistant TB Centre (DRTBC) during August 2012-February 2013. Interviews were transcribed and inductive content analysis was performed to derive emergent themes. Results: We found multiple factors influencing MDR TB treatment adherence. Barriers to treatment adherence included drug side effects, a perceived lack of provider support, patient financial constraints, conflicts with the timing of treatment services, alcoholism and social stigma. Conclusions: Patient adherence to treatment is multi-factorial and involves individual patient factors, provider factors and community factors. Addressing the issue of LTFU during MDR TB treatment requires enhanced efforts towards resolving medical problems such as adverse drug effects, developing short-duration treatment regimens, reducing pill burden, providing motivational counselling, offering flexible timings for DOT services, ensuring social and family support for patients, and improving awareness about the disease.
Introduction
Multidrug-resistant Tuberculosis (MDR-TB) is a major public health concern in many countries because of the difficulty and high costs of its treatment. Resistance to anti-TB drugs can occur due to inappropriate treatment regimens, poor-quality drugs, and inadequate intake of first-line anti-TB treatment [1]. Globally in 2013, an estimated 480,000 people developed MDR-TB, almost 80% of whom were from the European region, India and South Africa [2]. In India, there are about 64,000 MDR-TB cases among notified pulmonary TB cases [3]. Globally, overall treatment success for MDR-TB was 48%, while 28% of cases were reported as lost to follow-up (LTFU) or had no outcome information [2]. In India, for treatment outcomes reported in 2013 for the 2010 cohort, the treatment success rate was 48% with 18% LTFU [3]. The Nagpur zone, with one of the sites (Sewagram) for the drug-resistant TB survey in Maharashtra, was among the first sites in India to start MDR-TB treatment services and had a 16% treatment success rate, 11% death rate, 4% failure rate and 16% LTFU among 303 patients initiated on MDR-TB treatment up to 2012 (S1 Fig). LTFU is one of the key challenges in MDR-TB treatment; it also poses a public health threat, because individuals who do not complete treatment are more likely to remain infectious and to develop further resistance to existing drugs. The World Health Organization (WHO) defines LTFU as a "patient whose treatment was interrupted for two consecutive months or more" [4]. Several earlier studies identified high pill burden, long treatment duration, unemployment, homelessness, history of imprisonment, alcohol abuse, and a positive baseline smear as independent predictors of loss to follow-up [5,6,7]. As the majority of these studies were conducted outside India, there is limited knowledge regarding factors for LTFU during MDR-TB treatment in India, which has a high burden of MDR-TB cases. A more comprehensive understanding of patient- and provider-perceived barriers to treatment adherence is therefore required to develop effective patient-centred programme strategies to reduce LTFU. A qualitative study was conducted to obtain a better understanding of both patient- and provider-related factors for LTFU during MDR-TB treatment.
Study Setting
The study was conducted in seven districts of eastern Maharashtra with a combined population of approximately 10 million. These districts are linked to the Drug Resistant TB Centre (DRTBC) and the intermediate reference laboratory at Nagpur. Patients were initially admitted to the DRTBC for pre-treatment evaluation for approximately seven days and later received treatment at the MDR-TB DOT centre nearest to their residence.
Study design and sampling
Qualitative semi-structured personal interviews were conducted during August 2012-February 2013 with a purposeful sample of 20 MDR-TB patients who were reported LTFU among those registered for treatment from September 2007 to March 2012, and with 10 providers, including government DOT providers, private treatment providers and community volunteers working as DOT providers for MDR-TB patients. The sampling strategy sought to ensure diversity of patients by age, sex, occupation and residence in rural and urban areas, and maximum variation in responses to the open-ended interview questions.
Data collection and analysis
In-depth interviews were conducted in the regional language, using an interview guide, by the principal investigator along with the DRTBC Medical Officer and the District TB Officer. Consent was obtained and interviews were electronically recorded. The interview guides for patients and providers were pilot tested, and questions were re-adapted during concurrent analysis in accordance with a grounded theory methodological approach. Social cognitive theory was used as a guiding framework for this study, as it helps in understanding the human actions, intuitions, motivations and processes of behaviour change that determine adherence. Investigators trained in the national MDR-TB guidelines conducted in-depth interviews lasting a minimum of 45 minutes. All study participants were interviewed at a place convenient to them, including patients' residences, DOT providers' clinics and government health centres. After each interview, case-based memos were written, which allowed us to capture different perspectives; in some cases participants were revisited to gather more insight and to compare providers' and patients' reflections, which enriched the data analysis. During the interviews we changed our questions and their sequence in some cases to dig deeper and understand the relationships between patients' and providers' perspectives. Audio-recorded interviews of patients and providers were transcribed verbatim and translated into English. Codes and themes were developed concurrently with data collection. Direct quotes illustrating important themes were extracted and are presented in this manuscript.
Ethical Considerations
The study protocol was approved by the ethics committees of the International Union Against Tuberculosis and Lung Disease (Paris, France) and India's National Tuberculosis Institute (Bangalore, India). The CDC institutional review board determined that CDC investigators were not engaged in human subjects research as defined by U.S. regulations. Patient and provider participation in interviews was strictly voluntary and received no compensation. Individual verbal consent in Marathi and written informed consent were obtained from patients and providers prior to interviews.
Results
The study participants comprised 20 MDR-TB patients reported as LTFU, between 23 and 53 years of age, including 15 males and 5 females. Twelve patients were from urban areas and 8 from rural areas. Twelve participants were married, of whom three were not staying with their spouse, and eight were unmarried. Among the 10 DOT providers recruited for the study, 4 were female and 6 were male, aged between 25 and 48 years; five were from urban areas and five from rural areas. The DOT providers comprised two private practitioners, two community volunteers, four government TB health visitors and two pharmacists from the government sector.
The study participants' in-depth interviews identified several factors that had an impact on MDR-TB treatment adherence. The themes that emerged during the interviews are summarised in Table 1, and verbatim quotes from participants are presented below:
Adverse drug and treatment effects
Six study participants reported adverse drug effects as an important barrier to treatment adherence. Both patients and providers reported side effects such as vomiting, severe headache, vertigo, restlessness, and psychiatric conditions. In these patients, adverse effects were an important reason to discontinue treatment, as quoted by one of the patients [Patient 1, Male, 40 years, Rural]: "After taking medications I was neither able to sleep, nor was I able to move out. I had intense sweating. . . I felt like I would die today or tomorrow. Many don't take these medicines because of this fear only." Depression, feelings of intense confusion, and suicidal thoughts were also commonly reported and linked to MDR-TB treatment medications, as exemplified by a young male patient [Patient 13, Male, 26 years, Urban]: "I was not able to see properly; I had itching all over my body. I had abdominal pain in the morning, and I could not sleep. I used to cry. . . I used to get up at midnight, talked like anything [patient was incoherent], not able to understand what is happening to me. My memory was going down. Sometimes I could not bear the pain, sometimes I had thought of suicide." The side effects experienced were also exacerbated by the quantity of pills, the injections and the lengthy duration of MDR-TB treatment. The daily regimen of 10-13 pills with injections for two years or longer made it difficult for two interviewed patients to complete treatment. Two patients also felt that they received inadequate information on the management of side effects or problems; this was also perceived as a lack of compassion and indifference from staff at treatment sites. One such patient mentioned: "When I told the staff there about vomiting, abdominal pain and itching over palms and legs after taking medications, they didn't bother about it. They used to say 'go to the medical college and tell them; there will be pain, still you will have to take medicines that we give.'" [Patient 11, Male, 26 years, Urban] The perceived lack of caring at government treatment facilities prompted three patients to seek treatment through private providers instead. Private practitioners were viewed as being more responsive to their needs than public providers.
A private practitioner, DOT provider, stressed:
Conflict with the operating hours of treatment centres
Two patients expressed concern about the operating hours of the treatment centres, particularly in urban regions and when these hours conflicted with their employment. Government providers also recognized the timing constraints of the centres. Although they wished to make their hours more accommodating to patients, doing so was difficult because of their numerous other responsibilities. Facilities with few personnel in particular faced this challenge. Provider 6, who works in a small rural health centre, expressed her frustrations: "I have to work for TB, Leprosy, and immunisation programs-everything. How much time can I give? Some patients were coming in the evening, I scolded them-I have to prepare food, I too have to look after my family. How can I give time during the evening or night?" [Provider 6, Government DOT provider, Female, 46 years, Rural]
Alcohol abuse
Although most interviewed patients did not themselves note alcohol addiction as a factor in not adhering to treatment, providers identified alcohol abuse as a major barrier to treatment adherence. Alcohol abuse resulted not only in missed MDR-TB treatment doses and other scheduled appointments, but also in patients being unreceptive to counselling and treatment adherence messages from providers. One provider described these typically male patients as being an "army of one" because of their social isolation and difficulties in being receptive to counselling. Provider 9 noted his experiences with such a patient: "Most of them are addicted to alcohol consumption. In addition, most are adamant, they do not listen. One such patient threw chappals (shoes) at me while he was under the influence of alcohol." [Provider 9, Government DOT provider, Male, 30 years, Urban] Because alcohol abuse was intertwined with treatment non-adherence, providers suggested that relationships be developed with alcohol control programs.
Social Stigma and Discrimination
Themes of social stigma and discrimination as barriers to completing MDR-TB treatment also emerged during interviews with patients and providers. Three patients reported being socially isolated and discriminated against for being infected with MDR-TB. This fear of discrimination directly interfered with MDR-TB treatment and with activities to promote adherence: patients did not want health workers to visit their homes for adherence counselling and did not want to attend their local treatment centre due to the potential disclosure of their disease and MDR-TB treatment in their catchment areas. Addressing issues of social stigma was particularly challenging for providers in the case of unmarried women infected with MDR-TB. A private DOT provider noted that she needed to prioritize the counselling of the parents: "They [the parents] were worried of her marriage [prospects] and they didn't want to disclose the disease also. They wanted to hide this; they did not want anyone to know about her disease. . . You have to counsel the relatives as well. It is almost as important [as counselling the patient]. I counselled her parents at that time even when they used to say 'We can't see her in this way, and we have to stop the treatment.'" [Provider 5, Private practitioner, Female, 45 years, Urban] Patient 15, living in a slum, mentioned: "They talk about me differently and gossip about my disease, I have a family to look after. My daughter is yet to be married." [Patient 15, Male, 47 years, Urban slum]
Family and Social support
For younger married women who lacked supportive spouses or in-laws, support came from their own mothers. Support typically took the form of verbal encouragement to take medicines regularly, the provision of food, and encouragement to focus on one's health despite the difficult and lengthy treatment.
Patients who lacked strong networks of family and social support were more prone to LTFU. Two married female patients lacked support from their husbands, who left them to be cared for by their mothers (maternal
Myths and Misbeliefs
Lack of proper awareness of the disease and its treatment also influenced adherence. Myths and misbeliefs were identified among patients who did not complete treatment. A male patient from a rural area mentioned: "I don't have TB, This is something else. This is external spirit influencing me. I went to goddess [local faith healer]. She gave me ash. Since 2-3 months I am taking that daily... I feel ok now" [Patient 6, Male, 35 years, Rural] A perception of MDR-TB treatment being more harmful to one's health was particularly evident in a patient who was LTFU. Patient 1 specifically noted: "I mean I had no relief [from MDR-TB treatment]. Even after taking pills and many injections I had no relief-what is the use then? It is better to die at home rather than to take these medications." [Patient 1, Male, 40 years, Rural]
Discussion
This study focused on qualitative interviews with both patients and treatment providers. Similar to a previous quantitative study on TB adherence in a neighbouring district [8], our findings reveal that factors related to LTFU during MDR-TB treatment are multi-faceted, as shown in Fig 1. These ranged from patient experiences with treatment, to support from providers and family members, to the social circumstances, characteristics, and self-beliefs of patients. The promotion of MDR-TB treatment adherence, therefore, will require a multi-faceted, targeted approach. Our finding of a negative relationship between experiencing adverse effects of MDR-TB treatment and adherence is consistent with previous studies [9,10,11]. Vega P et al. described psychiatric issues in the management of patients with MDR-TB, including depression and anxiety [12], two issues vividly described during patient interviews. A shorter duration of MDR-TB treatment (currently 24 months) [13] and addressing psychiatric issues that arise in patients during treatment [12] have been recommended as potential strategies to improve adherence.
Alcohol abuse by patients presents another substantial barrier to MDR-TB adherence and has been noted elsewhere in studies of resistant TB treatment adherence [7,14,15,16]. Alcoholism interferes with the regularity of treatment appointments and limits the effectiveness of provider support and the counselling process. In this study, interviewed providers suggested developing linkages with alcohol addiction programs. This strategy has also been promoted previously and requires additional consideration [16].
Our findings of a perceived lack of support from government providers and of inconvenient operating hours of facilities were reported previously as barriers to treatment adherence in a study in an urban setting of Delhi, India [17]. Although flexible centre hours have been recommended by the local TB control program, the shortage of providers trained to administer injections for MDR-TB treatment, together with the limited personnel in smaller facilities, constrains this option.
In our study, some patients at particular risk of not completing treatment were those with direct conflicts between their job responsibilities and treatment centre hours. These were patients who were their household's primary earners and who worked for daily wages. The inability to work due to treatment side effects, the relationship between financial constraints and adequate nutrition, and the threat of job loss were all intertwined factors that present challenges for patients to complete treatment. A targeted approach of identifying these vulnerable individuals and developing programmatic strategies to provide comprehensive support is necessary to improve treatment adherence.
Limitations
Given the differences in programmatic implementation across regions and the socioeconomic variations between them, our findings may not necessarily generalise to other sites; moreover, reasons for loss to follow-up may differ across settings. Secondly, as the study involved in-depth interviews of patients whose outcomes had already been reported, and of their providers, recall bias could exist, particularly for events more distal from the time of data collection. To minimize recall bias, participant responses were triangulated with those of treatment providers. Interviewers were also trained to probe during interviews to facilitate accurate recall of events.
Conclusion
Adherence to MDR-TB treatment and LTFU among MDR-TB patients remain pressing public health problems. Addressing LTFU during MDR-TB treatment requires enhanced efforts towards resolving medical problems such as adverse drug effects, developing shorter treatment regimens, reducing pill burden, providing motivational counselling, offering flexible timings for DOT services, strengthening social and family support for patients, and improving awareness about the disease. Further implementation research is needed to devise strategies that address these issues and to document practices that improve adherence to MDR-TB treatment.
"year": 2015,
"sha1": "e274471272864ad68bc95c8baa94820a20b1fe78",
"oa_license": "CC0",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0135802&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e274471272864ad68bc95c8baa94820a20b1fe78",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Image processing approaches to enhance perivascular space visibility and quantification using MRI
Imaging the perivascular spaces (PVS), also known as Virchow-Robin spaces, has significant clinical value, but there remains a need for neuroimaging techniques that improve the mapping and quantification of the PVS. The current technique for PVS evaluation is a scoring system based on visual reading of visible PVS in regions of interest, and it is often limited to large-caliber PVS. Enhancing the visibility of the PVS could support medical diagnosis and enable novel neuroscientific investigations. Increasing the MRI resolution is one approach to enhance the visibility of PVS but is limited by acquisition time and physical constraints. Alternatively, image processing approaches can be utilized to improve the contrast ratio between PVS and the surrounding tissue. Here we combine T1- and T2-weighted images to enhance PVS contrast, intensifying the visibility of PVS. The Enhanced PVS Contrast (EPC) was achieved by combining T1- and T2-weighted images that were adaptively filtered to remove non-structured high-frequency spatial noise. EPC was evaluated on healthy young adults by presenting the images to two expert readers and also through automated quantification. We found that EPC improves the conspicuity of the PVS and aids in resolving a larger number of PVS. We also present a highly reliable automated PVS quantification approach, which was optimized using expert readings.
Imaging the perivascular spaces (PVS), also known as Virchow-Robin space, has significant clinical value. Many recent studies have shown pathological alteration of PVS in a range of neurological disorders [1][2][3][4][5][6][7][8][9] . It is also believed that PVS plays a major role in the clearance system, accommodating the influx of CSF to brain parenchyma through peri-arterial space, and the efflux of interstitial fluid to the lymphatic system through peri-venous space 6,[10][11][12][13] . MRI is a powerful tool that enables in vivo, non-invasive imaging of this less-known glia-lymphatic pathway.
In clinical practice, PVS are quantified based on the number of visible PVS on the axial slice of a T2-weighted (T2w) image that has the highest number of PVS in the region of interest 14 . This process can be laborious and error-prone, so efforts to improve efficiency and accuracy have been made using a wide range of automatic or semi-automatic segmentation techniques, from classical image processing approaches to deep neural network modelling [15][16][17][18][19][20][21][22][23][24][25] . Park et al. used auto-context orientational information of the PVS for automatic segmentation 25 . Recently, Ballerini et al. showed that Frangi filtering 26 could robustly segment PVS by extracting a vesselness map based on the PVS tubular morphology 20 . More recently, Dubost et al. used a convolutional neural network with a 3D kernel to automate the quantification of enlarged PVS 27,28 .
While these methods have improved the automated segmentation of PVS, less effort has been made to enhance the visibility of PVS through postprocessing. A typical MRI session includes a variety of different sequences, and utilizing the different intensity profiles of these sequences can potentially improve the PVS detection rate for both visual reading and automated segmentation. Combining MRI signal intensities has been used in other applications to achieve tissue-specific sensitivity. Van de Moortele et al. combined T1w and proton density images by dividing the latter by the former to improve signal non-uniformity at ultra-high field and optimize vessel visualization 29 . MRI multi-modal ratios have also been used to map cortical myelin content [30][31][32] (for a systematic evaluation of the contrast enhancement via the combination of T1w and T2w see 33 ). Additionally, Wiggermann et al. combined T2w and FLAIR to improve the detection of multiple sclerosis lesions 34 .
Here we describe a multi-modal approach for enhancing the PVS visibility, which was achieved by combining T1w and T2w images that were adaptively filtered to remove non-structural high frequency spatial noise. Furthermore, we present an automated PVS quantification technique, which can be applied to T1w, T2w or the enhanced contrast. The efficacy of the Enhanced PVS Contrast (EPC) on both visual and automated detection is assessed and the reliability of the automated technique is examined.
Method
Four evaluations were performed. First, the visibility of PVS in EPC was qualitatively and quantitatively examined. Second, the visibility of PVS to expert readers was evaluated by comparing the number of PVS counted in EPC and T2w images. Third, automatic PVS counting was introduced and evaluated. Fourth, the reliability of the automated PVS quantification was assessed using scan-rescan MRI data.
MRI data. T1w and T2w images of the human connectome project (HCP) 35 were used in the analysis.
Structural images were acquired at 0.7 mm³ resolution. We used data from the "S900 release", which includes 900 healthy participants (age, 22-37 years). The preprocessed T1w and T2w images [36][37][38] were used. In brief: the structural images were corrected for gradient nonlinearity, readout, and bias field; aligned to AC-PC subject space and averaged when multiple runs were available; then registered to MNI 152 space using FSL 39 's FNIRT. Individual white and pial surfaces were then generated using the FreeSurfer software 40 and the HCP pipelines 36,37 . Among HCP subjects, 45 were scanned twice with a scan-rescan interval of 139 ± 69 days; these scans were used to assess the reliability of the automated PVS quantification.

Enhanced PVS Contrast (EPC). Figure 1 summarizes the steps required to obtain EPC. After preprocessing the data, T1w and T2w images were filtered using the adaptive non-local mean filtering technique 41 . The non-local mean technique measures image intensity similarities by taking into account the neighboring voxels in a blockwise fashion, where the filtered image is $NL(u)(x_i) = \sum_{x_j \in \Omega} \omega(x_i, x_j)\, u(x_j)$. For each voxel $x_j$, the weight $\omega(x_i, x_j)$ is measured using the Euclidean distance between the 3D patches centered on $x_i$ and $x_j$. The adaptive non-local mean filtering technique adds a regularization term to the above formulation to remove the intensity bias of the Rician noise observed in MRI, based on the expected Euclidean distance between two noisy patches $N_i$ and $N_j$: $E\left[\|N_i - N_j\|_2^2\right] = \|u(N_i) - u(N_j)\|_2^2 + 2\sigma^2$. The Rician noise level $\sigma$ of the MRI images, calculated using the robust noise estimation technique presented by Wiest-Daessle et al. 42 , was used as the noise level for non-local filtering 41 . To preserve PVS voxels while removing the noise, filtering was applied only to high-frequency spatial noise. This was achieved by using a filtering patch with a radius of 1 voxel, which removes noise at the single-voxel level and preserves signal intensities that are spatially repeated 41 . Finally, EPC was obtained by dividing the filtered images (i.e. T1w/T2w).
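For concreteness, the sketch below shows one way to reproduce the EPC computation with open-source tools. The paper uses the adaptive, Rician-corrected non-local means filter of reference 41; scikit-image's `denoise_nl_means` assumes Gaussian noise, so it is only an approximation here, and the file paths and the `h` smoothing weight are illustrative assumptions.

```python
import nibabel as nib
import numpy as np
from skimage.restoration import denoise_nl_means, estimate_sigma

def epc(t1w_path, t2w_path, out_path="epc.nii.gz"):
    """Compute an Enhanced PVS Contrast (EPC) image as filtered T1w / filtered T2w."""
    t1_img = nib.load(t1w_path)
    t1 = t1_img.get_fdata().astype(np.float32)
    t2 = nib.load(t2w_path).get_fdata().astype(np.float32)  # assumed co-registered to T1w

    def filt(vol):
        # Estimate the noise level, then denoise with a small (radius-1) patch so that
        # single-voxel noise is removed while spatially repeated structure (PVS) survives.
        sigma = estimate_sigma(vol)
        return denoise_nl_means(vol, patch_size=3, patch_distance=5,
                                h=0.8 * sigma, sigma=sigma, fast_mode=True)

    t1_f, t2_f = filt(t1), filt(t2)
    ratio = np.divide(t1_f, t2_f, out=np.zeros_like(t1_f), where=t2_f > 0)
    nib.save(nib.Nifti1Image(ratio, t1_img.affine), out_path)
    return ratio
```

Because fluid is dark on T1w and bright on T2w, the ratio magnifies the PVS-tissue contrast in both directions at once, which is the core design choice behind EPC.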
PVS visibility was qualitatively compared across T1w, T2w, and EPC images in the white matter and basal ganglia. PVS conspicuity was also assessed by comparing the PVS-to-white matter ratio in EPC images with that in T2w, which was shown to provide a higher PVS contrast compared to T1w 24 . A number of PVS and non-PVS white matter voxels were randomly selected across 10 subjects (more than 50 voxels per subject) and the average PVS-to-white matter ratio was measured.

Expert reading and clinical scoring. PVS were independently rated in 100 randomly selected HCP subjects by two expert readers (GB and NSB) on axial T2w and EPC images using a validated 5-point visual rating scale 43 in the basal ganglia and centrum semi-ovale (0: no PVS, 1: 1-10, 2: 11-20, 3: 21-40, and 4: >40 PVS). GB is a medical doctor with more than 5 years of neuroimaging research experience, and NSB is a neuroradiologist with 11 years of experience in radiology. Readers were blind to each other's rating results. For validating the automated technique, the readers reached a consensus in the few cases with different scores. One subject in whom the PVS number was scored as ">100" by one of the readers was excluded from the statistical analysis. The total PVS score for each subject was calculated as the sum of the basal ganglia and centrum semi-ovale scores. The correlation between the number of PVS counted in T2w and EPC was calculated using the Pearson correlation coefficient. PVS total counts were also compared using a paired t-test. Lin's concordance correlation 44 was used to determine the concordance between the two raters. In addition, intraclass correlation (ICC) estimates and their 95% confidence intervals were calculated based on a mean-rating (k = 2), absolute-agreement, two-way mixed-effects model, as recommended in 45 . The statistical analysis was performed using the SciPy library (version 1.2.0) on Python 3 and the MATLAB statistics and machine learning toolbox. P-values smaller than 1e-25 were reported as p = 0.
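As an illustration of the inter-rater agreement computation, the sketch below uses the pingouin package, which the paper does not name (it used SciPy and MATLAB); the data frame contents are toy placeholders, not study data.

```python
import pandas as pd
import pingouin as pg

# Toy example: 5 subjects, each rated by the two readers (placeholder scores).
df = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
    "rater":   ["GB", "NSB"] * 5,
    "score":   [4, 5, 6, 6, 3, 4, 7, 7, 5, 6],  # total PVS visual ratings
})

icc = pg.intraclass_corr(data=df, targets="subject", raters="rater", ratings="score")
# The mean-rating (k=2), absolute-agreement estimate corresponds to the "ICC2k"
# row; for absolute agreement, the two-way mixed and two-way random models share
# the same formula, only the interpretation differs (McGraw & Wong, 1996).
print(icc.set_index("Type").loc["ICC2k", ["ICC", "CI95%"]])
```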
Subsequently, we applied the Frangi filter 26 to T1w, T2w, and EPC images using the Quantitative Imaging Toolkit 61 , implemented similarly to 20 . The Frangi filter estimates a vesselness measure $\mathcal{V}(s)$ for each voxel at scale $s$ from the eigenvalues $\lambda_1, \lambda_2, \lambda_3$ (with $|\lambda_1| \le |\lambda_2| \le |\lambda_3|$) of the Hessian matrix of the image:

$$\mathcal{V}(s) = \begin{cases} 0 & \text{if } \lambda_2 > 0 \text{ or } \lambda_3 > 0, \\ \left(1 - e^{-R_A^2 / 2\alpha^2}\right) e^{-R_B^2 / 2\beta^2} \left(1 - e^{-S^2 / 2c^2}\right) & \text{otherwise,} \end{cases}$$

where $R_A = |\lambda_2| / |\lambda_3|$, $R_B = |\lambda_1| / \sqrt{|\lambda_2 \lambda_3|}$, and $S = \sqrt{\lambda_1^2 + \lambda_2^2 + \lambda_3^2}$. Default parameters of α = 0.5 and β = 0.5 were used, as recommended in 26 , and the parameter c was set to half the value of the maximum Hessian norm. The Frangi filter estimated vesselness measures at different scales and provided the maximum likeliness. The scale was set to a large range of 0.1 to 5 voxels in order to maximize vessel inclusion. The output of this step is a quantitative map of vesselness in the regions of interest, taken as the maximum across scales, as suggested in the original paper 26 . The range corresponds to the specific levels in scale space that are searched by the tubular-structure feature detector. Thus, the outputs across voxels comprise vesselness measured across a range of filter scales.
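The paper computed vesselness with the Quantitative Imaging Toolkit; as a stand-in, the sketch below uses scikit-image's `frangi`, which implements the same formulation and takes the maximum over scales internally. The white-matter mask argument and the number of scale samples are assumptions.

```python
import numpy as np
from skimage.filters import frangi

def vesselness(epc_volume, wm_mask):
    """Multi-scale Frangi vesselness restricted to a white-matter mask.

    PVS are hypointense (dark tubes) on EPC, hence black_ridges=True.
    Scales span 0.1-5 voxels, as in the text; gamma (the 'c' parameter)
    defaults in scikit-image to half the maximum Hessian norm.
    """
    v = frangi(epc_volume,
               sigmas=np.linspace(0.1, 5.0, 10),  # scale-space levels searched
               alpha=0.5, beta=0.5,
               black_ridges=True)
    return v * wm_mask  # keep vesselness only inside the region of interest
```

For T2w input, where PVS are bright, `black_ridges=False` would be the corresponding choice.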
In order to obtain a binary mask of PVS regions, the vesselness map should be thresholded. The binary mask enables automated PVS counting, volumetric analysis, and spatial distribution analysis. Given that the vesselness values could vary across modalities, the threshold was optimized for each input image separately. We used the number of PVS counted by the experts for threshold optimization because of the absence of a ground truth. Vesselness values $s$ were standardized using robust scaling, in which values are scaled according to the inter-quartile range (IQR) to avoid the influence of large outliers: $\tilde{s} = \left(s - \mathrm{median}(s)\right) / \mathrm{IQR}(s)$. Then, the binary PVS mask $P(s)$ was obtained by thresholding $\tilde{s}$.
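A minimal sketch of this standardization-and-masking step follows, assuming the conventional median/IQR form of robust scaling and folding in the small-component exclusion (<5 voxels) described below; the threshold value itself comes from the optimization in the next step.

```python
import numpy as np
from scipy import ndimage

def pvs_mask(vesselness, threshold, min_voxels=5):
    """Robust-scale a vesselness map, threshold it, and drop tiny components."""
    v = vesselness[vesselness > 0]
    q1, med, q3 = np.percentile(v, [25, 50, 75])
    s = (vesselness - med) / (q3 - q1)   # robust scaling by the inter-quartile range
    mask = s > threshold                 # binary PVS candidate mask

    labels, n = ndimage.label(mask)      # connected components (default 6-connectivity)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, np.where(sizes >= min_voxels)[0] + 1)
    return keep
```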
Then, the automated estimate of the total number of PVS was obtained by counting the number of connected components of the masked image P(s). Optimum thresholds were found by maximizing the concordance with the expert visual reading (concordance was used instead of absolute difference to optimize for a global threshold and avoid biasing the threshold toward the raters' counting): $t^{*} = \operatorname{arg\,max}_{t > 0}\ \tau\!\left(a_t, e\right)$, where $a_t$ and $e$ are one-dimensional arrays of PVS counts across all subjects (n = 100), obtained from the automated reading at threshold $t$ and the expert reading, respectively. Kendall's tau (τ) and Spearman's rho (ρ) were used to measure concordance and correlation, respectively. The average of the expert readings from EPC images was used for optimization. Throughout the manuscript, the optimum thresholds of 2.3, 2.7 and 1.5 were used for T1w, T2w, and EPC, respectively. After visual inspection, we noted that the imperfection of the white matter parcellation in periventricular and superficial white matter areas led to incorrect or missed PVS segmentation at white matter boundaries (an example is shown in Supplementary Fig. 1). Therefore, a dilated mask of the ventricles was subtracted from the PVS mask to exclude the periventricular voxels and remove the incorrectly segmented PVS at the lateral ventricle-white matter boundary. After obtaining the final PVS mask, the number of PVS was obtained by counting the number of connected components of the PVS mask. Small components (<5 voxels) were excluded from automated counting to minimize noise contribution. The automated technique was applied to all subjects. Finally, a one-way ANOVA was conducted to compare the effect of input image (T1w, T2w, and EPC) on the estimated number of PVS.
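The threshold search itself can then be a simple grid scan over candidate thresholds, reusing `pvs_mask()` from the sketch above; the grid bounds below are illustrative, not the values used in the paper.

```python
import numpy as np
from scipy import ndimage
from scipy.stats import kendalltau

def count_pvs(mask):
    """Number of connected components in a binary PVS mask."""
    _, n = ndimage.label(mask)
    return n

def optimize_threshold(vesselness_maps, expert_counts, grid=np.arange(0.5, 5.0, 0.1)):
    """Pick the threshold whose automated counts best agree with expert counts.

    Agreement across subjects is measured with Kendall's tau, matching the
    concordance criterion described in the text.
    """
    best_t, best_tau = None, -np.inf
    for t in grid:
        auto = [count_pvs(pvs_mask(v, t)) for v in vesselness_maps]
        tau, _ = kendalltau(auto, expert_counts)
        if tau > best_tau:
            best_t, best_tau = t, tau
    return best_t, best_tau
```

Using a rank-based measure rather than the absolute count difference keeps the threshold from simply chasing the raters' absolute numbers, which is the rationale the text gives for optimizing concordance.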
Test-retest reliability. We also evaluated the test-retest reliability of the PVS quantification with MRI. Forty scan-rescan MRI datasets from HCP 35 were used for the reliability analysis. EPC was derived, and the automated quantification pipeline was applied to the scan-rescan images (T1w, T2w and EPC images); identical parameters and thresholds were applied to the scan-rescan data. Scan-rescan reliability was then assessed using ICC, Lin's concordance 44 and Pearson correlation analysis. ICC estimates and their 95% confidence intervals were calculated based on a mean-rating (k = 2), absolute-agreement, two-way mixed-effects model.
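Of the three agreement measures, Lin's concordance correlation coefficient is compact enough to state directly; the sketch below follows the standard Lin (1989) definition, with the scan and rescan PVS counts as the two input arrays.

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two measurement arrays,
    e.g. automated PVS counts from scan vs. rescan."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population variances, as in Lin (1989)
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

Unlike the Pearson correlation, the (mx - my)² term in the denominator penalizes any systematic offset between scan and rescan, so the coefficient measures agreement rather than mere linear association.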
Results
Evaluation 1: comparing EPC with T1w and T2w. The PVS were more visible in EPC compared to T1w and T2w images (Figs 2 and 3, and Supplementary Fig. 2). The superiority of EPC was evident in both the white matter (Fig. 2) and the basal ganglia (Fig. 3). EPC allowed the detection of PVS that were hardly identifiable in T1w and T2w (see yellow arrows in Fig. 2: PVS that could barely be spotted in T1w and T2w were evident in EPC). PVS were more conspicuous in EPC compared to T1w and T2w. Moreover, the PVS-to-white matter contrast was significantly higher in EPC than in T2w (Supplementary Fig. 3: ~2 times higher).
There are a few cases, concordantly found by both readers, where the PVS count was higher in T2w than in EPC: in most of those cases, the difference was minimal (1-2 PVS) and did not change the PVS class. Only 4 cases had 6-10 more PVS in T2w than in EPC. Upon revisiting them, we noted that these were cases in which distinguishing the PVS from noise was challenging for the visual reader (due to the small size of the PVS and because those cases had particularly noisy T2w images). Readers confirmed that the EPC results for these cases are more reliable (Fig. 6). Visual inspection showed that, as expected, the Frangi filter was able to detect the tubular structures of the PVS (Fig. 7). No statistical evidence was found to suggest that the automated quantification of PVS using EPC is superior to that derived from T1w and T2w. PVS quantifications (numbers of PVS) were significantly correlated across T1w, T2w, and EPC results (all at p < 0.0001) and all the automatic measurements reported a similar concordance level with the expert scores. Lin's concordance coefficient between automated PVS counts and expert scores was 0.81, the bias correction value (a measure of accuracy) was 0.88, and the Pearson correlation coefficient was 0.61 (p = 1.5e-11). The analysis of variance showed that the effect of input image (T1w, T2w, and EPC) on the number of PVS measured was significant, F(2,294) = 48.56, p = 6.3e-19, which suggests that the automated technique for PVS quantification needs to be applied to the same image modality across study data.
Evaluation 4: test-retest reliability of automated quantification. Excellent test-retest reliability was observed in the automated PVS quantification regardless of the input image used (Fig. 8 and Table 1). The same thresholds, optimized on different subjects, were used for the scan-rescan data. Lin's concordance coefficients between scan-rescan PVS measurements were 0.89, 0.94 and 0.83 for T1w, T2w and EPC, respectively. T2w images showed the highest concordance compared to the other inputs. PVS measures were significantly correlated between scan-rescan images (r = 0.90, p = 2.8e-14; r = 0.95, p = 1.4e-20; and r = 0.85, p = 2.8e-11 for T1w, T2w and EPC, respectively). For all three inputs, no difference between the number of PVS in scan and rescan was observed.
Discussion
In this article, we presented a combined T1w-T2w approach to enhance the visibility of the PVS. EPC was utilized for both expert and computer-aided readings and was evaluated qualitatively and quantitatively. The proposed map (EPC) enhances the contrast and improves the conspicuity of the PVS, enabling expert readers to identify a significantly larger number of PVS. EPC benefits from the inverse signal profile of fluid on T1w and T2w images: when these images are combined, a magnified PVS-tissue contrast can be obtained. EPC also uses a spatial non-local mean filtering technique, which has been shown to be effective for mapping PVS 19 . We noted that even in the high-resolution T2w images of the human connectome project (0.7 mm³ resolution), it is difficult to detect small PVS (Supplementary Fig. 2), while they could be identified with this new technique.
According to the current standard visual rating scale for PVS 43 , most of the subjects in this cohort belonged to the class with the highest amount of PVS (category 4: >40 PVS). We noted that >80% of subjects on T2w were categorized as having PVS class 3 (>50%) or 4 (>30%) according to the rating scale. Moreover, the inter-subject variability of PVS was large (standard deviation of ~15), despite the fact that the subjects were all young healthy adults. This highlights the fact that the current rating approach has inherent limitations, because (1) a counting approach is highly dependent on the image resolution and quality and is difficult to generalize, (2) dichotomizing the PVS count reduces the statistical power and could underestimate the extent of variation, because considerable variability may be absorbed within each group 62 , and (3) it does not consider the morphometric and spatial information of the PVS. In order to detect intra- and inter-subject PVS alterations and ultimately to determine the role of PVS in different pathologies, there is a need to improve the current imaging and rating techniques.
With the advancement of MRI technology, structural imaging at submillimeter resolution is achievable in a plausible time frame; for example, T1w MRI with 0.7 mm³ resolution can be obtained in 8 minutes at 3T 63 . Such imaging resolution makes it possible to visualize PVS that were otherwise not apparent due to the partial volume effect. With the added visibility of PVS comes the challenge of counting and mapping, as the visual rating becomes extremely laborious. Here we presented a pipeline that can be used to automatically map PVS.
The scan-rescan experiment showed that EPC is highly reliable, with no statistical difference observed across scan-rescan results. A trivial difference between scan-rescan PVS maps was observed, which is most likely due to (1) segmentation imperfection and image intensity differences of the scan-rescan signal (e.g. due to subject motion) 64,65 , and (2) normal physiological changes of PVS in the same subject, such as potential effects of time-of-day, sleep, and hydration on morphometric estimates of PVS [66][67][68][69] . It should also be noted that the preprocessing could affect the presence of the PVS (e.g. transformation of the MRI to a common space). Here we analyzed the data in the subject space, in which the MRI were AC-PC aligned using spline interpolation during the preprocessing and artifact correction steps 36 . In addition, T1w and T2w were co-registered, which also involves interpolation. These interpolations could affect PVS quantification. We noted that the automated PVS count was slightly higher in raw T2w images compared to the AC-PC aligned T2w images when the same threshold was used (the number of PVS was on average 1.1 higher in raw images compared to the aligned images, but not statistically significantly so, p > 0.05). The optimum threshold, which depends on the quality of the data, should therefore be optimized based on the study data.
Whether PVS quantification is done by a neuroradiologist or automatically, an image with high PVS-tissue contrast is ideal. Currently, the modality of choice for PVS analysis is T2w, because it offers a higher PVS-tissue contrast compared to T1w 24 . Yet, small PVS (below the image resolution) are difficult to detect or separate from noise. Hence, PVS quantification is often limited only to enlarged PVS, despite the fact that physiological or pathological changes are expected to initiate at submillimeter scale 70,71 . In order to improve the mapping of the PVS, the MRI contrast between the PVS and the neighboring parenchyma should be increased. Besides image processing approaches, PVS contrast can be enhanced through MRI technological improvements such as optimizing the imaging sequence 24 and employing ultra-high field technology 19,24,25,72 .
Our quantitative PVS mapping and previous works showed that PVS can be mapped from an individual MRI contrast [15][16][17][18][19][20][21][22][23][24][25] . However, it should be noted that in the presence of pathology, additional image sequences (e.g. FLAIR) can be useful to discern PVS from pathological changes, such as white matter hyperintensities (WMH). Another advantage of a multi-modal approach for PVS quantification is the reduction of misclassification. For instance, vessels not surrounded by PVS are not easily distinguishable from vessels with PVS if only T1w is used, because both appear hypointense in this modality. T2w and EPC are able to solve this issue: in fact, in the absence of PVS, vessels appear hypointense in T2w and hyperintense in EPC, unlike vessels with PVS, which appear hyperintense in T2w and hypointense in EPC (see Supplementary Fig. 4). In addition, the correct identification of PVS can be achieved via the analysis of its morphometric characteristics, including size, shape, and anatomical location. Ballerini et al., for example, argued that the Frangi filter ensures specificity in PVS segmentation given the tubular structure of the PVS.
It should also be noted that many current projects (e.g. human connectome project 35 ) already acquire both T1w and T2w modalities. Therefore, a technique that could benefit from the existing data could be valuable. We showed that in the presence of T1w and T2w, the combined contrast proved to enhance PVS visibility, which is of high clinical reading value. We did not aim to convey the message that one has to acquire both modalities to be able to resolve PVS. In fact, we noted that the automated segmentation was slightly less stable when EPC was used compared to T1w or T2w automated segmentation.
Two recent studies have also used multi-modal techniques for PVS segmentation and showed that they outperformed segmentation derived from a single modality 15,20 . Ballerini et al. applied segmentation on each modality and combined the segmentation outputs using an AND operation. Boespflug et al. applied multivariate clustering, followed by morphometric filtering, to extract PVS from T1w, T2w, FLAIR and proton density images. It should be noted that the aim of these techniques was to improve the accuracy of the automated segmentation, whereas our study primarily aimed to propose a map that improves the visibility and detectability of the PVS, which can also make the visual scoring more accurate. Enhancement of PVS was also proposed using the Haar transformation 19 and more recently using a convolutional neural network 73 , both on T2w images acquired using 7T MRI. Our approach focuses on enhancing the contrast of the PVS by combining T1w and T2w images and was optimized for 3T MRI, which is more accessible in comparison with 7T MRI. Regardless of the differences, we anticipate that EPC could be used as an input to the aforementioned techniques. Here we also performed a test-retest comparison to analyze the reliability of the automated PVS quantification.
In addition to its potential clinical relevance, mapping PVS can be useful to improve the accuracy of other quantitative MRI techniques, because these could be affected by the partial presence of PVS in image voxels. Recent work has shown that ignoring PVS could systematically affect how quantitative MRI measures such as diffusion tensor imaging (DTI) and spin echo dynamic susceptibility contrast (DSC) measures are interpreted 74,75 . Such a contribution and its potential confounding effect may be ameliorated if PVS is mapped and/or included in the analysis.
A limitation of multi-modal combination techniques is that they require additional scan time and are therefore more prone to subject motion, which could negatively affect the co-registration. An interleaved acquisition was shown to be highly effective in ameliorating the co-registration issue 29 . Another limitation of EPC is that it requires the same image resolution for T1w and T2w. These sequences are often acquired at different resolutions, particularly in clinical practice (T2w is often acquired with thicker axial slices). Future investigations could focus on determining the extent to which the resolution difference between T1w and T2w affects the EPC quality and whether an intra-subject co-registration could amend this limitation.
A limitation of the automated quantification approach is that its performance depends on the parcellation accuracy. Imperfection of the brain parcellation could affect the automated quantification (see an example in Supplementary Fig. 1). While false positive PVS in the periventricular area could be removed by applying a dilated mask of the ventricles, the false negative PVS in the superficial areas of the white matter are more challenging to recover. While PVS missed in the superficial white matter are not expected to affect the count, they could affect the volumetric estimates. Further efforts are required to explore the effect of brain parcellation on PVS mapping or to build computational tools that minimize the parcellation dependency.
Conclusions
Our combined T1w-T2w approach (EPC) has been demonstrated to enhance the visibility of the PVS, resulting in an improvement of PVS mapping. EPC allowed both the expert readers and the computer-aided algorithm to achieve a more accurate and precise quantification of PVS. This novel method, which can be easily applied to a number of MRI datasets, aims to overcome the limitations of current MRI sequences in PVS detection and quantitative analysis. This is relevant not only to better characterize the role of PVS when they are enlarged in pathological conditions, but especially to perform quantitative research on PVS when they are small, such as in physiological and prodromal states.
Data Availability
We have used the human connectome project (HCP) dataset, which is already available to researchers.
"year": 2019,
"sha1": "0293eced90d9998631c016f04b16497a232810bd",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-019-48910-x.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0293eced90d9998631c016f04b16497a232810bd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Computer Science"
]
} |
PREDICTORS OF RESEARCH CULTURE IN CITY SCHOOLS DIVISIONS AND ITS INFLUENCE TO TEACHER PERFORMANCE
This study was conducted to determine the predictors of research culture in the city schools divisions in Laguna and their influence on teacher performance. A total of 898 elementary teachers of public schools in the City Schools Divisions of Binan, Cabuyao, Calamba and Sta. Rosa participated in the study. The major tool in gathering the data was an adopted questionnaire checklist composed of five parts: a. Teacher-Related Factors; b. School-Related Factors; c. Contextual Variable; d. Research Culture; e. Teachers' Performance in Research. The study utilized the descriptive method of research, since its purpose was to find out the predictors of research culture, such as the teacher-related factors, school-related factors and the contextual variable, and their influence on student and teacher performance. It also employed a quantitative approach that used frequency and percentage distribution, the simple mean and the Pearson correlation coefficient. Based on the results, teacher-related factors, school factors and the constraints variable were found to be predictors of research culture as described by research behavior, research climate and research policy, wherein teacher-related factors had the greatest overall effect on both research behavior and research climate, while school factors had the highest percentage of effect on research policy. Among the teacher factors, skills greatly affected both research behavior and research climate. Therefore, it is concluded that teacher-related factors are the most influential and have the highest percentage of effect on research culture, while school-related factors only affect research policy, which is a part of research culture; and research culture has not significantly influenced teachers' research performance. Hence, it is recommended that rewards and recognition in any form may be given to teacher-researchers who would represent the school in research forums and become research speakers during enhancement training or capability building, and that the proposed action plan may be adopted for research implementation.
In the Philippines, the Department of Education has ordered school heads and administrators across the country to adopt the "enclosed Basic Education Research Agenda", which promotes the conduct of research in schools by teachers (DepEd Order No. 39, s. 2016). The purpose is to discover schools' issues and solutions and to form a part of teachers' professional development and skills enhancement. By doing research, teachers are believed to improve their teaching practices for the betterment of the school and of the students' learning.
In addition, DepEd Order No. 16, s. 2017, also known as the Research Management Guidelines, provides guidance in the management and conduct of research initiatives at the national, regional, schools division, and school levels to further promote and strengthen the culture of research in basic education. This policy also covers instructions for eligible DepEd employees in availing of research funds. Although educational institutions in the Philippines have encouraged their teachers to be involved in research, as it is seen to be useful for their professional development (Morales, 2016) [2] and for their teaching career, teachers are confronted with many issues that affect their motivation to undertake research.
As emphasized by the Department of Education (DepEd), doing research has become one of the important professional development programs for teachers. Teachers from both private and public institutions are encouraged to conduct action research in order to identify and address the teaching and learning issues and concerns in the classrooms and in the school. Thus, doing research has become a part of every teacher's teaching evaluation and performance appraisal at the end of the school year (Ullah, 2016) [3].
The importance of doing research in the professional development of teachers and in their practices has been widely acknowledged in the literature. For one, it equips teachers and other education practitioners with the skills necessary for identifying what the problem is in a school, and knowing how to address that problem systematically (Hine, 2013). Two, it serves as an opportunity for educators to self-evaluate their teaching practices (Hong and Lawrence, 2011) [4]. Three, it allows teachers to make a change in their pedagogical practices that will have a positive impact upon teaching and learning (Mahani, 2012) [5].
However, despite its positive effects upon classroom teaching and learning, a number of studies have reported factors that prevent teachers from doing research. Crowded teaching timetables and heavy teaching workloads (Kutlay, 2013; Morales, 2016), insufficient research training (Ellis & Loughland, 2016) [6], lack of research skills (Vasquez, 2017), lack of financial support (Biruk, 2013) and limited time to do research (Norasmah & Chia, 2016) [7] often constitute the primary challenges and concerns faced by teachers and other educators aspiring to undertake research. As presented in Table 1, the majority of the respondents were aged 40 years or younger, accounting for almost 69 percent of the total; 772 or 85.9 percent of the teachers in the city schools in Laguna were female, far more than the male teachers, who numbered only 126 or 14.1 percent. More than half of the teacher respondents held a baccalaureate degree, comprising 666 or 74.1 percent of the total, while only 8 or 0.9 percent held a doctorate degree. The majority of them (66.6 percent) had been in the teaching service for 1-10 years, only 17 or 1.9 percent had reached 31-40 years in the teaching profession, and only 33 percent of them were considered seasoned teachers. Many of them were Teacher 1, which confirms that many were new to the teaching career, as explained by the number of years in their profession, and 7 or 0.8 percent of the total respondents were Master Teacher II. Table 2 reveals that the teacher-respondents had a moderate attitude toward research activity, as indicated by the composite mean of 3.24. Moreover, the indicator "perception of research as a function of both physical and intellectual capitals" (3.28) was given the highest evaluation, while the indicator "attitude in research activities for one to become proficient" (3.17) received the lowest. By doing research, teachers are believed to improve their teaching practices for the betterment of students' learning and of the school; they are school partners in identifying the schools' issues and the solutions to their problems, especially those related to teaching and learning processes. Therefore, research activity in school collectively builds capacity and intellectual capital for the benefit of all (Berliner, 2002) [8]. This is consistent with the study of Biruk (2013) [9], in which teacher-participants held a positive attitude towards research.
Further, it also enables them to be commended for high performance in their profession and for the realization of the institution's philosophy, vision, and mission. Teachers can have the opportunity to develop and improve their teaching practices. Research enables educators to follow their interests and their needs as they investigate what they and their students do. Teachers who practice research find that it expands and enriches their teaching skills (Morales, 2016). Table 3 shows the teachers' efficacy in research activity. It can be deduced from the table that teachers have moderate self-efficacy in research, especially on the belief that they will succeed in whatever topic they would like to work on and that hard work in doing research will pay off (3.82). The result aligns with the objective of the DepEd, whereby teachers are believed to improve their teaching practices for the betterment of students' learning and of the school when they do research (DepEd, 2016). This may imply that they really would like to engage in research activity but need a hard push, given the lower evaluation they gave to the other indicators enumerated. But this desire is already a positive attitude towards research, which needs more motivation from the school heads. Teachers need support from school management and authorities in order to start doing research (Biruk, 2013).
However, the teacher respondents are not sure whether their ability to do research grows with their effort or whether they can do research only with determination, with mean values of 3.06 and 3.09, respectively. Their responses only confirm that they would like to do research but believe that they are inefficient and ineffective in that area. This is in consonance with what has been said by Ullah (2016), that conducting research in the country, especially in the public secondary schools, may be limited since only a few teachers have tried to do it. This is where the teachers should be encouraged. School heads must do something to reorient the teachers about research. Teachers must internalize that those who are involved in research will see its importance in professional development and in their teaching career (Morales, 2016). Table 4 shows the motivation of the teacher-respondents in research activity. The findings show that they have moderate motivation because they are inspired by the help they get from their co-teachers who are also researchers (3.26). In addition, they also agree that they are proud to be teacher researchers (3.22) and that they do their best to achieve a maximum level of performance in research (3.21). They also agree on all the other indicators given but gave them a lower assessment. This implies that they are more motivated when somebody boosts their morale and when they have someone who will guide and assist them when doing research. In the study of Ricero (2018) [10], among the reasons of motivation to conduct research, the encouragement and support from superiors and co-teachers achieved the highest approval. Teachers feel the need and importance of conducting action research when they receive different forms of support. Dundar and Lewis (2018) [11] also reiterate that recognizing and praising the work that teacher-researchers do, both the quality of the work and the effort they put into it, helps boost their morale and motivates them in research.
On the other hand, the least assessed idea was that they envy teachers who receive an award in research (2.62). Although they are moderately motivated and they envy other teachers when the latter receive awards in research, this is the least salient feeling they have about research. If so, they are less likely to be mindful of the research award that they might get, and therefore an award is not the best motivating factor in enticing teachers to do research. This result is contrary to that of Norasmah (2016) [12], who found that monetary incentives are not the only viable and effective instrument to induce successful research, but awards are considered to further research performance. Awards have certain features that render them attractive in the academic setting. Award givers can subjectively evaluate overall performance, as long as this is done in a transparent and fair way. Further, awards motivate scholars due to their value in signaling research talent and motivation. They are valued because they convey appreciation and recognition on the part of colleagues and the public. They may thereby raise intrinsic motivation to do research and generate loyalty to the awarding institution. Table 5 illustrates the teacher-respondents' skills in research activity. The table shows that teachers have a moderate understanding of the relevant research methodologies and techniques and can work with data-gathering procedures as well as the statistical treatment of data, as suggested by the mean average of 3.19, which is the highest among all the indicators. Similarly, in the study of Brew (2013) [13], the public secondary and elementary school teachers in Antipolo had an average level of research capability in writing the different parts of a research proposal, including methodologies and a publishable research paper or article, but were less capable in applying the American Psychological Association format.
However, the respondents gave a somewhat lower assessment on the belief that they demonstrate awareness of issues relating to the rights of researchers, research subjects, and others who may be affected by the study in terms of confidentiality, ethical issues, attribution, copyright, malpractice, ownership, and the data protection act (3.00). Nonetheless, these issues can be addressed through capability-building seminars. As these are believed to contribute significantly towards developing research skills for teachers, adequate research training, workshops, and other support should be given to teachers to motivate them to conduct research studies (Mills, 2012) [14] and also to minimize barriers in the development of research culture in an institution (Berliner, 2002). Table 6 displays the school-related factors that facilitate research activity. Teacher-respondents said they were moderately affected by these school factors, as indicated by the composite mean of 3.05. However, they gave the highest rating to the adequacy of research facilities in school, as indicated by its mean average of 3.19. With the presence of research facilities, researchers will not have a hard time complying with what is required in their study because of the accessibility of the materials needed. The presence of research facilities in school comes through the initiative of the school heads; this only means that the support of heads in the research activity of teachers is contributory to the research culture in the school (Evans, 2011). This supports Berliner's (2002) belief that lack of school research funding, unmanaged workload and lack of research materials are barriers to research culture.
On the other hand, the appropriateness of the materials needed for research was rated lower by the respondents, with a mean value of 2.86, though still verbally described as agree. It seems that they are not fully satisfied with the materials they need when doing their research. These may refer to library facilities, updated materials, computers with internet access, a photocopier, printing facilities, a proper working place, research journals and others. In that case, researchers have to visit different libraries and find reading materials that would support their studies and contribute to developing the research interests of the teachers. They may also share what they have in school and start collaborating with one another so that they may also learn from one another's inputs. Through such activities, teachers may have the tendency to continue and strive to delve into research and to find a colleague who will support them and act as an adviser (Hardre, 2012) [15]. Table 7 depicts the constraints encountered by researchers. It can be observed from the result that most of them said that the city schools' unclear institutional research policy has moderately affected research culture, as suggested by its mean value of 3.21. The teachers may not be aware of the research policy, but DepEd has ordered school heads and administrators across the country to adopt the "enclosed Basic Education Research Agenda", which promotes the conduct of research in schools by teachers. The purpose is to discover schools' issues and solutions and to form a part of teachers' professional development and skills enhancement. By doing research, teachers are believed to improve their teaching practices for the betterment of students' learning and of the school (Ulla, 2017). The result is also the same as what was found in the study of Morales (2016), that challenges like tight teaching timetables and heavy teaching workloads resulted in limited research involvement.
However, the respondents gave the lowest rating to the premise that they have no spare time to do research (2.71). They do not completely accept this idea. It may be that they really do have time; only, there are other reasons for not fully engaging in research activity. In the study of Ulla et al. (2017), teacher-respondents revealed that most of their time was spent on classroom teaching, marking papers, and preparing lessons, which leaves them no time to do research. They stated specifically that if their teaching load were reduced to 18 or 20 hours of teaching a week, they would be motivated to do research. Table 8 presents the research behavior of the teacher-respondents in the city schools in Laguna. The findings suggest that the schools have moderate research behavior, led by the premise that teachers seek advice from experienced colleagues to improve their research capability, with a mean value of 3.18. It is good that teachers are asking for help from fellow teachers because, through such initiative, research culture can be developed within the school (Horodnic and Zait, 2015) [16]. When a researcher enters the research field, his or her colleagues, research fellows and friends are of great value. They are very helpful to researchers in providing productive feedback, encouragement, and the additional support that the researcher may need. This is also important in enhancing the academic productivity of new researchers.
Among the indicators the respondents agreed on, they gave the lowest rating to the behavior of discussing research with researchers from foreign universities and institutes (3.04). This may be because they have no foreign contacts and feel uneasy and uncomfortable talking with foreigners, especially about research activity. They may instead consider discussing research with school heads or co-teachers, which may create collaboration. Collaboration in research activities provides an opportunity to exchange knowledge and expertise among collaborators (Mawoki, 2011) [17]. Although the respondents agree that the schools have a moderate climate because the department head can influence research productivity and other academics by being a good exemplar of research behavior (3.01), this belief was given the lowest rating. Some may observe such behavior from their school heads, but others may not, which is why the respondents did not consider this a prevalent practice in their schools. Studies have shown, however, that if academics are stimulated by a leader, they will perform well in research. A leader's research engagement, performance, and outputs have a significant impact on the research motivation of academics because academics consider their leaders good exemplars of what researchers should be (Nguyen, 2015) [18].

Table 10 shows the observed research policy in the city schools of Laguna. It can be seen from the results that the respondents agree the schools have a moderate research policy regarding the support fund provided by the school for research projects at the college level and the reward policy for academics who produce good research outputs, with a mean value of 3.06. They think that funds are essential for research to flourish. This is DepEd's way of establishing a research culture within public schools: the Department has provided policies and mandates largely geared towards the improvement of research productivity (Ricero, 2018). Nguyen (2015) also found that research funding was one of the most important factors motivating academics to engage in research; sufficient funding contributes to both the quantity and the quality of research outputs. Rewards are another factor that moves teachers to do research, as another study noted that lack of financial support makes teachers feel demotivated and uninterested in conducting research studies. In the present study, the teachers said that allocating a budget for teachers and offering research incentives inspire and motivate them to practice their research skills (Alonso et al., 2010) [19].
Then again, the results indicated that the respondents agree the schools are moderate in providing support funds for publishing articles in international conferences and for research projects at the university level (3.01), although these practices were given the lowest rating. This implies that not all teachers are recipients of such funding. They may have limited access to international conferences because such conferences are mostly attended by private schools. DepEd Order 16, series of 2017, entitled Research Management Guidelines, states that the Department of Education continues to promote and strengthen the culture of research in basic education. The Department established the Research Management Guidelines (RMG) to provide guidance in managing research initiatives at the national, regional, schools-division, and school levels. The enclosed policy also improves support mechanisms for research such as funding, partnerships, and capacity building; international conferences are not mentioned. Consistent with this, the Department launched a monthly research forum as a venue where teachers can present their research (deped.gov.ph).

It can be seen from the results that several research studies were proposed, but most of them did not prosper and remained mere proposals. Given this result, teachers have to be encouraged and motivated so that proposed studies are successfully completed, presented and later on utilized, considering that teachers are the most significant contributors in promoting research culture (Mills, 2012). Research is only useful when it is put into practice. The challenges and other reasons for not continuing proposed studies should be resolved so that a research culture can be established within the school. Research enables educators to follow their interests and their needs as they investigate what they and their students do. Teachers who practice teacher-research find that it expands and enriches their teaching skills and puts them in collaborative contact with peers who share an interest in classroom research (Hine, 2013). Teacher research can change a teacher's practice, but it can also have a profound effect on the development of priorities for school-wide planning and assessment efforts, as well as contribute to the profession's body of knowledge about teaching and learning.

The relationship of teacher-related factors to research culture is shown in Table 12. As indicated in the table, teacher-related factors such as self-efficacy, motivation, attitude and skills are significantly related to research behavior, research climate and research policy, which are the indicators of research culture. The result implies that teacher factors have affected the research culture in the schools. When teachers have high self-efficacy, a positive attitude towards research, very good research-writing skills and high motivation, they can build a stronger research culture. Table 13 shows the relationship of school-related factors to research culture. School factors such as financial support, workload, adequacy of facilities, lack of materials and administration support are significantly related to research culture as described by research behavior, research climate and research policy. The result likewise implies that the school factors have affected the research culture in the schools.
This result supports what Nguyen (2015) found in his study: research self-efficacy, research self-competence, financial support for research, and research grants were significantly related to research culture. Moreover, Kutlay (2012) [20] investigated the influence of teachers' personal factors on research in Argentina, Australia, Brazil, Canada, China, Finland, Germany, Hong Kong, Italy, Malaysia, Norway, the UK, and the USA. The researchers found that the higher these personal factors, the higher the research productivity of academics.

Table 14 presents the relationship of the contextual variable to research culture. It can be seen that the contextual variable is moderately related to research culture as illustrated by research climate, research behavior and research policy. The result also suggests a significant relationship between these variables, as indicated by the computed Pearson values. The constraints encountered during research activity have affected research culture: the more problems teachers encounter, the more likely they are to abandon their research work. Conducting research in the country, especially in public secondary schools, may be limited, since only a few teachers have tried to do it because of the different problems they experienced in the process of doing their studies (Ullah, 2016). Tight teaching timetables and heavy teaching workloads are among these problems. Although educational institutions in the Philippines have encouraged their teachers to be involved in research, as it is seen to be useful for their professional development, teachers are confronted with many issues that affect their motivation to undertake research (Morales, 2016).

Predictors of research culture as to research behavior in the city schools of Laguna are presented in Table 15. Teacher-related factors, school-related factors and the contextual variable are all predictors of research behavior; of the three, teacher-related factors have the highest effect, predicting 35.2% (R²), or 34.9% (Adj. R²), of research behavior, while the contextual variable predicts only 19.2% (R²), or 19.1% (Adj. R²). This implies that when teachers are equipped with these factors, there is a better chance that they will take the initiative to do research. It is the will of the teacher that has the greater chance of influencing the research drive in the school. Among the teachers' skills, attitude, motivation and self-efficacy, the most influential factor is skills, as implied by its regression coefficient of 0.355 on the schools' research behavior, while self-efficacy has the least effect at only 0.062. The result is consistent with Nguyen (2015), whose findings showed that research self-efficacy, research self-competence, financial support for research, and research grants were significant predictors of the allocation of effort to research; among them, research self-efficacy and research self-competence, which can be considered teacher-related factors, were the most significant predictors indicated by all respondents. Table 16 indicates the predictors of research culture as to climate.
It can be seen that, in terms of the research climate in the schools, teacher-related factors, school factors and the contextual variable are significant predictors, but again teacher-related factors have the greatest effect, predicting 27.8% (R²), or 27.4% (Adj. R²), of research climate, while the contextual variable has the least effect, predicting 12.3% (R²), or 12.2% (Adj. R²). Among the teacher-related factors (attitude, motivation, self-efficacy and skills), skills have the greatest effect, with a computed regression coefficient of 0.324, whereas motivation has the least effect at 0.092. It can be inferred from the result that teachers' skills are what matter most in many research activities. Other factors such as motivation and attitude are present, and yet research skills are the key to building a supportive and collaborative research climate within the school because they give an edge over those who struggle in such endeavours (Lertputtarak, 2008) [21]. Moreover, the result matches what was found by Vasquez (2017) [22]: five predictors, namely academic degree, rank, administrative position, motivation to develop knowledge and learn from research findings, skills, and problems encountered, contributed significantly to predicting research culture.

With regard to research policy, shown in Table 17, teacher-related factors, school-related factors and the contextual variable are significant predictors; however, school-related factors have the greatest overall effect among the three, at 18.1% (R²), or 18.2% (Adj. R²). In terms of teacher factors, skills affected the schools' research policy, as suggested by the computed regression coefficient of 0.355. By contrast, the contextual variable has the least effect, at 10.2% (R²), or 10.1% (Adj. R²). The result is understandable considering that the policy followed in research is set and implemented by the school. Research funding, guidelines, research incentives and other policies are controlled by the school; therefore, in terms of research policy, the school has full control, and the teachers only have to follow and comply with what is asked of them.

Table 18 shows the predicted values in relation to completed research based on the full logistic regression model. Specifically, it shows how many cases are correctly predicted: 837 teachers with no completed research were observed and correctly predicted to have no completed research; similarly, none of the teacher-respondents observed with completed research were correctly predicted to have research. Some teachers were observed with no research but predicted to have research; likewise, 61 teachers were observed with completed research but were still predicted to have none. However, in the subsequent table of significant influences on teachers' research performance, not a single predicting variable showed significance. Still, in considering the assumed predictive influence of each factor on teachers' research performance, the exponentiation of the coefficients (the odds ratio) is worth looking into.
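To make the last point concrete, the sketch below shows how odds ratios are read off a fitted logistic regression model. It is a minimal illustration on simulated data, not the study's actual model or dataset; the variable names, sample size and effect sizes are assumptions made purely for demonstration.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical teacher-level data: three factor scores and a binary
# outcome (1 = has a completed research study).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "teacher_factors": rng.normal(3.0, 0.5, 300),
    "school_factors":  rng.normal(3.0, 0.5, 300),
    "constraints":     rng.normal(3.0, 0.5, 300),
})
# Assumed data-generating effect, for illustration only.
logit_p = -6.0 + 1.2 * df["teacher_factors"]
df["completed"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

X = sm.add_constant(df[["teacher_factors", "school_factors", "constraints"]])
fit = sm.Logit(df["completed"], X).fit(disp=False)

# Exponentiating a coefficient gives the odds ratio: the multiplicative
# change in the odds of completing research per one-unit increase in
# that factor, holding the others fixed.
print(np.exp(fit.params))

An odds ratio above 1 suggests the factor raises the odds of completing research; a value below 1 suggests it lowers them.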
Teachers' performance in research activities was not affected by the existing research culture in the schools. This differs from the study of Vasquez (2017), where five predictors, namely academic degree, rank, administrative position, motivation to develop knowledge and learn from research findings, skills, and problems encountered, contributed significantly to predicting research culture. These five predictors also accounted for a substantial amount (37.2%) of the variance in research culture. Administrative responsibilities may negatively affect research activities because they reduce the time available for research. The other three components of motives were in fact rated higher than the desire to develop knowledge and to learn from research findings, yet their contribution to predicting research culture beyond the other variables turned out to be insignificant. It is therefore no surprise that the contribution of conducting research out of commitment to policy to predicting research culture was found to be non-significant, although it was highly rated.
Conclusions:-
In relation to the presented summary of findings, the following conclusions were derived: teacher-related factors, school-related factors and the constraints variable have influenced research culture; hence, the null hypothesis asserting non-significance between the aforementioned variables is rejected. Of the three variables, which are all predictors of research culture, teacher-related factors are the most influential, with the highest percentage of effect on research culture, while school-related factors affect only research policy, which is one part of research culture; hence, the null hypothesis is rejected. Research culture has no significant influence on teachers' research performance; hence, there is no sufficient evidence to reject the null hypothesis.
Recommendations:-
In the light of the given summary of findings and derived conclusions, the following recommendations were drawn: district supervisors and school heads may conduct different activities that will enhance teachers' attitude, motivation and skills in research, such as mentoring programs, capability seminars, and enhancement seminars during teachers' meetings with invited speakers; school heads should encourage and require teachers, especially master teachers, to conduct basic/action research; school heads may look for institutions that can help in training teachers, funding research activities and attending free local and international research conferences; school heads may give enticing incentives, rewards, and recognition for the greatest number of research studies completed, presented, published and utilized; and the proposed action plan designed to enhance, sustain, and raise the bar of research culture is recommended for implementation. | 2020-08-06T09:05:57.122Z | 2020-07-31T00:00:00.000 | {
"year": 2020,
"sha1": "2fc813a9aeda604a807d032c8608f0c79b5f9a95",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.21474/ijar01/11314",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "70ed03a8a8ca5dfbc10e2a78297969c483e8b5c9",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
4704405 | pes2o/s2orc | v3-fos-license | Efficacy and safety of canagliflozin as add‐on therapy to teneligliptin in Japanese patients with type 2 diabetes mellitus: Results of a 24‐week, randomized, double‐blind, placebo‐controlled trial
Aims To investigate efficacy and safety of the sodium–glucose co‐transporter 2 (SGLT2) inhibitor canagliflozin administered as add‐on therapy to the dipeptidyl peptidase‐4 (DPP‐4) inhibitor teneligliptin in patients with type 2 diabetes mellitus (T2DM). Materials and methods We conducted a multicentre, randomized, double‐blind, placebo‐controlled, phase 3 clinical trial in Japanese patients with T2DM who had inadequate glycaemic control with teneligliptin. Patients were randomized to receive teneligliptin 20 mg plus either canagliflozin 100 mg (T + C, n = 70) or placebo (T + P, n = 68) once daily. The primary endpoint was the change in glycated haemoglobin (HbA1c) from baseline to week 24. Other endpoints included changes in fasting plasma glucose, body weight, proinsulin/C‐peptide ratio, homeostatic model assessment 2‐%B and adverse events. Patients also underwent mixed‐meal tolerance tests. Results The difference between the T + C and T + P groups for HbA1c change from baseline to week 24 was −0.88% (least‐squares mean, P < .001). Fasting plasma glucose, body weight and the proinsulin/C‐peptide ratio were significantly lower in the T + C group than in the T + P group. Homeostatic model assessment 2‐%B improved with T + C compared with T + P. The T + C group exhibited a decrease in the 2‐hour postprandial plasma glucose and plasma glucose area under the curve (AUC0‐2h) in a mixed‐meal tolerance test. No significant between‐group differences were observed for C‐peptide AUC0‐2h or glucagon AUC0‐2h after meals. Incidences of adverse events were 60.0% and 47.1% in the T + C and T + P groups, respectively. No hypoglycaemia was observed. Conclusions Canagliflozin administered as add‐on therapy to teneligliptin was effective and well tolerated in Japanese T2DM patients.
| INTRODUCTION
Dipeptidyl peptidase-4 (DPP-4) inhibitors promote insulin secretion in a blood glucose-dependent manner by increasing active glucagon-like peptide-1 (GLP-1) levels, thereby decreasing glucose levels without causing hypoglycaemia. In Japan, several DPP-4 inhibitors, including teneligliptin, 1,2 are available and are used in a wide range of patients with type 2 diabetes mellitus (T2DM). Recently, it was reported that DPP-4 inhibitors are more potent in lowering glycated haemoglobin (HbA1c) levels in Asian populations with T2DM, including Japanese patients, compared with Western (Caucasian) populations. 3,4 The difference in efficacy of DPP-4 inhibitors between these populations may be related to differences in the pathophysiology of T2DM. For example, Caucasian patients are more likely to develop insulin resistance associated with obesity which then leads to T2DM, whereas Japanese patients more often exhibit a decrease in insulin secretion capacity that leads to the development of T2DM. [5][6][7][8] The safety of DPP-4 inhibitors, particularly the absence of hypoglycaemia, together with the difference in T2DM pathophysiology, possibly contributed to the rapid increase in prescriptions for DPP-4 inhibitors for T2DM in Japan since their launch in 2009. [9][10][11]

Sodium-glucose co-transporter 2 (SGLT2) inhibitors, novel antidiabetic agents, have a selective inhibitory effect on SGLT2, a protein expressed in the proximal renal tubules. They enhance urinary glucose excretion, which consequently reduces hyperglycaemia. The efficacy and safety of SGLT2 inhibitors, either as monotherapy or in combination with other drugs, in Japanese patients with T2DM has been reported in several studies. [12][13][14][15][16][17][18][19] Because a large proportion of patients with T2DM in Japan are taking a DPP-4 inhibitor, SGLT2 inhibitors are often prescribed as add-on therapy to DPP-4 inhibitors in clinical practice. 20 SGLT2 inhibitors block glucose reabsorption in the kidney, and their use in combination with a DPP-4 inhibitor brings together distinct mechanisms of action for reducing glucose levels. 21 Thus, combination therapy with a DPP-4 inhibitor and SGLT2 inhibitor would be expected to achieve a potent plasma glucose-lowering effect.
Previous clinical trials investigating the use of SGLT2 inhibitors as add-on therapy to DPP-4 inhibitors in Japanese patients with T2DM and inadequate glycaemic control have been limited to open-label studies, for example, that by Inagaki et al. 14 To date, there have been no randomized, controlled trials examining the efficacy of an SGLT2 inhibitor added on to a DPP-4 inhibitor in Asian patients. Therefore, to evaluate the efficacy and safety of SGLT2 inhibitors as add-on therapy to DPP-4 inhibitors, we conducted a randomized, double-blind, placebo-controlled clinical trial in which Japanese patients with T2DM and inadequate glycaemic control with teneligliptin monotherapy along with diet and exercise were randomized to either the SGLT2 inhibitor canagliflozin or placebo once daily as an add-on to teneligliptin.
| Ethical statement
The Institutional Review Boards at all participating institutions approved the study after reviewing its ethical, scientific, medical and pharmaceutical validity. The study was conducted in accordance with the ethical principles of the Helsinki Declaration, the Japanese "Law for Ensuring the Quality, Efficacy, and Safety of Drugs and Medical Devices," Good Clinical Practice and the study protocol. The participating institutions are listed online (Appendix S1). The trial was registered at ClinicalTrials.gov (NCT02354235).
| Patients
Informed consent was obtained from all patients prior to enrolment in the trial. Japanese patients with T2DM, aged 20 to 75 years, who had undergone a diet and exercise regimen and had received teneligliptin 20 mg monotherapy once daily for at least 8 weeks prior to initiation of the run-in period were enrolled. Patients using antidiabetic drugs other than teneligliptin were also eligible, provided that the other antidiabetic drug was withdrawn for an 8-week washout period; that is, they used only teneligliptin for at least 8 weeks before the run-in period (Figure S1, Supporting Information). Other inclusion criteria were HbA1c ≥ 7.0% and < 10.5% at initiation of the run-in period and by week 2 of the run-in period, with a difference of ≤0.5% in HbA1c between those time points, and fasting plasma glucose (FPG) of ≤270 mg/dL at initiation of the run-in period. Exclusion criteria and withdrawal criteria are given online (Tables S1 and S2, Appendix S1, respectively).
| Study design, treatments and blinding
The study design is shown in Figure S1, Appendix S1. The study was a multicentre, randomized, double-blind, placebo-controlled trial. During the 4-week run-in period, all patients were administered teneligliptin 20 mg orally and placebo once daily before breakfast. At the start of the treatment period, all patients continued to take teneligliptin 20 mg and were randomized 1:1 in a double-blind manner by a permuted block method to receive either placebo (T + P) or canagliflozin (T + C) 100 mg (the approved dose in Japan), administered orally once daily before breakfast. After the treatment period, patients were observed for an additional 2 weeks (post-treatment observation period), during which they continued to receive teneligliptin 20 mg orally once daily before breakfast.
At the end of the 4-week run-in period (hereafter referred to as baseline) and at the end of the treatment period, patients underwent a mixed-meal tolerance test after a ≥10-hour fast (water was permitted). After basal blood sampling, patients consumed (within 15 minutes) a standard test meal (energy content, 500 kcal; carbohydrate, 60%; lipid, 25%; protein, 15%). Blood samples were obtained at 0.5, 1 and 2 hours after beginning the meal.
| Efficacy outcomes
The primary endpoint was change in HbA1c from baseline to the end of the treatment period. Secondary endpoints included the following: plasma glucose, C-peptide and glucagon areas under the curve from 0 to 2 hours (AUC0-2h) (preprandial to postprandial) and change in the C-peptide AUC0-2h/plasma glucose AUC0-2h ratio.
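As an illustration of how AUC0-2h can be computed from the sparse sampling schedule (0, 0.5, 1 and 2 hours), the following sketch applies the trapezoidal rule; the glucose values are hypothetical, chosen only to show the arithmetic.

# Trapezoidal-rule AUC over 0-2 h using the mixed-meal test sampling times.
times   = [0.0, 0.5, 1.0, 2.0]           # hours after the start of the meal
glucose = [160.0, 230.0, 250.0, 210.0]   # mg/dL, illustrative values only

auc_0_2h = sum(
    (t2 - t1) * (y1 + y2) / 2.0
    for (t1, y1), (t2, y2) in zip(zip(times, glucose), zip(times[1:], glucose[1:]))
)
print(auc_0_2h)  # 447.5 h*mg/dL, the same units as the reported AUC0-2h values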
| Safety
Safety endpoints were adverse events (AEs), hypoglycaemia, laboratory values (haematology, blood biochemistry and urinalysis), electrocardiogram and vital signs. AEs and drug-related AEs were classified according to MedDRA (J. version 18.1) System Organ Class and Preferred Term. For each event, the number of patients and the incidence were calculated. Hypoglycaemia was classified according to the criteria summarized in Appendix S1.
| Determination of sample size
For the change in HbA1c from baseline to the end of the treatment period, the mean difference to be detected in the T + C group compared with the T + P group was assumed to be −0.50%, considering that a decrease of >0.3% is a clinically significant change in HbA1c,22 and the SD was estimated to be 0.8% based on a previous study.13 Under these assumptions, 55 patients per group were required (t test) to ensure a power of 90% with a 2-sided significance level of 0.05. Therefore, taking into consideration the safety evaluation and the expected number of withdrawals, the target sample size was determined to be 140 patients (70 patients per group).
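The stated figure of about 55 patients per group can be reproduced with a standard two-sample t-test power calculation, as in the sketch below. The numbers come from the text above; the statsmodels call is one common way to perform the calculation, not necessarily the software used in the trial.

# Sample size for a two-sided, two-sample t test: detect a -0.50% HbA1c
# difference with SD 0.8%, 90% power, alpha = 0.05.
from statsmodels.stats.power import TTestIndPower

effect_size = 0.50 / 0.8  # Cohen's d = assumed mean difference / SD
n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.90, alternative="two-sided"
)
print(n_per_group)  # approximately 55 patients per group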
| Statistical analysis
All statistical analyses were performed using SAS for Windows (v9.2 or later). A 2-sided test was used, with the significance level set at α = .05. Data that were not measured or were immeasurable because of sample issues were handled as missing data. Missing values were imputed with the last available value, using the last observation carried forward (LOCF) approach.
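The LOCF imputation described above amounts to a per-patient forward fill over time, which the following sketch illustrates on hypothetical visit data (the column names and values are assumptions for demonstration).

import pandas as pd

# Hypothetical longitudinal HbA1c data; None marks a missing measurement.
df = pd.DataFrame({
    "patient": [1, 1, 1, 2, 2, 2],
    "week":    [0, 12, 24, 0, 12, 24],
    "hba1c":   [8.1, 7.4, None, 8.5, None, None],
})

# LOCF: within each patient, carry the last available value forward in time.
df = df.sort_values(["patient", "week"])
df["hba1c_locf"] = df.groupby("patient")["hba1c"].ffill()
print(df)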
Efficacy was analysed using the full analysis set. For measurements at the end of the treatment period, descriptive statistics, change from baseline to end of treatment period for each group, 95% confidence interval (CI) of the mean for each group, between-group difference (T + C − T + P group) and 95% CI of the difference were calculated. The impact of the baseline measurement on changes in each efficacy endpoint was determined by analysis of covariance using the baseline measurement as the covariate. For the primary endpoint, the least square mean (LS mean) and standard error (SE) of the LS mean were calculated for each group. The point estimate of the between-group difference in LS mean (T + C group − T + P group) as well as the SE, 95% CI and P value were also calculated.
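A baseline-adjusted analysis of the kind described above can be sketched as an ordinary least-squares ANCOVA. The data below are simulated and the group effect is assumed, so this only illustrates the model form, not the trial's results.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 70  # patients per group, mirroring the trial's arm sizes
df = pd.DataFrame({
    "group": ["TP"] * n + ["TC"] * n,
    "baseline": rng.normal(8.0, 0.8, 2 * n),
})
# Assumed effects, for illustration: small placebo drift, larger drop with
# canagliflozin, mild dependence on the baseline value, plus noise.
df["change"] = (
    -0.1
    - 0.9 * (df["group"] == "TC")
    - 0.2 * (df["baseline"] - 8.0)
    + rng.normal(0.0, 0.8, 2 * n)
)

# ANCOVA: change modelled on treatment group with baseline as covariate.
fit = smf.ols("change ~ baseline + C(group)", data=df).fit()
print(fit.summary())  # the C(group) row is the baseline-adjusted difference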
For each secondary endpoint, the change (percent change) from each measurement time point to end of the treatment period (except for HbA1c and evaluation parameters of the mixed-meal tolerance test) was analysed in the same manner as the primary endpoint. The proportions of patients achieving HbA1c < 7.0% and HbA1c < 8.0% in each group at end of the treatment period were calculated, along with the between-group difference (T + C group − T + P group) and P value (Fisher's exact test).
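The responder-rate comparison can be illustrated with a 2 x 2 Fisher's exact test; the counts below are hypothetical, chosen only to resemble the reported proportions.

# Fisher's exact test on hypothetical responder counts (HbA1c < 7.0%).
from scipy.stats import fisher_exact

#         reached target, did not reach target
table = [[27, 39],   # T + C (illustrative)
         [13, 55]]   # T + P (illustrative)
odds_ratio, p_value = fisher_exact(table)
print(odds_ratio, p_value)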
Safety analysis was performed on the safety analysis set, which included all randomized patients except those who did not receive any dose of canagliflozin or placebo in combination with teneligliptin during the treatment period or patients for whom no safety data were collected after randomization.
| Patients
The dispositions of patients included in each analysis set are shown in Figure S2, Appendix S1. Of the 185 patients who provided informed consent, 177 patients enrolled in the study and received teneligliptin 20 mg and placebo once daily during the 4-week run-in period. A total of 47 patients discontinued prior to the treatment period. The remaining 138 patients were randomized to receive placebo (T + P group, n = 68) or canagliflozin 100 mg (T + C group, n = 70) for the treatment period. All patients were included in the full analysis set and the safety analysis set. Seven patients in the T + P group and 3 in the T + C group withdrew from the study during the treatment period; reasons for discontinuation were patient request (n = 3), determination of ineligibility by the investigator because of AEs (n = 3) and development of myocardial infarction, congestive cardiac failure, unstable angina or cerebrovascular disorders (n = 1).
Sixty-one patients in the T + P group and 67 in the T + C group completed the treatment period.
Demographic and other baseline characteristics are shown in Table 1. Age, body mass index (BMI) and baseline HbA1c and FPG values were comparable between groups.
| Efficacy
Changes in HbA1c from baseline to week 24 and changes over time in both groups are shown in Table 2 and Figure 1, respectively. The LS mean ± SE change in HbA1c from baseline to week 24 (LOCF) was −0.10% ± 0.10% in the T + P group and −0.97% ± 0.10% in the T + C group, with a significant between-group difference of −0.88% (P < .001) (Table 2). HbA1c in the T + C group rapidly decreased from week 4 to week 12 and then remained low to week 24. By contrast, in the T + P group, HbA1c decreased only slightly to week 24 (Figure 1). In addition, the T + C group showed a significant decrease in HbA1c at each time point compared with the T + P group (P < .001 for all time points).
The proportion of patients achieving HbA1c < 7.0% at week 24 was 19.12% in the T + P group and 40.91% in the T + C group.
The proportion of patients achieving HbA1c < 8.0% was 30.43% in the T + P group and 80.00% in the T + C group. There was a significant difference in these proportions between the T + C group and the T + P group for both HbA1c targets (P = .008 for HbA1c < 7.0% and P < .001 for HbA1c < 8.0%).
Changes in secondary endpoints from baseline are shown in Table 2. A significant difference in FPG, compared with the T + P group, was observed in the T + C group (−38.8 mg/dL, P < .001). Significant differences between the 2 groups with regard to absolute and percent change in body weight were also seen (−1.51 kg and −2.33%, respectively; P < .001 for both).
Changes in fasting proinsulin/C-peptide ratio and HOMA2-%B, as markers of β-cell function, were significantly greater in the T + C group compared with the T + P group (P < .001 for both). There was no significant difference between the 2 groups for change in fasting glucagon. Compared with the T + P group, the T + C group showed significant increases from baseline to week 24 in both fasting total adiponectin and fasting high-molecular-weight (HMW) adiponectin (P = .011 and P = .043, respectively).
Changes in the time courses of plasma glucose, C-peptide and glucagon after the mixed-meal tolerance test at baseline and at week 24 are shown in Figure 2. Compared with the T + P group, the T + C group showed improvements in plasma glucose levels at 0.5, 1 and 2 hours after the meal at week 24 (Figure 2A). The time courses of C-peptide (Figure 2B) and glucagon (Figure 2C) in the mixed-meal tolerance test were similar for the T + P and T + C groups at baseline and at week 24. Values for each parameter of the meal tolerance test are shown in Table S3, Appendix S1. In the T + C group, the 2-hour postprandial plasma glucose decreased by 60.1 ± 4.9 mg/dL from baseline to week 24; this reduction was significantly greater than that seen in the T + P group (9.2 ± 5.1 mg/dL), with a difference of −50.9 mg/dL (P < .001). The T + C group also showed larger decreases in plasma glucose AUC0-2h (−105.9 ± 7.6 h·mg/dL) from baseline to week 24 compared with the T + P group (−5.6 ± 8.0 h·mg/dL), with a difference of −100.3 h·mg/dL (P < .001).
There was no significant difference in change from baseline in C-peptide or glucagon AUC0-2h at week 24 between the 2 groups. However, change in the C-peptide AUC0-2h/plasma glucose AUC0-2h ratio from baseline to week 24 was significantly greater in the T + C group compared with the T + P group (P < .001).
| Safety
During the treatment and post-treatment observation periods, AEs occurred in 47.1% and 60.0%, and drug-related AEs occurred in 11.8% and 10.0%, of patients in the T + P and T + C groups, respectively. Serious AEs occurred in 2.9% and 1.4% of patients in the T + P and T + C groups, respectively. A serious drug-related AE occurred in 1 patient in the T + P group. AEs leading to discontinuation occurred in 2 patients in each group, and drug-related AEs leading to discontinuation occurred in 1 patient in each group. There were no AEs leading to death. AEs are shown in Table 3.
No hypoglycaemia was observed in either group. A urinary tract infection-related AE and a fracture were reported in 1 patient in the T + P group and 1 patient in the T + C group, respectively. Increased blood ketone bodies occurred in 2.9% of patients in both the T + P and T + C groups. All of these cases were mild in severity and no cases of ketoacidosis were observed. Cardiovascular-related AEs occurred in 2.9% and 1.4% of patients in the T + P and T + C groups, respectively. Skin disorder-related AEs occurred in 2.9% and 10.0% of patients in the T + P and T + C groups, respectively. One patient in the T + C group developed a rash of moderate severity and discontinued canagliflozin; all other events were mild in severity. AEs related to gastrointestinal disorders occurred in 1.5% and 14.3% of patients in the T + P and T + C groups, respectively. All these events were mild in severity. Two patients in the T + P group developed impaired hepatic function.
Genital infection, osmotic diuresis-related AEs, dehydration and pyelonephritis, which are associated with use of canagliflozin, 23,24 were not observed in the T + C group. Furthermore, no intestinal obstruction or interstitial pneumonia, which are associated with use of teneligliptin, [25][26][27] occurred in the T + C group. Transient increases in total ketone bodies were observed in both groups; however, no patients experienced ketone body increase-related symptoms leading to discontinuation. There were no noteworthy changes in other laboratory values, electrocardiogram findings or vital signs (data not shown).
| DISCUSSION
This randomized, placebo-controlled trial investigated the efficacy and safety of canagliflozin as add-on therapy in Japanese patients who had inadequate glycaemic control with teneligliptin monotherapy. Canagliflozin added on to teneligliptin was associated with greater reductions in HbA1c, FPG and postprandial glucose. Both total adiponectin and HMW adiponectin, which were measured as exploratory endpoints, significantly increased in the T + C group compared with the T + P group. This increase in adiponectin may be the result of the reduction in body weight in the T + C group.
C-peptide is secreted with insulin at a ratio of 1:1. C-peptide was selected as a marker of insulin secretion because canagliflozin might influence insulin clearance but does not affect the kinetics of C-peptide secretion. 34 In the T + C group, the proinsulin/C-peptide ratio decreased significantly from baseline to week 24 compared with the T + P group. Decreases in the proinsulin/C-peptide ratio with canagliflozin as monotherapy or as combination therapy (including concomitant use with a DPP-4 inhibitor) were previously reported in studies conducted in Japanese patients with T2DM. 13,14 Furthermore, we observed an improvement in HOMA2-%B in the T + C group, suggesting an improvement in pancreatic β-cell function, which confirms the findings of previous studies of canagliflozin monotherapy or in combination with a DPP-4 inhibitor. 14 In the mixed-meal tolerance test, a reduction in postprandial plasma glucose occurred in the T + C group. Although there was no difference in the C-peptide AUC0-2h between groups, the C-peptide AUC0-2h/plasma glucose AUC0-2h ratio increased. These results suggest that canagliflozin may improve β-cell function in the setting of postprandial hyperglycaemia. Taken together, these findings suggest that canagliflozin, which has insulin-independent glucose-lowering activity, may reduce glucose toxicity and the burden on β-cells, leading to an improvement in β-cell function.

With regard to safety, the incidence of AEs was higher in the T + C group than in the T + P group. Although the incidence of AEs related to gastrointestinal disorders and skin disorders was higher in the T + C group, only 1 gastrointestinal event and 3 skin disorders were considered to be drug-related in the T + C group. Examination of the details of AEs related to gastrointestinal or skin disorders revealed no specific individual AEs with a disproportionately high incidence. AEs commonly associated with SGLT2 inhibitors, such as genital infection or osmotic diuresis, were not observed in this study.
Furthermore, no hypoglycaemia was observed in either group and there were no new safety signals that have not already been reported for canagliflozin or teneligliptin.
This study had some limitations. First, it was conducted in Japanese patients only. As there are differences in diet as well as T2DM pathophysiology between Japanese and Western populations, including insulin secretion capacity, the study findings may not be extrapolated to other ethnic groups without consideration of these points. In addition, because the present study had a 24-week treatment period, the long-term safety and efficacy of canagliflozin as an add-on therapy to teneligliptin is unknown and needs to be evaluated. Finally, the sample size of this study might be considered small, but the study was appropriately powered based on a sample size calculation, and was designed in accordance with local and international recommendations for clinical trials of antidiabetic drugs.
In conclusion, canagliflozin as add-on therapy to teneligliptin significantly improved HbA1c and postprandial plasma glucose in this randomized clinical trial in Japanese patients with T2DM, suggesting that the treatment led to an improvement in β-cell function and relief from burden on β-cells. We observed no deviations from the known safety profiles of both teneligliptin and canagliflozin in this combination treatment study. These results confirm the efficacy and tolerability of canagliflozin add-on therapy to teneligliptin in Japanese T2DM patients.
SUPPORTING INFORMATION
Additional Supporting Information may be found online in the supporting information tab for this article.
How to cite this article: Kadowaki | 2018-04-03T05:12:36.954Z | 2017-03-31T00:00:00.000 | {
"year": 2017,
"sha1": "c82abb373238457062a6a2e5ca37f69e1ddf2c1d",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/dom.12898",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "c82abb373238457062a6a2e5ca37f69e1ddf2c1d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
216190732 | pes2o/s2orc | v3-fos-license | Sustainability accounting and corporate social responsibility in Turkey and in its region
Research question: This paper investigates recent developments and applications of social responsibility practices, a newly developing area, in Turkey and in the neighboring regions in a comparative way. Motivation: Sustainability reporting is a relatively new subject in Turkey, both in practice and in academic circles. The introduction of corporate social responsibility as a concept in Turkey does not have a long history. The last five years witnessed a significant change in corporate social responsibility (CSR) applications: an increasing number of companies started to implement sustainability accounting and CSR practices, adopting CSR as a tool for medium- and long-term success. As a reflection of the growing interest in sustainability and CSR, the number of NGOs and consultancy firms operating in this field has also increased recently, and in 2014 a Sustainability Index was created within the Istanbul Stock Exchange (ISE). Idea: The understanding of social responsibility applications and priority areas varies between countries depending on their economic, social and political dynamics. In order to see the intra-regional variations, we compare corporate social responsibility and sustainability applications in Turkey and in the neighboring regions, selecting Russia and Greece from the same region. Data: For the analysis, we collect recent sustainability reports of Turkish companies from the GRI database and perform a content analysis to evaluate practical applications of CSR and sustainability in Turkey. Then, we discuss corporate social responsibility applications in Russia and Greece. In the first part of our paper we examine corporate social responsibility and sustainability applications in Turkey by analyzing sustainability reports and CSR activities in Turkey. In the following part we discuss corporate social responsibility applications in the neighboring regions and make a comparison with Turkey. To this end, the sustainability reports of Russian Federation and Greek companies in the GRI database are analyzed for CSR activities. In the final part of our paper, we make a comparative analysis of CSR applications in Turkey and in the neighboring regions.
Introduction
"Social responsibility can only become reality if more managers become moral instead of amoral or immoral."(Caroll, 1991) The sustainability idea firstly emerged as an economic concept and sustainability and sustainable economic development are used interchangeably, leaving social and environmental dimensions of sustainability in an inferior position.Starting from mid 1970's, people started to talk about the dangers of massive economic development for the environment and for the whole society in general.Currently the idea of sustainability evolved into a more sophisticated idea of considering sustainability as the rights of future generations and leaving them a "clean" environment and living conditions while mankind continues its economic development.
The awareness of sustainable development also increased as a result of the efforts of various international organizations and NGOs. The United Nations has played a pioneering role in sustainable development issues. Sustainable development first came to the world's attention on a global scale with the 1972 UN Stockholm Conference, at which attending countries discussed for the first time the negative impacts of economic development on the environment. The 1992 UN Summit in Rio de Janeiro was a turning point at which the UN Convention on Biodiversity, the Convention to Combat Desertification and the Framework Convention on Climate Change were opened for signature by attending states.
As part of the ongoing efforts of the United Nations to promote sustainability, the UN General Assembly defined 17 sustainable development goals in the 2030 Agenda of the United Nations as a blueprint for achieving a better and more sustainable future for the world. The goals address global challenges including poverty, inequality, climate, environmental degradation, prosperity, peace and justice. The OECD is another international organization working actively on sustainable development, monitoring sustainability through its sustainable development indicators.
Advances in management systems, reporting standards and the use of technology, together with growing attention to environmental, social and economic sustainability, have increased the importance of sustainability issues among companies. According to Chiu (2010), CSR reporting has moved from predominantly environmental reporting to more general sustainability reports encompassing various areas of social interest, such as community and social impact and human capital reporting. Key developments witnessed over the last five years include the development of autonomous or standalone CSR reports; the acceptance and adoption of standardized reporting guidelines, in particular those developed by the Global Reporting Initiative (GRI), together with the growth of CSR ratings; and the development of the assurance industry for CSR reports.
International organizations and initiatives have also played a guiding role for companies in preparing sustainability reports, which can be defined as written public declarations by companies on their social, environmental and corporate management activities. Currently, CSR reporting is carried out according to four different reporting frameworks:
1. the G4 reporting framework of the Global Reporting Initiative (GRI);
2. Communication on Progress (COP) reporting of the UN Global Compact;
3. the Integrated Reporting (IR) framework of the International Integrated Reporting Council (IIRC);
4. Carbon Disclosure Project (CDP) reporting of the Carbon Disclosure Project.
It is observed that most companies prefer the G4 framework for CSR reporting.
As a reflection of growing awareness of sustainable development, countries started to develop sustainability indexes based on the social, environmental and economic performance of companies. In the late 1990s, the Dow Jones Sustainability Index was launched as the first global indicator of sustainability.
The contemporary understanding of sustainability is based on three main pillars: economic, social and environmental. The concept of corporate sustainability is seen as indispensable to achieving these three pillars, to the extent that the terms sustainability and corporate sustainability are used synonymously. Corporate sustainability offers various tools to transform the conceptual idea of sustainability into more practical and concrete results and actions. The concepts of sustainable economic development, corporate social responsibility (CSR), corporate accountability and stakeholder theory have developed under the general heading of corporate sustainability, among which corporate social responsibility has the utmost importance (Pekdemir, 2013). Carroll's studies made important contributions to the formation of the contemporary understanding of corporate social responsibility. Carroll (1991) specified the economic, legal, ethical and philanthropic domains as the main elements of corporate responsibility that companies should realize in their CSR performance to achieve community demands and acceptance. These so-called four responsibilities form Carroll's CSR pyramid. In general terms, corporate social responsibility refers to the ethical, social and environmental responsibilities of companies in undertaking their everyday business activities, covering a broad range of activity areas such as environmental protection, healthcare, education, social justice and equality, social rights and workforce equality.
The emergence of sustainability reporting and studies on sustainability in Turkey coincides with global developments on sustainability. Turkey signed international UN treaties on environment and ecological protection in the 1990s. In 2004, the National Sustainable Development Commission was formed to formalize national strategies and plans on sustainable development. In 2014, a Sustainability Index was created within the Istanbul Stock Exchange (ISE). The government and companies in Turkey consider sustainable development a way to increase the international competitiveness of Turkish companies in the global arena. The government promotes sustainability reporting and sustainable development issues as part of its national development plan.
As stated above, sustainability reporting is a relatively new subject in Turkey, both in practice and in academic circles. Academic studies on CSR and sustainability in Turkey started to gain momentum after 2010, focusing mainly on two broad categories: content analyses of CSR reports in selected sectors in Turkey, and studies attempting to measure the impact of CSR applications on company performance. This paper aims to contribute to the academic literature on Turkey by making a recent and thorough review of the contents of CSR reports of Turkish companies and by comparing them with those of neighboring countries, in order to capture the intra-regional variations in CSR reporting that are lacking in the current literature.
In this paper, we follow a comparative analysis of corporate social responsibility and sustainability applications in Turkey and in the neighboring regions. The remainder of the paper is structured in four parts. In the first part, the current literature and academic studies on corporate social responsibility are presented; this part also covers the historical development of sustainability and CSR reporting in Turkey in order to give a complete picture of CSR as a newly developing area in Turkey. In the second part, we examine sustainability and corporate social responsibility applications in Turkey. For our analysis, we collect recent sustainability reports of Turkish companies from the GRI database and perform a content analysis to evaluate practical applications of CSR and sustainability in Turkey. Then, we discuss corporate social responsibility applications in the neighboring region. For this purpose, we selected Russia and Greece as regional representatives for analyzing sustainability and CSR applications, since both countries have similar levels of development in the region. In the final part of our paper, we make a comparative analysis of CSR applications in Turkey and in the neighboring regions.
Literature review
The ideas of ethical, environmental and social responsibilities of companies and of sustainable development originate from the developed countries of the West, especially North America. The first academic writings and research on CSR and sustainability mainly focused on the relationship between the social responsibilities and the financial performance of companies. The first definitions of CSR can be found in the studies of Bowen (1953) and Barnard in the 1950s and 1960s. In his extensive work, The Functions of the Executive (1968), Barnard defines CSR in terms of the "economic, legal, moral, social and physical aspects of the business environment". Carroll (1979) offered a more elaborate definition of CSR and developed a four-level understanding of CSR as the economic, legal, ethical and discretionary (philanthropic) responsibilities of a company. Subsequent studies have examined various aspects of CSR and sustainability reporting (Beelde & Tuybens, 2015), including the relationship between CSR and bank risk profile (Gambetta et al., 2017). Jain et al. (2011) analyzed the link between psychosocial risk management and CSR practices through questionnaires and interviews across Europe.
When we look at the academic literature on CSR and sustainability studies in Turkey, we see a proliferation of academic studies after 2010, which coincides with the jump in the number of Turkish companies publishing CSR reports. The two mainstream strands of the academic literature in Turkey on CSR and sustainability revolve around content analyses of CSR reports in selected sectors in Turkey, including comparative analyses with other countries, and econometric models aiming to measure the impact of CSR applications on companies' overall performance.
In one of the first academic studies on the CSR reports of Turkish companies, Robertson (2009) compared social responsibility applications and reporting in Turkey, Singapore and Ethiopia and proposed that CSR needs to be redefined for developing countries by taking cultural, social, political, and economic factors into consideration, rather than standardizing CSR activities all over the world. Ertuna and Tukel (2009) conducted a content and impact analysis of CSR disclosures of Turkish companies in the ISE-50 index and found that social responsibility activities are mainly realized as part of the traditional philanthropic activities of family-owned companies, as well as profit-oriented activities. Later, Nuhoglu and Wan (2012) studied the corporate social responsibility disclosures of 271 manufacturing companies in Turkey and China in 2009, concluding that, although both countries present rather low levels of CSR practice, the scores of Chinese companies are lower than those of Turkish companies, especially in the domains of economy and society.
With the increasing number of Turkish companies engaging in CSR reporting, academic studies based on content analysis of the reports have also increased. In parallel with these content analyses, some academic studies began to evaluate the impact of CSR reports on the performance of companies in Turkey. Aras et al. (2011) analyzed the interaction between value added intellectual capital (VAIC) and CSR on a sample of manufacturing companies listed on the Istanbul Stock Exchange (ISE) during the period 2007-2008 and could not find any significant relationship between CSR and VAIC during the period analyzed. Arsoy et al. (2012) investigated the relationship between CSR and company financial performance for 28 companies listed in the Istanbul Stock Exchange Corporate Governance Index and found that companies with good financial performance also have better social responsibility scores. Ozcelik et al. (2014) studied the impact of financial performance, firm size, firm risk and type of ownership on the CSR of selected companies in the ISE and found a significant relationship between firm size and corporate social responsibility. Akdoğan et al. (2017) reviewed corporate social responsibility reporting in the Turkish banking industry between 2012 and 2014 to measure the relationship between CSR and corporate governance; they found that corporate governance level has a significant impact on the level of CSR reporting in the Turkish banking sector. Değer and Aydogan (2018) developed an econometric model measuring the impact of CSR on company financial performance and argue that companies with higher CSR scores have higher ROA and Tobin's Q performance. It can be argued that the results of recent academic studies generally support the idea that CSR reporting positively affects company performance.
As can be seen from this review, the literature on CSR in Turkey is a newly developing area and needs contributions from new academic studies, as the number of Turkish companies publishing CSR reports has been increasing along with the growing interest in CSR reporting among Turkish companies. This paper aims to contribute to the academic literature in Turkey by making a recent and thorough review of the contents of CSR reports of Turkish companies, as well as by comparing them with those of neighboring countries, in order to capture the intra-regional variations in CSR reporting that are lacking in the current literature. Russia and Greece are selected as the peer countries for the regional analysis, since both share more similarities with Turkey in terms of economic development level and social structure than the other countries in the adjacent region.
Important milestones in the development of sustainability and CSR in Turkey
The concepts of sustainability and corporate social responsibility first came to the attention of Turkish companies and academics in the late 1990s and early 2000s. The Turkish government played a leading role in bringing sustainability issues to public awareness and the business environment through the signing of various international treaties of the United Nations.

Starting from 2018, the coverage of the XUSRD Index was extended to the 50 companies in the ISE-50 Index. Companies in the ISE-100 Index are also allowed to be voluntarily included in the XUSRD Index. The November-October period is defined as the yearly index period ii.
The companies in the XUSRD Index are evaluated in terms of international sustainability criteria and their adherence to environmental, corporate social responsibility and corporate governance principles by Ethical Investment Research Services Limited (EIRIS). EIRIS uses only publicly available information in its evaluations.
The EIRIS evaluation is made in three steps. In the first step, as of mid-year (June 30th), EIRIS prepares a company profile for each company quoted in the Istanbul Stock Exchange Index (ISE) based on publicly available information iii on the policies and activities of these companies with regard to the environment, biodiversity, climate change, the structure of the board of directors, human rights and the fight against bribery. EIRIS then sends the prepared profiles to the companies for their review. In the second step, companies review their profiles and send them back to EIRIS with their recommendations and updates. In the last step, EIRIS finalizes the company profiles, taking the companies' recommendations and updates into account. In this step, EIRIS also defines the list of companies to be included in the sustainability index according to the index selection criteria. In order to be included in the index, a company's ratings must exceed certain thresholds on the predefined selection criteria.
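A threshold-based selection of this kind can be sketched as a simple filter over criterion scores; the company names, scores and cut-offs below are invented, since the actual EIRIS thresholds are not reproduced here.

import pandas as pd

# Hypothetical criterion scores for three candidate companies.
scores = pd.DataFrame(
    {
        "environment": [72, 55, 81],
        "social":      [64, 60, 77],
        "governance":  [70, 48, 83],
    },
    index=["CompanyA", "CompanyB", "CompanyC"],
)
thresholds = pd.Series({"environment": 60, "social": 60, "governance": 60})  # assumed

# A company qualifies only if it meets the threshold on every criterion.
eligible = scores[(scores >= thresholds).all(axis=1)]
print(eligible.index.tolist())  # ['CompanyA', 'CompanyC']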
The latest price data are used for the index calculation.
Implementation of sustainability reporting and CSR in practice
The emergence of the concepts of sustainability and corporate social responsibility in Turkey in the late 1990's and early 2000's coincided with the increasing integration of Turkish companies into the international economy. Turkish companies started to initiate environmental protection, promotion of social welfare and justice, and educational projects to gain competitive advantage in the global arena and to increase their brand value in the eyes of national and international customers. The Turkish government also promoted the corporate social responsibility and sustainability projects of companies, seeing them as an indispensable part of the sustainable economic development of the country.
Although there have been considerable developments in CSR and sustainability reporting in Turkey in the last decade, the total number of companies preparing sustainability reports still remains low. According to the results of a comprehensive research study by SUSCR, which covers the top 501 companies of the Turkish economy in 13 different sectors, the ratio of non-financial reporting among these 501 companies is 25.5%. 38% of the reporting companies disclose non-financial information within their annual reports, while 43% prepare separate sustainability reports or other non-financial reports. In terms of sectoral breakdown, technology-communication, healthcare-FMCG, and energy are the top three sectors in preparing non-financial reports (EU Sustainability Reporting National Review Report for Turkey, 2016). The low percentage of sustainability reporting among Turkish companies can be attributed to the fact that, under Turkish regulations, companies are not obliged to report their sustainability activities and CSR projects or to prepare sustainability reports on a regular basis. Those companies having international connections, adhering to corporate governance principles, and giving importance to brand value voluntarily prepare sustainability reports and engage in CSR projects.
Turkish companies have been preparing sustainability reports since 2004. The first sustainability report that we came across in the GRI database was prepared by Aksa Akrilik for the year 2004. Aksa Akrilik is a textile and apparel producer in Turkey, and the report was titled "2004 Sustainable Development Report". Aksa Akrilik was the first and only company that prepared a sustainability report in 2004, and the number of companies preparing sustainability reports increased to two in 2005.
The year 2011 was the turning point for sustainability reporting, in that after 2011 the total number of companies preparing sustainability reports in the GRI database jumped to 38, which also coincides with the growing awareness in Turkish society and the business environment of the importance of social responsibility projects and activities.
Until 2010, companies had used various names for reporting their CSR activities, such as sustainability report, sustainable development report, corporate social responsibility report, corporate responsibility report, citizenship report, UN Global Compact Progress Report, or an appendix to their annual report. Starting from 2010, the naming of the reports became standardized, and currently most companies name their reports "Sustainability Report".
Research methodology
In order to evaluate the practical application of sustainability reporting and corporate social responsibility in Turkey, we searched the Global Reporting Initiative (GRI) database for the sustainability reports of Turkish companies. 36 Turkish companies published sustainability reports in the GRI database in 2016.
For our research, we selected 15 of these 36 companies for analysis, each representing a different sector. The analysis of the CSR literature shows that the nature and type of CSR initiatives are among the important concerns of scholars. In line with the relevant literature, in order to see the nature and type of CSR initiatives in Turkey and in the neighboring region, we analyze the sustainability reports according to the type of CSR activities and the main CSR activity areas. We also analyze the main motivations of companies in undertaking CSR activities and how widespread CSR reporting frameworks are among companies, both of which are missing in the literature. Three criteria are defined for analyzing the sustainability reports of the companies, as we explain next.
Content analysis of sustainability reports in Turkey
We analyze the sustainability reports of the 15 selected Turkish companies gathered from the GRI database according to (1) focus areas in CSR activities, (2) the overall aim the company expects from CSR activities, and (3) the type of sustainability reporting framework (see Appendix 2).
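The tallying step behind such a content analysis can be illustrated with a minimal keyword-count sketch; the category keywords and the sample sentence are invented for illustration and are not taken from the actual GRI reports.

```python
# Toy content-analysis tally: count which CSR focus areas a report text mentions.
# Keyword lists and the sample text are illustrative only.
FOCUS_AREAS = {
    "education": ["education", "training", "school", "scholarship"],
    "women_and_disabled": ["women", "disabled", "empowerment"],
    "environment": ["recycling", "waste", "emission", "energy"],
    "culture_and_sport": ["art", "culture", "sport", "sponsorship"],
}

def classify(report_text: str) -> dict:
    """Return the raw keyword hit count per CSR focus area."""
    text = report_text.lower()
    return {area: sum(text.count(k) for k in kws) for area, kws in FOCUS_AREAS.items()}

sample = "Our scholarship fund supports education; we also run recycling projects."
print(classify(sample))
# {'education': 2, 'women_and_disabled': 0, 'environment': 1, 'culture_and_sport': 0}
```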
The priority areas of Turkish companies in CSR projects show Turkey's strive for achieving economic development more than cultural and social development. The expectations of companies from realizing CSR projects also show that companies consider CSR projects as a tool to increase their competitive abilities locally and internationally, to increase their net profit level, and to raise their overall company value.
Companies expect direct economic and financial results from CSR activities rather than realizing them as a result of a more elevated understanding of being a responsible company towards society and the environment.
Among all CSR activities, supporting education is by far the most prominent area in CSR projects for Turkish companies, reflecting the idea in Turkish society that education is a main vehicle for realizing economic development.
Regardless of sector, almost all companies undertake different CSR projects to support various educational and training activities, especially towards children and young people, through organizing training facilities and fund-raising activities.
Large family-owned conglomerates realize their philanthropic activities for education through their family foundations.
Projects to support the status of women and disabled people in society are another focus of activity in CSR projects. These include women's empowerment projects, support for women's entrepreneurship, easy access to financial resources for women, and financial support and provision of equipment for disabled people.
Environmental projects have relatively less weight in the total CSR activities of the companies compared to other activity areas, although almost all companies undertake some level of environmental projects such as waste management, recycling, and clean energy. Supporting art, cultural, and sport activities in society is the least preferred CSR activity area for Turkish companies.
In terms of reporting frameworks, Turkish companies prepare sustainability reports in conformity with GRI G4 and UNGC COP reporting. There is less preference for other reporting frameworks such as integrated reporting and CDP reporting.
Practical application of CSR and sustainability reporting in Russia
The first sustainability report of the Russian Federation published in the GRI database was prepared by the North-West Timber Company in 2003 and was initially titled "2003 Environmental Report". Following this first report, Lukoil and Norilsk Nickel started to prepare sustainability reports in 2004 and 2005. Currently, 230 companies are listed on the Russian exchange MOEX, and the number of sustainability reports in the GRI database published by Russian companies was 80 by the end of 2016, heavily dominated by energy companies. Although most reports are written in Russian, which does not allow a fully satisfactory content analysis with such a limited sample, we can still draw conclusions from our analysis of the reports and identify the main activity areas and drivers of CSR initiatives in Russia.
Analysis of the English-language sustainability reports of Russian companies in the GRI database as of 2016 reflects the social and economic priority areas of Russian society.
Although there are nuances in CSR activities among companies according to sectoral differences, we see that CSR activities accumulate in certain priority areas in the Russian Federation.
The common priority areas of CSR activities in Russia can be classified as improving workplace conditions, safety, and employee rights; implementing corporate governance and anti-corruption policies; reducing water consumption and carbon emissions and running recycling projects; supporting children, and especially disabled children, in society; promoting family and general health in society; and sponsoring sports and cultural activities.
In general, energy, water consumption, environmental projects, and workplace safety are considered important CSR activity areas by Russian companies. Russian companies also give importance to implementing corporate governance principles and anti-corruption measures. Sponsoring sports and cultural activities such as cinema and theatre also constitutes an important CSR area. On the other hand, we cannot see any CSR projects for promoting education or women's empowerment in Russia. Russian companies prefer to invest in projects for the general empowerment of society and the family, without making any gender differentiation.
Practical application of CSR and sustainability reporting in Greece
Greek companies started publishing sustainability reports in the GRI database in 2002, Vodafone Greece and Motor Oil Hellas being the first publishers. As of 2016, 53 companies publish sustainability reports in the GRI database, while the number of companies listed on the Greek exchange is 196. As in the Russian Federation, the absence of English-language sustainability reports representing all sectors in Greece limits the selection of our sample.
A review of the English-language sustainability reports published in the GRI database shows that the focus areas of Greek companies in CSR activities revolve around helping children and children's foundations, promoting employee development and workplace conditions, and protecting the environment through recycling projects and reductions in energy and water consumption and carbon emission levels. There are also CSR projects to protect cultural sites and support historical events. Issues of education, gender equality and equality in society in general, and promoting family and health in society do not get much attention among Greek companies as CSR activity foci. This can partly be attributed to the fact that Greece, as a European Union member country, has a comparatively high level of social development, although it is geographically adjacent to Turkey and the Russian Federation.
Coverage of the reporting
The table below shows the number of companies publishing sustainability reports in Turkey, the Russian Federation, and Greece by 2016 v.

Country               Listed companies (2016)   GRI sustainability reports (2016)   Coverage
Turkey                >300                      36                                  ~11%
Russian Federation    230                       80                                  ~35%
Greece                196                       53                                  ~27%

It is seen that a relatively high number of companies prepare sustainability reports in Russia and Greece compared to Turkey: nearly 35% of all listed companies in the Russian Federation and 27% of all listed Greek companies prepared sustainability reports for the year 2016, while the ratio among all listed companies in Turkey is only around 11%, even though the number of listed companies in Turkey is relatively higher than in Russia and Greece. This low coverage among Turkish companies shows that sustainability reporting is a newly developing area in Turkey and that there is still large room for development in the field.
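The coverage ratios in the table follow directly from the report and listing counts given in the text; a quick computation reproducing them is shown below. The Turkish listing total is stated only as exceeding 300, so the figure of 330 is an assumption used for illustration.

```python
# Coverage = GRI-report publishers / listed companies, with counts as given in the text.
counts = {
    "Turkey": (36, 330),   # listed total only stated as "exceeds 300"; 330 is assumed
    "Russian Federation": (80, 230),
    "Greece": (53, 196),
}
for country, (reports, listed) in counts.items():
    print(f"{country}: {reports / listed:.0%}")
# Turkey: 11%, Russian Federation: 35%, Greece: 27%
```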
Main CSR activity areas
Content analysis of the sustainability reports of Turkey, Russia, and Greece highlights differences between these countries in terms of CSR activity areas. The CSR activity areas in Turkey underline Turkey's strive for achieving social and economic development as a newly developing country. The CSR focus areas in Turkey reflect an understanding in Turkish society that prioritizes economic development over social development. It is believed that in order to achieve economic development, education and innovation should be supported. For this reason, we see more focus on education and innovation projects among Turkish companies and less focus on environmental projects, although all of them pursue some level of environmental activity such as recycling and waste management. Turkish companies consider CSR projects as a means to support Turkey's economic and social growth and development, as stated in most of their sustainability reports. The Akbank Chairman, for example, notes that "In addition to maintaining strong support for Turkey's economic growth, Akbank also contributed to the country's social development through its focus on education, culture, art, entrepreneurship, and volunteering as prioritized social investment areas." (Akbank Sustainability Report, 2016) The biggest activity area for Turkish companies is characterized by investing in educational facilities and supporting women and disabled people in society.
Education is considered by almost all the Turkish companies analyzed to be an indispensable element of corporate social responsibility. Mainly the conglomerates and their affiliated companies, such as Zorlu Holding, Akbank, Şişecam, and Turkcell, take the leading role in educational support activities. Zorlu Holding states its motto as "constructing roofs for young people" in giving support to the education of young people (Zorlu Holding Sustainability Report, 2016). The other focus area for CSR is supporting innovation, especially among the younger generation.
Many companies undertake innovation support activities similar to "The CaseCampus Program" of Akbank, a joint venture with Endeavor Turkey attended by young people from all parts of Turkey who dream of starting their own business (Akbank Sustainability Report, 2016).
Apart from education and innovation, Turkish companies initiate various CSR activities to support women in society and women's empowerment projects. The Stronger Young Women project of the retail group Boyner aims to promote the continuing education of young female high school graduates, aged 18-24, who grew up in orphanages and are exposed to social and economic discrimination. The project teaches them how to be ready for the business market by developing their skills and supporting their personal development (Boyner Sustainability Report, 2016). Similar to Boyner, Doğuş Oto is another company promoting women's empowerment initiatives, as stated in its report: "we are striving to create exemplary opportunities for women's employment in the business world by updating our human resources policy, increasing women's participation in the business world, becoming a company preferred by women employees, and bringing different perspectives to our organization." (Doğuş Oto Sustainability Report, 2016) CSR projects aiming to support the family and elderly people in society are the least focused areas in Turkey, since family and patriarchal ties are still very strong in Turkey as a traditional and family-oriented society. The traditional structure of Turkish society also defines the intensity of CSR activities. Even though CSR is a newly developed concept in Turkey, imported from the developed countries of the West, the number and magnitude of CSR projects of Turkish companies are high compared to those in Russia and Greece. It can be said that the existence of a traditional philanthropic culture in Turkish society intensifies CSR activity in Turkey.
CSR activities in Russia concentrate on energy saving and environmental protection projects, which can be attributed to the dominance of the energy sector in the Russian economy.
As one of the biggest employers in the Russian Federation, Lukoil implements a large-scale environmental and industrial safety program across the country.
The other important CSR activity areas in Russia are supporting the family and society in general. Due to the structure of Russian society, Russian companies prioritize CSR projects that support the family and children's foundations. Aeroflot runs a support program for WW2 veterans which aims to give material aid to WW2 veterans in Russia; the same company also runs charity programs for children and disabled people in society (Aeroflot Sustainability Report, 2016). Russian companies also put emphasis on implementing anti-corruption and corporate governance principles together with improving workplace conditions and safety measures. Supporting cultural and sport activities is another focus area for Russian companies. Education and women's empowerment are not considered main CSR activity areas in Russia.
Environmental projects constitute the focus area of CSR activities in Greece. Unlike Russian companies, Greek companies invest in environmental projects mainly because of the requirements of European Union regulations. The social aspect of CSR in Greece is limited to promoting employee development and workplace conditions. There are also CSR projects for supporting and sponsoring cultural activities and sports in society and for helping children's foundations. Similar to Russia, education and women's empowerment are not considered CSR activity areas by Greek companies either.
The analysis of CSR activity areas indicates the variations in the socio-economic structures of the selected countries. Turkey's social structure, as a patriarchal and predominantly Muslim society, differentiates its CSR activity areas from those of the neighboring countries of Greece and Russia. While environmental projects and projects aiming to promote the place of the family and elderly people in society are the main CSR activity areas in Greece and Russia, Turkish companies place less emphasis on social CSR activities such as promoting the family, protecting elderly people in society, supporting cultural events and sports, and sponsoring activities.
Conclusion
As Argandoña and von Weltzien Hoivik (2009) state, corporate social responsibility represents a complex notion for corporate actors, and there is no such thing as a one-size-fits-all CSR solution. The analysis of CSR activities in Turkey in comparison with those in the Russian Federation and Greece shows that there is no unique understanding of CSR among these countries. The socio-economic development levels and social structures of the countries shape the understanding of CSR. In their comprehensive study comparing selected European clusters composed of 178 corporations from various European countries, Maon et al. (2017) put it that their work advances "the notion of CSR as a contextualized concept, shaped by socio-political drivers, and contributes by bridging macro-level, socio-political facets of CSR with its meso-level, organizational implications." CSR priority areas and activities in Turkey highlight the socio-economic development level and the social-religious structure of Turkish society, which is also reflected in the practical application of CSR projects implemented by Turkish companies.
It is seen that Turkey's strive for achieving economic development shapes the whole understanding of CSR in Turkey. Turkish companies prefer to invest in CSR projects that directly contribute to the economic development of the country. For this reason, supporting the education of children and young people and supporting innovation are prioritized as CSR activity areas, while environmental projects and cultural activities remain less focused areas. In addition to achieving economic development, projects which aim to promote the place of women in society are the other prominent CSR activity area in Turkey, owing to the social and religious structure of Turkish society.
In this sense, CSR activity and priority areas can be seen as an indicator of the social structure and socio-economic development level of any country. Ertuna and Tukel (2009) classify Turkey as "a country in between traditional and global" in terms of CSR activities and sustainability.
The country-specific socio-economic structures of the Russian Federation and Greece shape the understanding of CSR in a different way. Environmental protection, enhancing workplace conditions and safety, implementing corporate governance principles, supporting the family in society, and sponsoring sport and cultural activities are seen as the main CSR priority areas in the Russian Federation and Greece, leaving education, innovation, and women's empowerment projects in an inferior place.
In this study, we conducted our research on CSR applications and practices based on the sustainability reports in the GRI database, leaving the actual practices and behaviors of companies on CSR out of the scope of this study. Future studies should focus on the actual practices and strategies of companies, reflecting the real CSR behaviors of companies in Turkey and in the neighboring region.
Involvement in energy-saving and sustainable-energy projects.
iv It is also noteworthy that the efforts of the Turkish exchange Borsa Istanbul to support sustainability reporting paved the way, in that a large share of the companies publishing sustainability reports are listed companies. However, the total number of companies preparing sustainability reports on a regular basis is still low, considering that the total number of listed companies in Turkey exceeds 300 as of our research date.
v The same data were also collected for South Korea, Germany, and France to gain further insight for the comparisons. The number of listed companies is 2,114 in South Korea, 450 in Germany, and 465 in France. The total number of companies publishing sustainability reports in the GRI database in 2016 is 60 in South Korea, 145 in Germany, and 185 in France (coverage ratios of 3%, 32%, and 40%, respectively).
The period between 2003 and 2007 was an intense period in which national actions and policies were implemented. In 2004, the Turkish National Sustainable Development Commission was convened. The National Action Plan on sustainability was prepared in 2005 in accordance with the terms of the international treaties Turkey has signed i. In this period Turkey also prepared the first EU Integrated Environmental Adaptation Strategy and became an active member of the European Environment Agency. In order to foster sustainable development and increase awareness in the business world about sustainable development and corporate social responsibility, the Business Council for Sustainable Development Turkey (BCSD Turkey) was founded in 2004 under the leadership of 13 private sector entities. BCSD Turkey is the local network and partner of the World Business Council for Sustainable Development (WBCSD) in Turkey, and it is in strong cooperation with its parent organization. The Council shares knowledge on sustainability with its members and stakeholders through the activities of its working groups. Currently BCSD Turkey has 58 member companies from 19 different sectors, together composing one third of Turkey's Gross National Product (GNP). In line with the UN sustainability goals, BCSD Turkey continues its activities in four areas: (1) transition to a low-carbon economy and efficiency, (2) sustainable agriculture and access to food, (3) sustainable industry and circular economy, and (4) social inclusion. One of the most prominent steps in Turkey in the field of sustainability in general, and corporate social responsibility in particular, is the introduction of the Sustainability Index on the Istanbul Stock Exchange (ISE). The BIST Sustainability Index (XUSRD) was first launched on 4.11.2014, aiming to evaluate companies' treatment of economic, social, and environmental factors and corporate governance principles, and to make investors aware of these companies. In 2014, the XUSRD Index covered the 30 companies in the ISE-30 Index, for each of which daily price and return calculations are published. The starting value of the XUSRD Index was set equal to the closing value of the ISE-30 Index as of 3.11.2014, which was 98,020.09. The maximum weight of any single share is restricted to 15%.
i Turkey signed the UN Convention on Biodiversity in 1991, the UN Convention to Combat Desertification in 1994, and the UN Framework Convention on Climate Change in 2004. Turkey also signed the Kyoto Protocol in 2009, which aims to strengthen the enforcement of the UN Framework Convention on Climate Change by committing state parties to reduce greenhouse gas emissions to fight global warming.
Index calculation variables: the total number of shares of share i at time t; H_it = ratio of shares in actual circulation to the total number of shares of share i at time t; K_it = coefficient of share i at time t; D_t = value of the exchange rate of the index at time t; B_t = denominator of the index at time t. | 2020-04-02T09:19:13.530Z | 2020-03-01T00:00:00.000 | {
"year": 2020,
"sha1": "1f5b12c0997f50c6f693775710d9012777be8fd7",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.24818/jamis.2020.01001",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "d96d70b75bb7de9f27d0a3568b556dbfaf70c827",
"s2fieldsofstudy": [
"Environmental Science",
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
247720874 | pes2o/s2orc | v3-fos-license | SPINK5 is a Tumor-Suppressor Gene Involved in the Progression of Nonsmall Cell Lung Carcinoma through Negatively Regulating PSIP1
This study aimed to elucidate how SPINK5 affects the malignant phenotypes of NSCLC and the underlying molecular mechanism. NSCLC and adjacent normal tissues were collected to detect the differential level of SPINK5. The influence of SPINK5 on pathological indicators of NSCLC was analyzed. Cellular functions of NSCLC cells overexpressing SPINK5 were assessed by CCK-8, EdU, and transwell assays. After confirming the downstream target of SPINK5, its molecular mechanism in regulating NSCLC was explored through rescue experiments. SPINK5 was lowly expressed in NSCLC tissues, and it predicted tumor staging and lymphatic metastasis. In vitro overexpression of SPINK5 declined proliferative and migratory rates in NSCLC cells. PSIP1 was verified as the target gene binding SPINK5, and they displayed a negative correlation in NSCLC tissues. Overexpression of PSIP1 was able to reverse the inhibited proliferative and migratory potentials in NSCLC cells overexpressing SPINK5. The SPINK5 level has a close relation to tumor staging and lymphatic metastasis in NSCLC. It serves as a tumor-suppressor gene that inhibits the proliferation and migration of NSCLC through negatively regulating PSIP1.
Introduction
Lung carcinoma is the leading cause of cancer-associated death and its incidence remains high [1,2]. According to tumor statistics released by the American Cancer Society in 2018, lung carcinoma is the leading cause of both male and female cancer deaths, accounting for 25% of total cancer deaths [3,4]. In China, the incidence and mortality of lung carcinoma rank first [5,6]. NSCLC, including lung adenocarcinoma and squamous cell carcinoma, is the most commonly diagnosed subtype of lung carcinoma (80-85%) [7]. At present, the 5-year survival of NSCLC has not been significantly improved despite advances in medical technology [8,9]. Comprehensively understanding the molecular mechanism of NSCLC is beneficial to developing novel therapeutic targets [10,11]. SPINK5 (serine peptidase inhibitor, Kazal type 5) is secreted and synthesized in human pancreatic acinar cells, exerting biological functions as a trypsin inhibitor, a molecule with growth factor-like properties, and an autophagy inhibitor [12,13]. It is located on chromosome 5, containing 4 exons and 3 introns [12,13]. The mature SPINK5 protein has 56 amino acids, connected by 3 disulfide bridges [14]. Recent research has shown the vital function of SPINK5 in pancreatitis and tumors [15][16][17]. To date, no evidence has reported the effects of SPINK5 in lung cancer. Through bioinformatic analysis, SPINK5 was predicted to bind PSIP1. This study aims to explore the role of the SPINK5/PSIP1 axis in regulating the malignant progression of NSCLC, therefore providing a novel idea for clinical diagnosis and treatment.
Patients and NSCLC Samples.
A total of 46 NSCLC patients undergoing surgical resection in Shuguang Hospital affiliated to Shanghai University of Chinese Medicine were retrospectively analyzed. They had not received preoperative anticancer treatment. The T stage of NSCLC was defined by UICC criteria. NSCLC and paired paracarcinoma tissues were harvested during surgery and stored in liquid nitrogen. Each patient was followed up after discharge for general condition, symptoms, and imaging examination through telephone and outpatient review. This study was approved by the research ethics committee of our hospital and complied with the Declaration of Helsinki. Informed consent was obtained from patients.
Transfection.
Transfection plasmids were synthesized by GenePharma (Shanghai, China). Cells were cultured to 40-60% confluence in a 6-well plate and transfected using Lipofectamine 2000 (Invitrogen, Carlsbad, CA, USA). At 48 h after transfection, cells were collected for verification of transfection efficacy and for functional experiments.
Cell Proliferation Assay.
Cells were inoculated in a 96-well plate at 2 × 10^3 cells/well. At 24, 48, 72, and 96 h, the optical density at 450 nm of each sample was recorded using the Cell Counting Kit-8 (CCK-8; Dojindo Laboratories, Kumamoto, Japan) for plotting viability curves.
Transwell Migration Assay.
Cell suspension was prepared at 5 × 10^5 cells/mL. 200 μL of suspension and 700 μL of medium containing 20% FBS were added to the top and bottom chambers of a transwell insert, respectively, and cultured for 48 h. Migratory cells on the bottom were fixed with methanol for 15 min, stained with 0.2% crystal violet for 20 min, and captured using a microscope. Five random fields per sample were selected for capturing and counting migratory cells.
Western Blot.
Cells were lysed in radioimmunoprecipitation assay (RIPA) buffer (Beyotime, Shanghai, China) on ice for 15 min, and the mixture was centrifuged at 14,000× g and 4°C for 15 min. The concentration of cellular protein was determined by the bicinchoninic acid (BCA) method (Beyotime, Shanghai, China). Protein samples adjusted to the same concentration were separated by sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) and loaded onto polyvinylidene fluoride (PVDF) membranes (Millipore, Billerica, MA, USA). The membrane was cut into small pieces according to molecular size and blocked in 5% skim milk for 2 h. The pieces were incubated with primary and secondary antibodies, followed by band exposure and grey-value analyses.
Statistical Analysis.
Statistical Product and Service Solutions (SPSS) 22.0 (IBM, Armonk, NY, USA) was used for statistical analyses, and data were expressed as mean ± standard deviation. Differences between groups were compared by the t-test. The clinical significance of SPINK5 in NSCLC was analyzed by the chi-square test. P < 0.05 was considered statistically significant.
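A minimal sketch of the two tests named above is given below; the expression values and the 2x2 contingency table are invented for illustration and are not the study's data.

```python
# Sketch of the statistical comparisons described above; all numbers are invented.
from scipy import stats

# t-test: SPINK5 expression in tumor vs. adjacent tissue (illustrative values)
tumor = [0.42, 0.35, 0.51, 0.29, 0.44]
adjacent = [0.88, 0.95, 0.79, 1.02, 0.91]
t, p = stats.ttest_ind(tumor, adjacent)
print(f"t = {t:.2f}, P = {p:.4f}")  # P < 0.05 indicates a significant difference

# chi-square: SPINK5 level (low/high) vs. lymphatic metastasis (yes/no)
table = [[15, 8],   # low SPINK5:  metastasis yes / no
         [6, 17]]   # high SPINK5: metastasis yes / no
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, P = {p:.4f}")
```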
Clinical Significance of Downregulated SPINK5 in NSCLC.
The results of qRT-PCR showed that the expression level of SPINK5 in NSCLC tissues was significantly lower than that in adjacent tissues (Figure 1(a)). In addition, compared with BEAS-2B cells, the SPINK5 level was much lower in NSCLC cells (Figure 1(b)). According to the median level of SPINK5 in the 46 NSCLC tissues, recruited patients were divided into a high expression group and a low expression group. The relationship between SPINK5 expression and the pathological parameters of NSCLC patients was then analyzed. As given in Table 1, lymphatic metastasis and high T staging were more likely to occur in patients with low expression of SPINK5. Therefore, SPINK5 may be used as a new biological indicator to predict the malignant progression of NSCLC.
Overexpression of SPINK5 Inhibited the Proliferation and Migration of NSCLC Cells.
In the NSCLC cell lines H1299 and SPC-A1, a SPINK5 overexpression model was first constructed. The qRT-PCR results suggested that transfection of pcDNA-SPINK5 significantly upregulated SPINK5 in NSCLC cells (Figure 2(a)). Subsequently, the CCK-8 assay uncovered that, compared with cells transfected with pcDNA-NC, viability remarkably decreased after overexpression of SPINK5 in H1299 and SPC-A1 cells (Figure 2(b)). Consistently, the EdU assay also yielded the conclusion that overexpression of SPINK5 attenuated the proliferative rate of NSCLC cells (Figure 2(c)). In addition, the migratory ability of H1299 and SPC-A1 cells was reduced after overexpression of SPINK5 (Figure 2(d)).
PSIP1 Was Highly Expressed in NSCLC Tissues and Cell Lines and Bound to SPINK5.
To further explore how SPINK5 alleviates the malignant progression of NSCLC, we predicted the potential downstream gene of SPINK5. A binding site pairing to SPINK5 sequences was identified in the PSIP1 3′UTR. In H1299 and SPC-A1 cells overexpressing SPINK5, PSIP1 was markedly downregulated (Figure 3(a)). Converse to the expression pattern of SPINK5, PSIP1 was upregulated in NSCLC tissues and cell lines (Figures 3(b) and 3(c)). Moreover, a negative correlation was discovered between SPINK5 and PSIP1 levels in NSCLC tissues. Later, the dual-luciferase reporter assay uncovered that overexpression of PSIP1 was only able to decrease luciferase activity in pmirGLO-SPINK5-WT, verifying the binding between SPINK5 and PSIP1 (Figure 3(d)).
The SPINK5/PSIP1 Axis Inhibited the Proliferation and Migration of NSCLC Cells.
Rescue experiments were carried out to elucidate the synergistic regulation of SPINK5 and PSIP1 on the cell phenotypes of NSCLC. Co-overexpression of SPINK5 and PSIP1 resulted in a lower level of SPINK5 in H1299 and SPC-A1 cells than overexpression of SPINK5 alone (Figure 4(a)). Subsequently, both CCK-8 and EdU assays showed that, compared with single transfection of pcDNA-SPINK5, the proliferative capacity of NSCLC cells was markedly enhanced by cotransfection of pcDNA-SPINK5 and pcDNA-PSIP1 (Figures 4(b) and 4(c)). A higher number of migratory cells was detected in NSCLC cells co-overexpressing SPINK5 and PSIP1 than in those overexpressing SPINK5 only (Figure 4(d)).
Discussion
Although most patients receive active treatment, the 5-year survival of NSCLC is lower than 15% [5][6][7]. For the diagnosis and treatment of NSCLC, there is still a lack of adequate gene targets. Hence, it is necessary to screen for more suitable biomarkers of NSCLC with high specificity and sensitivity [8][9][10]. In recent years, with the progress of genetic engineering and proteomics, the discovery of new tumor markers has become possible [10,11]. SPINK5 is a secreted polypeptide which is able to inhibit the activities of various serine proteases such as trypsinogen [12][13][14][15]. Subsequent research found that SPINK5 participates in the development of human cancers through EGFR signaling, owing to its structural similarity to EGF [15][16][17]. A previous study demonstrated that SPINK5 was differentially expressed between ever and never smokers, with concordantly higher expression in ever smokers [18].
In the present study, SPINK5 was downregulated in NSCLC tissues in comparison to normal ones. Its level was found to be related to tumor staging and lymphatic metastasis of NSCLC. Therefore, it is speculated that SPINK5 may serve an anticancer role in NSCLC. Subsequently, H1299 and SPC-A1 cells, which have relatively low expression of SPINK5, were selected to generate SPINK5 overexpression models by transfection of pcDNA-SPINK5. Overexpression of SPINK5 resulted in a decline in the proliferative and migratory potentials of NSCLC cells.
As PSIP1 was predicted and verified to be the downstream gene of SPINK5, its biological function in affecting NSCLC cell behaviors was analyzed. PSIP1 was upregulated in NSCLC tissues and negatively correlated with SPINK5. In addition, overexpression of PSIP1 could reverse the inhibited proliferative and migratory potentials of NSCLC cells overexpressing SPINK5. Taken together, SPINK5 and its target PSIP1 synergistically regulate the malignant progression of NSCLC through a negative feedback loop.
In conclusion, SPINK5 level has a close relation to tumor staging and lymphatic metastasis in NSCLC. It serves as a tumor-suppressor gene that is responsible for inhibiting NSCLC proliferation and migration through negatively regulating PSIP1.
Data Availability
The datasets used and analyzed during the current study are available from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 2022-03-27T15:11:06.175Z | 2022-03-25T00:00:00.000 | {
"year": 2022,
"sha1": "d8736d75bbc0a754a5c890bbb1a7004538a729b0",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/jhe/2022/2209979.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "be7ca613b1d33c13d5f664d1a8c495aded570e1b",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219424509 | pes2o/s2orc | v3-fos-license | A New Approach to Intertemporal Choice: The Delay Function
The framework of this paper is intertemporal choice, which traditionally has been studied with preference relations and discount functions. However, the interest of econophysics in this topic makes time become a central magnitude. Therefore, the aim of this paper is to introduce the concept of delay function and, by using this tool, to analyze the concept of impatience and the different types of inconsistency. In behavioral finance, consistency is correlated with the concept of symmetry because, in this case, the indifference between two rewards does not change when the same delay is added to their respective availability dates. Moreover, we have shown the way to derive a discount (respectively, delay) function starting from the expression of its corresponding delay (respectively, discount) function by requiring some suitable conditions for this construction. Finally, we have deduced the concept of instantaneous variation rate and Prelec's measure of inconsistency in terms of the delay function.
Introduction
Intertemporal choice involves making decisions between several alternatives whose monetary amounts take place at different instants of time. From the point of view of economics, Samuelson [1] carried out the first study on intertemporal choice when introducing the so-called Discounted Utility (DU) model. However, the latest studies in the field of behavioral economics, econophysics, and neuroeconomics have revealed several limitations of the DU model.
In effect, among the former disciplines, econophysics is achieving a great relevance due to the high number of techniques from physics which are being applied to economics: theoretical macroeconomics (wealth distributions), microstructure of financial markets (order book modeling), econometrics of financial bubbles and crashes, etc. [2]. More specifically, within this wide field of research, physics has been applied to finance (returns of financial assets-fat tails, volatility clustering, autocorrelation, etc.) [3,4]. However, in this paper we are interested in the description of intertemporal choice from the perspective of "time" (see, e.g., the work by Zauberman [5][6][7]) which is the main instrumental variable in econophysics. In effect, one of the main concepts of intertemporal choice is discount function which is present in several physical processes characterized by the loss of properties of a given system. For example, the reduction through time of temperature in a system or the decrease of individuals in an organism can be described in the same way as the loss of acquisitive power suffered by a monetary unit (i.e., temporal discounting).
In this context, Cajueiro [8] demonstrated that the q-exponential function, present in the deformed algebra inspired by Tsallis' nonextensive thermodynamics, could be used to model discount functions in intertemporal choice. The introduction of this discount function is justified when analyzing some situations of decreasing impatience which lead to the phenomenon of dynamic inconsistency.
In effect, the impatience has been defined by Takahashi [9] as a strong preference for small, immediate rewards over large, delayed ones. In the study of this concept, many researchers have shown that subjects discount delayed losses less fast than they do delayed gains [10,11]. These results were completed with further analysis based on the asymmetry observed in making decisions of discounting tasks about gains and losses [12,13]. Specifically, clinical studies have been conducted with magnetic resonance imaging which are consistent with the idea that an asymmetric activity pattern underlies the process of discounting future gains and future losses [14].
However, it is also important to take into account the situations in which the subject changes his/her initial decision when the reward is delayed over the time, that is to say, people may exhibit inconsistency when making intertemporal choices.
The main measure of inconsistency was given by Prelec [15], who considered that the degree of decreasing impatience was represented by the convexity of the Napierian logarithm of the discount function. The drawback of this index is the difficulty in measuring it, whereby Rohde [16,17], starting from an indifference pair, defined the so-called hyperbolic factor, which allows us to distinguish between strongly and moderately decreasing impatience. Anchugina et al. [18] extended Prelec's result from the mixture of two functions to a finite number of DI functions.
On the other hand, in the Multifractal Model of Asset Returns (MMAR), introduced by Mandelbrot [19], the multifractality of returns results from a deformation of time [20], because the so-called business time is dictated by the density of transactions. In the same way, the magnitude "time" in intertemporal choice has been deformed to explain certain variations of impatience (inconsistency). For example, Cajueiro [8] and Takahashi [21] deformed time in the hyperbolic discount function, giving rise to the aforementioned q-exponential discounting. Several analyses in econophysics have shown the relation between the psychophysical effects of time perception and the anomalies in intertemporal choice. In effect, [22] introduces a discussion of the current influence of time perception on intertemporal choice by exploring different representations. In particular, Lu and Li [23] study the psychophysics present in consumers' preferences. On the other hand, recent studies by Takahashi et al. [24] used Tsallis' statistics-based econophysics to show that the q-exponential discount function may continuously parameterize a subject's consistency in intertemporal choice. This result was generalized by Cruz and Muñoz [25] to any discount function, based on the deformed algebra developed in Tsallis' nonextensive thermostatistics. Later, Cruz and Ventre [26] and Cruz et al. [27] deformed time by means of Stevens' power law in a subadditive discount function in order to obtain inverse-S curves. In this context, Webb [28] provides a novel model to study inverse-S discounting behavior. Indeed, the analysis of time with delay functions will help us to better understand those mechanisms of intertemporal choice centered on time, such as the deformation of time or the several types of decreasing impatience. This paper is organized as follows. In the current section we have contextualized the topic of inconsistency within the fields of econophysics and intertemporal choice. In Section 2, the concept of impatience and the different types of inconsistency are analyzed starting from the concept of delay function. In Section 3, we show the way to derive a discount (respectively, delay) function starting from the expression of its corresponding delay (respectively, discount) function by requiring some suitable conditions for this construction. In Section 4, we use the concept of delay function to derive the so-called instantaneous variation rate and Prelec's index. Next, in Section 5 we introduce a set of characterizations of the different types of impatience starting from the definition of delay function. Finally, Section 6 summarizes and concludes.
The Delay Function
The aim of this paper is to analyze the concept of impatience and the different types of inconsistency by using the so-called delay function, which will be defined in this section. Therefore, the delay function becomes a new element for the mathematical treatment of intertemporal choice. Consequently, it is necessary to derive the expression of the discount function underlying the intertemporal choice process starting from the concept of delay function, and vice versa. Additionally, it is well known that preferences can be embedded in this framework. In this way, several scholars have paid attention to the different mathematical conditions to be satisfied by these preference relations (for a summary, see the paper by Baucells and Heukamp [29]). Consequently, intertemporal choice can be viewed from three different perspectives.
In effect, intertemporal choice can be treated by means of discount functions or, alternatively, with preference relations. However, as shown by Figure 1, intertemporal choice can also be related to delay functions, which raises the need for procedures to generate both discount and delay functions starting from a preference relation (steps (1) and (2)), and a delay function from a discount function, and vice versa (step (3)).
Figure 1. Preference relations, discount functions, and delay functions.
The economic literature on the first step is very prolific, the most famous representation theorem being due to Fishburn and Rubinstein [30]: if order, monotonicity, continuity, impatience, and separability hold, and the set of rewards X is an interval, then there are continuous real-valued functions u on X and F on the time interval T such that (s, t) ⪰ (l, t′) if, and only if, u(s)F(t) ≥ u(l)F(t′).
Additionally, u(0) = 0 and u is increasing, whilst F is decreasing and positive.
In what follows, we are going to introduce the concept of delay function [31]. Let s be a small reward and l a large reward. A delay function, denoted by Φ_l(s, ·), gives the delay which makes the subject indifferent between the amount s at each time t and the amount l at time Φ_l(s, t) > t (see Figure 2), viz:
(s, t) ∼ (l, Φ_l(s, t)).
More formally, let M = R+ be the set of rewards and T = R+ ∪ {0} be the set of times corresponding to the dated rewards involved in an intertemporal choice process.
Definition 1. In order to make financial sense, a delay function, assigning to every s, l ∈ M (s ≤ l) and every t ∈ T a time Φ_l(s, t) ∈ T, must satisfy the following conditions:
(i) Φ_s(s, t) = t, for every s ∈ M and every t ∈ T.
(ii) Φ_l(s, t) is strictly increasing with respect to l.
(iii) Φ_l(s, t) is strictly decreasing with respect to s.
(iv) Φ_l(s, t) is strictly increasing with respect to t.
Observe that conditions (i) and (ii) guarantee that, if s < l, then t < Φ_l(s, t). Takeuchi [32] instead introduces the so-called equivalent delay function, which assigns to each pair of rewards (s, l) the delay Φ_l(s, 0), that is, the time making (s, 0) ∼ (l, Φ_l(s, 0)). Observe that this last definition is more limited than Definition 1 since, in an equivalent delay function, always t = 0. A delay function allows to characterize the concepts of decreasing and increasing impatience (see Section 5). Before this, in Sections 3 and 4, we are going to develop the step (3) of Figure 1. Firstly however, we need to introduce the following definition.
Definition 2. A discount function is a continuous map
F: M × T → M
satisfying the following conditions:
(i) F(m, 0) = m.
(ii) F(m, z) > 0, for every m > 0.
(iii) F(m, z) is strictly decreasing with respect to z.
(iv) F(m, z) is strictly increasing with respect to m.
In particular, if F(m, z) = mF(1, z), we will say that F is separable. Obviously, in this case, condition (iv) of Definition 2 is not necessary. The discount function indicates the present value of a reward available at a certain time point.
Discount and Delay Functions
The aim of this section is to derive a discount (respectively, delay) function starting from the expression of its corresponding delay (respectively, discount) function by previously requiring some suitable conditions for these constructions.
Discount from Delay Functions
In this subsection, we are going to introduce the methodology to derive discount functions starting from delay functions. Let Φ be a delay function and define M_m := {s ∈ M such that s ≤ m}. Suppose that, for every m ∈ M, the partial delay function
Φ_m(·, 0): M_m → T
is surjective. Observe that, moreover, this function is bijective since, by condition (iii) of Definition 1, Φ_m(·, 0) is injective. In this case, we define F(0, z) := 0 and, for m > 0,
Φ_m(F(m, z), 0) = z. (2)
On the other hand, Equation (2) can be rewritten as follows: Now, we can enunciate the following proposition.
Proof. Firstly, let us see that F is well defined. In effect, for every m ∈ M, the partial delay function Φ_m(·, 0) is bijective (see the beginning of Section 3.1), whereby its reciprocal, Φ_m^{-1}(·, 0), is also a bijective function, and so F(m, z) is uniquely determined by Equation (3). Since Φ_m(·, 0) is continuous and strictly decreasing with respect to its first argument (condition (iii) of Definition 1), its inverse is continuous and strictly decreasing, whereby F(m, z) is continuous and strictly decreasing with respect to z. Moreover, F(m, 0) = m because, by condition (i), Φ_m(m, 0) = 0. Finally, let m1 < m2 and write F1 := F(m1, z) and F2 := F(m2, z). By condition (ii), Φ_{m2}(F1, 0) > Φ_{m1}(F1, 0) = z = Φ_{m2}(F2, 0) and, since Φ_{m2}(·, 0) is strictly decreasing, F1 < F2.
Consequently, F(m, z) is strictly increasing with respect to m.
This completes the proof of this proposition.
Example 2. Let us consider the following delay function
Φ_l(s, t) = (l(1 + kt) − s)/(ks),
where k > 0. One may easily check that the four conditions of Definition 1 hold. In this case, we can obtain the discount function defined by F(0, z) = 0 and Φ_m(F(m, z), 0) = z. In effect, we can write
(m − F(m, z))/(kF(m, z)) = z,
where we obtain
F(m, z) = m/(1 + kz).
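The inversion step Φ_m(F(m, z), 0) = z can also be carried out numerically with a root-finder and compared against the closed form m/(1 + kz); a minimal sketch, assuming the delay function of this example:

```python
# Numerically invert the hyperbolic delay function of Example 2 to recover F(m, z).
from scipy.optimize import brentq

k = 0.1

def delay(l, s, t):
    """Phi_l(s, t) = (l*(1 + k*t) - s) / (k*s), the delay function of Example 2."""
    return (l * (1 + k * t) - s) / (k * s)

def discount(m, z):
    """Solve Phi_m(F, 0) = z for F in (0, m], per Equation (2)."""
    return brentq(lambda F: delay(m, F, 0) - z, 1e-9, m)

m, z = 100.0, 5.0
print(discount(m, z), m / (1 + k * z))  # both ~66.67
```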
Example 3.
Let us consider the following delay function Φ_l(s, t) = tan(arctan t + ln(l/s)).
One may easily check that the four conditions of Definition 1 hold. In this case, we can obtain the discount function defined by F(0, z) = 0 and Φ_m(F(m, z), 0) = z.
In effect, we can write tan(ln(m/F(m, z))) = z, that is, ln(m/F(m, z)) = arctan z, where we obtain F(m, z) = m exp{−arctan z}.
Once we have deduced how to obtain a discount function starting from its corresponding delay function, we will then introduce the characterization of the indifference of rewards by using a delay function and also the particular case in which the discount function is separable. In effect, the indifference of rewards can be characterized by the following proposition.
Proposition 2. Let x, y ∈ M and r ∈ T. Then (x, r) ∼ (y, r′) if, and only if,
r′ = [Φ_y(·, 0) ∘ Φ_x^{-1}(·, 0)](r), (4)
where ∘ is the composition of functions.
Therefore, we have shown that the indifference of rewards is characterized by the delay function.
Remark 2.
Observe that expression (4) is consistent since Φ_x^{-1}(·, 0)(r) ∈ M. Moreover, observe that, given two amounts x and y, the indifference (x, r) ∼ (y, Φ_y(x, r)) must be restricted to those times r such that the delay function can be decomposed as the composition of the reciprocal of Φ_x(·, 0) and Φ_y(·, 0), applied to time r.
Example 4. For the delay function of Example 2, one may easily show that
Φ_x^{-1}(·, 0)(r) = x/(1 + kr).
Therefore,
[Φ_y(·, 0) ∘ Φ_x^{-1}(·, 0)](r) = Φ_y(x/(1 + kr), 0) = (y(1 + kr) − x)/(kx).
On the other hand,
Φ_y(x, r) = (y(1 + kr) − x)/(kx),
and so Equation (4) holds.
Definition 3.
A delay function Φ is said to be linear if, for every (s, l, t) ∈ M × M × T and every k > 0, Φ_{kl}(ks, t) = Φ_l(s, t).
Proposition 3.
A delay function Φ is linear if, and only if, its corresponding discount function F is separable.
Corollary 1.
If the delay function Φ is linear, then (x, r) ∼ (y, r′) if, and only if, x/y = F(r′)/F(r).
Proof. In this case, by Equation (2), one has Φ_y(xF(r), 0) = r′ and so xF(r) = yF(r′), where we obtain x/y = F(r′)/F(r), which is the expression of the well-known discount ratio for separable discount functions.
Example 5.
The delay function of Example 3 satisfies the condition of linearity. Obviously, in this case, (x, r) ∼ (y, r′) implies xF(r) = yF(r′) and then x/y = F(r′)/F(r) = exp{arctan r − arctan r′}.
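The discount-ratio identity of Corollary 1 can be verified numerically for this linear delay function; the amounts and times below are arbitrary, chosen small enough that arctan t + ln(l/s) stays below π/2, where this delay function is well defined.

```python
# Check Corollary 1 for the linear delay function of Example 3:
# (x, r) ~ (y, r') with r' = Phi_y(x, r) should satisfy x/y = F(r')/F(r).
import math

def delay(l, s, t):
    """Phi_l(s, t) = tan(arctan t + ln(l/s))."""
    return math.tan(math.atan(t) + math.log(l / s))

def F(t):
    """Separable discount function F(t) = exp(-arctan t) from Example 3."""
    return math.exp(-math.atan(t))

x, y, r = 50.0, 60.0, 0.5
r_prime = delay(y, x, r)
print(x / y, F(r_prime) / F(r))  # both ~0.8333
```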
Delay from Discount Functions
Before presenting the methodology to obtain a delay function starting from the expression of its corresponding discount functions, we are going to introduce the following concepts of regular and singular discount functions, which are necessary to develop this subsection.
Definition 4.
A discount function F is said to be regular if, for every m ∈ M, Im(F(m, ·)) = ]0, m], that is to say, if lim_{z→∞} F(m, z) = 0. Contrarily, a discount function F is said to be singular if lim_{z→∞} F(m, z) =: L(m) > 0. Given (s, t) and l, with s < l, Φ_l(s, t) is defined such that (see Figure 4)
F(l, Φ_l(s, t)) = F(s, t), (5)
from which we obtain
Φ_l(s, t) = F(l, ·)^{-1}(F(s, t)). (6)
In the particular case in which F is separable, Φ_l(s, t) can be obtained as follows:
Φ_l(s, t) = F^{-1}((s/l)F(t)).
Obviously, L(s) ≤ L(l) (see the two dotted horizontal lines in Figure 5). In this case, the procedure described in Definition 4 is only possible for those values of t such that L(l) < F(s, t) = F(l, Φ_l(s, t)) < s. The question arising now is whether the delay function corresponding to a discount function, which comes from a delay function, is the starting delay function. The answer is affirmative since, given a delay function Φ, its associated discount function is given by F(0, z) = 0 and Equation (3), that is, F(m, z) = Φ_m^{-1}(·, 0)(z).
In order to determine the delay function Φ corresponding to the just-obtained discount function, we start from the indifference (s, t) ∼ (l, Φ_l(s, t)) and then F(s, t) = F(l, Φ_l(s, t)),
where we obtain (see Equation (4))
Φ_l(s, t) = [Φ_l(·, 0) ∘ Φ_s^{-1}(·, 0)](t),
which is precisely the starting delay function.
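This round trip can be checked numerically for a regular, separable discount function: invert F to get Φ via Equation (6) and compare with the closed form obtained in Example 2. A sketch:

```python
# Recover the delay function from a separable, regular discount function via
# Equation (6): F(Phi_l(s, t)) = (s/l) * F(t), inverted with a root-finder.
from scipy.optimize import brentq

k = 0.1
F = lambda t: 1 / (1 + k * t)  # hyperbolic discounting; regular since F -> 0

def delay(l, s, t):
    target = (s / l) * F(t)
    return brentq(lambda u: F(u) - target, t, 1e6)

s, l, t = 50.0, 80.0, 2.0
print(delay(l, s, t))                   # numeric inversion: ~9.2
print((l * (1 + k * t) - s) / (k * s))  # closed form from Example 2: 9.2
```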
The Instantaneous Variation Rate
In this section, we are going to derive the expression of the so-called instantaneous variation rate corresponding to the interval [t, t′], denoted as v(t, t′), by using delay functions. This parameter is a measure of inconsistency introduced by Cruz and Muñoz [33] based exclusively on time. But before, we are going to introduce some comments to relate the instrumental variables used here with physics. In finance, the force of discounting is measured by the instantaneous discount rate, given by
δ(t) = −(d/dt) ln F(t).
Observe that the discount rate is the derivative of the Napierian logarithm of the discount function because, in finance, we are interested in relative magnitudes. In effect, take into account that δ(t) can be written as
δ(t) = −F′(t)/F(t).
On the other hand, the acceleration of discounting necessarily involves the derivative of δ(t), which leads to, among other measures, the degree of convexity by Prelec. Observe that all the aforementioned physical and financial processes describe the temporal evolution of a system. Figure 6 schematizes the content of Sections 4 and 5 of this paper.
In effect, consider the indifference relation
(s, t) ∼ (l, Φ_l(s, t)),
where s < l and, consequently, t < Φ_l(s, t). If the availability of the reward s is delayed until moment t + σ, with σ > 0, the delay Φ_l(s, t + σ) now satisfies the following indifference relation: (s, t + σ) ∼ (l, Φ_l(s, t + σ)). If the discount function underlying the intertemporal choice is separable, the first indifference can be written as sF(t) = lF(Φ_l(s, t)).
The instantaneous variation rate is then defined as
v := ∂Φ_l(s, t)/∂t. (7)
Observe that v depends on l, s, and t. However, it is easy to demonstrate that v is only a function of t and t′ = Φ_l(s, t), as stated at the beginning of this subsection. In effect, take into account that, in case of separability, differentiating sF(t) = lF(Φ_l(s, t)) with respect to t shows that the following equality holds:
v(t, t′) = δ(t)/δ(t′). (8)
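The separability identity behind Equation (8), namely δ(t) = δ(Φ_l(s, t)) · ∂Φ_l(s, t)/∂t, can be checked symbolically for the hyperbolic pair of Example 2 (F(t) = 1/(1 + kt) together with its delay function); a sketch using sympy:

```python
# Symbolic check that delta(t) = delta(Phi) * dPhi/dt for the hyperbolic pair
# F(t) = 1/(1 + k*t), Phi_l(s, t) = (l*(1 + k*t) - s)/(k*s).
import sympy as sp

k, s, l, t, u = sp.symbols("k s l t u", positive=True)
F = 1 / (1 + k * u)                            # hyperbolic discount function of u
delta = sp.simplify(-sp.diff(sp.log(F), u))    # instantaneous discount rate k/(1+k*u)

Phi = (l * (1 + k * t) - s) / (k * s)          # delay function of Example 2
lhs = delta.subs(u, t)
rhs = delta.subs(u, Phi) * sp.diff(Phi, t)
print(sp.simplify(lhs - rhs))                  # 0, so v = delta(t)/delta(t') indeed
```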
Prelec's Measure of Inconsistency
In this subsection, we are going to derive the expression of Prelec's measure of inconsistency by using delay functions. In effect, Equation (7) can be written as
∂Φ_l(s, t)/∂t = δ(t)/δ(Φ_l(s, t)). (9)
By subtracting 1 from both sides of Equation (9), one has
∂Φ_l(s, t)/∂t − 1 = [δ(t) − δ(Φ_l(s, t))]/δ(Φ_l(s, t)). (10)
As Φ_s(s, t) = t, then ∂Φ_s(s, t)/∂t = 1 and so Equation (10) can be written as
∂Φ_l(s, t)/∂t − ∂Φ_s(s, t)/∂t = [δ(t) − δ(Φ_l(s, t))]/δ(Φ_l(s, t)). (11)
By dividing both sides of Equation (11) by l − s, one has
[∂Φ_l(s, t)/∂t − ∂Φ_s(s, t)/∂t]/(l − s) = [δ(t) − δ(Φ_l(s, t))]/[δ(Φ_l(s, t))(l − s)]. (12)
Letting l → s, the left-hand side of Equation (12) is
∂²Φ_l(s, t)/∂l∂t |_{l=s}.
On the other hand, when l → s, by the continuity of Φ, Φ_l(s, t) → t and so the right-hand side of Equation (12) is
−[δ′(t)/δ(t)] · ∂Φ_l(s, t)/∂l |_{l=s}.
Therefore, Equation (12) would remain
∂²Φ_l(s, t)/∂l∂t |_{l=s} = −[δ′(t)/δ(t)] · ∂Φ_l(s, t)/∂l |_{l=s}
and, consequently,
−δ′(t)/δ(t) = [∂²Φ_l(s, t)/∂l∂t |_{l=s}] / [∂Φ_l(s, t)/∂l |_{l=s}]. (13)
Observe that the left-hand side of Equation (13) is Prelec's measure of inconsistency, denoted by P(t). Consequently, this parameter can be written as a function of the delay function as follows:
P(t) = [∂²Φ_l(s, t)/∂l∂t |_{l=s}] / [∂Φ_l(s, t)/∂l |_{l=s}]. (14)
Example 6. Let us consider the hyperbolic discount function
F(t) = 1/(1 + kt),
where k > 0. In this case, the delay function is
Φ_l(s, t) = (l(1 + kt) − s)/(ks).
Simple calculations lead to
P(t) = k/(1 + kt).
Finally, if the discount function F(t) is separable, the following equality holds:
∂Φ_l(s, t)/∂l |_{l=s} = 1/(sδ(t)).
Therefore,
P(t) = −δ′(t)/δ(t).
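Equation (14) can be checked symbolically: for the hyperbolic delay function of Example 2, the delay-based expression reproduces P(t) = k/(1 + kt), the same value obtained directly from −δ′(t)/δ(t). A sketch:

```python
# Check Prelec's measure computed from the delay function (Equation (14))
# against -delta'(t)/delta(t) for hyperbolic discounting.
import sympy as sp

k, s, l, t = sp.symbols("k s l t", positive=True)
Phi = (l * (1 + k * t) - s) / (k * s)      # delay function of Example 2

dPhi_dl = sp.diff(Phi, l).subs(l, s)       # dPhi/dl evaluated at l = s
P_delay = sp.simplify(sp.diff(dPhi_dl, t) / dPhi_dl)

delta = k / (1 + k * t)                    # discount rate of F(t) = 1/(1 + k*t)
P_direct = sp.simplify(-sp.diff(delta, t) / delta)

print(P_delay, P_direct)                   # both k/(k*t + 1)
```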
Example 7.
Let us consider the discount function F(t) = exp{−arctan t} of Example 3. In this case, Φ_1(m, t) = tan(arctan t − ln m), and simple calculations lead to P(t) = 2t/(1 + t²).
Types of Impatience with Delay Functions
In this section, we are going to characterize the different types of impatience. More specifically, the concepts of strongly and moderately decreasing and increasing impatience will be defined by using the concept of delay function. To do this, let us start from an arbitrary indifference pair [34]:
(s, t) ∼ (l, t′) and (s, t + σ) ∼ (l, t′ + τ),
where 0 < s < l, t < t′, σ > 0, and τ > 0. Recall that the different types of impatience can be defined by comparing σ with τ, in the following way. By applying Definition 1, one has (s, t) ∼ (l, Φ_l(s, t)).
Consequently, t′ = Φ_l(s, t) and Φ_l(s, t + σ) = t′ + τ holds. Therefore, the different types of impatience can be described by using the notation provided by the delay function:
(i) Decreasing impatience holds if σ < τ. In this case,
Φ_l(s, t + σ) − Φ_l(s, t) > σ.
So,
Φ_l(s, t + σ) > Φ_l(s, t) + σ.
- More specifically, moderately decreasing impatience holds if also tτ < t′σ. Simple algebra shows that this condition is equivalent to
[Φ_l(s, t + σ) − Φ_l(s, t)]/σ < Φ_l(s, t)/t.
- On the other hand, strongly decreasing impatience holds if also tτ ≥ t′σ. Now, this condition is equivalent to
[Φ_l(s, t + σ) − Φ_l(s, t)]/σ ≥ Φ_l(s, t)/t.
(ii) Increasing impatience holds if
Φ_l(s, t + σ) < Φ_l(s, t) + σ.
(iii) Finally, constant impatience (stationarity) holds if
Φ_l(s, t + σ) = Φ_l(s, t) + σ.
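The four cases above reduce to comparing σ with τ = Φ_l(s, t + σ) − Φ_l(s, t) and, within decreasing impatience, tτ with t′σ. A small classifier implementing exactly these inequalities, applicable to any delay function passed in, might look as follows (the hyperbolic delay function used in the example is the one from Example 2):

```python
# Classify the type of impatience exhibited by a delay function at (s, l, t, sigma),
# using the inequalities stated above.
def impatience_type(Phi, s, l, t, sigma):
    t_prime = Phi(l, s, t)
    tau = Phi(l, s, t + sigma) - t_prime
    if abs(tau - sigma) < 1e-12:
        return "constant (stationarity)"
    if tau < sigma:
        return "increasing"
    # decreasing impatience: moderate vs. strong via t*tau against t'*sigma
    return "strongly decreasing" if t * tau >= t_prime * sigma else "moderately decreasing"

hyperbolic = lambda l, s, t, k=0.1: (l * (1 + k * t) - s) / (k * s)
print(impatience_type(hyperbolic, 50.0, 80.0, 2.0, 1.0))  # moderately decreasing
```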
Characterizing Constant, Decreasing, and Increasing Impatience
Throughout this section, we are going to introduce different results which relate the different types of impatience to the delay function implicit in the process of intertemporal choice. Firstly, delay functions satisfying constant impatience or stationarity are the solutions of the following functional equation:
f(t + σ) = f(t) + σ, for every σ > 0, (15)
where Φ_l(s, t) := f(t). The general solution of the functional Equation (15) is
f(t) = t + ψ(s, l),
where ψ necessarily satisfies the following conditions:
• ψ(s, s) = ψ(l, l) = 0.
• ψ(s, l) is strictly increasing with respect to l.
• ψ(s, l) is strictly decreasing with respect to s.
In summary, given two rewards s and l (s < l), a delay function satisfying constant impatience or stationarity (Equation (15)) can be expressed as Φ_l(s, t) = t + ψ(s, l), where ψ satisfies the former three conditions. Therefore, the indifference relation remains as (s, t) ∼ (l, Φ_l(s, t)) = (l, t + ψ(s, l)).
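As a hedged illustration (our example, additionally assuming a separable discount function), exponential discounting yields a ψ with exactly the three properties listed above; in LaTeX:

% Sketch, our choice of example: F(t) = e^{-kt}, k > 0.
% Stationarity requires s F(t) = l F(t + \psi(s,l)), i.e.
%   s e^{-kt} = l e^{-k(t + \psi(s,l))},
% whence
%   \psi(s,l) = \frac{1}{k}\,\ln\frac{l}{s},
% which satisfies \psi(s,s) = 0, is strictly increasing in l
% and strictly decreasing in s.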
Now, we are going to demonstrate that function (16) satisfies the conditions required to be considered a delay function: • Φ_l(s, t) is strictly increasing with respect to t, since obviously t_1 < t_2 implies t_1 + ψ(s, l) < t_2 + ψ(s, l). • Φ_l(s, t) is strictly increasing with respect to l, since l_1 < l_2 implies ψ(s, l_1) < ψ(s, l_2) and so Φ_{l_1}(s, t) < Φ_{l_2}(s, t). • Φ_l(s, t) is strictly decreasing with respect to s, since s_1 < s_2 implies ψ(s_1, l) > ψ(s_2, l) and so Φ_l(s_1, t) > Φ_l(s_2, t).
Finally, making t = 0 in Equation (16), then t′ = 0 + ψ(F(l, t′), l) and, therefore, the expression of a stationary discount function can be derived. For instance, taking ψ(s, l) = l² − s², the delay function is f(t) = Φ_l(s, t) = t + l² − s². Therefore, t′ = l² − F(l, t′)², and the discount function corresponding to this delay function is F(l, t′) = √(l² − t′).
Characterizing Strongly and Moderately Decreasing Impatience
According to Theorem 1 in [36], an individual exhibits strongly decreasing impatience if, and only if, for every t < t′, λ > 1 and 0 < s < l, (s, t) ∼ (l, t′) implies (s, λt) ≼ (l, λt′). By applying the definition of delay function to the indifference, one has (s, t) ∼ (l, Φ_l(s, t)).
Moreover, the preference can be written as follows: (s, λt) ≼ (l, λΦ_l(s, t)) and, consequently, Φ_l(s, λt) ≥ λΦ_l(s, t). Now, we are going to analyze a specific delay function exhibiting strongly decreasing impatience. To do this, let f(t) := Φ_l(s, t) denote the general solution of the functional equation:

f(λt) = λ f(t).

In this case, it can be shown that

f(t) = t ξ(s, l),

where ξ necessarily must satisfy the following conditions: • ξ(s, s) = ξ(l, l) = 1. • ξ(s, l) is strictly increasing with respect to l. • ξ(s, l) is strictly decreasing with respect to s.
Therefore, given two rewards l and s (0 < s < l), a specific delay function reflecting strongly decreasing impatience satisfies Φ_l(s, t) ≥ t ξ(s, l).
Now, we are going to analyze the discount functions exhibiting moderately decreasing impatience. In effect, according to Corollary 1 in [36], an individual exhibits moderately decreasing impatience if, and only if, for every t < t′, k > 0, λ > 1 and 0 < s < l, (s, t) ∼ (l, t′) implies (s, t + k) ≼ (l, t′ + k) but (s, λt) ≻ (l, λt′). By applying the definition of delay function to the indifference, one has (s, t) ∼ (l, Φ_l(s, t)).
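To make the distinction concrete, here is a hedged numerical sketch in Python (ours, not from the paper) showing that hyperbolic discounting exhibits decreasing, but only moderately decreasing, impatience:

k = 0.5  # hyperbolic parameter (illustrative choice)

def phi(l, s, t):
    """Delay function of the hyperbolic discount F(t) = 1/(1 + k*t)."""
    return ((l / s) * (1.0 + k * t) - 1.0) / k

s, l, t, sigma, lam = 1.0, 3.0, 2.0, 1.5, 2.0
t_prime = phi(l, s, t)                      # (s, t) ~ (l, t')

# Decreasing impatience: the extra delay tau compensating a common
# delay sigma exceeds sigma.
tau = phi(l, s, t + sigma) - t_prime
print("decreasing impatience:", tau > sigma)                        # True

# Strong DI would require Phi_l(s, lam*t) >= lam*Phi_l(s, t);
# it fails here, so the hyperbolic function is only moderately DI.
print("strongly decreasing:", phi(l, s, lam * t) >= lam * t_prime)  # False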
Conclusions
This paper analyzed the concept of impatience and the different types of inconsistency from the point of view of a delay function. It is well known that intertemporal choice can be treated by means of discount functions or, alternatively, with preference relations; in this paper, we derived the expression of the discount function underlying the process of intertemporal choice starting from the concept of delay function. To do this, given a dated reward (s, t) and an amount l (0 < s < l), a delay function Φ assigns a time, denoted by Φ_l(s, t), such that (s, t) ∼ (l, Φ_l(s, t)).
Firstly, with this novel methodology, we showed the way to derive the discount function associated with a given delay function satisfying some suitable conditions. This discount function is defined such that F(0, z) = 0 and is given by Equation (3). It was demonstrated that F(m, z) satisfies the conditions to be a discount function. As a consequence of this result, we characterized the indifference between rewards by using the delay function.
Secondly, we obtained the delay function corresponding to a discount function by considering the definitions of regular and singular discount functions. In effect, the delay function derived from a discount function is defined by Equation (6). We also demonstrated a round-trip consistency: the delay function obtained from a discount function which itself derives from a delay function is the original delay function.
Thirdly, we derived the expression of the so-called instantaneous variation rate and then Prelec's measure of inconsistency, using the delay function associated with the process of intertemporal choice.
Once we displayed the existing relationship between delay and discount functions, we introduced some characterizations of the different types of inconsistency (increasing impatience, and moderately and strongly decreasing impatience) starting from the concept of delay function. Finally, different methodologies to generate delay functions were proposed as a first step towards implementing delay functions in the modeling of intertemporal choice from an empirical point of view. This work is left for further research.
"year": 2020,
"sha1": "7c94c1c63789235229836ef0cd136af58d15c682",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-8994/12/5/807/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "41e51f9d8af567a618546eb5b8024a61b88277e8",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
Luminous type IIP SN 2013ej with high-velocity Ni-56 ejecta
We explore the well-observed type IIP SN 2013ej with its peculiar luminosity evolution. It is found that a spherically symmetric hydrodynamic model cannot reproduce in detail the bolometric luminosity at both the plateau and the radioactive tail. Yet the ejecta mass of 23-26 Msun and the kinetic energy of (1.2-1.4)x10^51 erg are determined rather confidently. We suggest that the controversy revealed in hydrodynamic simulations stems from the strong asphericity of the Ni-56 ejecta. An analysis of the asymmetric nebular H-alpha line and of the peculiar radioactive tail made it possible to recover parameters of the asymmetric bipolar Ni-56 ejecta, with the heavier jet residing in the rear hemisphere. The inferred Ni-56 mass is 0.039 Msun, twice as large as a straightforward estimate from the bolometric luminosity at the early radioactive tail. The bulk of ejected Ni-56 has velocities in the range of 4000-6500 km/s. The linear polarization predicted by the model with the asymmetric ionization produced by bipolar Ni-56 ejecta is consistent with the observational value.
INTRODUCTION
The luminous type IIP supernova (SN IIP) 2013ej in M74 (Kim et al. 2013) has attracted attention because of its early discovery, close location (D ≈ 9 Mpc), and unusual behavior. Specifically, at the plateau phase the flux declines more rapidly than is normal for SNe IIP (Huang et al. 2015), which caused confusion over the classification of SN 2013ej, so one can meet type IIP (Huang et al. 2015), type IIP/IIL (Mauerhan et al. 2017), and type IIL (Bose et al. 2015) (below we prefer to use type IIP/L). Even more striking is the luminosity evolution at the radioactive tail: the decline at the early radioactive tail is more rapid than the 56Co decay rate typical of SNe IIP (Dhungana et al. 2016). Moreover, the decline rate shows a slowdown after about 200 days instead of an acceleration. Yuan et al. (2016) attribute the rapid decline at the radioactive tail to either a low ejecta mass, a high explosion energy, an extreme outward mixing of 56Ni, or a combination of these factors; the decline slowdown is not discussed. An interesting feature is the width and pronounced asymmetry of the nebular Hα emission; the latter could indicate bipolar 56Ni ejecta (Bose et al. 2015) by analogy with SN 2004dj (Chugai et al. 2005). In this regard the detected linear polarization at the level of ∼1% (Mauerhan et al. 2017; Kumar et al. 2016) is also reminiscent of SN 2004dj. The X-ray emission detected by Chandra and Swift indicates a moderate wind density, characteristic of a red supergiant (RSG) (Chakraborti et al. 2016).
A good observational coverage of the light curve starting from the shock breakout and an extended set of spectra make SN 2013ej a highly valuable object for hydrodynamic modelling and the study of asymmetry effects in SNe IIP. Two hydrodynamic models for SN 2013ej have been reported (Huang et al. 2015; Morozova et al. 2017). However, neither fits the expansion velocities at the photosphere, which raises the question of whether the supernova parameters are reliably inferred. The issue of adequate hydrodynamic parameters (ejecta mass, explosion energy, and pre-SN radius) therefore remains on the agenda. Note that neither of the mentioned hydrodynamic models addresses the radioactive tail. The luminosity evolution at the radioactive tail, however, is closely related to the 56Ni mass and mixing and therefore should be considered as a crucial observational constraint for the SN 2013ej model.
Here we address the issue of hydrodynamic parameters of SN 2013ej on the basis of a standard one-dimensional (1D) hydrodynamic modelling of the bolometric light curve and expansion velocities. We also consider the problem of the 56Ni distribution imprinted in the nebular Hα profile and in the luminosity decline rate at the radioactive tail. We start with the hydrodynamic modelling assuming different possibilities for the 56Ni mass and mixing (Section 2). We then consider the effect of bipolar 56Ni jets on the nebular Hα profile and the radioactive tail (Section 3). The ejecta model and the recovered 56Ni distribution are used for the calculation of the polarization, which in turn is compared to the observed polarization (Section 4). Results are summarized and discussed in Section 5. Below we adopt, following Dhungana et al. (2016), the distance to SN 2013ej D = 9.0 Mpc and the reddening E(B − V) = 0.09 mag.
HYDRODYNAMIC MODELLING
The light curve of SN 2013ej with a luminous plateau indicates that the pre-SN star was a red supergiant (RSG). Although at first glance it is advisable to explode an RSG model prepared by evolutionary computations, previous hydrodynamic simulations have demonstrated that the explosion of an evolutionary RSG is not able to describe all the essential features of the light curve (Utrobin & Chugai 2008). This problem was in fact first revealed for the peculiar type IIP SN 1987A. To produce a sensible fit of the light curve, mixing at the He/H composition interface has to be manually adjusted (Woosley 1988).
We therefore, as usual, choose a nonevolutionary hydrostatic RSG pre-SN model as the initial data for the hydrodynamic simulation. The density profile and the macroscopic mixing between metal, helium, and hydrogen components in such a model are adjusted to fit the photometric and spectroscopic data. Strictly speaking, the nonevolutionary model thus prepared is not a pre-SN in the proper sense of this term. This model should rather be thought of as an outcome of the Rayleigh-Taylor (RT) mixing caused by the shock propagation following the explosion. Yet for convenience we call this nonevolutionary model the "pre-SN". Remarkably, recent three-dimensional (3D) simulations of the SN explosion of an RSG star show that the RT mixing modifies both the density and composition gradients at the composition interfaces in accord with the nonevolutionary pre-SN model used formerly in 1D simulations of the normal type IIP SN 1999em (Utrobin et al. 2017).
The explosion simulation of SN 2013ej utilizes the 1D hydrodynamic code with radiation transfer (Utrobin 2004). The explosion is initiated by a supersonic piston applied to the bottom of the stellar envelope at the boundary with the collapsing 1.4 M⊙ core. As a result of extended simulations we arrive at the alternative models m23e1.2 and m26e1.4 (Table 1), shown in Figs. 1 and 2, respectively. The pre-SN model m23e1.2 has density and composition distributions similar to those of m26e1.4, except for 56Ni: here, unlike model m26e1.4, the 56Ni abundance is assumed constant in mass coordinate, which means that the 56Ni is centrally concentrated. In both models the pre-SN radius is 1500 R⊙, while the ejecta mass and the explosion energy are different.
The early detection of SN 2013ej provides an opportunity to verify the fiducial model m26e1.4 by comparing the calculated R-band light curve with the observations at the shock breakout stage (Fig. 3). The model R-band light curve fits the data fairly well, including the first R-band observation of SN 2013ej (Lee et al. 2013) and the R-band photometry of Dhungana et al. (2016). Identifying the initial steep jump of the R-band light curve at the shock breakout with a non-photometric detection of SN 2013ej on July 24.125, reported by C. Feliciano on the Bright Supernovae website, fixes the explosion at MJD = 56494.705 (July 21.705), i.e., 2.195 days earlier than the estimate of Dhungana et al. (2016). We emphasize the difference between the explosion and shock breakout moments. Henceforth the SN age is counted from the adopted explosion moment. The low luminosity of the pre-SN model before the shock breakout at 2.42 days naturally accounts for the absence of detectable emission on July 23.54 (Shappee et al. 2013).
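A trivial consistency check of the quoted epochs (our own arithmetic, not a computation from the paper), in Python:

# Explosion at MJD 56494.705 (July 21.705) plus shock breakout at
# 2.42 days should match the July 24.125 detection quoted above.
mjd_explosion = 56494.705
t_breakout = 2.42                  # days after explosion
print(mjd_explosion + t_breakout)  # 56497.125, i.e. July 24.125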
Model m23e1.2 with the low amount (0.02 M⊙) and moderate mixing of radioactive 56Ni fits the bolometric light curve at the plateau stage (Fig. 4a), but fails at the radioactive tail, where the calculated light curve declines more slowly than the observations. The latter is directly related to the deep location of 56Ni. The problem is solved in model m26e1.4 with the larger mass and more extended distribution of 56Ni (Table 1). The radial distribution of 56Ni in this case (Fig. 5) is a spherical representation of the bipolar 56Ni jets, which are recovered below from the Hα line and the radioactive tail. While solving the problem of the radioactive tail, model m26e1.4, however, shows a 20% larger luminosity at the photospheric stage. The excess stems from the spherical distribution of 56Ni in the model, which ignores the fact that the bulk of 56Ni resides in the rear hemisphere (see Section 5). With these reservations, model m26e1.4 is preferred over m23e1.2.
Along with the bolometric light curve, the velocity at the photosphere level (photospheric velocity, for short) is another observable that should be met to constrain the hydrodynamic model. The photospheric velocity can be recovered from either absorption minima of weak metal lines or the profile modelling of stronger isolated lines. At the early phase we use the photospheric velocities obtained from the Si II 6355 Å line (Dhungana et al. 2016). A photospheric velocity of 4550 km s−1 on day 44 is found using the Monte Carlo simulation of the Na I 5890, 5896 Å doublet. Photospheric velocities of 2400, 2400, 2200, and 1900 km s−1 on days 69, 73, 78, and 94, respectively, were inferred from the Fe II 6148 Å line profile. The uncertainty of these velocity values does not exceed ±100 km s−1. Both hydrodynamic models reproduce the evolution of the photospheric velocity (Fig. 4b), with a marginally better fit in the case of model m26e1.4.
NEBULAR Hα LINE AND 56Ni JETS
The broad appearance and pronounced asymmetry of the nebular Hα emission suggest that the ejecta likely harbor bipolar 56Ni ejecta (Bose et al. 2015), likewise in the type IIP SN 2004dj (Chugai et al. 2005). Here we explore this conjecture for SN 2013ej with the intention of inferring the parameters of the 56Ni distribution from the Hα profile and the radioactive tail of the bolometric light curve.
The exponential density distribution ρ ∝ exp(−v/v₀) is assumed for the ejecta mass of 26 M⊙ and the kinetic energy of 1.4×10^51 erg, in line with the fiducial model m26e1.4. Note that the exponential density distribution fits the results of the hydrodynamic modelling (Fig. 5). The hydrogen abundance X = 0.7 is assumed to be uniform throughout the ejecta. The 56Ni distribution is set by a central spherical component with the mass M_s and the outer velocity v_s, and by bipolar conical ejecta (jets) with the half-opening angle ψ and the inclination angle θ; in other respects the jets may be different. In the near hemisphere the jet (dubbed the "blue jet") contains M_b of 56Ni and lies in the velocity range v_b1 < v < v_b2; the "red jet" in the rear hemisphere contains M_r of 56Ni and lies in the velocity range v_r1 < v < v_r2. The 56Ni density of each component is assumed to be uniform to minimize the number of free parameters. The 56Ni presumably does not affect the background ejecta density.
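For a homologous exponential profile the normalization is analytic: the standard moment integrals give M = 8πρ₀t³v₀³ and E = 48πρ₀t³v₀⁵, hence E/M = 6v₀². A minimal Python sketch (our own derivation and code, with an illustrative epoch):

import math

M_SUN = 1.989e33          # g
M = 26 * M_SUN            # ejecta mass of model m26e1.4
E = 1.4e51                # kinetic energy, erg

# For homologous ejecta with rho = rho0 * exp(-v/v0):
#   M = 8*pi*rho0*t^3*v0^3   and   E = 48*pi*rho0*t^3*v0^5,
# so E/M = 6*v0^2 (our derivation from the moment integrals).
v0 = math.sqrt(E / (6.0 * M))          # cm/s
print(f"v0 = {v0/1e5:.0f} km/s")       # ~670 km/s

t = 100 * 86400.0                      # day 100 (illustrative epoch)
rho0 = M / (8.0 * math.pi * t**3 * v0**3)
print(f"rho0(day 100) = {rho0:.2e} g/cm^3")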
Parameters of the 56Ni distribution are constrained by two major observables: the nebular Hα profile and the luminosity evolution of the radioactive tail; both sets of data are taken from Dhungana et al. (2016). The distribution of the energy deposition throughout the ejecta is calculated in a single-flight approximation for gamma-quanta with the absorption coefficient k_γ = 0.06 Y_e cm² g⁻¹, where Y_e is the number of electrons per nucleon (Kozma & Fransson 1992). Positrons from the 56Co decay presumably deposit their energy on the spot, with the annihilation quanta taken into account in the gamma-ray emission of the 56Co decay. The deposited energy is assumed to transform into radiation instantly. This approximation is valid until ∼900 days, when a freeze-out effect becomes noticeable (Fransson & Kozma 1993). The Hα emissivity is assumed to be proportional to the local deposition rate ε. This assumption is sufficient to obtain the Hα profile in relative fluxes. However, to take into account Thomson scattering effects in the emission profiles, the distribution of electron concentration is also needed.
The local electron number density n_e can be determined from the ionization balance ε/w = α n_e², where w is the work per hydrogen ionization and α is the recombination coefficient to upper levels n > 2, i.e., for recombination case C, appropriate at the early nebular stage of SNe IIP; an electron temperature of 5000 K is assumed. The standard value w = 36 eV was obtained for the collisional ionization losses of fast electrons in hydrogen (Dalgarno et al. 1999). In SNe IIP a significant amount of deposited energy goes into ultraviolet radiation that is able to reionize hydrogen from the second level, thus reducing the w value. One can use observational data of SN 1987A at the nebular epoch to infer a modified value of w. At the nebular stage, each hydrogen recombination creates one Hα photon, so if the Hα quanta escape, then the Hα luminosity is related to the bolometric luminosity as L(Hα) = (hν/w)L_bol (here hν is the energy of the Hα photon). This ratio is violated at the early nebular stage, when the Hα radiation is affected by absorption. In SN 1987A the Hα is saturated at t < 200 days, which is indicated by the plateau in the Hα light curve during 150−200 days (Hanuschik 1991). On day 200, when the saturation effect becomes negligible, the SN 1987A Hα and bolometric luminosities suggest L(Hα) ≈ 0.1 L_bol (Hanuschik 1991), which in turn implies w ≈ 20 eV; this value is used below.
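A minimal Python sketch of this estimate (our own code; the recombination coefficient and the sample deposition rate are illustrative assumptions, not values from the paper):

import math

EV = 1.602e-12            # erg per eV

def n_e(eps, w_ev=20.0, alpha=4.5e-13):
    """Electron density from the ionization balance eps/w = alpha*n_e^2.

    eps   -- local deposition rate per unit volume, erg s^-1 cm^-3
    w_ev  -- work per hydrogen ionization, eV (20 eV adopted in the text)
    alpha -- recombination coefficient, cm^3 s^-1 (illustrative value
             of order 1e-13 for T_e ~ 5000 K; our assumption)
    """
    return math.sqrt(eps / (w_ev * EV * alpha))

# Illustrative deposition rate (a made-up number, for scale only):
print(f"n_e ~ {n_e(1e-9):.2e} cm^-3")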
The optimal set of parameters of the 56Ni ejecta is given in Table 2. The Hα profile and the radioactive tail of the bolometric light curve are described fairly well with the found model of the bipolar 56Ni ejecta (Fig. 6). Remarkably, the model accounts for both the fast decline after day 110 and the slowdown of the decline after day 200, the fact emphasized by Dhungana et al. (2016). In our search for the optimal parameters we restricted ourselves to the requirement of the minimal 56Ni mass consistent with the Hα profile and the radioactive tail. The upper limit of the 56Ni mass cannot be reliably inferred by this kind of modelling, because high-velocity 56Ni (v > 7000 km s−1) does not significantly affect the Hα profile and the radioactive tail at the nebular stage.

Figure 6. Bipolar 56Ni jet model (see Table 2) for the radioactive tail and the nebular Hα profile. Solid line is the model luminosity deposited by radioactive decay, dashed line is the total luminosity released by radioactive decay, dotted line is the power deposited by positrons, and crosses are the bolometric data reported by Dhungana et al. (2016). Inset in the left upper corner shows the model configuration of 56Ni components with respect to the observer (on the left); the circle depicts the level in the ejecta at the velocity of 10 000 km s−1. Right inset shows the calculated Hα (thick solid line) compared to that observed by Dhungana et al. on day 135 (thin solid line).
56Ni JETS AND POLARIZATION
The aspherical ionization pattern produced by the 56Ni jets should result in linear polarization due to Thomson scattering, likewise in SN 2004dj (Chugai 2006). The question arises whether the polarization in our model is able to account for the observed polarization in SN 2013ej (Mauerhan et al. 2017; Kumar et al. 2016). We consider only the nebular phase in order to minimize effects of the radiation transfer in the opaque ejecta. The polarization is calculated using the Monte Carlo technique (Chugai 2006). The model suggests that a quasi-continuum optical radiation is emitted at a rate proportional to the local deposition rate, an approximation valid at the nebular phase.
The computed polarization for the adopted ejecta and the 56Ni jets model on day 107 is plotted in Fig. 7 as a function of the cosine of the inclination angle. For the optimal inclination angle θ = 56° the model polarization lies in the range between the observed polarization values obtained with alternative assumptions about the interstellar polarization (Mauerhan et al. 2017). The plot demonstrates that our model predicts a linear polarization consistent with the observational data. We conclude therefore that most (if not all) of the observed polarization seems to be related to the bipolar 56Ni ejecta. A doubt may arise concerning the assumed collinearity of the 56Ni jets. It may well be that the 56Ni ejecta could form as a result of the RT instability caused by the shock propagation (e.g. Wongwathanarat et al. 2015). In this case the opposite 56Ni ejecta generally are not collinear. Although our model of collinear jets agrees with the Hα and polarization data, we admit that this does not rule out jet non-collinearity.

Figure 7. Computed polarization on day 107 as a function of the cosine of the inclination angle for the adopted ejecta and 56Ni jets model (see Table 2). Vertical line marks the optimal inclination for the jet model, dashed lines show the observational range from two choices of interstellar polarization (Mauerhan et al. 2017).
DISCUSSION AND CONCLUSIONS
Our aim has been to recover parameters of the hydrodynamic model for the unusual type IIP SN 2013ej and to explore the effects of a possible asymmetry of the 56Ni ejecta. We failed to find a unique spherical hydrodynamic model able to fit the light curve well at both stages, the plateau and the radioactive tail. The model with 0.02 M⊙ of centrally concentrated 56Ni describes the photospheric stage fairly well, but fails at the radioactive tail. The model with twice as much 56Ni residing at high velocities excellently fits the radioactive tail, but overproduces the luminosity at the plateau by a factor of ≈1.2. Yet this model is preferred because the latter mismatch stems from the spherical representation of the bipolar 56Ni jets. In the real 3D picture, with the bulk of radioactive 56Ni residing in the far hemisphere, the flux from the near hemisphere should be lower than in the spherical model due to the occultation effect. A correct description of the light curve of SN 2013ej at the photospheric stage therefore requires multi-dimensional radiation hydrodynamics.

Table 3. Hydrodynamic models of type IIP supernovae.

The hydrodynamic modelling of SN 2013ej led us to conclude that the ejecta mass is 23.1−26.1 M⊙, the explosion energy is (1.18−1.40)×10^51 erg, and the 56Ni mass is 0.020−0.039 M⊙, with the upper limits being preferred. The pre-SN should be a very extended RSG star with a radius of 1500 R⊙. The large pre-SN radius in our model is responsible for the luminous broad initial peak of the light curve. It is notable that the circumstellar (CS) shell with a radius of 1300−1500 R⊙ and a mass of ∼1 M⊙ invoked by Morozova et al. (2017) can be considered a proxy for our extended RSG model. Other SN parameters of the two models cannot be meaningfully compared, because information on the photospheric velocities in the model of Morozova et al. (2017) is lacking. Yet we note that our ejecta mass is twice as large.
It is instructive to confront the parameters of SN 2013ej with those of another luminous type IIP, SN 2004et (Table 3). Both have similar ejecta masses and pre-SN radii. However, the explosion energy and the 56Ni mass of SN 2013ej are a factor of ∼1.6 lower than those of SN 2004et. Moreover, the maximal velocities of 56Ni are dramatically different: 1000 km s−1 in SN 2004et vs. 6500 km s−1 in SN 2013ej. This disparity indicates that the explosion outcome in SNe IIP can be significantly different even for comparable pre-SN masses.
An intriguing point is the physics behind the steep decline of the light curve of SN 2013ej at the photospheric stage. While the initial broad luminosity peak postpones the cooling and recombination wave regime, this does not explain the lack of a flat plateau at the later photospheric stage. In contrast, SN 2004et after about day 35 shows a flat plateau (Sahu et al. 2006). The reason for the different behavior of SN 2013ej at this stage is hidden in the density distribution of the pre-SN model. Specifically, at the transition between the helium core and the hydrogen envelope the density gradient in the pre-SN model of SN 2013ej varies with radius more smoothly than that of SN 2004et. We attribute this distinction to the different outcome of the RT mixing during the explosion, viz., in SN 2013ej the mixing was more vigorous than in SN 2004et. This conjecture seems to be supported by the low velocities of the 56Ni matter in SN 2004et (v_max(56Ni) = 1000 km s−1) and the high 56Ni velocities in SN 2013ej (v_max(56Ni) = 6500 km s−1). Whether the proposed conjecture is correct remains to be verified. The inferred parameters (Fig. 8) place SN 2013ej in the high-mass region at the lower boundary of both scatter plots. Now the sample of well-observed SNe IIP studied hydrodynamically in a uniform way (Utrobin & Chugai 2015) mounts up to ten, and this reinforces our former impression that the SN IIP mass distribution is skewed towards high masses with an apparent deficit of SNe IIP in the range of 9−15 M⊙. This in turn brings about a tension with the general wisdom that stars with main-sequence masses of 9−25 M⊙ produce SNe II. The tension is alleviated if one admits that SNe from the mass range of 9−15 M⊙ are very faint, so they escape detection. This serious issue requires further thorough study.
The model of the extended asymmetric 56Ni ejecta of SN 2013ej is found to be compatible with the polarization data at the early nebular epoch. Mauerhan et al. (2017) suggest that the Thomson scattering in an oblate ellipsoidal envelope and the dust scattering in the CS envelope could be responsible for the polarization in SN 2013ej. In this regard we notice that the mechanism of the asymmetric ionization related to the CS interaction invoked by Mauerhan et al. (2017) cannot noticeably modify the asymmetry produced by the bipolar 56Ni ejecta. Indeed, the X-ray luminosity associated with the CS interaction on day 114 is three orders of magnitude lower than the bolometric luminosity at the same epoch (cf. Chakraborti et al. 2016).
The bipolar structure of 56Ni in SN 2013ej is reminiscent of SN 2004dj (Chugai et al. 2005), with the exception that the 56Ni amount in SN 2013ej is twice as large and the maximal velocity of 56Ni is also a factor of two larger. The high velocity of the 56Ni ejecta in SN 2013ej is a challenge for the explosion mechanism. In the neutrino-driven explosion mechanism, high velocities of the 56Ni-rich matter in SNe IIP are explained as an outcome of the RT instability that accompanies the shock wave propagation in the inner parts of the exploding star. Recent 3D simulations of an explosion of a 15 M⊙ RSG model demonstrate that the RT plumes of 56Ni are able to protrude up to 4000−5000 km s−1 (Wongwathanarat et al. 2015). However, the simulations also show that in a more massive star of 20 M⊙ the 56Ni velocities are a factor of 1.5 lower. Thus, at the moment it is not clear whether the RT mechanism is able to account for the 56Ni velocities of ∼6000 km s−1 in SNe with a mass of ∼25 M⊙. It is noteworthy that the velocity extent of 56Ni in SN 2013ej is not extreme among SNe IIP: in SN 2000cb the 56Ni matter seems to be mixed up to 8400 km s−1 (Utrobin & Chugai 2011). In this regard it may well be that, apart from the RT mixing, some additional asymmetry of the explosion could be involved in the buildup of the high velocities of 56Ni. One way or another, the asymmetry and high velocities of the 56Ni ejecta in SNe IIP should be considered as crucial observational constraints for the explosion mechanism. Finally, note that the bipolar high-velocity 56Ni ejecta in SN 2013ej is not a unique phenomenon: the well-known high-velocity bipolar jets in Cas A (SN IIb) are a firmly established feature that apparently has an explosion origin (Fesen & Milisavljevic 2016).
For SN 2013ej our estimate of the progenitor mass (dubbed the "hydrodynamic mass") is significantly larger than the 8−15.5 M⊙ (dubbed the "pre-SN luminosity mass") inferred from the pre-explosion HST F814W-filter image (Fraser et al. 2014). The SN IIP progenitor mass problem was first recognized and discussed in the context of SN 2005cs (Utrobin & Chugai 2008), in which case the hydrodynamic mass turned out to be significantly larger than the ZAMS mass from the archival image (Maund et al. 2005). Since then the mass problem has been reproduced every time both mass estimates are available. The exception is SN 1987A, for which both methods provide similar mass estimates of ∼20 M⊙. This in turn indicates that the mass problem possibly arises only in cases when the SN IIP event is related to an RSG explosion.
The conflict between hydrodynamic and pre-SN luminosity masses suggests that either the former or the latter method is in error. Our hydrodynamic code, in the case of SN 1999em (a classic SN IIP), produces SN parameter values (Utrobin 2007) similar to those obtained by the independent codes of Baklanov et al. (2005) and Bersten et al. (2011). It should be emphasized that the codes use different treatments of radiation transport: one-group radiation transfer (Utrobin 2007), multi-group radiation transfer (Baklanov et al. 2005), and the flux-limited equilibrium diffusion approximation (Bersten et al. 2011). This indicates the robustness of the recovered parameter values, provided that the observables (bolometric light curve and photospheric velocities) are adequately modeled. It should be highlighted that in some cases hydrodynamic studies of SNe IIP are limited to the light curve modelling, ignoring photospheric velocities. In this approach the ejecta mass and the explosion energy cannot be reliably determined. It is equally dangerous to use approximation formulae for the parameter estimates, since such an approach ignores the early behavior of the luminosity and expansion velocity, which contains additional information on the ejecta mass, explosion energy, and pre-SN radius.
Until recently a sensitive point of the hydrodynamic approach was the "manual" composition mixing and density smoothing between the metal core, the helium core, and the hydrogen envelope of a pre-SN star, although that was well constrained by the light curve at the late plateau. The justification for the manual mixing is the inability of 1D hydrodynamics to treat the Rayleigh-Taylor instability and the mixing initiated by the shock propagation in the pre-SN. A recent study dedicated to the mixing issue on the basis of a 3D RSG explosion (Utrobin et al. 2017) shows that the manual mixing applied earlier in the 1D model of SN 1999em (Utrobin 2007) is fully consistent with the mixing produced by the 3D hydrodynamics. At the moment we do not see another way to seriously improve our hydrodynamic model.
Alternatively, the pre-SN luminosity mass could be underestimated. The most apparent reason is that the pre-SN light might be obscured by a dusty CS envelope (Utrobin & Chugai 2008; Smartt et al. 2009). This possibility finds some support in the growing number of SNe IIP with signatures of a confined dense CS envelope in early spectra, e.g., SN 2006bp (Quimby et al. 2007) and SN 2013fs (Yaron et al. 2017). These data indicate a dense CS shell at radii of ∼10^15 cm. To illustrate the point, let us estimate the possible optical depth of a CS shell of mass M_s at the radius R_s = 10^15 cm. The equilibrium temperature of the shell for a stellar luminosity of 3×10^5 L⊙, characteristic of a 25 M⊙ RSG star, is then ∼900 K, i.e., below the silicate dust condensation temperature (1000−1200 K) for the relevant gas pressure. We adopt the standard dust-to-gas ratio of 10⁻² and the optical properties of silicate dust (Draine & Lee 1984), which suggest the absorption efficiency Q_a = 0.4(a/λ) for a grain radius a of about 10⁻⁵ cm. The optical depth of the CS shell is then τ = 2.2 (M_s/10⁻³ M⊙)(R_s/10^15 cm)⁻² (0.8 µm/λ). The estimate shows that a shell with the mass M_s ∼ 10⁻³ M⊙, produced by a sensible RSG mass-loss rate, could maintain significant absorption of the pre-SN light.
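As a hedged cross-check in Python (our own sketch; the grain density is our labeled assumption), the quoted scaling follows from Q_a = 0.4(a/λ), which makes the dust opacity independent of grain radius:

import math

M_SUN = 1.989e33  # g

def tau_quoted(M_s, R_s, lam_um=0.8):
    """The scaling relation quoted in the text (M_s in solar masses)."""
    return 2.2 * (M_s / 1e-3) * (R_s / 1e15)**-2 * (0.8 / lam_um)

def tau_first_principles(M_s, R_s, lam_um=0.8, dust_to_gas=1e-2,
                         rho_grain=2.7):
    """Rebuild tau from Q_a = 0.4*(a/lam): kappa = 3*Q_a/(4*a*rho_grain)
    = 0.3/(lam*rho_grain), independent of a. rho_grain (g/cm^3) is an
    assumed silicate grain density."""
    lam = lam_um * 1e-4                          # cm
    kappa = 0.3 / (lam * rho_grain)              # cm^2 per g of dust
    return kappa * dust_to_gas * (M_s * M_SUN) / (4 * math.pi * R_s**2)

print(tau_quoted(1e-3, 1e15))            # 2.2
print(tau_first_principles(1e-3, 1e15))  # ~2.2, consistent with the text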
Another independent progenitor mass estimate for SN 2013ej (12−15 M⊙) is obtained from the nebular oxygen doublet [O I] 6300, 6364 Å (Yuan et al. 2016). The accuracy of this method depends on the uncertainty of the adopted density, the 56Ni distribution, and the abundance of molecules (i.e., CO and SiO) in the O-rich matter. Note that the cooling by molecules is ignored in the current model. Furthermore, the adopted 56Ni distribution, set by a centrally concentrated sphere, is an oversimplification in the case of SN 2013ej. Therefore, although this analysis of nebular spectra is an interesting alternative approach, there remain uncertainties that cast doubt on the reliability of the progenitor mass estimate.
"year": 2017,
"sha1": "ac730a77691344b8bc959086a096bfe5922320f7",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1709.05573",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "ac730a77691344b8bc959086a096bfe5922320f7",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Integrative review on the non-invasive management of lower urinary tract symptoms in men following treatments for pelvic malignancies
Summary

Aim: To develop a non-invasive management strategy for men with lower urinary tract symptoms (LUTS) after treatment for pelvic cancer that is suitable for use in a primary healthcare context.

Methods: PubMed literature searches of LUTS management in this patient group were carried out, together with a consensus on management strategies obtained from a panel of authors from across the UK.

Results: Data from 41 articles were investigated and collated. Clinical experience was sought from authors where there was no clinical evidence. The findings discussed in this paper confirm that LUTS after cancer treatment can significantly impair men's quality of life. While many men recover from LUTS spontaneously over time, a significant proportion require long-term management. Despite the prevalence of LUTS, there is a lack of consensus on best management. This article offers a comprehensive treatment algorithm to manage patients with LUTS following pelvic cancer treatment.

Conclusion: Based on published research literature and clinical experience, recommendations are proposed for the standardisation of management strategies employed for men with LUTS after pelvic cancer treatment. In addition to implementing the algorithm, understanding the rationale for the type and timing of LUTS management strategies is crucial for clinicians and patients.
What's known
Lower urinary tract symptoms (LUTS) are a constellation of symptoms that are common in men who have been treated for pelvic malignancies, not only as a result of their disease but also as a consequence of cancer treatment. Symptoms such as urinary incontinence, frequency and urgency are often reported by men as the most bothersome. Pelvic therapies include treatment for prostate, bowel and bladder cancers. Many men continue to experience long-term symptoms over many years, and this can have a negative effect on recovery and subsequent quality of life. Conservative management strategies are defined for LUTS, but these have mainly been developed and evaluated in general populations, with current guidelines based on benign disease. The evidence base for such conservative management of LUTS after pelvic cancer treatment is small and inconsistent and may not be appropriate for LUTS of different causality.
What's new

LUTS after cancer treatment is a significant problem for cancer survivors, especially as more men are surviving cancer treatment. Symptoms can occur for many years after cancer therapy, and the incidence and timing of LUTS depend on treatment type and the extent of predictive factors prior to treatment. LUTS after cancer treatment includes both urinary incontinence and lower urinary tract symptoms, which can be concurrent and impact on men's quality of life. Awareness of the treatments that men have received is important in defining LUTS management and the pathway of care. Assessment, appropriate pharmacotherapy, and behavioural and lifestyle management can improve symptoms. While many men recover spontaneously over time, a proportion of men require long-term management of LUTS. Simple assessment, use of behavioural strategies, drug management and consistent follow-up can help reduce the burden of these symptoms for men after cancer treatment. Men whose LUTS persist after 3 months of conservative treatment and impact on their quality of life should be referred to specialist urology teams.
Introduction
There are currently more than 2 million people in England living with cancer, and this number is increasing as cancer survival improves (1). Men with prostate cancer account for much of the male survivorship; 41,700 men were diagnosed in 2011 and 8 in 10 of these will survive for 5 or more years. Other pelvic cancers, such as bladder and bowel cancer, account for 30,500 men diagnosed per year, making pelvic cancer a substantive area of disease burden in the male population (1). Many of these men continue to experience symptoms that impact on quality of life, such as urinary and bowel problems, haematuria, rectal bleeding, pain and sexual dysfunction (2). Common symptoms resulting from cancer therapy have been addressed in substantive reviews (3)(4)(5)(6). Urinary symptoms, despite their high prevalence in men after cancer treatment and their link to a negative effect on quality of life (7), have not yet been addressed.
Lower urinary tract symptoms (LUTS) may be divided into storage, voiding and postmicturition symptoms (8,9). These are common problems: up to 3.4 million men in the United Kingdom live with LUTS, and the prevalence rates for various types of LUTS after cancer therapy range from 3.7% to 52.2% (7,10). Most of these men are managed within primary care, with either conservative lifestyle measures or medical treatment (10). LUTS is a complex group of symptoms, which are difficult to define. However, the NICE guidelines on LUTS published in 2010 define the symptoms as shown in Table 1 (8). Generally, a symptom-based approach is used in classifying LUTS. According to the International Continence Society (ICS), LUTS can be divided, similarly to the NICE guidelines, into storage symptoms, voiding symptoms and symptoms experienced postmicturition, although the ICS also includes urinary incontinence (UI) and postmicturition dribble in its definition (Table 1) (9). Symptoms such as UI, frequency, urgency and nocturia are often the most bothersome of LUTS (11). Most clinical trials in pelvic cancer patients tend to assess only UI as an outcome measure rather than LUTS as a cluster of symptoms. Overall, the management of LUTS remains an area requiring improvement for cancer survivors and impacts considerably on quality of life for men. In addition, the problem is under-reported.
Lower urinary tract symptoms are closely associated with erectile dysfunction (ED) (12). A large multinational survey showed that the prevalence of ED increased with increasing severity of LUTS (13). Preclinical evidence suggests that there are common pathophysiological mechanisms underlying the development of both ED and LUTS (14). Indeed, in a recent review, Kirby et al. recommended that physicians should be aware of the sexual adverse effects of many treatments currently recommended for LUTS, and that sexual function should be evaluated prior to commencement of treatment and monitored throughout to ensure that the choice of drug is appropriate (14). The evidence base for conservative management of LUTS after treatment for pelvic cancers is small and characterised by variations in patient characteristics. Furthermore, although guidelines exist for treating men with LUTS, these are not specific to cancer patients and are based on benign disease (15). Here we review the conservative interventions that can improve LUTS in men who have had treatment for pelvic cancers. We aim to provide recommendations based on clinical evidence and best clinical practice.
Known risk factors, such as prior transurethral resection, have been well evidenced, but probable risk factors, such as central nervous system damage and/or cognitive impairment, are less well defined in the cancer population and are essential in the assessment of LUTS. Many older cancer patients may already have a high anticholinergic load from other medications, such as tricyclic antidepressants and ACE inhibitors, which may impact on LUTS. These should be considered in assessment, intervention and management (27,28).
LUTS after cancer treatment
Lower urinary tract symptoms can significantly reduce men's quality of life and may point to serious pathology of the urogenital tract (8). Age is an important risk factor for LUTS, and the prevalence of LUTS increases as men get older (8). LUTS can be indicative of prostate cancer, and patients with other pelvic cancers should be assessed accordingly. LUTS can also be a complication following treatment for pelvic cancers such as colorectal, bladder and prostate cancer (16). Indeed, cancer survivors are far more likely to suffer from UI than the general population, although very little data exist on the impact of UI on quality of life among cancer survivors, especially in elderly populations (29). The large Prostate Strategic Urologic Research Endeavor (CaPSURE) study in 3056 prostate cancer survivors demonstrated that decline in urinary function was independently associated with satisfaction with prostate cancer care (30).
In cancer patients, LUTS can have distinctive pathology and causality because of a combination of factors different from those in patients with benign disease. Around 20% of patients develop or continue to have UI within 2 years of prostatectomy for prostate cancer (16). The symptoms of LUTS can develop months to years after treatment for pelvic cancers; hence, regular assessment of LUTS in cancer survivors is necessary. Structural abnormalities of the bladder, such as rigidity and changes in bladder size, can also influence bladder capacity. Overactive bladder (OAB) is one of the main contributing factors of LUTS (32). This may be caused by the direct effects of cancer therapies or by prior disease, owing to injury to the neural pathways to and from the bladder and to partial denervation of the bladder muscle, causing excitability and an involuntary rise in pressure within the bladder that results in frequency and urgency of passing urine (32).
Postprostatectomy LUTS
Patients undergoing prostatectomy are more likely to have stress UI (SUI) than those undergoing radiotherapy (RT) at 2 and 5 years, although there are no significant between-group differences at 15 years (30). Continence improves progressively until 2 years after radical prostatectomy (RP), but some patients can become incontinent later (16,33). In clinical practice, however, clinicians tend to observe that this improvement takes place over the first year; therefore, surgical interventions should be considered after 1 year. An important question to ask a patient during assessment is therefore the duration of incontinence. The criterion of pad use discriminates well between men with a limited reduction in their QoL (no or one pad used) and those with a markedly affected QoL (≥2 pads/day) (33). The 24-h pad weight can also be employed and is usually a better marker for the severity of incontinence, allowing it to be divided into mild, moderate and severe (34). Furthermore, only one-third of leak-free and pad-free continent patients prior to treatment return to the same state at 2 years after treatment (26). Postprostatectomy UI is most often caused by dysfunction of the urethral sphincter (either from injury of striated muscle fibres or of the innervating nerve fibres) and/or detrusor dysfunction, leading to stress and/or urgency incontinence, respectively (35)(36)(37). The reported SUI rates one year after RP vary between 5% and 48% (38). In addition, especially during the first year after RP, OAB symptoms are common in up to 77% of patients, but they generally resolve over time (39).
Finally, the relatively small number of men treated with RT before prostatectomy have a higher incidence of UI: one study found that the rate of incontinence was 5.5% when the surgery was performed before RT and 33% when performed after RT (40). In another study of 60 patients given external RT after radical prostatectomy (RP), no difference was observed in terms of UI at the 24-month follow-up (41).
LUTS after RT and chemotherapy
Radiotherapy with or without chemotherapy for pelvic malignancies may result in LUTS (21). Overall, severe late effects occur in ≤ 10% of patients with prostate or bladder cancer (21). Acute side effects that occur during RT usually resolve within a few months (21). Long-term symptoms attributable to global injury include dysuria, frequency, urgency, contracture from fibrosis, spasm, reduced flow and incontinence (21). More focal injury includes haematuria, fistula, obstruction, ulceration and necrosis (21). A prospective study of 614 patients with localised prostate cancer treated with RP, external conformal RT and brachytherapy (BT) was carried out to compare treatment impact on health-related quality of life (HRQL) (42). In each treatment group, HRQL initially deteriorated after treatment with subsequent partial recovery. Compared with the BT group, RP patients had worse UI scores (p < 0.001). Prostatectomy patients had significantly better urinary irritation scores than BT patients (p < 0.001) (42).
The bladder is particularly sensitive to certain cytotoxic drugs, leading to cystitis, fibrosis and occasionally diminished bladder volume leading to symptoms of urinary frequency, dysuria, haematuria and sphincter dysfunction (4). LUTS occurs in an estimated 71% of patients receiving maintenance BCG for bladder cancer (4). Intravesical mitomycin C (MMC) has also been known to exacerbate LUTS in these patients (43)(44)(45). In general, patients on chemotherapy appear to be more prone to UTIs (46)(47)(48) and primary care physicians must be aware of this in order to assess and manage these patients appropriately.
LUTS after neo-adjuvant and adjuvant ADT in combination with RT
Adjuvant androgen suppression with hormonal therapy did not increase rectal or urinary dysfunction in the RADAR trial, which was designed to determine whether adjuvant androgen suppression, bisphosphonates and radiation dose escalation for localised prostate cancer may improve oncologic outcomes (49). Stone et al. showed that pretreatment ADT in patients receiving BT may decrease treatment-related urinary symptoms in patients who have a large prostate and an International Prostate Symptom Score (IPSS) of 15 or greater (50). Grant et al. showed that patients on RT plus ADT returned to baseline urinary symptoms more rapidly than patients on RT alone (51). However, Crook et al. demonstrated that, despite 2-6 months of prior hormonal therapy before RT, late urinary morbidity was seen in 27% of men following prostate BT (52).
Treatment with degarelix or goserelin + bicalutamide has demonstrated relief of LUTS in patients with moderate and severe voiding problems at baseline (53)(54)(55). Another study of 104 patients, on 3.75 mg leuprolide acetate at 4-week intervals for a total of 12 weeks, demonstrated that leuprolide treatment significantly improved daytime urinary frequency, despite a deterioration in physical, role and sexual function (56).
LUTS after high-intensity focused ultrasound and cryotherapy
Owing to technological advancements in high-intensity focused ultrasound (HIFU) procedures, long-term follow-up of patients has demonstrated improved urinary symptoms after treatment (57). A recent study assessed the impact of HIFU on the lower urinary tract by comparing pre- and postoperative symptoms and urodynamic changes. Following HIFU, detrusor overactivity, decreased bladder compliance and urge incontinence were observed. However, these symptoms were also observed in 20% of patients before surgery. There was a progressive improvement in all storage and voiding patterns at 6-month follow-up, although patients with a high prostate volume and long procedure length suffered from urge incontinence during long-term follow-up (58). Limited urinary and rectal morbidity has been observed in other long-term follow-ups after HIFU (≥12 months) (59)(60)(61). According to clinical observations, LUTS tend to improve quite quickly after treatment.
Reported postsalvage cryosurgery UI rates range from 0% to 83% (62)(63)(64)(65)(66)(67). Generally, men treated with cryotherapy report a higher prevalence of urinary symptoms compared with RP in the short term, but the symptoms improve or disappear after ≥3 months (68,69). A 2-year follow-up observational study of 10,928 men comparing BT vs. cryotherapy demonstrated that cryotherapy was associated with more urinary complications than BT (70). In general, urinary complications after HIFU or cryotherapy are more common and more severe in patients previously treated for prostate cancer (usually by RT) vs. treatment-naive patients (71).
Rationale for guidance development
The management of male LUTS after pelvic cancer therapy consists of three different approaches: conservative management, pharmacotherapy and surgical treatment. Existing guidance for the management of LUTS in primary care is based on benign prostatic disease (28) and therefore does not take into account the causality and the differences in treatment-related effects from cancer treatment. There is a need for specific guidance to manage LUTS in the population affected by the spectrum of male pelvic cancers. This review critically explores the evidence base for the assessment and management of LUTS in male cancer patients and provides guidance regarding the non-invasive interventions (conservative management and pharmacotherapy) which can be used in primary care, with advice on when referral to specialist urological services is warranted. Our aim was to extract only the most recent studies and not to overlap with the evidence published in the NICE LUTS Clinical Guideline (8). This guide is aimed at non-specialist clinicians working in primary care follow-up with men after pelvic cancer therapy. It discusses LUTS in men following the most commonly used treatments for cancer.
Literature analysis
A systematic review of the literature was conducted to investigate the evidence-base for the non-invasive management of LUTS in men following pelvic cancer treatment. The interventions covered by this guidance include lifestyle changes, exercise and oral medications.
Web of Science, MEDLINE, CINAHL, PsycINFO, the Cochrane database and EMBASE were searched using various combinations of the following terms: lower urinary tract symptoms and/or treatment and/or bladder cancer and/or rectal cancer and/or prostate cancer and/or (names of specific drugs) and/or brachytherapy/radiotherapy/cryotherapy/HIFU/androgen deprivation therapy and urinary incontinence.
Only original publications and systematic reviews were sourced; however, literature reviews were also retrieved and hand-searched for individual studies, and all relevant papers were extracted. Publications were included if they described an intervention for any area of LUTS. Interventions included professionally guided management, such as pharmacological treatment, as well as self-management interventions including all behavioural management approaches. Publications which did not include at least one intervention for treating LUTS after pelvic cancer treatment were excluded. The search included papers published from 2000 to 2014. The studies identified and used in this literature analysis were graded using the Oxford Centre for Evidence-based Medicine Levels of Evidence. The findings of the literature analysis were integrated with the authors' clinical experience to provide the recommendations outlined in this review.
Literature search overview
The literature search identified 41 articles for the final analysis. Twenty-four papers concerned behavioural interventions using pelvic floor muscle exercises (PFME), 10 described pharmacological interventions, six described other interventions and one described containment devices. The selection criteria for the included studies were: articles in English, studies which utilised an intervention for LUTS, and studies with adult male patients treated for pelvic cancers. Non-invasive treatments were included (conservative management, e.g. lifestyle and behavioural interventions such as exercise or diet, and pharmacological interventions). Both randomised and non-randomised studies were used (Table 3). A meta-analysis of the data was not performed because of inconsistent definitions, measurement tools and diverse timings in the identified studies, as well as the low number of articles identified for each intervention.
Studies and patient characteristics from the literature analysis
In total, 8951 patients were included in the 41 selected studies ( Table 3). Most of the studies were randomised controlled studies. Follow-up ranged from 1 week up to 12 months while management duration ranged from 30 days before treatment to ≥ 1 year after treatment (Table 3).
Of the 30 studies assessed for management of UI after surgery, one study assessed recovery of sphincter/pelvic function after surgery and one study assessed management of cystitis after surgery. Four studies evaluated management of UI after RT/BT; one study evaluated management of urinary tract infections after RT and four studies specifically looked at management of LUTS after RT/BT. A wide range of assessments was used across the studies identified, and all the studies were conducted within differing ranges of time points; hence, the outcomes of the studies are difficult to compare directly. Instead, narratives of their key findings are presented in the discussion with author recommendations. Trial participants were adult males who had received treatment for pelvic cancers and who had experienced some type of LUTS subsequent to the cancer therapy. Much of the literature was prostate cancer specific, and fewer studies explored LUTS in men with bladder and bowel cancer.

Table 3. Study and patient characteristics selected from the literature analysis.

Table 4 summarises the efficacy analysis of current management strategies for LUTS in men after pelvic cancer treatment, as identified by the literature analysis.
Discussion
The literature analysis identified a range of intervention studies, but it must be noted that most of the research has focused on UI rather than the wider extent of LUTS. Therefore, few studies explore the full range of LUTS symptoms as defined.
Assessment of LUTS
Assessment is fundamental in the management of LUTS, as is recognising the impact and bother that urinary symptoms may have for the individual (71).
The majority of the studies identified in this review assess UI rather than LUTS and the wider perspective of the patient. Pad tests/daily pad usage, the IPSS and self-reported continence were generally used to assess LUTS/UI in most of the clinical studies identified. NICE guidelines recommend the use of bladder diaries as a cost-effective tool for assessing incontinence (though not specific to cancer patients), and these give information on voiding and frequency (8). These tools were developed for generic LUTS and are research focused, and so may be difficult to use routinely in practice. Objective measurement, providing a baseline for assessing change and patient outcomes, is nevertheless essential.
In clinical practice, the IPSS is routinely used to assess LUTS; it contains three questions on storage symptoms, four on obstructive voiding symptoms and one on the impact of LUTS on quality of life (a scoring sketch follows the question list below). However, the IPSS does not assess incontinence, which is why more detailed questionnaires are often needed. The International Consultation on Incontinence Questionnaire (ICIQ) and ICIQ-LUTS are validated questionnaires for evaluating quality of life and urinary symptoms. They explore in detail the impact of LUTS on patients' lives and can be used as outcome measures to assess the impact of different treatment modalities. Hence, the ICIQ-LUTS is useful for assessing QoL, the ICIQ-UI for incontinence and the ICIQ-OAB for storage LUTS. Asking patients about their symptoms is also important, as direct questioning often identifies impact on, and adherence with, interventions that may not be captured in LUTS scores (107). Common questions which GPs can use include:
• Do you experience any loss of urine when coughing or sneezing?
• Do you experience any loss of urine after voiding completion?
• Do you have to arrange your day around finding toilets because of urinary frequency?
• Do you experience disturbed sleep at night because of needing to pass urine frequently?
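As a concrete illustration of the subscoring just described, the sketch below splits the seven IPSS items (each scored 0-5, giving a total of 0-35) into storage and voiding subscores. The item labels and example answers are illustrative assumptions; the QoL item (scored 0-6) is reported separately and is not summed here.

```python
# Illustrative IPSS subscoring; labels are descriptive, not the official wording.
STORAGE_ITEMS = ("frequency", "urgency", "nocturia")             # 3 storage questions
VOIDING_ITEMS = ("incomplete_emptying", "intermittency",
                 "weak_stream", "straining")                     # 4 voiding questions

def ipss_subscores(answers):
    """answers: dict mapping each item to its 0-5 score."""
    storage = sum(answers[item] for item in STORAGE_ITEMS)
    voiding = sum(answers[item] for item in VOIDING_ITEMS)
    return {"storage": storage, "voiding": voiding, "total": storage + voiding}

example = {"frequency": 3, "urgency": 4, "nocturia": 3, "incomplete_emptying": 1,
           "intermittency": 0, "weak_stream": 1, "straining": 0}
print(ipss_subscores(example))   # {'storage': 10, 'voiding': 2, 'total': 12}
```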
The aims of assessment are to:
• identify reversible factors that may be contributing to or causing LUTS;
• understand the level of distress or bother, and the impact, for the individual;
• identify those men who may need more specialist assessment or intervention, such as referral to urology, a clinical nurse specialist or a continence adviser; and
• establish a baseline prior to referral and an evidence-based plan of treatment for the individual.
Assessment recommendation
• General assessment including self-reported incontinence.
• Self-reported continence can be complemented with one of the validated questionnaires, e.g. the IPSS, ICIQ-LUTS for QoL, ICIQ-UI for incontinence and ICIQ-OAB for storage LUTS.
• Dipstick analysis for haematuria.
Additional assessment.
• Bladder ultrasound for identifying residual and structural issues.
• Flow rate and measurement of urodynamics (usually available through community Continence nurse services).
Conservative management
Conservative management of LUTS (specifically UI, the most studied LUTS symptom) includes lifestyle interventions, pelvic floor muscle training (PFMT) with or without biofeedback, and bladder training. Lifestyle interventions include moderating fluid intake, avoidance of known bladder irritants such as caffeine and alcohol, weight loss, and smoking cessation; however, these interventions are less researched (108).
Current guidelines recommend behavioural therapies and lifestyle changes as first-line treatments for urinary problems, although there is no specific guidance for cancer patients (8,28). Behavioural techniques include bladder retraining (for example, a progressive voiding schedule together with relaxation and distraction for urinary urgency), adjustment of voiding habits, reducing bladder irritants from the diet, fluid intake management, weight control, smoking cessation and management of bowel regularity (109). These common techniques were not considered in the review. Behavioural interventions with multicomponent elements of training, such as PFME, were considered in this review as part of treatment approaches.
Pelvic floor muscle training
Most of the publications found in this analysis primarily focus on UI rather than LUTS, and the interventions have been studied mainly in prostate cancer patients rather than other pelvic cancers. Therefore, this may under-report the impact of PFMT on the wider profile of urinary symptoms. In addition, the recommendations proposed for management of LUTS after pelvic cancer treatment are based on clinical practice as well as the clinical evidence base.
PFMT before/after surgery
Most of the randomised controlled trials identified contain information on PFMT after prostatectomy. Strengthening the PFMs plays a significant role in recovery after surgery. A Cochrane review, published in 2012, which assessed the effects of conservative management for UI after prostatectomy, concluded that there remains no clear support that conservative management of any type for postprostatectomy UI is either helpful or harmful, whether delivered as treatment to men who are incontinent or as prevention to all men undergoing RP (72). It must be noted that the Cochrane review did not stratify studies into early vs. late initiation of PFMT, preoperative vs. postoperative PFMT, or physiotherapist-guided (with/without biofeedback) vs. standard care PFMT. These factors are addressed briefly below.
Early vs. late PFMT
In a quasi-experimental study of 47 postsurgery patients randomised to PFMT vs. no PFMT, Lin et al. showed that urinary control in the exercise group was better than in the non-exercise group, although UI decreased significantly in both groups (79). The difference observed between the two groups was attributed to patient education regarding pelvic floor exercises by a nurse prior to and after surgery. Patients were stratified into the two groups after catheter removal, suggesting early management may improve outcomes. Similarly, Van Kampen et al. demonstrated significant improvements in the duration and degree of continence with PFMT vs. placebo therapy when PFMT was initiated at catheter removal (88). Other studies also demonstrate beneficial effects of PFMT if initiated early after RP (19,85,92).
Five studies identified in our review investigated the effectiveness of PFMT on UI when initiated up to 30 days preoperatively (73,74,84). One study even demonstrated benefits of PFMT initiated a day before RP, although the sample size was quite small and a larger study would be needed to investigate this further (87). In addition, the investigators compared physiotherapist-assisted vs. non-physiotherapist-assisted PFMT, hence the role of starting PFMT one day before surgery is not clear. A trial of 180 men, however, demonstrated no significant benefit in UI symptom improvement between PFMT initiated 3 weeks preoperatively (3 sessions) and PFMT initiated at catheter removal (79), although the QoL trend favoured preoperative PFMT (non-significant) (79).
In the Men After Prostate Surgery randomised controlled trial, over 700 men underwent PFMT (four sessions with a therapist over 3 months vs. standard care and lifestyle advice only) 6 weeks after surgery. In this trial, PFMT was not shown to be therapeutic or cost-effective in improving urinary continence (80). Of the patients in the intervention group, 148 of 196 reported some form of incontinence at the 12-month mark; in the control group, 151 of 195 reported some UI (difference not significant) (80). However, it must be noted that patients often buy containment devices themselves and the costs of these were not included in this study; the study authors recommend that their cost-effectiveness data be interpreted with caution (80). In another study (n = 208), a PFMT intervention for persistent long-term UI after RP (initiated ≥ 1 year after surgery) showed that 8 weeks of behavioural intervention (with or without biofeedback and pelvic floor muscle stimulation) resulted in significantly fewer incontinence episodes compared with a delayed-treatment control (81). The effect was durable up to 12 months after treatment. Wille et al. found that PFMT, electrical stimulation and biofeedback did not affect continence even when initiated at catheter removal (89). Taken together, these studies suggest some evidence for PFMT initiated just before or soon after RP, e.g. at catheter removal, but further studies are needed to verify this. Factors to take into account include, among others, the number of PFMT sessions, the assessment of UI and biofeedback.
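To make the quoted MAPS proportions concrete, the sketch below compares the 12-month incontinence rates (148/196 intervention vs. 151/195 control) with a chi-square test; the test choice is our assumption for illustration, not necessarily the trial's own analysis.

```python
# Hedged check of the MAPS 12-month incontinence proportions quoted above.
from scipy.stats import chi2_contingency

table = [[148, 196 - 148],   # intervention: incontinent vs. continent
         [151, 195 - 151]]   # control
chi2, p, dof, expected = chi2_contingency(table)
print(f"intervention {148/196:.1%} vs. control {151/195:.1%}, p = {p:.2f}")
# ~75.5% vs. ~77.4%; p is well above 0.05, consistent with a non-significant difference
```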
In a post-RP evaluation of UI, Song et al. demonstrated that patients with better developed pelvic floor muscles, especially in relation to the size of the prostate, can be expected to achieve earlier recovery of continence after RP (110). Two studies have shown no significant difference in improvement of urinary symptoms between physiotherapist-guided training of the pelvic floor muscles after RP and standard care/training or a self-training approach (76,82). However, in one of these studies, PFMT was initiated within 12 months of surgery rather than soon after the surgery (82).
Biofeedback, in conjunction with PFMT, may also play a role in improving LUTS. Biofeedback is a technique in which physiological activity is monitored, amplified and conveyed to the patient as visual or acoustic signals, thereby providing the patient with information about unconscious physiological processes (111). According to a recent review, biofeedback for PFMT may improve the patient's ability to isolate the PFM and differentiate between muscle contraction and relaxation (108). In one trial, a single session of biofeedback-assisted behavioural training reduced the duration of UI as well as the severity of symptoms in the 6 months post-RP (22). In post-RP patients, an intensive preoperative biofeedback-assisted PFMT session given one day before RP, a session immediately following catheter removal, and then monthly sessions, combined with an assisted, low-intensity postoperative programme, demonstrated reductions in the duration and severity of UI as well as improvements in QoL (87).
In OAB, the 5th International Consultation on Incontinence (ICI) guidelines recommend the inclusion of biofeedback in the treatment of urgency syndrome, but the decision is a therapist/patient decision based on economics and preference (111).
Summary
Preoperative or immediate postoperative PFMT is useful. In general, for both RP and RT, earlier return to continence was observed if PFMT was started early in the post-treatment period.
Therapist-guided PFMT can significantly improve time to return of continence, especially after prostate surgery. Example of a protocol is shown in Box 1.
The key objective of PFMT is to build tone in the muscles through repeated exercise so that the muscles can respond in time to increases in intra-abdominal pressure. Note that the actual number of exercises is not as important as the inclusion of some fast and some slow repetitions (on account of the presence of both slow- and fast-twitch activity in the pelvic floor muscle). The exercises must be conducted on several occasions throughout the day in order to condition the brain to recognise this as tonic and not as phasic activity (a configuration sketch follows this list).
• The exercises should be conducted sitting, standing and lying down, up to three times a day, i.e. 60 PFM contractions per day in total. In the lying position men should have their knees bent or apart; in the standing position, PFM contractions should be conducted with the feet apart; and in the sitting position, with the knees apart. Evidence suggests it is the intensity rather than the frequency of the PFM contractions that is important (87,111).
• Start PFMT pretreatment (ideally 1 month before surgery in the case of RP) or within one month of RT/ADT treatment/catheter removal after surgery.
• A physiotherapist-assisted programme has the greatest benefit. Consider using a physiotherapist, or at least a DVD of a physiotherapist demonstrating the exercises.
• Continue PFMT for at least 6 weeks.
• Can be provided in combination with biofeedback, if possible.
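The Box 1 parameters above can be captured as a simple configuration object, for example to drive a patient reminder app; the field names and defaults below are our assumptions summarising the list, not part of the protocol itself.

```python
# Illustrative encoding of the Box 1 PFMT protocol; not a clinical tool.
from dataclasses import dataclass

@dataclass
class PFMTProtocol:
    positions: tuple = ("lying", "sitting", "standing")  # knees bent/apart as described
    sessions_per_day: int = 3
    contractions_per_day: int = 60       # mix of slow and fast repetitions
    start: str = "pre-treatment, or within 1 month of RT/ADT/catheter removal"
    minimum_weeks: int = 6               # continue for at least 6 weeks
    therapist_guided: bool = True        # physiotherapist-assisted has the greatest benefit
    biofeedback: bool = False            # optional adjunct, if available

print(PFMTProtocol())
```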
Alpha blockers
Alpha blockers can be used to treat LUTS such as urge UI or OAB as they relax smooth muscle (112). Tsumura et al. compared the efficacy of tamsulosin, silodosin and naftopidil in treating LUTS after BT (97). In this study, 212 patients received one of three alpha 1-adrenoceptor antagonists for 1 year after BT. The results demonstrated significantly greater decreases in total IPSS with silodosin vs. naftopidil at 1 month. Silodosin also showed a significant improvement in postvoid residual at 6 months vs. tamsulosin. The authors concluded that silodosin has a greater impact on improving LUTS after BT than tamsulosin or naftopidil (97). Oyama et al. more recently also demonstrated better improvements in IPSS score with silodosin vs. tamsulosin or naftopidil up to 9 months after BT (95). Shimizu et al., however, demonstrated that the effects of silodosin are temporary, in a 12-month follow-up study of 105 patients given silodosin daily for 6 months immediately after BT (96). In clinical experience, incontinence, either stress or urge, is uncommon after BT and occurs in less than 2% of patients in the first 2 years after implantation. LUTS following BT are generally driven by the temporary swelling/obstruction that the implant causes, hence the need for an alpha blocker. Jang et al. investigated the efficacy of 0.2 mg/day tamsulosin (for 7 days) in preventing acute voiding difficulty after rectal cancer surgery in 94 rectal cancer patients (94). The results demonstrated a similar reinsertion rate of the urinary catheter in the tamsulosin and control groups (p = 0.804) and similar effects on voiding parameters and IPSS. The authors concluded that tamsulosin did not prevent acute voiding difficulty after rectal cancer surgery.
However, alpha blockers can exacerbate stress incontinence (113,114) and hence, cannot be recommended after surgery in this review.
Summary alpha blockers
Evidence grade ranging from 1B to 2A (Table 3)
• Post-BT: the most effective appears to be silodosin, though the effects are temporary (1-6 months).
o Silodosin is the only alpha blocker which has demonstrated improvements in LUTS after BT, but its effects only last up to 6 months. However, silodosin is not licensed for use in the UK and hence other alpha blockers can be used.
o Tamsulosin is commonly used after BT for 3-6 months before symptoms return to baseline.
• Postsurgery: Cannot be recommended as they may exacerbate stress incontinence.
Antimuscarinics
Data on antimuscarinics for LUTS in male cancer patients are scarce. In a study of 116 patients, the antimuscarinic agent solifenacin was shown to provide symptomatic comfort after transurethral resection of a bladder tumour and chemotherapy (98). Patients who received solifenacin 6 h before surgery and every day for 2 weeks after the procedure reported significantly lower OAB symptom scores (5.67 vs. 7.86; p < 0.001) compared with patients who received placebo. In a review published in 2011, evaluating contemporary non-invasive and invasive treatment options for postprostatectomy incontinence, the authors recommended antimuscarinic therapy for urgency or urge incontinence alongside or after PFMT (39). For patients suffering from OAB symptoms with or without urgency incontinence after prostate surgery, antimuscarinic medications have been recommended in the European Association of Urology (EAU) guidelines (38). However, antimuscarinics may cause cognitive impairment and should be avoided in patients at risk (27). Other side effects of antimuscarinics include constipation, transient bradycardia (followed by tachycardia, palpitation and arrhythmias), reduced bronchial secretions, urinary urgency and retention, dilatation of the pupils with loss of accommodation, photophobia, dry mouth, flushing and dryness of the skin (115). It is important to note that these adverse effects are less common with the newer antimuscarinic agents.
Antimuscarinics are most commonly associated with dry mouth, which many patients find uncomfortable, leading them to discontinue therapy (116). In a 12-month UK study looking at persistence with antimuscarinic treatment, solifenacin was associated with higher levels of persistence compared with other prescribed antimuscarinic agents (116). Mirabegron has been recommended by NICE as an option for treating the symptoms of OAB only for people in whom antimuscarinic drugs are contraindicated or clinically ineffective, or cause unacceptable side effects, because of its better adverse event profile and similar efficacy to the antimuscarinics (117).
Antimuscarinics recommendations
Evidence grade of 1B (see Table 3). Recommendations here are based on consensus opinion as well as the evidence base.
• The EAU guidelines recommend antimuscarinic drugs as initial drug therapy for adults with urgency UI. The guidance also states that there is no consistent evidence that one antimuscarinic drug is superior to another for cure or improvement of UI or QoL (118).
• We recommend initiating antimuscarinics (tolterodine or solifenacin (Vesicare) are most commonly used) if the main bothersome LUTS symptom is urgency UI, followed by mirabegron if antimuscarinic drugs are contraindicated or clinically ineffective.
PDE5-Is
Evidence from epidemiological studies suggests that LUTS are closely associated with ED (12,13,119). Oelke et al. demonstrated in a non-cancer clinical study that PDE5-Is can improve LUTS as well as erectile function (119). Based on evidence in non-cancer patients, the 2013 EAU treatment guidelines recommend the use of PDE5-Is in men with LUTS (15).
However, there are few postcancer studies in men which demonstrate improvements with PDE5-Is. Gacci et al. demonstrated a potential therapeutic role for daily administration of PDE5-Is in continence recovery after bilateral nerve-sparing prostatectomy in 39 patients (99). A review of 705 patients further corroborated the efficacy of daily PDE5-I use on urinary continence 1 year after RP vs. on demand use (20). Increased blood flow and oxygen supply by PDE5-Is may be beneficial for recovery of sphincter and pelvic floor muscles (20).
The EAU treatment guidelines for LUTS include the use of the PDE5-I tadalafil for LUTS (15). The NICE guidance states that there is no statistically significant difference between PDE5-Is and alpha blockers in improving symptom scores or nocturia at 3-month follow-up, though alpha blockers are more effective than PDE5-Is in decreasing urinary frequency at 3-month follow-up (8). However, it must be noted that the NICE guidance is based on older data than the EAU guidelines (published in 2013). The NICE guidance also states that there is no statistically significant difference between combination treatment with alpha blockers plus a PDE5-I and alpha blockers alone in improving symptom scores, quality of life (IPSS question), Qmax (ml/s), nocturia or frequency at up to 3-month follow-up. Furthermore, the guidance states that there is no statistically significant difference between combination treatment with alpha blockers plus a PDE5-I and a PDE5-I alone in improving symptom scores, quality of life (IPSS question), nocturia or frequency at up to 3-month follow-up. Large comparative studies are probably needed to investigate this more thoroughly. Taking into account the data to date, PDE5-Is should be considered first in patients with LUTS who also suffer from ED.
PDE5-I recommendation
Evidence grading of 1B to 4 (Table 3). Recommendations here are based on consensus opinion as well as the evidence base.
• Recommend first-line daily use of a PDE5-I in patients suffering from ED as well as LUTS.
• PDE5-Is should be used for as long as needed by the patients.
Serotonin-norepinephrine reuptake inhibitor (duloxetine)
Three studies examined SUI in patients treated with duloxetine and demonstrated that duloxetine improved postprostatectomy SUI up to 3 months postsurgery, although in one of these studies the benefits were not sustained up to 24 weeks (~5 months) (78,91,93). In addition, drug intolerance and dropout rates are ~15-35% with duloxetine after ≥ 1 month of use (78,93).
However, duloxetine is rarely used in clinical practice as patients often feel nauseous with this medication and it may put the patients at increased suicide risk (120).
SNRI (duloxetine) recommendation
Evidence grading of 1B-4 (Table 3). Recommendations here are based on consensus opinion as well as the evidence base.
• Not routinely used in clinical practice. There is insufficient evidence for its use and hence it cannot be recommended for LUTS.
Summary oral treatment recommendations
Oral treatment recommendation
• The sequencing of medication is generally bound by local prescribing guidance. Generally, alpha blockers are given first, followed by antimuscarinics. However, our recommendation is to tailor the treatment to the patient's needs, i.e. first-line treatment should depend on the most bothersome LUTS symptom identified on assessment (a sketch of this sequencing follows the list below).
a) An alpha blocker (commonly tamsulosin) + antimuscarinic to be used first after RT if urge with/without leak incontinence. Stricture should be excluded prior to starting alpha blockers (flow rate testing is often not available in primary care, but if the patient is at higher risk of a stricture, or it is a possibility, they will need referral for flow rate measurement with or without cystoscopy).
b) Alpha blocker + PDE5-I if LUTS + ED.
c) Antimuscarinic (usually tolterodine) to be used first after surgery if urgency UI.
d) Antimuscarinic + PDE5-I if postsurgery LUTS + ED (or mirabegron if adverse effects with antimuscarinics).
• We recommend reviewing every 3 months with each treatment; however, patients should be able to see the healthcare provider sooner if they experience adverse events. NICE UI guidance has suggested a review, either face to face or at least by telephone, at 4 weeks after initiating antimuscarinic therapy. Therefore, a 4-week telephone review can precede the face-to-face 3-month review.
• The treatments should be continued for as long as needed by the patient.
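For readability, the sketch below transcribes the a)-d) sequencing above into a small decision function; it is a reading aid with hypothetical argument names, not a prescribing tool.

```python
# Plain transcription of recommendations a)-d); not clinical advice.
def first_line_oral(modality, urgency_ui, ed):
    """modality: 'RT' or 'surgery'; urgency_ui/ed: flags from assessment."""
    if modality == "RT":
        if ed:
            return "alpha blocker + PDE5-I"                                # b)
        if urgency_ui:
            return ("alpha blocker (commonly tamsulosin) + antimuscarinic; "
                    "exclude stricture first")                             # a)
        return "alpha blocker"
    if ed:
        return "antimuscarinic + PDE5-I (mirabegron if AM not tolerated)"  # d)
    if urgency_ui:
        return "antimuscarinic (usually tolterodine)"                      # c)
    return "avoid alpha blockers post-surgery; reassess dominant symptom"

print(first_line_oral("surgery", urgency_ui=True, ed=False))
```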
Cranberry juice
A study published in 2003 showed no statistically significant effect of cranberry juice vs. apple juice on urinary symptoms in patients undergoing RT (101). A more recent placebo-controlled study by Cowan et al. also demonstrated that cranberry juice did not affect urinary symptoms in patients undergoing RT, although the study was limited by its sample size and duration (102). Another study of 370 patients demonstrated that cranberry extracts significantly reduced the incidence of LUTS, including nocturia, when given during RT (100). A Cochrane review of susceptible populations (including cancer patients) on cranberry juice for UTIs demonstrated that, compared with placebo, water or no treatment, cranberry products did not significantly reduce the occurrence of symptomatic UTI in cancer patients. The review further stated that cranberry juice cannot be recommended for the prevention of UTIs (121).
In conclusion, although cranberry juice may have some impact on improving LUTS symptoms, e.g. nocturia, there is no evidence that it prevents UTIs or LUTS after cancer treatment. It is much more important to ensure patients avoid caffeinated drinks, which can aggravate storage symptoms.
Cranberry juice recommendation
Evidence grading: IIA-1B (Table 3)
• There is no significant evidence regarding the benefits of cranberry juice for LUTS and hence it cannot be recommended.
Vitamins
A study by a Japanese group in patients taking mecobalamin (vitamin B12) during and after RP demonstrated no significant effect of mecobalamin on the recovery of urinary or sexual function, although an early, non-significant recovery effect on urinary function was suggested (103).
Vitamin supplement recommendation
Evidence grading IIA (Table 3)
• There is currently no evidence that vitamin supplements improve LUTS and, as such, they cannot be recommended for the management of LUTS.
Intravesical sodium hyaluronate
Sodium hyaluronate has been safely administered with success for the treatment of chemical and radiation cystitis, resulting in improvements in urinary symptoms and bladder pain (over 6-8 weeks) (104).
In clinical practice, this is very rarely used except for severe bladder pain after RT.
Intravesical sodium hyaluronate recommendation
Evidence grading 4 (Table 3)
• Very rarely used, except for severe bladder pain after RT.
Alternative treatments
In a study of 37 patients, Tanaka et al. showed that Eviprostat, a herbal phytotherapeutic agent, given to patients pre-and post-BT, significantly improved recovery of their urinary symptoms scores, urinary function and urinary obstruction (105).
Some men report Saw Palmetto to be a useful herbal alternative to an alpha blocker. However, a placebo-controlled study of 369 men demonstrated no difference in the reduction of LUTS between Saw Palmetto and placebo (122).
Alternative treatment recommendation
Evidence grading IIA (Table 3)
• There is a lack of high-quality data regarding the use of alternative treatments in men after pelvic cancer therapy.
Containment devices
Fader et al. compared the performance of three continence management devices and absorbent pads used by men with intractable urinary leakage following prostate cancer surgery (106). Male devices included penile compression devices (clamps), sheath drainage systems (sheaths), and body-worn urinals (BWUs). Overall, pads were rated significantly more highly than sheaths, clamps and BWUs by the men. However, the ratings of the other devices varied depending on individual needs. For example, although BWUs were rated worse than the sheath overall, the sheath was rated highest for extended-period use. Generally, ~50% of men stated that they used a combination of these depending on their requirements. The authors concluded that male containment devices can help men with UI, and most men prefer to use a combination of devices and pads in order to meet their lifestyle needs (106).
NICE guidelines recommend offering men with storage LUTS (particularly UI) temporary containment products (e.g. pads or collecting devices) to achieve social continence until a diagnosis and management plan have been discussed (8). The ICS 2013 guidelines state that containment products play an essential role towards enhancing quality of life of individuals with incontinence (9).
A recent trial comparing the performance of three continence management devices (sheath drainage system, BWU, penile clamp) and absorbent pads used by 56 men > 1 year after treatment for prostate cancer found that the sheath was useful for extended use, especially when pad changing is difficult; the BWU was rated worse than the sheath and was mainly used for similar activities but by men who could not use a sheath (e.g. retracted penis); and the clamp was useful for short vigorous activities like swimming and exercise. It was also the most secure, least likely to leak and most discreet device but almost all men described it as uncomfortable or painful (123). The pads were useful for everyday activities, best for night-time use, most easy to use, comfortable when dry but most likely to leak and most uncomfortable when wet. The authors concluded that pads and devices have different strengths which make them particularly suited to certain patients (123).
In clinical practice, pads are used first line, and over time most men will not need them or will reduce to one pad per day for an occasional stress leak or psychological comfort (patients with T3 disease and/or those over 70 years of age use pads for longer). Sheaths are very difficult for some men to use as they do not generally stay on, though correctly fitted sheaths can be very helpful. Clamps can also be helpful to some patients, though good dexterity is required and they should be used intermittently. In addition, clamps need to be sized appropriately. To conclude, the use of pads and devices depends on the circumstances and lifestyle needs of patients.
In clinical practice, urinary retention occurs in 2-8% of men after BT and is predictable depending on the prostate size and presence of significant LUTS pre-implantation. Intermittent self-catheterisation is very useful for patients who develop retention after BT, vs. an indwelling catheter.
Generally, products available in the community for patients are inadequate for their needs as these are too big or bulky.
Containment devices recommendation
Evidence grading IIA (Table 3). Recommendations here are based on consensus opinion as well as the evidence base.
• Containment device recommendations depend on the lifestyle needs of patients.
Cost-effectiveness
The extensive use of pads, together with the risk of urinary infections, presents an economic cost not always taken into account, as male patients generally pay for the pads themselves (18).
The NICE 2010 guidelines (not specific to cancer patients) indicate (8):
• Alpha blockers are cost-effective for men with moderate to severe symptoms.
• Combination treatment is not considered cost-effective, although when alpha blockers alone are not working, adding an anticholinergic could be justified. Anticholinergic medications can worsen any pre-existing mild cognitive impairment and should be used with caution.
• The cost-effectiveness of containment products is uncertain, and their utility will vary among patients. Providing a choice of products appears to be the most practical way to offer cost-effective management to LUTS patients.
Duration of treatment and referral
On average, PFMT lasted for 6 weeks to 12 months; oral treatments lasted for ≥ 12 months; and other interventions, such as cranberry juice or herbal remedies, lasted for ≥ 1 month. The shorter duration of the latter interventions may be because they are generally not prescribed by physicians. On average, any treatment requires ≥ 3 months to demonstrate symptom improvement, but ideally treatments should be taken for as long as needed, depending on patient preference and response. In case of failure of conservative management, botulinum toxin injections for refractory OAB, surgical options such as male slings and artificial urinary sphincters for SUI post-prostatectomy, or indwelling urinary catheters are available. However, further research is needed on optimal treatment duration and when best to refer.
However, long-term medical management leads to certain adverse events typical of the class of medications used. Therefore, compliance with medical therapy becomes an issue for patients. Our literature analysis demonstrates a higher rate of discontinuation with longer-term treatments. Generally, a relatively high proportion of patients drop out of long-term trials because they are unwilling to tolerate the side effects associated with the treatment (124). However, these studies are not specific to men after cancer treatment. Clinical experience suggests that compliance is related to the efficacy of drugs and whether the most bothersome LUTS symptoms are being addressed by the treatment in question. Managing expectations and providing coping strategies are important in order to improve compliance, as is preparing patients for LUTS symptoms after treatment.
Clinicians should also ask patients about other over-the-counter or prescribed medications they are taking, as some may cause urinary problems. These include antihistamines, decongestants, diuretics, opiates and tricyclic antidepressants.
Recommendation for duration of treatment
• If symptoms do not improve within at least 3 months of each intervention (or a combination of these) described here, referral to specialist urology centres may be warranted.
Referral
Referral should be considered if:
• Symptoms of LUTS persist after ≥ 3 months of conservative treatment or drug treatment.
• Any significant impact on QoL.
• Frequency persists at > 8 times per day.
• If any malignancy suspected or recurrence.
Algorithm
Based on the above recommendations, Figure 1 outlines the treatment algorithm for LUTS after treatment for pelvic cancers. A summary of the recommendations from the review is highlighted in Table 5.
Conclusions
There is a general lack of an evidence base for managing male LUTS after pelvic cancer treatment. More evidence is available for LUTS management in men with prostate cancer than in other pelvic cancers. Patients presenting with LUTS usually have overlapping symptoms, complicating assessment and management. Large, high-quality studies are needed to investigate the comparative effectiveness of the various treatment options. Guideline recommendations for treatment are general and not specific to male cancers and their particular requirements. Additionally, there are no recommendations concerning the optimal timing for the initiation, or the duration, of non-invasive management of male LUTS resulting from cancer treatment.
In this review, we have attempted to provide a comprehensive synthesis of the evidence together with consensus opinion from clinical practice, in order to develop recommendations for this patient group. It must be noted, however, that most of the studies presented here were performed in patients with UI rather than the full range of LUTS. For many men after cancer treatment, LUTS are one of several symptoms and comorbidities and, as such, require a holistic approach to assessment and subsequent management. In addition, LUTS can cause great distress and functional limitations for men.
Clear assessment of the aetiology, along with information on techniques to help men cope, is essential in managing symptoms. The interventions and algorithm recommended here can be applied in clinical practice to improve the management of LUTS in men with pelvic cancers, although further testing of the recommended management strategies is warranted.
"year": 2015,
"sha1": "924402fc3a6d22f8a37aae480e9efaaec9f0e473",
"oa_license": "CCBYNCND",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/ijcp.12693",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "924402fc3a6d22f8a37aae480e9efaaec9f0e473",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Prediction of Mortality in Patients with Isolated Traumatic Subarachnoid Hemorrhage Using a Decision Tree Classifier: A Retrospective Analysis Based on a Trauma Registry System
Background: In contrast to patients with traumatic subarachnoid hemorrhage (tSAH) in the presence of other types of intracranial hemorrhage, patients with isolated tSAH have a good prognosis. The incidence of mortality in these patients ranges from 0% to 2.5%. However, few data or predictive models are available for the identification of patients with a high mortality risk. In this study, we aimed to construct a model for mortality prediction using a decision tree (DT) algorithm, along with data obtained from a population-based trauma registry, in a Level 1 trauma center. Methods: Five hundred and forty-five patients with isolated tSAH, including 533 patients who survived and 12 who died, between January 2009 and December 2016, were allocated to training (n = 377) or test (n = 168) sets. Using the data on demographics and injury characteristics, as well as laboratory data of the patients, classification and regression tree (CART) analysis was performed based on the Gini impurity index, using the rpart function in the rpart package in R. Results: In the established DT model, three nodes (head Abbreviated Injury Scale (AIS) score ≤4, creatinine (Cr) <1.4 mg/dL, and age <76 years) were identified as important determinative variables in the prediction of mortality. Of the patients with isolated tSAH, 60% of those with a head AIS score >4 died, as did 57% of those with an AIS score ≤4 but Cr ≥1.4 mg/dL and age ≥76 years. All patients who did not meet the above-mentioned criteria survived. With all the variables in the model, the DT achieved an accuracy of 97.9% (sensitivity of 90.9% and specificity of 98.1%) for the training set and 97.7% (sensitivity of 100% and specificity of 97.7%) for the test set. Conclusions: The study established a DT model with three nodes (head AIS score ≤4, Cr <1.4 mg/dL, and age <76 years) to predict fatal outcomes in patients with isolated tSAH. The proposed decision-making algorithm may help identify patients with a high risk of mortality.
Introduction
Traumatic subarachnoid hemorrhage (tSAH) is frequently observed in patients with head injuries [1]. In patients visiting the emergency department (ED) for the treatment of blunt head injury, tSAH was found to be the second most frequent consequence of traumatic brain injury (TBI) [2] and occurred in 33-60% of patients after moderate or severe TBI [1,[3][4][5]. It has been shown that tSAH is a marker of a more severe initial injury, indicating greater mechanical forces and intracranial deformation [6]. In the presence of tSAH, symptomatic cerebral vasospasm may develop in around 20% of patients with severe TBI [7,8].
Isolated tSAH is defined as the exclusive presence of tSAH in the absence of any other traumatic radiographic intracranial pathology. In contrast to those patients with tSAH in the presence of other intracranial hemorrhage, the prognosis of patients with isolated tSAH is good [9,10]. In patients with mild TBI (Glasgow Coma Scale (GCS) score ≥13) and isolated tSAH, no neurologic decline or need for neurosurgical procedures was observed [11-15]. In a retrospective study evaluating isolated tSAH in patients with mild TBI (GCS 13-15), of 67 patients, only 1 patient (1.5%) experienced neurological deterioration, and not a single patient required neurosurgical intervention [16]. In a meta-analysis study, the cumulative incidences of radiographic progression and eventual neurosurgical intervention after isolated tSAH were 5.76% (95% confidence interval (CI) 1.18-12.9%) and 0.0017% (95% CI 0-0.39%), respectively [10]. In the same meta-analysis study, among eight studies with a total of 873 patients with isolated tSAH, the incidence of mortality ranged from 0% to 2.5%, with a cumulative incidence of 0.60% across all the included studies (95% CI 0.09-1.4%) [10].
A previously conducted study recommended a protocol which did not require transfer for neurosurgical consultation for patients with isolated tSAH and a GCS score of 15 [10,17]. However, for most of the patients found to have an isolated tSAH on the CT scan, few data are available to help in the identification of high-risk individuals and guide physicians on which patients will likely need further evaluation and treatment. Despite the frequency of neurosurgical consultation, only a minority of patients actually undergo neurosurgical intervention. Two common prediction models (the International Mission for Prognosis and Analysis of Clinical Trials in Traumatic Brain Injury (IMPACT) and the Corticosteroid Randomization after Significant Head Injury (CRASH) models), based on large clinical trial datasets, have shown good discrimination and accurate outcome predictions for patients with TBI [18-20]. However, these two models lack the precision required for use at the individual patient level [21,22], and are not suitable for application in the management of patients with isolated tSAH.
The decision tree (DT) is a machine learning model, and is composed of decision rules based on optimal feature cutoff values that recursively split independent variables into different groups, and predict an outcome in a hierarchical manner [17,23,24]. To define the variables that could identify individuals at a risk for mortality from among patients with isolated tSAH, we aimed to construct a model for mortality prediction using the DT algorithm and data obtained from a population-based trauma registry, in a Level 1 trauma center. This established model may be useful to determine patients with a high mortality risk, and help improve clinical decision-making in the case of patients with isolated tSAH.
Study Population
Approval was obtained from the institutional review board of the Kaohsiung Chang Gung Memorial Hospital (reference number 201701412B0). This hospital is a Level 1 regional trauma center in southern Taiwan [25,26]. Then, the database of the Trauma Registry System was searched for the diagnostic injury code 852.0 (traumatic subarachnoid hemorrhage) from the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM). All injury data were coded according to the 1998 version of the Abbreviated Injury Scale (AIS). The AIS is coded by two trauma registrars, who review medical records for written descriptions of injury from radiologists and physicians and do not rely indirectly on ICD-9-CM diagnostic codes. The head AIS scores are presented as the following scores: 1 (minor); 2 (moderate); 3 (serious); 4 (severe); 5 (critical); and 6 (unsurvivable). All adult patients (≥20 years of age) who presented with head trauma and tSAH requiring admission from 1 January 2009 to 31 December 2016, were included in the study. To avoid the confounding effect of injuries of other body regions on the mortality assessment, polytrauma patients [27] with an AIS score ≥3 in any other region of the body were excluded from the study. Thus, the included patients were defined as having isolated tSAH. Of the 1665 identified patients with TBI, 545 patients had isolated tSAH, which included 533 who survived and 12 who died. The following data were retrieved: sex; age; body mass index (BMI); co-morbidities such as diabetes mellitus (DM), hypertension (HTN), coronary artery disease (CAD), congestive heart failure (CHF), cerebral vascular accident (CVA), and end-stage renal disease (ESRD); vital signs, including temperature, systolic blood pressure (SBP), diastolic blood pressure (DBP), heart rate (HR), respiratory rate (RR); shock index (=HR/SBP); Injury Severity Score (ISS); GCS score; AIS score in different regions of the body; white blood cell (WBC) and red blood cell (RBC) count, levels of hemoglobin (Hb), hematocrit (Hct), platelets, blood urine nitrogen (BUN), creatinine (Cr), alanine aminotransferase (ALT), aspartate aminotransferase (AST), sodium (Na), potassium (K), and glucose at the ED; hospital length of stay (LOS); rates of admission to the intensive care unit (ICU); and in-hospital mortality. ISS was expressed as the median and interquartile range (IQR, Q1-Q3).
Decision Tree Classifier
Enrolled patients were divided into a training set and a test set, with a ratio of 7:3. Of the 545 patients with isolated tSAH, 377 and 168 patients were assigned to the training set and the test set, respectively. The training set was used for predictor discovery and supervised classification to generate a plausible model. The test set was used to test the performance of the model generated in the training sample. The DT classification was performed using classification and regression trees (CART), based on the Gini impurity index, using the rpart function in the rpart package in R. The CART analysis searched for the split on the variable that would partition the data into two different groups, a group of mostly '1s' (people who died) and a group of mostly '0s' (people who survived) [28,29]. The CART model partitioned the data and assigned a predicted class to each subgroup. By repeating the same process on each predictor in the model, CART identifies the best overall split by iteratively testing all the possible splits and choosing the one producing the greatest reduction in impurity [30,31]. The CART analysis was performed recursively, in this manner, until the specified stopping criteria were reached, a specified number of nodes were created, or a further reduction in node impurity became impossible [30-32]. In order to generate a sequence of simpler trees, each of which is a candidate for the appropriately-fit final tree, the method of "cost-complexity" pruning is used. In this study, the complexity parameter (α), a measure of how much additional accuracy a split must add to the entire tree to warrant the additional complexity, was set to 0.001.
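The authors fitted CART with the rpart package in R; the sketch below reproduces the described setup (Gini impurity, 7:3 split, cost-complexity pruning) in Python with scikit-learn, whose ccp_alpha parameter is analogous, though not identical, to rpart's complexity parameter. The feature matrix and labels are synthetic placeholders, not the registry data.

```python
# Analogous CART setup in scikit-learn; data are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(545, 5))                  # placeholder predictors, 545 patients
y = np.zeros(545, dtype=int)
y[:12] = 1                                     # 12 deaths, 533 survivors
rng.shuffle(y)

# 7:3 training/test split, stratified on the outcome, as described in the paper
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Gini impurity at a node: G = 1 - sum_k p_k**2; splits maximise its reduction
tree = DecisionTreeClassifier(criterion="gini", ccp_alpha=0.001,
                              random_state=0).fit(X_train, y_train)
```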
Performance of the Decision Tree Classifier
The accuracy, sensitivity, and specificity of the DT model were calculated. In the test set, stratified 10-fold cross-validation was used to evaluate the predictive power of the model. Briefly, the patients were randomly divided into 10 folds, with the number of patients with an event approximately equal across folds. The model was developed using nine folds and validated on the tenth.
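Continuing the placeholder example above, the sketch below derives accuracy, sensitivity and specificity from a confusion matrix and runs stratified 10-fold cross-validation; for runnability it cross-validates on the full placeholder data, whereas the paper applied the procedure within the test set.

```python
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import confusion_matrix

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
y_hat = cross_val_predict(tree, X, y, cv=cv)    # out-of-fold predictions

tn, fp, fn, tp = confusion_matrix(y, y_hat).ravel()
sensitivity = tp / (tp + fn)    # deaths correctly flagged
specificity = tn / (tn + fp)    # survivors correctly cleared
accuracy = (tp + tn) / len(y)
print(f"acc {accuracy:.3f}, sens {sensitivity:.3f}, spec {specificity:.3f}")
```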
Statistical Analysis
We performed the statistical analyses using IBM SPSS Statistics for Windows, version 20.0 (IBM Corp., Armonk, NY, USA) and R 3.3.3 (R Foundation, Vienna, Austria). The primary outcome of the study was in-hospital mortality. Two-sided Fisher's exact or Pearson's chi-square tests were used to compare categorical data, with presented odds ratios (ORs) and 95% CIs. The normality of continuous data was examined using the Kolmogorov-Smirnov test. The normally distributed and non-normally distributed continuous data were analyzed with the unpaired Student's t-test and the Mann-Whitney U-test, respectively, and are presented as mean ± standard deviation. p-values < 0.05 were defined as statistically significant.
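The sketch below mirrors the univariate testing strategy just described (a normality screen, then the appropriate two-sample test; Fisher's exact for 2x2 tables, otherwise chi-square); the function and variable names are our own.

```python
import numpy as np
from scipy import stats

def compare_continuous(a, b, alpha=0.05):
    """Kolmogorov-Smirnov normality screen, then Student's t or Mann-Whitney U."""
    normal = all(stats.kstest(stats.zscore(x), "norm").pvalue > alpha for x in (a, b))
    test = stats.ttest_ind if normal else stats.mannwhitneyu
    return test(a, b)

def compare_categorical(table):
    """Fisher's exact test for 2x2 tables, Pearson's chi-square otherwise."""
    t = np.asarray(table)
    if t.shape == (2, 2):
        return stats.fisher_exact(t)
    chi2, p, _, _ = stats.chi2_contingency(t)
    return chi2, p
```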
Characteristics and Outcomes of Patients with Isolated tSAH
No significant differences in sex and comorbidities were observed between the survival and mortality groups (Table 1). The mortality group had significantly higher AIS scores in the head and abdomen regions than the survival group. As shown in Table 2, the mortality group had a significantly higher ISS (median (IQR), 25 (26.3)) than the survival group (15 (2.0)). In addition, the mortality group had a significantly higher HR and higher glucose, RBC, Na, Cr, and AST levels, but a lower GCS score and lower DBP, Hb, and Hct, than the survival group.
Classification by Decision Tree
As shown in Figure 1, in the DT model, the head AIS score was identified as the variable of the initial split, with an optimal cut-off value of ≤4. Among patients with a head AIS score >4 (i.e., AIS score = 5 or 6), 60% of the patients with isolated tSAH had fatal outcomes and 40% survived. Among the patients with a head AIS score ≤4, Cr was identified as the variable of the second split, with an optimal cut-off value of <1.4 mg/dL. At this node, all the patients with Cr <1.4 mg/dL survived. The outcome of patients with Cr ≥1.4 mg/dL was determined by an additional predictor, age, with an optimal cut-off value of <76 years; 57% of patients aged ≥76 years had fatal outcomes and 43% survived. In contrast, all the patients below 76 years of age survived. According to the classification by the DT, two groups of patients with a high risk of fatality were identified. With all the variables in the model, the DT achieved an accuracy of 97.9% (sensitivity of 90.9% and specificity of 98.1%) for the training set. In the test set, the DT achieved an accuracy of 97.7 ± 0.9%, sensitivity of 100.0 ± 0.0%, and specificity of 97.7 ± 0.9%.
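Read directly from Figure 1, the fitted tree collapses to three rules; the function below is a plain transcription, with risk labels taken from the node proportions (an illustration, not a calibrated or validated tool).

```python
# Transcription of the fitted decision tree (Figure 1); illustration only.
def tsah_risk(head_ais, creatinine_mg_dl, age_years):
    if head_ais > 4:                # AIS 5-6: 60% of these patients died
        return "high risk"
    if creatinine_mg_dl < 1.4:      # AIS <=4 and Cr <1.4 mg/dL: all survived
        return "low risk"
    if age_years < 76:              # AIS <=4, Cr >=1.4, age <76: all survived
        return "low risk"
    return "high risk"              # AIS <=4, Cr >=1.4, age >=76: 57% died

print(tsah_risk(head_ais=3, creatinine_mg_dl=1.8, age_years=80))  # high risk
```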
Discussion
In this established DT model, three nodes (head AIS score ≤4, Cr <1.4 mg/dL, and age <76 years) were identified as important determinative variables in the prediction of mortality. In the present study, among the patients with isolated tSAH, 60% of those with a head AIS score >4 died, as did 57% of those with a head AIS score ≤4 but Cr ≥1.4 mg/dL and age ≥76 years. All the patients who did not fit the above-mentioned criteria survived. In this DT model, a head AIS score ≤4 was the first node in predicting fatality. Of the patients with isolated tSAH in our study, 60% of those with a head AIS score of 5 or 6 died. Most reports state that tSAH in itself is not a significant prognostic factor for further medical or surgical treatment in mild TBI [11-15]. However, in head injury patients with an AIS score of 5 (critical) or 6 (unsurvivable), the prognosis is obviously poor. Notably, as per the 1998 version of the AIS, the diagnosis of SAH would be assigned an AIS score of 3; in the upgraded version, an AIS score of 4 or 5 would be indicative of loss of consciousness lasting between 6 and 24 h or >24 h, respectively. Therefore, prolonged time in a state of unconsciousness could also be an important factor in the determination of mortality in patients with isolated tSAH. This is also in agreement with the opinion that the neurological status at admission, after tSAH, reflects early brain injury and is a major predictor of death [12,13].
Cr <1.4, indicative of renal function, was the second node in the DT model used in the prediction of fatal outcomes. Laboratory values are infrequently included in TBI prognostic evaluation; however, they have been shown to assist in determining patient outcomes [33,34]. For example, patients with SAH are at a high risk of dysnatremia for several weeks following injury [35]. In a multivariate logistic regression model, hypernatremia (defined as sodium >143 mmol/L) was a statistically borderline predictor of mortality [36]. Evidence indicated that posterior hypothalamic lesions trigger renal vasoconstriction by the activation of the renin-angiotensin system, and thereby reduce renal blood flow [37]. A correlation between high plasma renin activity, high urinary catecholamine excretion, and poor patient outcome after SAH has been reported [38]. Furthermore, after aneurysmal SAH, a significant association of renal complications (3.6%, p < 0.001) with unfavorable outcomes was reported [11].
The third node in the prediction of mortality in this DT model was age <76 years. It is well known that age, in itself, is an independent predictor of mortality in TBI [39]. The observed increase in mortality begins in the fifth decade of life, with a steep increase occurring at age 70 years in patients with isolated traumatic brain injury [39]. For those with mild-to-moderate TBI, with a GCS score of 9-15, mortality was twice as high among elderly adults compared with their younger counterparts [39]. Age was also recognized as an independent predictor of mortality in patients with tSAH [9,40], and especially in patients with isolated tSAH [16]. In a previously conducted study, age 58 years was identified as the best threshold for discriminating injury mortality in SAH [41]. In this study, the threshold of age <76 years was selected by the DT algorithm as the best split for further classification.
There are many models, including C4.5 and C5.0 DTs, ID3, CART, and chi-square automatic interaction detector (CHAID) DTs, which could be used to construct DT models [26,28]. CART analysis is an innovative DT model in which several predictive variables, crucial in the identification of patients at different levels of risk in various medical fields, are combined through progressive binary splits to develop a model for better prediction and clinical decision-making [30-32]. Among these methods, CART analysis is conducted based on the combination of nonparametric and nonlinear variables for recursive partitioning analysis [30-32]. Approaches with different DT models may provide models with similar predictive power but with different kinds of variables selected as nodes. Determining which tree is the most suitable as a prediction model may depend on the reasonableness of the selected nodes in explaining the predicted outcomes. One advantage of the DT algorithm is that its construction does not require any domain knowledge or parameter setting, and it is therefore appropriate for exploratory knowledge discovery. The DT procedure of classifying data based on attributes differs from conventional statistical analysis, which tends to identify the variables that differ between the compared groups. For example, in this study, three nodes (head AIS score ≤4, Cr <1.4 mg/dL, and age <76 years) were identified as important determinative variables in the prediction of mortality, yet there was no significant difference in age between the survival and mortality groups. Furthermore, the mortality group had significantly higher HR, glucose, RBC, Na, and AST levels, in addition to Cr, than the survival group.
One limitation of the study is its relatively small dataset. Therefore, further validation using larger and different datasets may help in the examination of the usefulness of this decision-making model. It has been reported that hemorrhages observed in the basal cistern and Sylvian fissure carry a risk of late deterioration in patients with isolated tSAH [42]. This late deterioration may result from hematoma expansion, which is caused by the abruption of a perforating branch arising from the middle cerebral artery at the time of head injury [42]. Wu et al. also suggested that the presence of tSAH in the basal cisterns or Sylvian fissure on the initial CT scan should be considered as evidence of progressive hemorrhage on the repeat CT scan, and should warrant prompt consideration of neurosurgical consultation [43]. In this study, the lack of important information from CT scans in establishing the DT model may have impaired the predictive power of the constructed model; this is the second limitation of this study.
Furthermore, the prolonged time spent in a state of unconsciousness could have added to the determination of mortality outcomes in the patients in this study. However, this indicates that, before the assignment of the head AIS score, the time of loss of consciousness should be observed for more than 24 h. Therefore, this decision-making algorithm is not capable of providing prediction within 24 h, thus limiting its application. Additionally, as per the 2005 version of the AIS, the diagnosis of SAH would be assigned an AIS score = 2, and the modifier of the loss of conscious code would not be selected if a specific brain injury that can cause unconsciousness is identified [44]. Thus, according to the data used to establish the model in this study, it is preferable to use the state of tSAH with the loss of consciousness >24 h to indicate the first node of the DT model.
According to this DT model, it is not possible to determine the mortality of high-risk patients with very high confidence. However, considering few data are available to help in the identification of high-risk individuals and guide physicians on which patients will likely need further evaluation and treatment, this model may still be helpful in classifying the patients into survival groups (AIS score ≤4 and Cr <1.4 mg/dL as well as AIS score ≤4, Cr ≥1.4 mg/dL, and age <76 years) and groups of high risk to mortality (AIS score >4 and Cr <1.4 mg/dL as well as AIS score ≤4, Cr ≥1.4 mg/dL, and age ≥76 years) with predicted mortality rate. This imperfection also indicated that we have space for improvement in this established DT model. The present study has some other limitations too, the first of which is the selection bias associated with the retrospective study design. Due to the relatively small sample size, the risk factors for mortality may not have been fully assessed. Second, patients who died at the scene or were declared dead upon hospital arrival were not included in this study, leading to further selection bias considering that mortality was the primary outcome. Third, in the absence of a standard protocol for the treatment of patients with isolated tSAH, in this study, we can only assume that the patients included received uniform management by the care-giving physicians, especially considering that there was a lack of important information regarding the administration of anticoagulation and antiepileptic medications. Furthermore, the study was limited to a single center, and the patient injury characteristics and management may vary from those observed at other institutions; this limits the generalizability of the findings.
Conclusions
The study established a DT model with three nodes (head AIS score ≤4, Cr <1.4, and age <76 years) to predict fatal outcomes in patients with isolated tSAH. The proposed decision-making algorithm may help identify patients with a high risk of mortality. | 2017-12-10T14:44:17.971Z | 2017-11-01T00:00:00.000 | {
"year": 2017,
"sha1": "352debc27f481fda8bb05644dfc5ed67b1d1691e",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/14/11/1420/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "352debc27f481fda8bb05644dfc5ed67b1d1691e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
83854509 | pes2o/s2orc | v3-fos-license | Non-volant tetrapods from Reserva Biológica de Duas Bocas , State of Espírito Santo , Southeastern Brazil
Reserva Biológica de Duas Bocas (2,910 ha) is one of the largest Atlantic forest remnants in the State of Espírito Santo, Southeastern Brazil. We recorded non-volant tetrapods in this area from May 2007 through April 2008, using pitfalls, live traps, camera traps, and diurnal and nocturnal opportunistic searches. In addition, we compiled available museum and literature records from this area. We documented 52 species of amphibians, 24 species of non-avian reptiles, and 39 species of non-volant mammals. Out of these 115 species, 47 are new records for this area and six other species had their geographic ranges expanded with the present study. Furthermore, we present the record of predation of the tree frog Hypsiboas faber by the snake Chironius bicarinatus. Out of the species listed, five species are listed as threatened with extinction in the State of Espírito Santo, and many others have uncertain conservation status. Reserva Biológica de Duas Bocas is an important wildlife refuge, especially considering the expansion of urban areas in its surroundings.
We used pitfall traps for capturing small mammals, amphibians and non-avian reptiles.We established six 100 m transects where eleven 60-liter buckets (40 cm in diameter by 54 cm in depth) were installed, 10 m apart.The buckets were connected by 50 cm-high drift fences secured by wooden stakes.In addition, we used conventional live traps for small mammals: one Sherman (23 × 9 × 9 cm) and one wire cage (32 × 15 × 15 cm) were arranged within a radius of about five meters of each pitfall.These traps were baited with pineapple and peanut butter.
We also placed three camera traps (Tigrinus® 4.0C) for detecting medium to large mammals.They were installed 50 cm above the ground, attached to tree trunks near creeks or possible animal routes.Cameras were set to operate both night and day with a minimum interval of 90 seconds between pictures.They were checked monthly, when film and batteries were changed and the total sampling effort was 1,095 camera trap-nights.Additionally, we conducted opportunistic diurnal and nocturnal searches for animals, nests, shelter, tracks, or other signs of non-volant tetrapods in the study area.These searches were conducted throughout the year, 5 hours a day on average.
Series of specimens were collected to serve as museum vouchers (listed in Appendix 1), and were prepared according to standard techniques (Auricchio & Salomão 2002).Vouchers are deposited in the mammal collection at Universidade Federal do Espírito Santo, Vitória (UFES); Museu de Biologia Professor Mello Leitão, Santa Teresa (MBML); and Célio Fernando Baptista Haddad collection (CFBH) at Universidade Estadual Paulista Júlio de Mesquita Filho, Rio Claro, but some specimens still will be deposited in MBML.The remaining animals were identified and released (in the case of amphibians and non-avian reptiles), or marked with numbered ear tags (National Band and Tag Co.) and released (in the case of mammals).We also used published records (Paresque et al. 2004, Prado & Pombal 2005) and museum data available at the Reference Center on Environmental Information -CRIA (http://splink.cria.org.br) for MBML, CFBH, and "Alphonse Richard Hoge" herpetological collection (IBSP-Herpeto).
Introduction
Despite intense exploitation of its natural resources, the Brazilian Atlantic forest still maintains an extremely rich biodiversity, including a large number of endemic species of plants and animals (Morellato & Haddad 2000).It has unique floristic and climatic characteristics, and a marked altitudinal gradient, favoring geographic isolation among populations (Haddad et al. 1996).
Since European colonization, 500 years ago, the Atlantic forest cover in the State of Espírito Santo has been reduced to only 8.9% (Lederman & Padovan 2005), consisting typically of small, isolated fragments disturbed by human activities.One of the main Atlantic forest remnants is located in the metropolitan region of the state capital, Vitória (Grande Vitória).Reserva Biológica de Duas Bocas (RBDB) is an important buffer and refuge for the native fauna, and remained well preserved despite intense pressure from the expansion of urban areas and associated activities in its surroundings.
The destruction of the Atlantic forest in the State of Espírito Santo began in 1503, with the establishment of the first village, and later intensified with the indiscriminate extraction of hardwood (Secretaria... 1998).Coffee plantations expanded in the second half of the 19 th century, leading to deforestation, soil erosion, and river aggradation and pollution (Schettino 2000).The state population was predominately rural until the 1960's, but industrialization lead to an increase in urban population, especially around the state capital (Lederman & Padovan 2005).Nowadays, the main land use is for cattle ranches, which cover 40% of the state, especially in the northern region (Lederman & Padovan 2005).
According to Passamani & Mendes (2007), 197 animal species are threatened with extinction and 11 have been considered regionally extinct in the State of Espírito Santo.The number of threatened species in the state corresponds to 32.9% of the total number of species included on the Brazilian Endagered Species List (Machado et al. 2005).This is a worrisome problem, since the state occupies only 0.53% of the Brazilian territory.This situation if further aggravated by the lack of long-term inventories, leading to very few adequate species lists.The only two animal groups with reasonably well-documented faunas are birds (Simon 2009) and mammals (Moreira et al. 2008).
Here we provide a list of non-volant tetrapods from Reserva Biológica de Duas Bocas, based on a one-year survey using a combination of several sampling techniques.Our main goal is to provide a good preliminary inventory of the local biodiversity, and to increase scientific collections, which will subsidize further work on taxonomy, biogeography and conservation biology.
Study area
RBDB is located between latitudes 20° 14' 04 '' and 20° 18' 30" S,and longitudes 40° 28' 01'' and 40° 32' 07'' W (Figure 1)."Duas Bocas" means "two mouths" in Portuguese, and this name stems from the confluence of the rivers Panelas and Naiá-Assú, which flow into a dam built in 1951 on the eastern end of the reserve.RBDB has an area of 2,910 ha and the elevation ranges from 300 to 738 m.Most of the area shelters primary forest.The climate is tropical humid (average annual temperature between 19 and 22 °C, relative humidity > 70%), and rains are well distributed throughout the year, with an average annual rainfall of approximately 1,500 mm (Feitoza 1986).
Discussion
With the results presented above, we increase the species list of non-volant tetrapods from RBDB by 40%.These records correspond to 30% of the mammal species in the State of Espírito Santo (Moreira et al. 2008).The same fraction cannot be calculated for non-avian reptiles and amphibians because there is no state wide consistent species list of these animals.
In addition, we confirmed the presence of two exotic species: domestic dogs (Canis familiaris) were often seen and photographed by camera traps, and one specimen of black rat (Rattus rattus) was trapped in the middle of a very pristine forest, at least 3 km away from the forest edge.Domestic or free-roaming dogs can be frequent visitors to wet tropical forest preserves of the Neotropics, and are potential competitors with other large mammals, including some large cats and smaller carnivores, like Cerdocyon thous (Srbek-Araújo & Chiarello 2008).Several researchers have documented the presence of domestic dogs in Brazilian protected areas (e.g., Horowitz 1992, 2004) reported the occurrence of the short-tailed opossum Monodelphis domestica in this area, but we did not have access to their voucher specimen.We believe that its occurrence in the area, or even in the State of Espírito Santo, is unlikely because this species is found only in the drylands of central Brazil (Caatinga and Cerrado), eastern Bolivia, Northern Paraguay, and Northeastern Argentina (Chaco) (Gardner 2007).Considering the lack of a voucher and the geographic distribution of M. domestica, we did not include this species in our list of non-volant tetrapods from RBDB.
Among the amphibian species captured, approximately 80% are endemic to the Atlantic forest (Frost 2010).This result is expected, and is due to the fact that most of the evolutionary history of any frog community along the Brazilian coast is strongly connected to the Atlantic forest domain (Heyer et al. 1990).We obtained 12 new records of amphibians from RBDB: the anurans Ischnocnema sp.(Figure 2d), Ischnocnema guentheri (Figure 2b), Ischnocnema oea (Figure 2c), Chiasmocleis carvalhoi, Dendrophryniscus sp., Zachaenus carvalhoi (Figure 3f), Bokermannohyla caramaschii (Figure 2f), Vitreorana uranoscopa (Figure 3a), Scinax hayii (Figure 2o), Physalaemus cuvieri (Figure 3h), Leptodactylus aff.cupreus (Figure 3b), and the caecilian Siphonops annulatus.The species Ischnocnema sp. is a new species and is currently being described, and the specimen of Dendrophryniscus sp. was found dead and was in a bad state of conservation, making it difficult to identify the species level.
Moreover, the success of sampling can be attributed to the fact that opportunistic searches were conducted at different times of day over a year, allowing the detection of species with seasonal occurrence and with different ecological habits (Auricchio & Salomão 2002).
The increase in the number of non-volant tetrapods species from RBDB is probably due to the use of multiple methods of capture.Other researchers carried out field expeditions in the area, but using mostly visual methods for detecting animals (Prado & Pombal 2005).The use of pitfall traps in this study greatly increased trapping success, probably being responsible for most new records.Many species recorded in pitfalls usually inhabit the leaf-litter, have fossorial habits, and are rarely observed during opportunistic searches.
The tree frog Phasmahyla exilis, the glass frog Vitreorana uranoscopa, the snake-necked turtle Hydromedusa maximiliani (Figure 3o) are on the list of threatened species in the State of Espírito Santo (Gasparini et al. 2007, Almeida et al. 2007) (Table 4).We found these three species in the Pau Oco Creek.We observed more than one individual of Phasmahyla exilis and usually heard its vocalization when we walked through the creek at night.Vitreorana uranoscopa, on the other hand, was detected only once in December 2007, when we found a male vocalizing.We found a male, a female and an egg of the turtle Hydromedusa maximiliani during the study.The male was recorded during an active search at night along the Pau Oco Creek, the female was accidentally captured in a Tomahawk trap set for small mammals, and the egg was collected during the opportunistic searches and later proved to belong to this species.
The study area houses the maned sloth Bradypus torquatus and the water opossum Chironectes minimus, which are both on the state list of threatened species (Chiarello et al. 2007).Mother and baby maned sloths were observed in the forest canopy (about 20 m above the ground) near the main trail for 15 minutes.Water opossums are seldom captured by traditional methods used for other small mammal species because of their semiaquatic and nocturnal habits (Galliez et al. 2009).One of us (J.L. Gasparini) saw one water opossum in the study area a few years before the beginning of the present fieldwork, but despite some directed effort to capture this species in the area by setting tomahawk traps along a creek and waterfall during the present study, we were unable to trap any individual.It is included on the checklist because C. minimus can be easily recognized in the field since it is the only semi-aquatic marsupial and shows a very characteristic black and white pelage.
Table 4. Conservation status of threatened non-volant tetrapods species recorded at Reserva Biológica de Duas Bocas according to regional, national, and international lists.http://www.biotaneotropica.org.br/v10n3/en/abstract?inventory+bn02710032010 http://www.biotaneotropica.org.brNeotrop.,vol. 10,no. 3 Our record of the snake Oxyrophus formosus is the southernmost, and highest in terms of elevation.Its previous southern limit was Mucuri, in the Southern part of the State of Bahia (Argôlo 2004).A.P. Almeida, J.L. Gasparini & A. Argôlo (unpublished data) have also recorded one specimen near Reserva Biológica de Sooretama, in the Northern part of Espírito Santo.The only individual of O. formosus captured during the present study escaped.Despite the lack of a voucher, this conspicuous species was included on the list, since our field identification was confirmed by specialists who examined its picture (A.Argôlo and A.P. Almeida, pers. commun.).O. formosus has a disjunct distribution, occurring in the Amazon and the Atlantic forest, and Argôlo (2004) discussed the possibility that this taxon in fact represents a composite species.We hope that the record of this species at Duas Bocas will stimulate and promote further field research in the area and its surroundings.
Biota
The reserve still houses little known anurans of uncertain conservation status, such as Scinax kautskyi (Figure 2p), Ischnocnema oea, Euparkerella tridactyla, and Zachaenus carvalhoi (Gasparini et al. 2007) (Table 4).Before this study, published records for S. kautskyi, I. oea and Z. carvalhoi were known only from their type locality, at Santa Teresa, State of the Espirito Santo, which is 40 km from RBDB, These four species had already been collected at RBDB by J.L. Gasparini, in 2003, and Chiasmocleis carvalhoi occurs in the States of São Paulo, Rio de Janeiro, and Bahia (Ilhéus), and it was recently recorded from Espírito Santo (Silva-Soares et al. 2010).In the present paper, we expand the occurrence of this species to RBDB, about 40 km NW of the previous record (Setiba, Espírito Santo) and to an elevation above 550 m, given that previous records are all from coastal lowlands.
We found the snake Chironius bicarinatus preying on an adult tree frog Hypsiboas faber near the Alto Alegre field station, where they both fell on the ground from a tree branch in December 2007.Pombal Jr. (2007) considered Chironius bicarinatus as a potential predator of anurans and Oliveira (2008) reported this snake preying on the tree frog Hypsiboas pulchellus.
All anurans and most mammals recorded at RBDB also occur at Santa Teresa, State of Espírito Santo, which is a hotspot of frog and mammal diversity (Rodder et al. 2007, Passamani et al. 2000).This status has been achieved as a direct result of constant collecting by well-trained herpetologists and mammalogists and a research tradition associated to a 50-year old local natural history museum (Museu de Biologia Professor Mello Leitão).Other important Atlantic forest fragments such as RBDB have been historically neglected in terms of faunal surveys, and we are only now beginning to evaluate their biological diversity.Reserva Biológica de Duas Bocas undoubtedly houses additional unrecorded species and long-term fieldwork is needed to complete its inventory.For example, park rangers mentioned the occurrence of the snake Bothrops bilineatus, but we did not find it.RBDB is located very close to the state capital, which is witnessing a wave of economical growth in recent years.Therefore human impact is rapidly growing in the surroundings, especially eucalyptus plantations, hunting and agriculture.This growth increases the fragility of the ecosystem in this area, which is becoming an island of forest within a matrix of urban and agricultural land uses.All efforts should be devoted to survey and protect the biodiversity of this important Atlantic forest remnant.More information about our project, including pictures of most species, can be found in the "Virtual Guide to the Fauna from Reserva Biológica de Duas Bocas" available at http://www.cchn.ufes.br/dbio/labs/lamab/duasbocas (in Portuguese).
We are grateful to several undergraduate and graduate students from the Mammalogy and Biogeography Laboratory at UFES who provided invaluable help in the field.T. Lacher and four anonymous refereesprovided valuable suggestions.L.D. Centoducatte helped in the preparation of the map, P.R.W. Stein helped in the preparation of the photos, E. Ferreira provided suggestions to the text, and P.L.V. Peloso contributed with photographs.A. Argôlo | 2019-03-20T13:06:18.520Z | 2010-09-01T00:00:00.000 | {
"year": 2010,
"sha1": "fd16953556f3e6fd38d584daf01de7ef3dc31d91",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/bn/a/xHKCCWgCQjkHCFknvDv3VkJ/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "fd16953556f3e6fd38d584daf01de7ef3dc31d91",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geography"
]
} |
96428641 | pes2o/s2orc | v3-fos-license | The role of Wilm ’ s Tumor 1 immunohistochemical marker in surface epithelial ovarian tumors
Background: Wilms’ tumor 1 is a tumor suppressor gene. The gene is located in chromosome 11p13. And its expression was found in many solid tumors (including ovarian tumor) and also expressed in hematologic malignancies, Recent studies found that WT1 to be involved in angiogenesis. Objectives: To evaluate the expression of WT1 in surface epithelial ovarian tumorand study the possibility of using WT1 as replacement of both;ovarian tumor marker CA125 and a endothelial cell phenotypic marker CD34. Patients and methods: This is a study of a retrospective ( cross sectional ) of sixty cases with total abdominal hysterectomy and bilateral salpingo oopherectomy collected from department of Histopathology – Teaching Laboratories / Medical City Teaching Hospital , as well as Al alwya hospital and Al Habibia hospital in Baghdad during the period of study from December 2007 to December 2012. Thirty cases diagnosed as surface epithelial ovarian tumors and thirty cases of histologically normal ovarian tissue which were included as a control group. Formalin fixed, paraffin embedded ovarian tissue blocks from 60 cases were used . Three section of 4 micron for each taken and stained with WT1, CD34, and CA125 immunohistochemical marker on positively charged slides. Results: there were a significant correlation between expression of WT1 and histological types of surface epithelial ovarian tumor with a higher expression in serous tumors among other cancer types (P-value < 0.001).There was a significant positive correlation between the expression of WT1 and CA125 scores ( p-value < 0.001).There was a significant correlation between WT1microvessel density (MVD) expression and CD34microvessel density (MVD) expression in ovarian tumors (P-value = 0.05).On the other hand, there were no significant correlation of WT1 with the age of cases (P-value = 0.9) and with the grade of ovarian tumors ( P-value = 0.23) . Conclusions: The present study demonstrates high expression of WT1 in both tumor and endothelial cells in surface epithelial ovarian tumors, and it had dual usages in evaluation of both ovarian tumor cells and the vascular density. That was proved by demonstrating a significant correlation between WT1 and CA125 expression, and between WT1-MVD and CD34MVD . There was no statistically significant association between WT1 expression and different tumor grades. There was significance differences in WT1expression among different histological subtypes of primary ovarian carcinomas, with serous carcinoma as the most frequent type.
Introduction:
Ovarian cancer is the second leading cause of cancer-related death in women worldwide.Since most patients are diagnosed in advanced disease stages.1In Iraq, ovarian tumors rank the 6th commonest cancer and constituted 3.81 % according to Iraqi Cancer Board Registry in 2009.2These tumors comprise several distinct histological types.The surface epithelial tumors account for 60% of all ovarian neoplasm.3 Its etiology is poorly understood .It's more common in nulliparous women, in those living in industrialized countries and epidemiological studies have shown a significant reduction in ovarian cancer in women who have used oral contraceptive pills.Most cases of epithelial ovarian cancer are sporadic, occurring with no family history of the disease.4Wilms'tumor 1 (WT1) is a transcription factor first found in Wilms' tumor of the kidney, where it acts as a tumor suppressor gene.The gene is located in chromosome 11p13.And WT1 was associated with cell proliferation in many solid tumors (malignant melanoma,5 breast cancer,6 glialtumors7 , desmoplastic small round cell tumors.8Andepithelial ovarian tumors 9) .And also found in hematologic malignancies (myeloid leukemia cells)10 recent studies have reported correlations between WT1 and neovascularization in histogenetics , normal genitourinary development, cardiac malformation and tumor angiogenesis.11 WT-1 is a useful
The role of Wilm's Tumor1 immunohistochemical marker in surface epithelial
Muna I. AL Hafedh ovarian tumor marker for detection of ovarian tumor cells, 11and is also consistent with that of a recent study in 2008, which reported that "Endothelial WT-1 expression was detected in 95% of 113 tumors of different origin", 12 including expression of WT1 in endothelial cells in human breast tumors.Li HJ. , et al in 2009 assessed that WT-1 immunohistochemistry have dual usages in evaluation of the myoepithelial cells and micro-vessel density in breast cancer.13As the human ovary is rich in blood vessels and WT-1 has been used as a biomarker for ovarian tumors11 Together, these findings suggest that a single WT1 immunohistochemistry have dual usages in evaluation of both ovarian tumor cells and the vascular density.14 Single WT -1 immunohistochemistry can be used to assess both the tumor cells and micro-vascular density in ovarian tumors as Yi-Hsuan H. Et al in 2010 suggest that WT-1 is expressed in both tumor and endothelial cells in ovarian tumor.14It is coexpressed with a well-defined ovarian tumor marker CA125 15 ,and also with a endothelial cell phenotypic marker CD34, in the same cells.Especially in serous tumor whereas in other surface epithelial tumors was with no benefit.16
Patients and Methods :
This is a retrospective ( cross sectional ) study of ( 60 ) cases with total abdominal hysterectomy and bilateral salpingooopherectomy collected from department of Histopathology -Teaching Laboratories / Medical City Teaching Hospital , as well as Al alwya hospital and Al Habibia hospital in Baghdad , the period of study from December 2007 to December 2012.Thirty cases diagnosed as surface epithelial ovarian tumors and thirty cases of histologically normal ovarian tissue which were included as a control group.Formalin-fixed paraffin-embedded ovarian tissue blocks from 60 cases were used .Three section of 4micron for each taken and stained with WT1, CD34, and CA125 immunohistochemical marker on positively charged slides.All the clinicopathological parameters such as (age, gender, site of tumor and grade ) were obtained from histopathological reports available in labrotories of the Hospital mentioned above.Tumor grading was according to FIGO grading criteria.And For the thirty cases of surface epithelial ovarian tumor classified into different histological type according to WHO classification.The immunohistochemical procedure was carried out, at the Oncology Teaching Hospital and Forensic medical institute in accordance with the manufacturer s instructions with modifications to optimize the results.The primary antibody CD34 class 2 (DAKO Denmark) monoclonal mouse (QBEnd 10); diluted against 1:25mol/L Tris/Hcl was incubated with tissue sections for 30-60 min.And thePrimary antibody CA125M11(DAKO Denmark) monoclonalmouse ; diluted against 1:20mol/L Tris/Hcl was incubated with tissue sections for 30-60 min.The (BioGenex, USA) detection kit, QD430-Xake wasused for antigen visualization.The primary antibody WT1 6F-H2 (DAKO Denmark), monoclonal mouse , ready to use.The(Mouse specific HRP\DAB abcam ) detection kit Ab 64259.Paraffin sections of Fallopian tube, were run with eachbatch to serve as a positive control for WT1, CA125 and normal ovarian tissue as positive control for CD34.
Results:
WT1 was observed as brown precipitation in the nuclei of surface epithelial tumor cells.WT1 was scored, and graded on a 0 to 3 scale :0 (negative), 1 (weak), 2 (medium) and 3 (strong) ; While the extent of staining was scored as : 0 (0 %) , 1 ( 1-25 % ) , 2 (26-50% ) , 3 (51-75% ) and 4 (76-100% ) The sum of the intensity and extent score was used as the final staining score (0-7) for WT1.17CD34was observed red-brown colored precipitate at the specific cytoplasmic site and cell membrane of CD34 antigen.Estimation of microvessel density(Weidner's method): slides were first scanned at 100× magnification, and five areas of maximum microvessels density (MVD) called hot spots were identified at 200× magnification on each slide.In each of these hot spots, microvessels (capillaries and small venules) were counted at 400×.In each case, means of the hot spots were counted.18CA125 was observed a brown colored in the cell membrane of surface epithelial tumor cells.Intensity of CA125 was graded on a 0 to 3 scale ( 0 for no staining , + for weak ; ++ for moderate; and +++ for strong ).The percentage of cells was scored as follows : 1 for (0-25%) ; 2 for ( 26-50%) ; 3 for (51-75%) ; and 4 for (76-100%).The values of the staining intensity and the percent of immunoreactive cells were multiplied to obtain a composite score ranging from 0 to 12. 19 Statistical Analysis: Spss version 20 was used for data entry and analysis.Tests used : One -Way -Anova test, tukey B post -hoc test , Kruskall -Wallis test where P-value ≤ 0.05 was significant.Spearman rho correlation test where P -value ≤ 0.01 was significant.
Results:
A total of sixty female patients ; thirty cases were diagnosed as surface epithelial tumors of the ovary , an additional 30 patients with normal ovary were taken as a control study group.The mean age of the patients with malignant ovarian tumor was (49 ±13.5 ) years , and for the control group was (49.8 ± SD 6.7 ) years.The WT1 scores expression was highly significant in serous tumors than other cancer types.As shown in table (1) , figure (1) .
Figure (3): scatter plot and line of between WT1-MVD and CD34-MVD
Assessing the differences in distribution of immunomarker expressions between malignant cases and control group : •There was no significant difference in CD34-MVD expression between malignant and control group cases.( p-value =0.5).
•There was no significant difference in WT1-MVD expression between malignant and control group cases.( p-value = 0.8) •There was significant difference in CA125 expression between malignant and control group cases.( p-value < 0.001) •There was significant difference in WT1 expression between malignant and control group.(p-value < 0.001).asseen in table (3), figure (4).
Discussion:
Ovarian cancer is second most commonly diagnosed gynecological malignancy after endometrial cancer.20 In Iraq, it is the sixth most common cancer among females, it constituted 3.81% according to Iraqi Cancer Board Registry in 2009. 2 Due to their nonspecific initial symptoms, 70% of patients have widespread metastatic disease at the time of diagnosis.21Various recent immunohistochemical studies of ovarian cancer have suggested that expression of particular markers may help in predicting outcome , and therefore guide therapeutic choices.22 the present study showed no significant correlation between patient age and degree of WT1 expression ) P -value .(0.9 = There is no previous or similar studies found for comparison , but a study on thyroid gland
Figure(4): Distribution of immunomarkers among studied cases
done by Katsuhiro et al 2007 .24 agrees with our study which revealed no significant relationship between the age and the expression of WT1.The present study has shown that WT1 expression was significantly higher in serous than other ovarian tumor type (P-value 0.001) .And this agree with previous studies done by Goldstein et al. 2001,20 Al-Hussaini M. et al. 2004,25 Shimizu M. 2000, 9Acs G. et al. 2004, 26Euscher E. et al. 2005, 27 Goldstein N. et al 2002, 28Hylander B. et al 2006.29While disagreed with other studiesdone by Lee B. et al. 2002,30 Hecht JL. et al. 2002, 31Goldestein N. et al. 2002, 28 who found that negative reaction of WT1 in serous ovarian carcinoma , the differences in the results may be due to differences in sample size , IHC protocols and the use of different primary antibody clone.The present study showed that no correlation of WT1 expression to the histological grade.(P-value 0.05).This result is in agreement with the results of the studies done by Shimizu et al , ovary, endometrium andperitoneum. Int JGynecol Pathol. 2003;22: 374-377. 33-MarianneW. , AnniG.: Immunohistochemical Expression of Wilms Tumor Gene Protein in different histologic subtypes of ovarian carcinomas . 2005;vol.129, Issue 1:85-88. 34-Tornos C , Soslow R, Chen S , et al :Expression of WT1, CA 125, and GCDFP-15 as useful markers in the differential diagnosis of primary ovarian carcinomas versus metastatic breast cancer to theovary.Am J .2005;29(11):1482-9. 35-Satoshi D. , Satoshi O., Yumiko O., et al : WT1 expressioncorrelates with angiogenesis in endometrial cancer tissue.Anticancer research . 2010;30: 3187-3192 . 36-Mustpha A. , Raad J. , Qais A. , et al : Immunohistocemichal assessment of the role of WT1 protein expression in CML and itscorrelation with CD 31 as an angiogenic marker.Iraqi J Med Sci2013;vol 11(3) : 297-302. 37-E S Bamberger, C W Perrett : Angiogenesis in epithelial ovarian cancer .Mol Pathol. Dec 2002;55(6): 348-359. 38-Makoto E. , Hiroshi I., Kumi M., et al: Differences in the angiogenesis of benign and malignant ovarian tumors, demonstrated by analyses of color doppler ultrasound, immunohistochemistry and microvessel density.American Cancer Society.1997;vol. 80 ,issue 5:899-907.
Figure(1): correlation of WT1 expression by histological type No significant statistical correlation found in distribution of WT1expression scores among the tumor grades .( p-value = 0.23) as seen in table (2) Table ( 2) : Distribution of WT1 scores by tumor grades.Tumor grade WT1 expression p-value* Count Median Range borderline 14 5 7 0.23 well differentiated 4 4 7 moderately differentiated 6 3 5 poorly differentiated 6 6 4 Total 30 * Kruskall Wallis test A positive correlation was found between expression scores of WT1 and CA125 markers (p-value < 0.001).as seen in figure (2)
The role of Wilm's Tumor1 immunohistochemical marker in surface epithelial Muna I. AL Hafedh ovarian tumor
. , 9Lee et al., 30 Hashi et al., 25Acs et al. 26In contrast to Marianne W. et al 2005 study 33 result indicate that the expression related to the histological grade of differentiation, and these difference may be due to sample size, IHC protocol.therewas marked significant correlation between WT1 and CA125 expression score (P-value 0.001) .This result is supported by studies done by Tornos C. et al. | 2019-01-12T21:13:02.439Z | 2015-07-01T00:00:00.000 | {
"year": 2015,
"sha1": "356a81159394c2af5c5791f2df2702db3eb92197",
"oa_license": null,
"oa_url": "http://iqjmc.uobaghdad.edu.iq/index.php/19JFacMedBaghdad36/article/download/345/238",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "356a81159394c2af5c5791f2df2702db3eb92197",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
237935442 | pes2o/s2orc | v3-fos-license | A 3D In Vitro Model for Burn Wounds: Monitoring of Regeneration on the Epidermal Level
Burns affect millions every year and a model to mimic the pathophysiology of such injuries in detail is required to better understand regeneration. The current gold standard for studying burn wounds are animal models, which are under criticism due to ethical considerations and a limited predictiveness. Here, we present a three-dimensional burn model, based on an open-source model, to monitor wound healing on the epidermal level. Skin equivalents were burned, using a preheated metal cylinder. The healing process was monitored regarding histomorphology, metabolic changes, inflammatory response and reepithelialization for 14 days. During this time, the wound size decreased from 25% to 5% of the model area and the inflammatory response (IL-1β, IL-6 and IL-8) showed a comparable course to wounding and healing in vivo. Additionally, the topical application of 5% dexpanthenol enhanced tissue morphology and the number of proliferative keratinocytes in the newly formed epidermis, but did not influence the overall reepithelialization rate. In summary, the model showed a comparable healing process to in vivo, and thus, offers the opportunity to better understand the physiology of thermal burn wound healing on the keratinocyte level.
Introduction
Annually, about 11 million people suffer from severe burn injuries [1]. Even though the mortality rate of burn injuries decreased continuously in the past century, and treatment options were improved [2][3][4], there is a disproportionate distribution of fatal and non-fatal burn incidents around the world, with approximately 90% of burn deaths occurring in middle-and low-income countries. Despite improved treatment options and the decreasing mortality rate, fire-related burns are the third most frequent cause of injury deaths, globally [5]. The most common burn wounds are first degree burns, but their prevalence can only be roughly estimated, as most cases are treated at home and are not documented in clinics [6]. Accordingly, minor burns are not considered to be critical, thus, the main focus of research concentrates on deep-and full-thickness burns.
Current research approaches usually utilize animal models. While porcine models show a high pathophysiological resemblance to humans, there are still some important Biomedicines 2021, 9,1153 2 of 18 differences, like a less vascular dermal compartment [7]. Due to their comparative inexpensiveness and quick reproducibility, rodents are the primary animal model in burn research, even though there are significant differences in skin histology and pathology between humans and rodents. For example, in rodents the primary wound-healing mechanism is wound contraction, while in humans the primary process is reepithelialization and granulation [3,8,9]. In addition to these differences, experiments on burn wounds include the infliction of large burns in a high number of animals, which is among the most invasive treatments in in vivo animal experimentation. This stands in contrast with the 3R principles by Russel and Burch, aiming for the "Reduction, Refinement and Replacement" of animal testing [10]. Since the initial publication in 1959 these principles have been widely accepted as guidelines for the ethical treatment of animal models and were incorporated into various legislations, e.g., the seventh amendment of the European cosmetic directive [11][12][13][14]. Together with fast-growing concerns over laboratory animal tests among public opinion, this strengthens the need for an alternative wound model even more.
As an example, human ex vivo skin models, which are mostly obtained from skin reduction operations [15,16] present a different approach in burn wound research, eliminating the extensive drawbacks of animal models. They have the advantage of being a full-thickness model, which thoroughly represents the highly complex structure of human skin. However, ex vivo skin models lack reproducibility, mostly due to limitations of the donated skin surface [17], and the timely dependence on operations. Another animal free and inexpensive alternative are 2D fibroblast assays to investigate burn wound healing, albeit completely lacking the physiological skin context [18,19].
Considering the limitations of in vivo and ex vivo models, the need for a reliable and standardized in vitro model for burn wounds is evident. A disadvantage of current in vitro models is that, unlike animals, they still cannot depict the whole entity of biological systems in one model (e.g., epidermis, dermis, lymphatic and immune system). However, this also enables the separate observation of different tissue compartments, like the epidermis and dermis, without interference by other cell types. Additionally, in vitro models pose great advantages, like their high standardization, a high throughput and their easy implementation into standard cell culture laboratories. Based on these reasons, epidermal skin models have been used for hazard identification in several official test procedures. These include skin irritation [20], skin sensitization [21] and skin corrosion [22]. Additionally, studies of cutaneous wound healing have been performed using in vitro skin models [23].
Such a model could be used to investigate burn wound healing, to better understand inflammation and metabolism during regeneration, support the development of treatments and evaluate active agents in concern of their applicability and effect.
We, therefore, developed a highly standardized in vitro 3D epidermal burn model, originating from primary epidermal keratinocytes. It reflects the physiological setup of a burn of human skin and can target research on the most frequent burn accidents [6]. This model allows the monitoring and analysis of wound healing for up to 14 days and the testing of pharmacological agents, such as dexpanthenol. To enable the systematic evaluation of histological criteria in epidermal models, we developed a scoring system, which enables an easy evaluation of the healing process.
Isolation and Culture of Primary Skin Cells
Human epidermal keratinocytes (hEK) were isolated from foreskin biopsies obtained from juvenile donors under informed consent according to ethical approval granted by the local ethical committee (ethical committee of the medical faculty Wuerzburg; vote 182/10 and 280/18sc). For all samples, the written informed consent of their legal guardians was obtained. All experiments were performed in accordance with these ethical guidelines and regulations. The isolation of hEKs was run according to a previously described protocol [24]. Cells were cultured in EpiLife ® medium (Gibco, Carlsbad, CA, USA) supplemented with Human Keratinocyte Growth Supplements (0.2% bovine pituitary extract/BPE, 5 µg/mL bovine insulin, 0.18 µg/mL hydrocortisone, 5 µg/mL bovine transferrin, 0.2 ng/mL human recombinant epidermal growth factor) and 50 U/mL penicillin and 50 µg/mL streptomycin (all from Life Technologies, Darmstadt, Germany) in a humidified incubator at 37 • C and 5% CO 2 up to passage two. Media was changed every 2-3 days.
Generation of Epidermis Models
Epidermal models (OS-REp) were generated following a previously published protocol [24]. Briefly, hEKs were incubated with accutase ® (Sigma-Aldrich, Darmstadt, Germany) for 10 min at 37 • C to detach. After centrifugation, cells were resuspended in culture medium supplemented with 1.44 mM CaCl 2 . 5 × 10 5 cells were seeded in inserts (Greiner Bio-One, Frickenhausen, Germany) in 500 µL medium. After 2 h, each insert was placed in 1 mL of medium. After additional 24 h, medium inside the inserts was removed to generate an air-liquid interface culture. The medium was exchanged to 4.2 mL culture medium supplemented with 1.44 mM CaCl 2 and additional 73 µg/mL L-ascorbic acid 2-phosphate and 10 ng/mL keratinocyte growth factor (both Sigma-Aldrich, Germany). Media exchange was performed three times per week.
Burning of Epidermis Models
Skin models were cultured for 12 days before burning. On day 12, a thermal burn injury was created at the center of the models, accounting for about 25% of the model area. For this, a metal rod with a diameter of 6 mm was preheated to 83 • C and then placed on the models for seven seconds without use of further pressure. Control models were treated similarly, with a metal rod at room temperature. The models were kept in culture for up to 14 days afterward with media changes three times a week. Treatment was performed by topical application of Bepanthen ® Wound and Healing Ointment containing 5% dexpanthenol (Bayer, Leverkusen, Germany) 3 h, 2 days and 6 days after burning. Duration of the treatment was 24 h, respectively. To avoid oxidative stress of the models and to facilitate impedance spectroscopy, the remaining ointment was removed with a cotton swab after 24 h, as excess crème on models is not soaked in, or removed by, wound dressings or clothes. Unwounded models were used as control (no additional vehicle control for 5% dexpanthenol ointment).
CEDEX Glucose Metabolism
Cell metabolism was analyzed photometrically using the Cedex Bio Analyzer (Roche Diagnostics GmbH, Mannheim, Germany). Models were exposed to 1 mL fresh culture medium for 24 h before glucose concentration, lactate concentration, and lactate dehydrogenase level were measured in the media with the applicable kits (Glucose Bio; LDH Bio; Lactate Bio). The media was collected at days 0, 1, 3, 7, 10, and 14, with fresh media as a control. Glucose consumption was calculated, as previously described [23].
Barrier Function Impedance
Impedance spectroscopy was analyzed as previously described [25]. Skin equivalents were positioned between two titanium nitride electrodes of a custom-made measuring system [26] and the system was connected to the impedance spectrometer LCR HiTESTER 3522-50 (HIOKI E.E. Corporation, Ueda, Nagano, Japan). To achieve conduction between the equivalents and the electrodes, spacing was filled with EpiLife ® medium supplemented with 50 U/mL penicillin and 50 µg/mL streptomycin and 1.44 mM CaCl 2 . A total of 40 logarithmic measuring points were taken between 1 Hz and 100 kHz to get insights into the full spectrum of the barrier function. Impedance data were then analyzed using the TEER 1000Hz in Ωcm 2 .
Viability Measurement (MTT Assay)
Cell viability was measured via MTT (3-[4,5-dimethylthiazole-2-yl]-2,5-diphenyltetrazolium bromide) assay (Serva, Heidelberg, Germany). Models were incubated for 3 h with MTT solution (1 mg/mL MTT) at 37 • C, before images were taken for further analysis. As the surrounding tissue accounts for 75% of the model and would superimpose the signal, a 6 mm biopsy punch was used to remove the wound area and measure the viability of the two compartments separately. The dye salt was dissolved in 2-Propanol (Sigma-Aldrich, Germany) and the absorbance of the samples (200 µL each) was measured spectrophotometrically at 570 nm using the Infinite1 200 PRO (TECAN Trading AG, Zurich, Switzerland).
Measuring of the Burned Surface Area
Images obtained from the MTT Assay (described above) were used to determine the burned surface area of models (BSA). Viable areas appeared dark blue due to the formed dye salt in viable cells, while dead tissue appeared white. The area of both colors was measured using ImageJ to calculate the percentage of damaged tissue. This was performed instead of measuring the length of regenerated tissue in histological staining, as wounds do not close uniformly from all wound edges and histology represents only a cross-section of the epidermal model, while measurement of the wound area in MTT assay accounts for the whole wound area.
Histological Staining and Immunofluorescence
OS-REp models were fixed at multiple time points during culture in Roti Histofix ® (Carl Roth GmbH, Karlsruhe, Germany) (4% Paraformaldehyde PFA) and embedded in paraffin before cutting 3 µm cross sections. To show the general morphological architecture on brightfield images Hematoxylin & Eosin (H&E; Morphisto, Offenbach am Main, Germany) staining was accomplished. For immunofluorescence staining, tissue sections were hydrated and treated with the following primary antibody solutions: keratin 10 (K 10), 1:100 (Abcam, Cambridge, UK); keratin 14 (K 14), 1:1000 (Sigma-Aldrich, Germany), high mobility group protein B1 (HMGB1), 1:100 (Cell Signaling Technology, Danvers, MA, USA), antigen Ki67 (Ki67), 1:100 (Abcam, Cambridge, UK). Primary antibodies were applied and incubated for 16 h at 4 • C, followed by the incubation of the secondary antibody solutions coupled with Alexa Fluor ® 647, Alexa Fluor ® 555 or Alexa Fluor ® 488 (donkey anti rabbit or donkey anti mouse; all from Life Technologies, Darmstadt, Germany) for 60 min at room temperature. Cell nuclei were stained with 4 ,6-diamidino-2-phenylindole (DAPI) in Fluoromount-G DAPI mounting medium (Life Technologies, Darmstadt, Germany) after washing. Brightfield and fluorescence images were taken at the KEYENCE BZ 9000 microscope (Keyence, Neu-Isenburg, Germany) with 10× or 20× magnification. Merges of pictures were obtained using the Image Composite Editor (Microsoft, Albuquerque, NM, USA). The relative proliferative capacity for the OS-REp untreated and treated with dexpanthenol was analyzed using the ImageJ software, version 1.53e (developed by Wayne Rasband, National Institutes of Health, Bethesda, MD, USA) and Java 1.6.0_24 (64bits), comparing the number of Ki67 positive cells per total number of DAPI positive stained cells. Flatfield correction of brightfield pictures was achieved using the BioVoxxel Toolbox plugin for ImageJ (BioVoxxel, Ludwigshafen, Germany).
Quantitative Analysis of Histological Sections Using a Scoring System
To determine the quality of epidermis models, a training set of more than 2000 HEstained light microscopy images of OS-REp were analyzed and examined for possible defects. According to the epidermis' physiological structure, 40 histological criteria, which can be found in Supplementary Table S1, were established to assess the quality of the epidermal layers. The criteria were assigned with ascending point values reflecting the physiological appearance of each layer, meaning a high point score corresponds to a high similarity to in vivo skin and vice versa. Additionally, weighting factors were assigned to the individual layers of the epidermis to reflect the relevance of each stratum for the whole model. Given that the basal layer is the most significant for tissue differentiation, the value of this layer is weighted with four. The stratum spinosum and stratum granulosum were each weighted with a factor of three. The stratum corneum was assigned a weighing factor of two. To calculate the total score of a model (see also Supplementary Figure S2), each stratum is examined and given the appropriate score value (according to the mentioned 40 criteria). In the next step, this score value is multiplied by the assigned weighting factor. In a final step the obtained values of all strata are summed up, to form the score of the whole model. The highest score a model can achieve is 100 points. Score values between 0 and 100 can be used to classify a model as "very good or good "(+, values between 70 and 100), "satisfactory or sufficient "(o, values between 28 and 69), or "poor or deficient" (−, values between 0 and 27). A graphical representation of the score is shown in Figure 4 of Section 3. The BSGC Score, including all 40 criteria, a schematic overview and exemplary images can be found in the Supplementary Data (Supplementary Table S1 and Figures S2 and S3). Within this study, three images of three sections per experimental group were analyzed. It should be noted, that this score was developed specifically for the evaluation of in vitro skin models. However, the assessment of native human skin is not in the applicability domain of the method.
Cytometric Bead Assay
Analysis of secreted factors in the supernatant was performed using the CBA Flex Kit (BD Biosciences, San Jose, CA, USA) according to manufacturer's instructions.
Statistical Analysis
All data were tested for normality using the D'Agostino & Pearson omnibus normality test. For data passing normality testing, a two-way ANOVA employing Tukey's multiple comparisons test was performed. For data that did not pass normality testing, a Kruskal-Wallis test employing Dunn's multiple comparisons test was performed. The data shows mean values for 9 to 36 technical replicates of three independent test runs (3 donors). Statistical analysis was performed between experimental groups at each time point. Standard deviation is depicting repeatability between technical replicates and independent test runs. Statistics were computed in GraphPad PRISM 6 software (GraphPad Sofware Inc., San Diego, CA, USA).
Burn Wounds Can Be Generated with a Heated Metal Rod and Regenerate over 14 Days
In order to generate a burn wound, a metal rod with a diameter of 6 mm was preheated to 83 • C and placed on top of the skin models for seven seconds ( Figure 1A). The three experimental groups (control, burned, burned +5% dexpanthenol) were evaluated afterwards for up to 14 days post burning. While all models were viable throughout the whole culture period, the evaluation of the burn surface area (BSA), as well as quantitative analysis of MTT assays showed a significant decrease in viability of burned models compared to unburned controls (p < 0.0001). However, there was no difference between dexpanthenol treated and untreated wound models ( Figure 1). Only on day 14 models treated with 5% dexpanthenol showed a small but significantly higher viability of the tissue surrounding the wound compared to both, the control (p = 0.0007) and the burned group (p = 0.015). In the wound area, the viability decreased significantly after burning. Although an increase of viability from 2% (SD = 0.67%) (day 1) to up to 78% (SD = 12.11%) (day 14) compared to the control could be detected during the healing process, viability was still significantly lower (p < 0.0001) compared to unburned models after 14 days culture period ( Figure 1B). Evaluation of the burned surface area (Supplementary Figure S1) confirmed the measured values from the MTT assay. The wound area shrunk in burned models and models treated with 5% dexpanthenol continually from 25% of the burned surface area one day after burning to about 5% after 14 days of regeneration ( Figure 1C). an increase of viability from 2% (SD = 0.67%) (day 1) to up to 78% (SD = 12.11%) (day 14) compared to the control could be detected during the healing process, viability was still significantly lower (p < 0.0001) compared to unburned models after 14 days culture period ( Figure 1B). Evaluation of the burned surface area (Supplementary Figure S1) confirmed the measured values from the MTT assay. The wound area shrunk in burned models and models treated with 5% dexpanthenol continually from 25% of the burned surface area one day after burning to about 5% after 14 days of regeneration ( Figure 1C). Wound healing and viability were monitored for 14 days, with one experimental group being treated by topical application of 5% dexpanthenol. (B) Viability in percentage normalized to the unwounded control group. Viability was measured for burned and surrounding area separately. Viability of surrounding tissue showed significant differences for the group treated with 5% dexpanthenol on day 14 after burning. Wounded area showed significantly decreased values of viability on all days for burned models and models treated with dexpanthenol. (3 biological replicates in independent test runs with 3 technical replicates each; mean values ± SD; 2way ANOVA with Tukey's multiple comparisons test, *** p < 0.001, **** p < 0.0001 compared to the control. ° p < 0.05 compared to burned models). (C) Evaluation of burned surface area showed decreasing wound area with significantly lower values on day 14. (3 biological replicates in independent test runs with 3 technical replicates each; mean values ± SD; Kruskal-Wallis test with Dunn's multiple comparisons test, **** p < 0.0001 compared to the initial burned area on day 1).
Wound Healing Can Be Monitored Using Histological and Immunohistological Analysis
In the H&E staining, one day after burning, a clear wound edge was visible in burned models. Cells within the wounded area showed histological indicators for the degeneration like pycnotic nuclei, cellular swelling, indistinct cellular borders and separation of the stacked strata ( Figure 2). During the following two weeks, ingrown keratinocytes started to close the burn wound and form a new epidermis, pushing the remaining dead tissue in the wound area off the cell culture membrane. Wound healing and viability were monitored for 14 days, with one experimental group being treated by topical application of 5% dexpanthenol. (B) Viability in percentage normalized to the unwounded control group. Viability was measured for burned and surrounding area separately. Viability of surrounding tissue showed significant differences for the group treated with 5% dexpanthenol on day 14 after burning. Wounded area showed significantly decreased values of viability on all days for burned models and models treated with dexpanthenol. (3 biological replicates in independent test runs with 3 technical replicates each; mean values ± SD; 2way ANOVA with Tukey's multiple comparisons test, *** p < 0.001, **** p < 0.0001 compared to the control. • p < 0.05 compared to burned models). (C) Evaluation of burned surface area showed decreasing wound area with significantly lower values on day 14. (3 biological replicates in independent test runs with 3 technical replicates each; mean values ± SD; Kruskal-Wallis test with Dunn's multiple comparisons test, **** p < 0.0001 compared to the initial burned area on day 1).
Wound Healing Can Be Monitored Using Histological and Immunohistological Analysis
In the H&E staining, one day after burning, a clear wound edge was visible in burned models. Cells within the wounded area showed histological indicators of degeneration, such as pyknotic nuclei, cellular swelling, indistinct cellular borders and separation of the stacked strata (Figure 2). During the following two weeks, ingrown keratinocytes started to close the burn wound and form a new epidermis, pushing the remaining dead tissue in the wound area off the cell culture membrane.
Immunofluorescence staining for Keratin 10 (K 10) showed a positive signal in the apical layers of the models, while Keratin 14 (K 14) was located in the basal layer (Figure 3). For the newly formed tissue, a clear separation of K 10 and K 14 could be observed in areas close to the origin of the wound edge. However, cells at the tip of the wound margin were stained only for K 14.
Two weeks after burning, only incomplete wound closure was observed. However, the wound edges visible in the H&E staining had progressed up to 2.1 mm (2.1 mm on the left and 1.8 mm on the right) into the wound area. Additional evaluation of epidermal quality at the wound edges was done using the BSGC score (Figure 4). It showed that after 7 days, the newly formed epidermis in the burned area was of significantly (p = 0.021) poorer quality (41 points, a 45% decrease; SD = 8.02), which improved until day 14, while burned models treated with dexpanthenol had a slightly better BSGC score (59 points, a 33% decrease; SD = 5.57). The higher values in the dexpanthenol-treated group were mainly due to the strata basale and spinosum.
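Since the BSGC score is reported both as absolute points and as a percentage decrease relative to the unburned control, the conversion is worth spelling out. The sketch below uses invented per-replicate scores (the actual rubric is given in Table S1):

```python
import statistics

# Invented BSGC scores per replicate; the real scoring criteria are in Table S1.
scores = {
    "control":             [72, 76, 75],
    "burned":              [38, 49, 36],   # study reports ~41 points at day 7
    "burned+dexpanthenol":  [55, 65, 57],
}

control_mean = statistics.mean(scores["control"])
for group, values in scores.items():
    mean, sd = statistics.mean(values), statistics.stdev(values)
    decrease = 100 * (1 - mean / control_mean)
    print(f"{group:>20}: {mean:.0f} points (SD = {sd:.2f}), "
          f"{decrease:.0f}% below control")
```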
We also analyzed the presence of proliferative cells using an antibody against Ki67 (Figure 5; Supplementary Figure S4). Ki67 stains the nucleus of proliferating cells. After burning, the morphology of the nuclear staining was affected by the thermal stress, and Ki67-positive nuclei appeared more elongated. On day 7 and day 14, new Ki67-positive tissue had emerged from the wound edge, growing under the damaged tissue and extending further into the burned area with time. After 14 days of culture, 18% (SD = 2.34%) of the cells in the newly formed tissue of burned OS-REp were positive, whereas 24% (SD = 0.77%) of the cells in the burned models treated with dexpanthenol showed positive staining for Ki67.

Barrier Integrity, LDH Release and Metabolic Changes Can Be Measured in Wound Models

As burn wounds are associated with a loss of the skin's barrier function, impedance spectroscopy was used to analyze the barrier integrity (Figure 6A). Directly after burning, the TEER value at 1000 Hz did not differ between the burned (3.0 kΩ·cm²; SD = 0.91 kΩ·cm²) and unwounded (3.6 kΩ·cm²; SD = 1.31 kΩ·cm²) groups. Then, 24 h after burning, the replicates treated with dexpanthenol showed a significantly lower epidermal barrier compared to the control (p = 0.01) and the burned replicates (p = 0.003). This effect was sustained for six more days. The burned models showed constant TEER values (3.0-3.4 kΩ·cm²; SD between 0.8 and 1.9 kΩ·cm²) throughout the complete experiment. The impedance of the control increased continuously (from 3.6 kΩ·cm² up to 7.3 kΩ·cm²; SD between 1.1 and 4.0 kΩ·cm²), whereas the impedance of dexpanthenol-treated models decreased during treatment (2.0 kΩ·cm² at day 3; SD = 0.75 kΩ·cm²) and started to increase again later (2.9 kΩ·cm² at day 10; SD = 1.14 kΩ·cm²), until values comparable to the burned group were reached (3.4 kΩ·cm²; SD = 1.60 kΩ·cm²) at day 14.

Furthermore, we analyzed whether the burning of models caused a disruption of cells, and thus the release of intracellular LDH into the supernatant (Figure 6C). Increased concentrations could be detected in the first 24 h after burning: a 20-fold increase of the LDH level was measurable directly after burning. One day after burning, the wounded OS-REp models still had a three times higher LDH value than before burning, which diminished after three days for the remaining time of the experiment.
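For readers unfamiliar with the barrier readout reported above: TEER is an area-normalized resistance, i.e., the resistance measured at 1000 Hz, corrected for any blank contribution, multiplied by the culture area to give kΩ·cm². The details of the custom-made system are not specified here, so the following Python sketch is generic, and the blank resistance and insert area are assumptions:

```python
# Generic TEER calculation sketch; blank subtraction and the insert area
# are assumptions, not specifics of the custom-made system used in the study.
def teer_kohm_cm2(r_sample_ohm: float, r_blank_ohm: float, area_cm2: float) -> float:
    """Area-normalized resistance at the measurement frequency (1000 Hz)."""
    return (r_sample_ohm - r_blank_ohm) * area_cm2 / 1000.0  # ohm -> kohm

area = 0.6            # cm², hypothetical cell culture insert
blank = 150.0         # ohm, hypothetical empty-insert resistance
control_day0 = 6150.0 # ohm, hypothetical raw reading
print(teer_kohm_cm2(control_day0, blank, area))  # -> 3.6 kΩ·cm²
```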
Lactate and glucose levels can give insights into aerobic conditions and cellular stress levels. Under normal, aerobic conditions glucose is converted to pyruvate, which is then converted to acetyl-CoA. Acetyl-CoA enters the tricarboxylic acid cycle and electron transport chain, where it is oxidized to carbon dioxide and water, generating adenosine triphosphate (ATP) and regenerating nicotinamide adenine dinucleotide (NAD+). If conditions change to anaerobic metabolism, or in case of cellular stress, glucose is no longer converted to acetyl-CoA, but to lactate, regenerating NAD+ [27]. Therefore, we analyzed glucose consumption and lactate production, taking into account the connection between these two metabolic mechanisms (Figure 6B). After burning, the relationship between glucose uptake and lactate production shifted: more lactate was produced than glucose was consumed. For three days, this stood in significant contrast to the control for both treated and untreated models. The applied treatment slightly enhanced this effect on the metabolism. After three days, the relationship shifted towards negative values in the control models as well, bringing it more in line with the measurements of the burned models for the rest of the experiment.
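Concretely, the metric plotted in Figure 6B is glucose consumption minus lactate production, both in mM, with glucose consumption referenced to fresh medium; a negative balance means more lactate appeared than glucose disappeared, the signature of the stress-associated shift. A minimal sketch with invented supernatant concentrations:

```python
# Invented supernatant concentrations (mM); fresh medium defines the baseline.
fresh_glucose = 11.1

samples = {
    "control d1": {"glucose": 9.0, "lactate": 1.4},
    "burned d1":  {"glucose": 9.8, "lactate": 2.9},
}

for name, s in samples.items():
    consumed = fresh_glucose - s["glucose"]   # glucose consumption vs fresh medium
    balance = consumed - s["lactate"]         # the Figure 6B metric
    state = "lactate excess (stress/anaerobic)" if balance < 0 else "aerobic"
    print(f"{name}: balance = {balance:+.1f} mM -> {state}")
```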
Figure 6. Burning influences the skin barrier, causes release of intracellular LDH, and induces stress-related metabolic differences in OS-REp. (A) Transepithelial electrical resistance (TEER at 1000 Hz) was measured with a custom-made system before (0 h) and at certain time points after burning. It revealed stagnation in the TEER values of burned models, while the control's TEER increased over the cultivation time. (3 biological replicates in independent test runs with 4-12 technical replicates each; mean values ± SD; Kruskal-Wallis test with Dunn's multiple comparisons test, * p < 0.05, ** p < 0.01, **** p < 0.0001 compared to the control; ° p < 0.05, °° p < 0.01 compared to burned models.) (B) Glucose consumption minus lactate production in mM. The glucose consumption is calculated relative to the glucose level measured in fresh medium. Burning results in a significantly lower ratio for the first three days after burning in comparison to the control, regardless of the treatment. Glucose consumption and lactate production values can be found in the supplements (Supplementary Figure S5). (3 biological replicates in independent test runs with 3 technical replicates each; mean values ± SD; Kruskal-Wallis test with Dunn's multiple comparisons test, * p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001 compared to the control.) (C) Burning leads to cellular disruption and a peak of lactate dehydrogenase (LDH) levels in the supernatant directly after injury, which decreases after 24 h. (3 biological replicates in independent test runs with 3 technical replicates each; mean values ± SD; Kruskal-Wallis test with Dunn's multiple comparisons test, ** p < 0.01, *** p < 0.001 compared to the control.)
Burn Wounds Cause Inflammatory Activity in Reconstructed Human Epidermis
In order to investigate the cytokine release, which occurs as part of an inflammatory response, supernatants of the three groups were taken at several time points. The levels of the inflammatory markers IL-8, IL-6, IL-1β, and VEGF were then examined using a cytometric bead array (CBA). As shown in Figure 7, IL-8 concentrations peaked in both burned groups compared to the unburned control group after 3 h, and the significant (burn: p = 0.0017; treated: p = 0.012) increase was sustained until 24 h post-injury. In the further course, a progressive decline of the elevated values for the burned groups could be detected, and the levels remained constant for all groups from day seven onwards. In addition, the concentration of IL-8 in the dexpanthenol-treated group was significantly (p ranging between 0.0005 and 0.004) increased over almost the whole period compared to the control group. The secretion of IL-6 and IL-1β was very low in all groups over the entire period. The secretion of VEGF was significantly increased in the burn group during the first 24 h (p = 0.038) when compared to the unwounded group. Moreover, the dexpanthenol-treated group showed a significant rise compared to the control group (p = 0.041) and the burned group (p = 0.02) after three days. Aside from this, VEGF concentrations fluctuated in all groups over the observed period.
Figure 7. Infliction of burn wounds causes an inflammatory response in reconstructed human epidermis. Concentrations of the inflammatory cytokines IL-8, IL-6, IL-1β, and VEGF in cell culture supernatants of burned and unwounded models at distinct time points over the 14-day period, determined by CBA. The detected IL-8 and VEGF levels peaked 3 h after burning. IL-6 and IL-1β were very low at any given time point. (3 biological replicates in independent test runs with 3 technical replicates each; mean values ± SD; Kruskal-Wallis test with Dunn's multiple comparisons test, * p < 0.05, ** p < 0.01, *** p < 0.001 compared to the control; ° p < 0.05 compared to burned models.)
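For comparisons of this kind (cytokine concentrations across control, burned, and treated groups at a given time point), the Kruskal-Wallis test used throughout the figures is available in SciPy; Dunn's post hoc comparisons are provided by add-on packages such as scikit-posthocs. A sketch with invented IL-8 values:

```python
from scipy import stats

# Invented IL-8 concentrations (pg/mL) at 3 h post burning, 3 replicates each.
il8 = {
    "control": [120.0, 150.0, 135.0],
    "burned": [980.0, 1210.0, 1105.0],
    "burned+dexpanthenol": [890.0, 1040.0, 975.0],
}

h_stat, p_value = stats.kruskal(*il8.values())
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")
# Pairwise Dunn's comparisons would follow, e.g. via
# scikit_posthocs.posthoc_dunn(list(il8.values())).
```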
Discussion
Extensive thermal stress leads to different physical and cellular reactions in the skin compared to a mechanical wound. First, thermal energy leads to rapid denaturation of cellular proteins and ultimately to necrosis in the affected tissue areas [28]. These effects are accompanied by a detachment of the epidermis from the underlying basal membrane and known histological attributes, such as cellular swelling, loosening of cell-cell contacts and necrotic fragmentation of the cell nuclei [29][30][31][32]. The same histological effects were also detectable within our model. The local reduction of viability indicated by MTT, and the release of intracellular LDH due to the rupture of cells, further confirmed the successful wounding of the model.
Following the initial effects of a burn wound, keratinocytes begin to proliferate and migrate into the wound area in order to close the defect. However, if not treated by debridement, which is only performed in deep burn wounds, the healing of a burn differs significantly from that of a mechanical wound. In an epidermal burn wound the necrotic tissue is still present and the neo-epidermis needs to grow under the dead tissue [33]. Due to this growth, the burned epidermal region is pushed up and later eliminated through desquamation. Consistent with this, H&E staining indicates that reepithelialization in our model also starts from the wound edges, supplanting the necrotic tissue, in line with previous findings in full-thickness models [34]. The newly formed epidermis did not only show the corresponding cellular morphology, confirmed by H&E staining, but also the presence of the basal and supra-basal keratin network (K 10 and K 14). In addition, the progression and speed of reepithelialization were similar to previously reported burn and punch wounds in in vitro skin equivalents [23,33].
The positive signal for Ki67 was restricted to the basal layer of our model, which was expected, since transiently amplifying keratinocytes in the basal layer of the epidermis are responsible for tissue renewal [35,36]. Although a weak Ki67 signal was also found in the burned areas, this was probably attributable to the presence of denatured Ki67 protein.
Apart from morphological features, we evaluated other parameters for tissue functionality, such as the epidermal barrier, measured via impedance spectroscopy. The impedance of the unwounded control models increased over time, indicating an ongoing tissue maturation. The burned models retained some barrier function, but stagnated over time, confirming the histological data showing an incomplete healing process within 14 days. The intact barrier after the wounding process stands in contrast to previously published, mechanically wounded models, where the punch biopsy and removal of the stratum corneum led to a nearly complete reduction of the electrical barrier [23]. However, in our experimental setup the burn wound does not remove parts of the epidermis, but leads to denaturation of the proteins and lipids within the stratum corneum. Therefore, the physical barrier of the epidermis remains partially intact.
On the metabolic level, a stress-associated switch to an "anaerobic" metabolism could be observed in our models during the first week after burning. The switch is characterized by a shift in the ratio between glucose consumption and lactate production to negative values [27]. This effect occurs during wound healing and was also observed in a previously published wound model [23].
Apart from physical effects, the infliction of a burn wound leads to an inflammatory response of the model, including multiple cellular signals of the keratinocytes. The cytokines IL-1β, IL-6 and IL-8 are important mediators of the inflammatory response after wounding and are credited with increasing keratinocyte proliferation and motility [37][38][39][40][41]. In our model, only IL-8 and IL-6 (but not IL-1β) were significantly increased 3 h after burning, returning to basal levels afterwards. The observed gradual decrease of IL-8 and IL-6 levels over the healing period is coherent with the finding that, e.g., IL-8 is only upregulated during the inflammatory phase of wound healing [42]. Moreover, reports from burn patients similarly show a steep incline shortly after burn injury, followed by a gradual decline for both factors [43][44][45][46][47][48]. The burned models also showed a potential induction of VEGF by IL-8. While it is described that IL-1β secretion is immediately elevated after wounding, and persists until the late proliferative stage of the healing process [49][50][51][52], this effect was not observed in our model. This absent signal is potentially caused by the lack of immune components, such as macrophages and neutrophils, that play a major role during wound healing [38,53,54]. Comparing these findings with previous studies, which were solely performed in full-thickness skin systems, we achieved comparable results. In relation to the histological analysis of the wound healing process, our models showed similar results to Breetveld et al. and Iljas et al. [33,34]. Although the inflammatory response was observed in previous publications [55,56], it was only monitored short-term after burning (48 h, 5 days), while our approach included measurements for up to 14 days. Furthermore, changes in metabolism and electrical barrier function, which we measured over the whole culture period, were not considered in any previous publication on burn wounds.
To test whether our model can be implemented in the preclinical assessment of burn-wound therapies, we assessed the effect of a commercial ointment on the wound healing process. Bepanthen® Wound and Healing Ointment, with its active ingredient dexpanthenol, is a topical formulation used for the treatment of minor wounds, such as superficial burns, and is present in many households [57]. In our study, dexpanthenol showed a positive effect on the morphology of the newly formed epidermis. The treatment also resulted in a prolonged negative relation of glucose consumption and lactate production, indicating a higher metabolism or growth of keratinocytes, especially at the wound margin. This was also supported by an increased number of Ki67-positive cells in this area. Although these effects did not significantly improve reepithelialization in our model, they might be more pronounced in vivo and explain the positive effects of dexpanthenol treatment on wound healing observed in previous studies [56,58]. A previously published study by Marquardt et al. found a positive effect on wound closure after treatment with dexpanthenol in a full-thickness skin equivalent [56]. The difference might be caused by some fundamental differences in the experimental setup. Apart from the possible influence of the fibroblasts in that model, the mode of wounding and the wound size differed considerably from our established model. While we inflicted a thermal burn wound, in the mentioned study a laser was used for wounding, removing the necrotic tissue, and thus enabling the treatment to directly penetrate into the wound area and the adjoining cells. While other studies reported that dexpanthenol treatment also has a positive effect on the barrier function (indicated by transepidermal water loss) of the skin in vivo [59], we observed a decrease of the impedance values after treatment. The application of ointments can cause a decrease in impedance values through loosening of the brick-and-mortar structure of the stratum corneum, as the ointment remains on top of the model and is not removed by, e.g., a wound dressing. Dexpanthenol has been described to increase the hydration level in the stratum corneum [58], which might influence the water loss and the electrical barrier in different manners. To overcome this limitation in future experiments, not only the impedance but also the permeability of the model for different substances should be measured via tracer molecules. For future studies, treatment should also be performed via systemic application of dexpanthenol into the culture medium to assess whether the positive effects could be more pronounced by a more direct application of the compound [56].
In vivo wound models are still the gold standard for evaluating the efficacy of wound treatments [60]. This stands in contrast to the international aspirations to comply with the 3R principles [11]. While the testing of skin irritation and sensitization via in vitro models is already implemented in the European guidelines as a full replacement of the animal experiment, there is still no system available for the assessment of wound healing in the pre-clinical phase [14]. However, the predictiveness of animal models can sometimes be questionable, and they pose significant practical challenges, such as the dangerous handling of cold and hot materials. Moreover, these experiments require substantial infrastructure in the respective animal facility and special equipment to generate a reproducible wound [61]. Additionally, the analysis of an animal study is often biased by variable epidermal and dermal thickness and is limited to a few methods, such as macroscopic inspection and histology. In contrast, in vitro models, like our burn wound model, can be easily implemented in a standard cell culture lab and are compatible with the 3R principles. Furthermore, these models allow the testing of more experimental groups, and thus a higher throughput during preclinical assessment.
Within this study, we present, for the first time, a model to analyze the effect of thermal stress on the epidermis. The model allows deeper and more specific analysis of the keratinocyte population during wound healing. While animal models are solely used for the research of deeper wounds and are often restricted in their readout, this model allows a deeper insight into the metabolic and molecular changes of keratinocytes, unbiased by interfering factors from other cell types, tissues, or environmental factors. However, there are clear limitations to this burn model, mostly resulting from the implemented skin equivalent itself. Since the OS-REp models mimic the epidermis only, the depth of injury cannot be adapted, restricting our model to a first-degree burn wound. Furthermore, only therapies targeting the epidermal keratinocytes can be assessed.
In future studies, our model will be extended by a dermal compartment consisting of a collagen matrix with embedded cells. This will allow us to generate deeper wounds, and thus simulate second- to third-degree burns. A dermal layer will improve our model, especially since deep second-degree burns show a lack of regeneration and often need surgical intervention.
It would also be interesting to see whether the mechanism of burn injury itself has an effect on the wound healing properties of the models. The introduction of an electrical or chemical burn wound instead of a thermal burn could give additional insights into this question. Furthermore, the addition of different tissue components, such as subcutaneous (adipose) tissue, lymph and blood vessels, or parts of the immune system, could further expand the potential use of our model to replace animal experiments for the investigation of burn wounds and their possible treatment. Moreover, the addition of cells from the skin microbiome might help to recapitulate the imperfect conditions within a wound in vivo.
Conclusions
We established an in vitro burn wound model for the investigation of regeneration on the epidermal level and of possible treatment with active ingredients targeting reepithelialization. During wound healing, it showed morphological and metabolic changes comparable to the in vivo situation and could support the reduction of animal experimentation in the development of burn wound therapies.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/biomedicines9091153/s1. Figure S1: Measurement of the burn surface area; Figure S2: Schematic illustration of the BSGC score; Figure S3: Exemplary images for the BSGC score; Figure S4: Ki67 staining and analysis of Ki67-positive cells in the OS-REp models; Figure S5: Glucose consumption and lactate production after wounding; Table S1: BSGC score quality criteria.

Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the local Ethics Committee of the Medical Faculty of Wuerzburg (votes 182/10 and 280/18sc).
Informed Consent Statement:
Written informed consent was obtained from the patients' legal guardians for all samples used. All experiments were performed in accordance with these ethical guidelines and regulations.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to ethical restrictions caused by the use of primary patient materials.
"year": 2021,
"sha1": "ecb361e23854e28f16e73d10f2e41268645a49d7",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2227-9059/9/9/1153/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ecb361e23854e28f16e73d10f2e41268645a49d7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The Early Endocytosis Gene PAL1 Contributes to Stress Tolerance and Hyphal Formation in Candida albicans
The endocytic and secretory pathways of the fungal pathogen Candida albicans are fundamental to various key cellular processes such as cell growth, cell wall integrity, protein secretion, hyphal formation, and pathogenesis. Our previous studies focused on several candidate genes involved in early endocytosis, including ENT2 and END3, that play crucial roles in such processes. However, much remains to be discovered about other endocytosis-related genes and their contributions toward Candida albicans secretion and virulence. In this study, we examined the functions of the early endocytosis gene PAL1 using a reverse genetics approach based on CRISPR-Cas9-mediated gene deletion. Saccharomyces cerevisiae Pal1 is a protein in the early coat complex involved in clathrin-mediated endocytosis that is later internalized with the coat. The C. albicans pal1Δ/Δ null mutant demonstrated increased resistance to the antifungal agent caspofungin and the cell wall stressor Congo Red. In contrast, the null mutant was more sensitive to the antifungal drug fluconazole and low concentrations of SDS than the wild type (WT) and the re-integrant (KI). While pal1Δ/Δ can form hyphae and a biofilm, under some hyphal-inducing conditions, it was less able to demonstrate filamentous growth when compared to the WT and KI. The pal1Δ/Δ null mutant had no defect in clathrin-mediated endocytosis, and there were no changes in virulence-related processes compared to controls. Our results suggest that PAL1 has a role in susceptibility to antifungal agents, cell wall integrity, and membrane stability related to early endocytosis.
Introduction
The opportunistic fungus Candida albicans is a major cause of severe invasive infections in hospitalized patients that may lead to substantial morbidity and mortality. A more complete understanding of the mechanisms of invasive candidiasis is needed for the development of more effective diagnostic and therapeutic strategies. One major contributor to Candida pathogenesis is secretion. This process is needed for fundamental biological and virulence-related processes, such as filamentation and biofilm formation [1,2]. It has been suggested that endocytosis is another cellular process that contributes to virulence by allowing cells to take up not only nutrients but also signaling molecules, regulate plasma membrane structure, and maintain cell wall composition and integrity [3]. These functions are crucial to virulence-related processes that include filamentation, biofilm formation, and secretion of virulence-associated proteins such as aspartyl proteases [4,5]. Several studies of endocytic and secretory mutants support the connection between yeast intracellular transport pathways and pathogenesis. The secretory mutants vps1, vps4, and pep12 in C. albicans have marked abnormalities in endocytic and secretory functions, as well as in filamentation and pathogenesis [6][7][8]. Furthermore, our previous work with the C. albicans early endocytosis genes ENT2 and END3 also demonstrated endocytic defects in null mutants coinciding with reduced protease secretion, impaired filamentation and biofilm formation, and decreased virulence traits [9,10].
Among other endocytic processes, clathrin-mediated endocytosis (CME) is well characterized in yeast and is a significant type of endocytosis in both C. albicans and the model yeast Saccharomyces cerevisiae [11]. More than 50 proteins enable cargo to be transported intracellularly in a highly organized and sequential process [3]. Endocytosis starts with the formation of the initial endocytic site, which is defined by cortical actin patches that are subsequently accompanied by assembly processes for vesicle formation [2]. Several coat proteins facilitate the formation of a clathrin-coated transport vesicle that moves from the donor to target membranes in the delivery process. The early-coat proteins determine the locations of endocytic sites and recruit cargo. They consist of clathrin, the AP-2 adaptor protein, Ede1p, Syp1p, and Pal1p, which are recruited in a well-orchestrated manner [12]. Clathrin, AP-2, and Pal1p are internalized with the clathrin coat, while Ede1p and Syp1p remain associated with the donor plasma membrane [13]. Subsequent middle- and late-coat proteins drive actin assembly and membrane invagination, resulting in mature endocytic vesicle formation and scission [2,14]. Sla2p is important for the transition to middle-coat endocytosis, and is joined by the epsins Ent1p and Ent2p, as well as the clathrin-binding proteins Yap1801p and Yap1802p, which are involved in clathrin cage assembly [15,16]. Finally, late-coat proteins such as Pan1p, Sla1p, Prk1p, End3p, and Las17p form a stable transport complex before mobile endocytosis begins with the WASP/Myo, amphiphysin, and actin modules [17][18][19][20][21].
Several genes encoding key coat proteins, including EDE1, SLA2, PAN1, SLA1, END3, and LAS17, have been studied in detail [2]. While EDE1 was found to be non-essential for C. albicans growth and filamentation, the other endocytic genes appear to be essential for filamentation and hyphal growth, especially Ca WAL1, which is the C. albicans homolog of the late-coat protein-encoding gene LAS17 [22,23]. The early-coat protein Pal1p has yet to be studied in C. albicans but is orthologous to Pal1p in S. cerevisiae. Though its molecular function has not yet been fully clarified, Pal1p localizes to the cell periphery and the endocytic sites of the bud neck during early endocytosis in S. cerevisiae. Pal1p interacts with the early-coat protein Ede1p and the middle-coat protein Sla2p [12,24].
Due to its involvement in the early stage of the clathrin-mediated endocytosis pathway, we examined C. albicans PAL1 to determine its roles in endocytosis and pathogenesis; specifically, protease secretion, filamentation, and biofilm formation. We also investigated the role of PAL1 in antifungal sensitivity to shed light on any clinical implications. Our studies examine how the early-coat protein Pal1p contributes to establishing endocytic pathways fundamental to Candida albicans invasiveness and infection.
Identification of the C. albicans Ortholog of PAL1
The DNA and protein sequences of S. cerevisiae PAL1 (YDR348C) were retrieved using the Saccharomyces Genome Database (http://www.yeastgenome.org (accessed on 1 June 2020)). The predicted DNA and protein sequences for the S. cerevisiae PAL1 ortholog were identified as the uncharacterized ORF C3_01890C in C. albicans in the Candida Genome Database (http://www.candidagenome.org (accessed on 1 June 2020)). SnapGene Version 7.0.2 (http://www.snapgene.com (accessed on 2 June 2020)) was used to align the two protein sequences by the Smith-Waterman method [25]. Protein domain identification and annotation was conducted through the Simple Modular Architecture Research Tool (SMART), a web resource updated in 2020 that is available at https://smart.embl.de (accessed on 3 April 2023) [26].
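For readers who want to reproduce such identity and similarity figures outside SnapGene, the Smith-Waterman algorithm is a standard local dynamic-programming alignment. The following pure-Python sketch uses simple match/mismatch/gap scores rather than SnapGene's exact substitution matrix and gap penalties, which we treat as unknown:

```python
def smith_waterman(a: str, b: str, match=2, mismatch=-1, gap=-2):
    """Local alignment; returns (aligned_a, aligned_b, score)."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best, best_pos = 0, (0, 0)
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            H[i][j] = max(0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            if H[i][j] > best:
                best, best_pos = H[i][j], (i, j)
    # Traceback from the highest-scoring cell until a zero cell is reached.
    out_a, out_b = [], []
    i, j = best_pos
    while i > 0 and j > 0 and H[i][j] > 0:
        if H[i][j] == H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch):
            out_a.append(a[i-1]); out_b.append(b[j-1]); i, j = i - 1, j - 1
        elif H[i][j] == H[i-1][j] + gap:
            out_a.append(a[i-1]); out_b.append('-'); i -= 1
        else:
            out_a.append('-'); out_b.append(b[j-1]); j -= 1
    return ''.join(reversed(out_a)), ''.join(reversed(out_b)), best

aln_a, aln_b, score = smith_waterman("HEAGAWGHEE", "PAWHEAE")
identity = sum(x == y for x, y in zip(aln_a, aln_b)) / len(aln_a)
print(aln_a, aln_b, f"score={score}", f"identity={identity:.1%}")
```

Percent identity is then the fraction of identical columns in the locally aligned region; similarity would additionally count conservative substitutions under the chosen scoring matrix.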
Deletion of C. albicans Pal1
The C. albicans pal1Δ/Δ null mutant (KO) and re-integrant (KI) were generated from the AHY940 parental strain following the CRISPR-Cas9 protocol established by the Hernday laboratory [27]. The primers used for these gene mutations are indicated in Table 1.
Preparation of Genomic DNA and Plasmid Isolation
Table 2 indicates the plasmids generated and utilized in this study. Genomic DNA extraction from yeast cells was conducted according to the manufacturer's instructions using the MasterPure Yeast DNA purification kit (Epicentre Biotechnologies, Madison, WI, USA). Plasmids were transformed and maintained in competent Escherichia coli DH5α cells (Invitrogen, Waltham, MA, USA), then extracted with the Qiagen Plasmid Miniprep system protocol (Qiagen, Germantown, MD, USA) using overnight cultures of transformed E. coli cells grown at 37 °C in LB medium (1% tryptone, 0.5% glucose, and 1% NaCl) with 100 µg/mL ampicillin. The human vaginal keratinocyte cell line VK-2/E6E7 (ATCC CRL-2616, Manassas, VA, USA) was grown in Keratinocyte-SFM media (Invitrogen, Waltham, MA, USA) in a cell culture incubator at 37 °C with 5% CO₂, with cells fed every two days and split once a week according to the manufacturer's instructions.
Cell Growth Assay and Assays for Response to Environmental Stress and Filamentation
To determine the effect of PAL1 on cell growth, C. albicans pal1Δ/Δ or control (wild-type and re-integrant) strains were first grown in YPD overnight at 30 °C and counted. After the strains were diluted in YNB to 1 × 10⁶ cells/mL, 100 µL of each cell dilution was loaded into 96-well plates in triplicate, with three replicates per strain. A BioTek Synergy H1 microplate reader (Agilent Technologies, Santa Clara, CA, USA) provided the 30 °C growth rates by recording the optical density at 600 nm (OD600) at 30 min intervals over 16 h. The average OD600 values over time were graphed using Excel Version 16.66.1 (Microsoft Corporation, Redmond, WA, USA) to display the growth curve, and the data were analyzed.
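Doubling times of the kind reported in the Results below (about 3.2-3.4 h) can be derived from such OD600 curves by fitting a line to log-transformed readings within the exponential phase; a minimal sketch with invented readings:

```python
import math

# Invented OD600 readings taken every 30 min during exponential phase.
times_h = [2.0, 2.5, 3.0, 3.5, 4.0, 4.5]
od600 = [0.100, 0.111, 0.124, 0.138, 0.153, 0.170]

# Least-squares slope of ln(OD) vs time gives the specific growth rate.
n = len(times_h)
x_mean = sum(times_h) / n
y = [math.log(v) for v in od600]
y_mean = sum(y) / n
slope = (sum((x - x_mean) * (yy - y_mean) for x, yy in zip(times_h, y))
         / sum((x - x_mean) ** 2 for x in times_h))
print(f"doubling time = {math.log(2) / slope:.2f} h")
```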
As previously described [6,8], agar plate assays also qualitatively described growth rates. After growing cell cultures overnight in YPD at 30 °C, the cells were counted and diluted to 1 × 10⁸ cells/mL in YPD for this and other agar plate assays. Five 5-fold serial dilutions produced five concentrations, and 5 µL of each suspension was spotted onto agar plates. Growth on YPD plates was assessed after incubating the plates at 30 °C, 37 °C, and 42 °C for 48 h. Response to cell wall stressors was assessed in triplicate for each strain after 48 h at 30 °C on YNB plates containing 100 µg/mL Calcofluor White or 140 µg/mL Congo Red, and on YPD plates containing 0.02% SDS (all Sigma-Aldrich, St. Louis, MO, USA). To assay for antifungal drug sensitivity, YPD agar plates containing fluconazole (1, 2, and 4 µg/mL), caspofungin (0.025, 0.05, and 0.1 µg/mL), and amphotericin B (0.11, 0.33, and 1 µg/mL) were prepared, and growth on each type of media at three concentrations at 30 °C was assessed after 24 h in triplicate for each strain (all Sigma-Aldrich, St. Louis, MO, USA).
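For orientation, the cell numbers deposited per 5 µL spot across the 5-fold dilution series can be worked out directly; a quick sketch (spot volume and starting density from the protocol above, assuming the starting suspension itself is the first spot):

```python
start_cells_per_ml = 1e8
spot_volume_ml = 5e-3        # 5 µL
dilution_factor = 5

density = start_cells_per_ml
for step in range(5):
    print(f"dilution {step}: {density * spot_volume_ml:,.0f} cells per spot")
    density /= dilution_factor
```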
In addition, growth was observed over 24 h in a liquid YPD medium assay with each of the three antifungal drugs: fluconazole, caspofungin, and amphotericin B. Cell cultures were grown overnight in YPD at 30 °C, then counted and diluted to final concentrations of 1 × 10⁶ cells/mL in liquid YPD media containing fluconazole, caspofungin, or amphotericin B at final concentrations of 2 µg/mL, 0.05 µg/mL, and 0.33 µg/mL, respectively. A BioTek Synergy H1 microplate reader recorded OD600 at 30 min intervals over 24 h in 96-well plates in which 100 µL of cell dilution was loaded in triplicate, with three replicates for each strain. The average OD600 values over time were graphed using Excel (Microsoft Corporation, Redmond, WA, USA) to display the growth curve.
Fluorescence Microscopy of C. albicans Strains
We visualized cell morphology of the yeast and hyphal forms with slight modifications to the methods described previously [10,29]. Yeast growth was conducted with standard overnight cell cultures in liquid YPD at 30 °C. Filamentous growth was induced by overnight incubation in liquid RPMI-1640 media at 37 °C followed by shaking the next day. At 2 and 6 h, cells from the RPMI culture were washed with PBS and mixed in a 2:1 ratio with Calcofluor White (Sigma-Aldrich, St. Louis, MO, USA). Glass slides with the Calcofluor White cell dilution were viewed on a Zeiss Axio Imager M1 fluorescent microscope (Carl Zeiss AG, Oberkochen, Germany) with a 63× objective lens through both the differential interference contrast (DIC) channel and the 4′,6-diamidino-2-phenylindole (DAPI) filter.
For the cultures incubated for 6 h in RPMI-1640, 100 cells of each strain were counted by morphotype (yeast, hyphae, pseudohyphae) to assess the statistical significance of any differences in morphology between the WT or KI and the KO. Contingency table analysis was performed on percentage frequency within strains using a generalized linear model (GLM). Mean differences between cells were obtained by the least squares method.
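The morphotype comparison is a contingency-table problem; while a GLM was used here, a simpler chi-square test on the raw counts illustrates the approach. The counts below are invented (100 cells scored per strain):

```python
from scipy.stats import chi2_contingency

# Invented morphotype counts out of 100 cells per strain
# (columns: yeast, hyphae, pseudohyphae).
counts = [
    [5, 85, 10],   # WT
    [6, 83, 11],   # KI
    [8, 22, 70],   # KO (pal1Δ/Δ)
]

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
```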
Analyses of Endocytosis with FM4-64
The actively endocytosed lipophilic membrane dye N-(3-triethylammoniumpropyl)-4-(6-(4-(diethylamino)phenyl)hexatrienyl)pyridinium dibromide (FM4-64, EMD Millipore, Temecula, CA, USA) was used to assay membrane-related endocytosis and vacuole morphology as described previously [10]. Ice-cold cell cultures grown to the exponential phase were incubated with FM4-64 at a 2 mM concentration for 20 min on ice for dye uptake. The cells were then washed in ice-cold YPD and subsequently incubated for 5, 15, and 30 min at room temperature. At each time point, a cell aliquot was maintained in ice-cold 12 mM sodium azide (Sigma-Aldrich, St. Louis, MO, USA) to halt membrane transport so the uptake process could be observed over time. Vacuolar staining was observed using a Zeiss Axio Imager M1 fluorescent microscope with standard Texas Red filters to visualize the FM4-64 dye. The presence and rate of vacuolar membrane staining were used to represent endocytosis in image analysis.
Analysis of Biofilm Formation in C. albicans Endocytosis Mutants
The XTT reduction assay was used to assess biofilm metabolic activity according to previous methods [30]. To induce biofilm growth, overnight cell cultures of C. albicans pal1Δ/Δ or control (wild-type and re-integrant) strains in YPD were diluted to a concentration of 1 × 10⁶ cells/mL in liquid RPMI-1640. In total, 100 µL of cell dilution was loaded into a CellBIND 96-well microplate (Corning Inc., Corning, NY, USA) in triplicate for each strain, then incubated at 37 °C for 48 h. Biofilms were washed with PBS, then incubated with the XTT-menadione substrate at 37 °C for 2 h before the supernatant was transferred to a new CellBIND 96-well plate. Absorbance at a wavelength of 490 nm was read using the BioTek Synergy H1 microplate reader (BioTek, Winooski, VT, USA) to represent biofilm metabolic activity. Student's t-test in Excel (Microsoft) was used to analyze the statistical significance of any differences in absorbance. Biofilms were also visualized using light microscopy on a Nikon Eclipse Ti inverted microscope (Nikon Instruments, Melville, NY, USA and Tokyo, Japan).
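The XTT comparison reduces to a two-sample t-test on the A490 readings; the same test run in SciPy rather than Excel, on invented absorbances, would be:

```python
from scipy.stats import ttest_ind

# Invented A490 absorbances for biofilms (3 technical replicates each).
wild_type = [1.02, 0.95, 1.10]
pal1_null = [0.97, 1.01, 0.92]

t_stat, p_value = ttest_ind(wild_type, pal1_null)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # no significant difference expected
```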
Immunofluorescence Microscopy of Co-Cultured VK-2 Cells with C. albicans Strains
VK-2 cells in keratinocyte serum-free media (SFM) were grown on glass coverslips at 37 °C and 5% CO₂. They were infected with C. albicans pal1Δ/Δ or control strains at an MOI of 0.01 for 6 and 24 h as described [10,31]. VK-2 cells were stained with a 1:100 dilution of an E-cadherin primary antibody (R&D, Minneapolis, MN, USA) in 0.2% gelatin PBS for 1 h, then washed and incubated for 1 h with an anti-rat Alexa Fluor™ 488 secondary antibody (Invitrogen, Waltham, MA, USA). The coverslips were mounted on slides with an antifade mounting solution containing DAPI for nuclear visualization. A Zeiss Axio Imager M1 microscope using standard EGFP and DAPI filters was used to take images and assess for the presence of E-cadherin.
Western Blotting and Detection of E-Cadherin
Western blotting was performed with a standard protocol as previously described [32] using protein lysate from VK-2 cells infected with C. albicans pal1Δ/Δ mutant or control strains in keratinocyte SFM media for 6 h and 24 h at 37 °C and 5% CO₂. The blot was hybridized overnight in 1× Tris-Buffered Saline Casein Blocking Buffer (Bio-Rad, Hercules, CA, USA) containing a 1:500 dilution of an E-cadherin primary antibody (R&D, Minneapolis, MN, USA). The loading control was a 1:1000 dilution of a tubulin antibody (Invitrogen, Waltham, MA, USA). After washing, the blot was incubated with an anti-mouse HRP secondary antibody (Invitrogen, Waltham, MA, USA) for 1 h. The Clarity Max Western ECL substrates (Bio-Rad, Hercules, CA, USA) and the ChemiDoc Imaging System (Bio-Rad, Hercules, CA, USA) were used to perform protein detection and imaging.
Live/Dead Viability Assay
VK-2 cells were grown in keratinocyte serum-free medium (SFM) in 96-well plates to 30% confluence and were co-cultured for 4 and 24 h with overnight cultures of C. albicans strains grown in YPD at a concentration of 5 × 10⁵ cells/mL. The Invitrogen Live/Dead viability assay (Waltham, MA, USA) was conducted following the manufacturer's protocol. The Live/Dead data were analyzed using Excel (Microsoft).
Identification and Comparison of S. cerevisiae PAL1 and C. albicans C3_01890C
The Candida Genome Database has annotated the C. albicans gene C3_01890C as an ortholog of S. cerevisiae PAL1 (YDR348C) [33]. C. albicans C3_01890C is an uncharacterized gene located on chromosome 3 that spans 1888 bp. The transcription unit is split into two exons that together encode a 443-amino-acid product. Protein sequence comparisons between S. cerevisiae PAL1 and C. albicans C3_01890C via SnapGene showed modest alignment, with 33.08% identity and 43.23% similarity. Given its moderate degree of homology to S. cerevisiae PAL1, C. albicans C3_01890C is referred to here as PAL1.
The two alleles of PAL1 are highly homologous. The nucleotide sequences of the coding exons of the alleles are conserved at the 99.77% level. The three alterations (of the 1332 nucleotides) comprise one silent and two similar amino acid substitutions. The introns (529 nucleotides) differ by only 7 nucleotides, including a gap of 3 nucleotides, and the promoter-proximal 2 kb has 99.10% sequence identity.
Protein sequences of five other fungal species, including three other Candida species, were aligned with C. albicans C3_01890C and S. cerevisiae PAL1 using Muscle 5 in SnapGene to demonstrate other evolutionary relationships (Figure 1). The intron-exon architecture is maintained in most but not all of the Candida PAL1 orthologs examined. Exceptions include C. glabrata, as well as D. hansenii and S. cerevisiae. Relatively low amino acid conservation across species was detected in the first exon, with the notable exception of the conserved NPF motif (boxed in blue) near the N-terminus of the proteins. A second NPF motif was found at the beginning of the second exon. These motifs are bound by the EH (Eps15 homology) domains of companion proteins to mediate critical events in endocytosis [34]. Regions of higher conservation are clustered in exon two (starting with methionine 99 in C. albicans; marked with a red V in Figure 1). Amino acids 140-145 (PPSYEE; orange box in Figure 1) are well conserved; this region is predicted to have a modest tendency to form an alpha-helical arrangement. Other conserved regions are not predicted to have a specific secondary structure. Also notable are the many well-conserved prolines, which may function to delimit domains.
Figure 1. The alignment was produced using data from the Candida Genome Database. Regions of amino acid conservation at >80% are highlighted in green. Among the highly conserved sequences are NPF motifs that bind EH domains (boxed in blue), and many proline residues. The sequences in the orange box are a conserved region that is weakly alpha-helical. The red V marks the splice junction in C. albicans that is conserved in many but not all of the orthologs examined.
In addition, further comparisons with ten other fungal species, including eight other Candida species (three of which are shown in Figure 1), also found notable regions of cross-species conservation. The additional species included C. tropicalis, C. parapsilosis, C. orthopsilosis, C. guilliermondii, and L. elongisporus. Several areas, including residues 231 to 238, residues 245 to 252, and residues 354 to 357, were absolutely conserved, suggesting functional importance. Interestingly, prolines (P) 140, 141, 249, 265, 270, 274, and 356 were also absolutely conserved across all species. The PAL1 homolog is also found broadly in other species, such as Schizosaccharomyces pombe and Aspergillus nidulans, among others [33].
However, UniProt and SMART annotation did not indicate any documented protein domains in either S. cerevisiae PAL1 or the protein product of C. albicans C3_01890C. Therefore, structural analysis of C. albicans C3_01890C provided little additional insight into its molecular function.
Construction of C. albicans pal1∆/∆ Mutant and Re-Integrant Strains
We next studied the function of C. albicans PAL1 using a reverse genetics approach. We generated the C. albicans pal1Δ/Δ null mutant (KO) and the corresponding re-integrant (KI) strain using a C. albicans-adapted CRISPR-Cas9 strategy and PCR. PCR using three sets of outer primers and one set of inner primers verified correct strain composition (Figure S1).
Contribution of C. albicans PAL1 to Growth and Viability
We then examined the growth of the C. albicans pal1Δ/Δ mutant in comparison to the control WT and KI strains. Growth on yeast extract peptone dextrose (YPD) agar plates at 30 °C, 37 °C, and 42 °C was similar between the KO and the control strains and demonstrated no temperature sensitivity (Figure 2a). When grown in liquid YPD media at 30 °C, a slight possible growth delay was observed in the KO compared to the KI from 14 h of the growth curve until its end at 17 h, although this is most likely due to settling of cells in the microtiter plate (Figure 2b). In any case, the KO growth in prior hours aligned closely with KI growth, and the final levels of KO growth did not dip below final WT growth levels. The WT, KI, and KO strain doubling times were measured as 3.28 ± 0.27 h, 3.16 ± 0.08 h, and 3.37 ± 0.29 h, respectively. The differences between the KO doubling time and those of the WT and KI are not statistically significant (p = 0.714, p = 0.293). Our results suggest that C. albicans PAL1 does not play a central role in cell growth.
C. albicans PAL1 Affects Stress Tolerance
We next assessed the contribution of PAL1 to cell wall integrity by characterizing the growth of the pal1Δ/Δ mutant in response to various cell wall stressors and antifungal agents. No growth defect was observed on medium containing Calcofluor White. However, there was a marked reduction in growth for the null mutant compared to the WT and KI control strains on media containing 0.02% sodium dodecyl sulfate (SDS), which permeabilizes cell membranes (Figure 3). In contrast, there was a reduction in growth inhibition of the KO strain when compared to the WT and KI strains on plates with Congo Red, a compound that disrupts fungal cell walls (Figure 3).
When grown on media containing the three common antifungal drugs fluconazole, caspofungin, and amphotericin B, the pal1Δ/Δ null mutant exhibited no change in sensitivity to fluconazole (Figure 4). In contrast, a mild reduction in growth impairment of the null mutant was observed on YPD agar plates with 0.1 µg/mL caspofungin compared to controls, indicating reduced susceptibility to this agent. All three strains (WT, KI, KO) grew at all three concentration levels under each drug condition. Reduced susceptibility to caspofungin was also demonstrated for the null mutant in a liquid YPD medium assay with 0.05 µg/mL caspofungin, while liquid media assays with 0.2 µg/mL fluconazole and 0.33 µg/mL amphotericin B demonstrated no changes in sensitivity for the null mutant compared to the WT and KI strains. Caspofungin disrupts cell wall integrity by inhibiting glucan synthase [35], in contrast to fluconazole, which interferes with ergosterol synthesis to decrease membrane stability [36]. The decreased susceptibility of the KO strain to the cell wall-active agents caspofungin and Congo Red suggests that the C. albicans PAL1 gene plays a complex role in the maintenance of cell wall integrity. The increased sensitivity of the KO strain to SDS indicates a role of PAL1 in maintaining the cell membrane.
Membrane-Related Endocytosis Is Intact in the C. albicans pal1∆/∆ Mutant
To investigate whether there were any defects in endocytosis and intracellular membrane trafficking in the pal1Δ/Δ mutant, we evaluated the intake of the lipophilic fluorescent dye FM4-64. After FM4-64 enters the outer membrane, it is internalized through clathrin-based endocytosis, which allows visualization of the endocytic transport process towards the vacuole via intermediate compartments.
We observed no delay in endocytosis in the pal1Δ/Δ mutant. After 30 min, the FM4-64 stain was observed in the vacuolar membrane of all three strains, which demonstrates that FM4-64 was endocytosed to the vacuole by the KO strain at a similar efficiency to the control strains (Figure 5). These results are consistent with the finding that the KO strain did not display altered susceptibility when grown in media containing fluconazole, whose effect depends on cellular internalization [33] (Figure 4). The lack of any observed endocytic defects suggests that C. albicans PAL1 is not required for endocytosis of FM4-64.

Figure 5. Membrane-related endocytosis was observed over time using the lipophilic dye FM4-64 (red), then visualized using DIC and fluorescence microscopy. At 5 and 15 min after incubation at room temperature, the dye was observed to move from the cell periphery towards the vacuole. At 30 min after incubation at room temperature, the dye had accumulated in the vacuolar membrane in all three strains, as evidenced by the presence of a fluorescent vacuolar "ring", indicating a lack of delay in active endocytosis in the KO mutant. The scale bar is 10 µm.
C. albicans PAL1 Is Required for Wild-Type Filamentation
As proper C. albicans hyphal and biofilm formation require intact cell wall and remodeling processes, we next investigated whether the defects in the cell wall stress response also impact filamentation. Using solid filamentation media, including RPMI-1640, Medium 199 (M199), fetal calf serum (FCS), and Spider media, we assessed the ability of the pal1∆/∆ mutant to form hyphae (Figure 6c). On RPMI-1640 and Medium 199 agar plates, filamentous growth was reduced in the KO compared to the WT and KI controls, which showed robust filamentous growth on and around the spotted colony. In liquid RPMI-1640 media, the null mutant showed primarily pseudo-hyphal growth, with abnormal septin ring formation and septin rings at cell junctions that were more constricted than those of the WT and KI strains (Figure 6a), whereas the control strains exhibited true hyphal growth (Figure 6b). At 6 h, the proportions of WT or KI cells that were hyphae were significantly greater than the proportion of KO cells that were hyphae (p < 0.0001 for both). The proportions of WT or KI cells that were pseudohyphae were significantly less than the proportion of KO cells that were pseudohyphae (p < 0.0001 for both).
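The underlying comparison of morphology proportions (described in the Figure 6 caption as a generalized linear model with least-square means contrasts) can be sketched in Python roughly as follows; the cell counts are hypothetical, and a binomial GLM with a Wald contrast is used here as a simplified stand-in for the authors' exact procedure.

import numpy as np
import statsmodels.api as sm

# Hypothetical 6 h cell counts: (hyphae, non-hyphae) per strain.
counts = {"WT": (180, 20), "KI": (175, 25), "KO": (40, 160)}
endog = np.array(list(counts.values()))   # success/failure matrix
exog = np.eye(3)                          # one indicator column per strain

fit = sm.GLM(endog, exog, family=sm.families.Binomial()).fit()
print(fit.params)                         # per-strain logit of P(hypha)

# Wald test of the WT-vs-KO contrast on the logit scale.
print(fit.wald_test(np.array([[1.0, 0.0, -1.0]])))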
To investigate any effects of defective filamentation on the ability to form biofilms, we evaluated biofilm metabolic activity. Relative to the WT and KI controls, biofilm metabolic activity was not significantly different in the KO strain (Figure 6d). These results together demonstrate that C. albicans PAL1 is required for proper hyphal growth and formation yet has a negligible impact on biofilm metabolic activity. Because extracellular protease secretion is a key virulence-related attribute in C. albicans, along with hyphal and biofilm formation, we analyzed whether there were defects in protease secretion in the null mutant. Given the filamentation defects in the pal1∆/∆ mutant, we investigated their effects on virulence-associated traits using a human vaginal keratinocyte (VK-2) model of infection. C. albicans virulence is partially aided by the secreted aspartic proteases Sap4 to Sap6, which digest E-cadherin, a mammalian adhesion protein present at tight junctions. These proteases support the ability of C. albicans to target tight junctions between host cells to disrupt cell-cell adhesions [37,38].
At 6 h and 24 h after VK-2 cells were infected with the C. albicans WT, KI, or KO strains, we visualized the labeled E-cadherin proteins with fluorescence and DAPI microscopy. The labeled tight junctions appeared intact 6 h after infection with all three C. albicans strains, but no E-cadherin staining was observed in VK-2 cells infected with any strain after 24 h (Figure 7a). Western blotting was also used to determine whether E-cadherin was present in VK-2 cells at 0, 6, and 24 h post infection (Figure 7b). E-cadherin levels were similar in VK-2 cells infected with the WT, KI, and KO strains at 6 h. After 24 h, E-cadherin was virtually undetectable in cells infected with any of the three strains, which aligns with the microscopy results. These findings indicate that C. albicans PAL1 deletion is not associated with any changes in the virulence-associated ability to damage cell-cell adhesions in host VK-2 cells. Given the ability of C. albicans to kill host cells in addition to disrupting tight junctions, we further investigated the role of PAL1 in C. albicans pathogenesis using a microplate Live/Dead assay (Invitrogen, Waltham, MA, USA). We measured VK-2 cell viability 4 and 24 h after infection with the WT, KI, or KO strains. At 4 h post infection, almost 100% of VK-2 cells exposed to any of the three strains were still alive (Figure 8a). At 24 h, the WT and KI control strains had killed about 50% and 55% of VK-2 cells, respectively, whereas nearly 60% of the VK-2 cells infected with the KO strain were killed (Figure 8b). The KO strain was thus as effective and timely in killing VK-2 cells in vitro as the WT and KI strains, indicating that C. albicans PAL1 does not directly contribute to host cell death in this model.
Discussion
Previous studies on early endocytosis genes in C. albicans such as ENT2 and END3 have suggested that they play a role in pathogenesis through their involvement in clathrin-mediated endocytosis (CME) in C. albicans [2,9,10]. Although several CME coat protein genes have been characterized, others are yet to be studied. In this work, we focused on the C. albicans ortholog (C3_01890C) of S. cerevisiae PAL1 (YDR348C) and its contribution to growth, stress tolerance, filamentation and biofilm formation, and virulence. The two highly homologous alleles of PAL1 differ at only three of their 1332 nucleotides, which lead to one silent change and two substitutions to similar amino acids.
An analysis of structural homology between C. albicans C3_01890C and S. cerevisiae PAL1 was conducted through sequence alignment, which found a moderate level of alignment and several regions of conservation. Further comparisons with ten other fungal species, including eight other Candida species (three of which are shown in Figure 1), also found notable regions of cross-species conservation. Because proline side chains produce a kink in the peptide backbone due to their rigidity, the absolute conservation of seven prolines (P) across all species may be notable for protein structure and may support thermal stability. At least one of the two small NPF motifs, known to bind EH domains, is found in all the PAL1 orthologs examined, indicating the integration of PAL1 into the endocytic pathways.
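Locating NPF motifs like those referred to above is a simple pattern scan; the sketch below uses hypothetical placeholder sequences rather than the real PAL1 ortholog proteins.

import re

# Hypothetical ortholog sequences (placeholders, not the actual PAL1 proteins).
orthologs = {
    "C_albicans_C3_01890C": "MSTNPFAQLLPKEDNPFRSSAPQ",
    "S_cerevisiae_YDR348C": "MSSNPFGSLFKDEQQPRNPFTT",
}

for name, seq in orthologs.items():
    hits = [m.start() for m in re.finditer("NPF", seq)]
    print(f"{name}: NPF motifs at positions {hits}")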
Next, we did not detect a growth defect or temperature sensitivity for the C. albicans pal1∆/∆ mutant (KO). Because C. albicans growth depends on adapting to external nutrient conditions [39], the absence of any growth delays indicates that the intracellular nutrient trafficking pathways under standard nutrient conditions remain comparable to those of the WT and KI controls [40], or alternatively, that compensatory mechanisms overcome any trafficking defects. The absence of growth defects in the KO suggests that C. albicans PAL1 is nonessential for nutrient transport and cytokinesis. Nonetheless, the delay in filamentous growth observed in the KO compared to the WT and KI was seen only on RPMI-1640 and Medium 199 agar plates, which are both minimal media, and not on the FCS and Spider agar plates, suggesting a possible indirect effect on filamentation. Under these minimal media conditions, filamentous growth in the KO could be indirectly impacted by an impairment in nutrient uptake linked to nutrient deprivation.
However, the absence of the C. albicans PAL1 gene broadly affects both cell membrane and cell wall integrity. The KO exhibited increased sensitivity to the cell-membrane-active agent SDS, indicating that PAL1 contributes to cell membrane integrity. Because the KO also demonstrated decreased susceptibility to the cell wall stressor Congo Red and the cell-wall-active drug caspofungin, PAL1 appears to have a complex role in the response of C. albicans to cell wall stressors. The fungal cell wall is generally composed of an inner layer of chitin and β-1,3-glucan covered by an outer layer of cell wall proteins. Caspofungin interferes with fungal cell wall integrity by inhibiting glucan synthase [41], which is responsible for generating β-linked glucans. Congo Red impedes fungal cell wall assembly by interacting with β-linked glucans and chitin [42]. Because both of their mechanisms of action rely on β-linked glucans, PAL1's function in cell wall maintenance likely relates to the relationship between β-linked glucans and the cell wall. In contrast, no significant difference in growth was observed for the antifungal drugs fluconazole and amphotericin B, which affect the cell membrane, nor for the chitin synthesis inhibitor Calcofluor White [42]. Fluconazole interferes with ergosterol metabolism to impair plasma membrane integrity [36], and amphotericin B binds ergosterol in the fungal cell membrane, leading to pore formation [43]. Therefore, the role of C. albicans PAL1 in cell membrane integrity does not appear to be related to ergosterol synthesis or function. Based on the limited number of studies on PAL1 in both C. albicans and S. cerevisiae, the exact mechanisms of these phenotypes have not been clarified. S. cerevisiae Pal1 has been identified as an early coat protein in clathrin-mediated endocytosis (CME) that localizes to endocytic sites and the bud neck. PAL1's role in the C. albicans CME pathway may be similar, given the high degree of structural and functional conservation of endocytosis in yeasts and other eukaryotic cells. In contrast to the pal1∆/∆ mutant, with its unaltered sensitivity to fluconazole, the null mutants (ent2∆/∆ and end3∆/∆) of the C. albicans early endocytosis genes ENT2 and END3, also proposed to encode CME coat proteins, demonstrated reduced susceptibility to fluconazole. The ent2∆/∆ and end3∆/∆ mutants also exhibited increased sensitivity to a broader collection of drugs and stressors, including amphotericin B and Calcofluor White, although all three null mutants are sensitive to SDS [9,10]. Taken together, these differences indicate that early endocytosis genes occupy distinct roles in the highly orchestrated process of early endocytosis, which has an impact on antifungal drug resistance through as yet undefined mechanisms.
Additionally, our results indicate that C. albicans PAL1 is non-essential for clathrin-mediated endocytosis. In contrast, the null mutants of ENT2 and END3 demonstrate marked endocytic defects. These differences suggest further distinctions between early-coat proteins such as Pal1 compared with middle- and late-coat proteins, such as Ent2 and End3, respectively [9,10]. Interestingly, although the pal1∆/∆ mutant (KO) demonstrated impaired hyphal growth and septin ring formation under some conditions, its ability to form biofilms was comparable to that of the WT and KI controls. Nevertheless, its tendency to form pseudohyphal filaments under some conditions instead of true hyphae indicates that the C. albicans PAL1 gene has an impact on hyphal formation. In C. albicans, septins are cytoskeletal filament-forming proteins that also regulate hyphal morphogenesis [44,45]. Because hyphal morphology is dependent on forming septin rings at cell junctions [46], PAL1's influence on hyphal formation in conjunction with minimal nutrient conditions suggests that the gene contributes to nutrient utilization or uptake processes required for filamentation.
Given the lack of a growth defect observed in the C. albicans pal1∆/∆ mutant, the similarity in biofilm metabolic activity cannot be explained by any differences in baseline growth rates. Although the KO exhibits a much greater proportion of pseudohyphae instead of hyphae, its ability to adhere to a solid surface to form a layer of anchoring yeast cells during the "seeding" step of biofilm formation is unaffected by the presence or absence of PAL1 [47]. Hence, even though C. albicans PAL1 plays an important role in mediating hyphal formation, these effects do not affect overall levels of biofilm formation. Consistent with their endocytic defects, mutants lacking the middle-coat protein Ent2 or the late-coat protein End3 display both impaired filamentation and biofilm formation [9,10]. Moreover, the C. albicans homolog of the S. cerevisiae late-coat protein-encoding gene LAS17, WAL1, appears essential to filamentation and hyphal growth as well [22,23].
Active penetration depends in part on the physical force that is exerted by elongating hyphae in order to produce greater epithelial cell damage in vivo [48]. Despite the pseudohyphal morphology of the C. albicans pal1∆/∆ mutant, the lack of defects in its abilities to kill host cells and to dissolve cell-cell adhesion is consistent with the lack of a defect in biofilm formation. In contrast, ENT2 and END3 differ from PAL1 with findings of decreased tissue invasiveness by their null mutants [9,10]. All the dramatic defects in endocytosis, biofilm formation, and virulence in mutants lacking ENT2 and END3 are absent in the pal1∆/∆ mutant, indicating a divergence of function within the CME pathway despite their shared trait as coat proteins functioning during the process of early endocytosis. These studies indicate that while PAL1 plays an important role in cell wall integrity and filamentation, it is dispensable for biofilm formation and tissue invasiveness in our in vitro models. Intriguingly, loss of PAL1 led to reduced susceptibility to the cell-wall-active agents caspofungin and Congo Red; studies to elucidate the mechanisms of these phenotypes are planned.
Figure 1. Protein sequence alignments between C. albicans C3_01890C, S. cerevisiae YDR348C (PAL1), and orthologs in three other Candida species and one Debaryomyces species. The alignment was produced using data from the Candida Genome Database. Regions of amino acid conservation at >80% are highlighted in green. Among the highly conserved sequences are NPF motifs that bind EH domains (boxed in blue) and many proline residues. The sequences in the orange box form a conserved region that is weakly alpha-helical. The red V marks the splice junction in C. albicans that is conserved in many but not all of the orthologs examined.
Figure 2. The C. albicans pal1∆/∆ null mutant (KO) does not demonstrate impaired growth or altered cell morphology. (a) Growth on yeast extract peptone dextrose (YPD) plates at 30 °C, 37 °C, and 42 °C. The KO does not display any significant reduction in growth compared to the wild-type (WT) and re-integrant (KI) strains. All three temperatures showed similar levels of growth, indicating an absence of temperature sensitivity. (b) Growth curve in liquid yeast nitrogen base (YNB) at 30 °C. OD600 values were recorded every 30 min for 16 h. Experiments were conducted in triplicate, with three replicates per strain. Error bars indicate the 95% confidence interval of OD600 values at each time point for each strain. The doubling times for the WT, KI, and KO strains were calculated as 3.28 ± 0.27 h, 3.16 ± 0.08 h, and 3.37 ± 0.29 h, respectively.
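For illustration only (not the authors' stated procedure), doubling times like those quoted above can be estimated by fitting the exponential portion of an OD600 growth curve; the data and log-phase window below are synthetic.

import numpy as np

# Hypothetical OD600 readings taken every 30 min for 16 h, as in Figure 2b.
t = np.arange(0, 16.5, 0.5)                    # hours
od = 0.05 * np.exp(np.log(2) * t / 3.3)        # synthetic curve, ~3.3 h doubling

# Restrict to an assumed exponential (log) phase window, e.g. 2-10 h.
mask = (t >= 2) & (t <= 10)
slope, intercept = np.polyfit(t[mask], np.log(od[mask]), 1)

doubling_time = np.log(2) / slope              # t_d = ln 2 / growth rate
print(f"doubling time ~ {doubling_time:.2f} h")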
Figure 3. The C. albicans pal1∆/∆ mutant strain (KO) exhibits altered stress tolerance. As shown by the plate assays, the null mutant demonstrated increased sensitivity to SDS, which permeabilizes cell membranes. Reduced sensitivity to Congo Red, which disrupts fungal cell walls, was observed in the KO compared to the wild-type (WT) and re-integrant (KI) strains. The KO grew comparably to the WT and KI under Calcofluor White conditions.
Figure 4. The C. albicans pal1∆/∆ mutant strain (KO) shows reduced sensitivity to the antifungal drug caspofungin compared to the wild-type (WT) and re-integrant (KI) strains. (a) As demonstrated in the plate assays, the KO strain exhibits better growth than the WT and KI strains in media containing caspofungin. However, the KO strain grew comparably to the WT and KI at all concentrations of the antifungal drugs fluconazole and amphotericin B. (b) The KO strain also exhibits reduced susceptibility compared with the WT and KI strains in a liquid YPD medium assay with caspofungin at a concentration of 0.05 µg/mL.
Figure 5. Membrane-related endocytosis is unaffected in the C. albicans pal1∆/∆ mutant (KO). Membrane-related endocytosis was observed over time using the lipophilic dye FM4-64 (red) and visualized using DIC and fluorescence microscopy. At 5 and 15 min of incubation at room temperature, the dye was observed to move from the cell periphery towards the vacuole. At 30 min, the dye had accumulated in the vacuolar membrane in all three strains, as evidenced by the presence of a fluorescent vacuolar "ring", indicating the absence of an endocytosis delay in the KO mutant. The scale bar is 10 µm.
Figure 6. The C. albicans pal1∆/∆ mutant strain (KO) is defective in filamentation but not biofilm formation. (a) Filaments visualized under differential interference contrast (DIC) and fluorescence microscopy with Calcofluor White (CW) staining after 2 and 6 h of incubation in liquid RPMI-1640 following overnight growth in YPD. While wild-type (WT) and re-integrant (KI) filaments demonstrated hyphal morphology, the null mutant formed pseudohyphae with abnormal septal junctions. The scale bar is 10 µm. (b) Proportions of yeast, hyphae, and pseudohyphae present in each of the WT, KI, and KO strains, assessed for cells incubated for 6 h in RPMI-1640. Statistical significance was determined using a generalized linear model (GLM) with a least-square means estimation of difference. Compared to the WT and KI strains, the KO strain contained a significantly greater proportion of pseudohyphae and a significantly lower proportion of hyphae (p < 0.0001 for all). Significant differences between the WT and KO strains are marked with *. Significant differences between the KI and KO strains are denoted by **. (c) Agar plate assays of C. albicans hyphal formation. WT, KI, and KO strains were spotted on filamentation-inducing media (RPMI-1640, M199, FCS, and Spider) and incubated at 37 °C for 72 h, then photographed. Under RPMI-1640 and M199 conditions, the KO exhibited a substantial reduction in filamentous growth around the spotted colony, suggesting impaired filamentation. (d) Biofilm metabolic activity measured using an XTT reduction assay. Error bars indicate standard deviation. Statistical significance was determined with Student's t test (WT/KO p < 0.0001; KI/KO p < 0.0001; WT/KI p = 0.07). Relative to the WT and KI strains, the biofilm activity in the KO was not significantly lower, indicating a lack of defect in biofilm formation in the C. albicans pal1∆/∆ null mutant.

3.7. Dissolution of Cell-Cell Adhesions Is Unaffected in the C. albicans pal1∆/∆ Mutant
Figure 7. The ability to dissolve cell-cell adhesions in human VK-2 cells remains unaffected in the presence of the C. albicans pal1∆/∆ null mutant (KO). (a) Human VK-2 cells were infected with wild-type (WT), re-integrant (KI), and KO strains, then incubated for 6 and 24 h. E-cadherin was fluorescently labeled using a GFP-tagged antibody and observed as punctate structures present at epithelial cell junctions. DAPI dye was used to label the nucleus, and the merged E-cadherin and DAPI images are displayed in the top row. E-cadherin-labeled junctions were degraded to comparable levels after 24 h by all Candida strains. The scale bar is 10 µm. (b) Western blot of E-cadherin in C. albicans-infected VK-2 cells. E-cadherin is absent in the WT-, KI-, and KO-infected VK-2 cells 24 h post infection, indicating a lack of defect in the ability to disrupt host cell-cell junctions in the C. albicans pal1∆/∆ null mutant.
3.8. Ability to Kill Host Cells Does Not Diminish in the C. albicans pal1∆/∆ Mutant
Figure 8. The C. albicans pal1∆/∆ mutant demonstrates a similar ability to kill human VK-2 cells in vitro. Proportions of live (white bars) and dead (black bars) VK-2 cells 4 h and 24 h after C. albicans infection. Error bars represent the standard deviation. While 50% of cells infected with the wild-type (WT) strain and 55% of cells infected with the re-integrant (KI) strain were dead after 24 h, nearly 60% of the cells infected with the KO strain were dead as well, suggesting that the capacity to kill host cells is unaffected in the KO strain.
Table 3 lists the C. albicans strains utilized in this study. Unless otherwise stated, strains were cultured in YPD (1% yeast extract, 2% peptone, 2% glucose) or in YNB minimal media (0.67% yeast nitrogen base, amino acids, 2% glucose) and incubated at 30 °C with shaking at 250 rpm. Solid media with 2% agar were prepared for agar plate assays. Inoculated C. albicans plates were incubated at either 30 °C or 37 °C. | 2023-11-15T16:17:06.031Z | 2023-11-01T00:00:00.000 | {
"year": 2023,
"sha1": "d3f876d702037241a47854e3389903ac9d1950be",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "bc0e5145eeea031dd34cde269113f39b60f0766d",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": []
} |
152282968 | pes2o/s2orc | v3-fos-license | Optical polarization properties of February 2010 outburst of the blazar Mrk 421
In this paper, we explore the behavior of optical polarization during the multi-wavelength outburst of the blazar Mrk 421 detected in February 2010. We use optical polarization measurements in the wavelength range 500-700 nm from SPOL observations available between January 1, 2010 and March 31, 2010 (MJD 55197-55286) including the period of multi-wavelength flaring activity detected from the source around February 16-17, 2010 (MJD 55243-55244). We also use near simultaneous optical and radio flux measurements from SPOL in V and R bands and OVRO at 15 GHz respectively. We find that the emissions in the optical and radio bands do not show any significant change in the source activity unlike at X-ray and $\gamma$--ray energies during the outburst. The optical and radio flux measurements are found to be consistent with the long term quiescent state emission of the source. Moreover, the linear polarization in the wavelength range 500-700 nm decreases to a minimum value of 1.6$\%$ during the X-ray and $\gamma$--ray outburst which is significantly lower than the long term average value of $\sim$ 4.2$\%$. The angle of polarization varies between 114$^\circ$-163$^\circ$ with a preferred average value of $\sim$ 137$^\circ$ during this period. We estimate the degree of polarization intrinsic to the jet taking into account the host galaxy contamination in R band and compare this with the theoretical synchrotron polarization estimated for a power law distribution of relativistic electrons gyrating in an emission region filled with ordered and chaotic magnetic fields. The intrinsic linear polarization estimated for different epochs during the above period is found to be consistent with the theoretical synchrotron polarization produced by the relativistic electrons with power law spectral index $\sim$ 2.2.
Introduction
BL Lacertae objects (BL Lacs) and Flat Spectrum Radio Quasars (FSRQs) collectively form the blazar class of active galactic nuclei (AGN) with highly collimated relativistic jets pointing at small angles (≤ 10°) towards the observer on Earth (Urry & Padovani 1995).
FSRQs are observed to be more luminous and powerful than BL Lacs. In addition, the optical spectra of FSRQs have prominent emission lines from the thermal plasma, whereas no such or only weak spectral lines are observed from BL Lacs. Blazars are observed to emit radiation over the entire electromagnetic spectrum from radio to TeV γ-rays. Most of the electromagnetic radiation observed from blazars is explained by the synchrotron and inverse Compton processes. The broadband spectral energy distribution (SED) of blazars exhibits two broad humps peaking at low (radio-UV/optical-soft X-ray) and high (hard X-ray-MeV-GeV-TeV γ-ray) energies respectively. The physical process for the origin of the low energy hump is well understood and is attributed to the synchrotron radiation of relativistic electrons and positrons in the jet magnetic field. However, the origin of the high energy hump is not very clear and remains one of the important open questions in high energy astrophysics research today. In the generally accepted scenario, high energy emission from blazars is attributed to the inverse Compton (IC) scattering of low energy photons, produced both inside and outside the jet, by the same population of relativistic electrons responsible for the origin of the low energy hump. If the target photons for IC scattering are the synchrotron photons produced inside the jet, it is referred to as the synchrotron self Compton (SSC) process under the leptonic scenario (Maraschi et al. 1992; Ghisellini et al. 1998). On the other hand, if the seed photon field for IC scattering is outside the jet, the process is known as the external Compton (EC) model (Dermer et al. 1992; Sikora et al. 1994). In the alternative hadronic scenario, models based on proton synchrotron or secondary emission from the cascade initiated by proton-γ interactions have been proposed to explain the high energy hump in the blazar SED (Aharonian 2002; Mücke et al. 2003; Abdo et al. 2011; Fraija & Marinelli 2015). Blazars are also observed to exhibit broadband or orphan flaring activity (dominant at X-ray and γ-ray energies) with the highest output flux level varying on timescales ranging from months to a few minutes (Cui 2004; Fraija 2015; Ackermann et al. 2016; Singh et al. 2018, 2019). The physical processes involved in the flaring activity of these sources are not completely understood. The broadband radiation measured from blazars is characterized as non-thermal, variable, Doppler boosted and highly polarized. The observations of polarized radiation at optical wavelengths from blazars support the synchrotron process in the ordered magnetic field in the blazar jets. FSRQs are observed to be less polarized than the radio selected BL Lacs (Fan et al. 2008). The observations of a high degree of polarization during the flaring and quiescent states play a very important role in exploring the emission mechanisms operating in the jets or outflows of blazars. The measurements of the degree and angle of polarization from blazars at low energies help in probing the underlying particle distribution along with the strength and configuration of the magnetic field in the emission region of the blazar jet. Correlated variability between polarization and flux in different energy bands provides information about the magnetic field geometry in the blazar jet.
Therefore, variability study of the polarization is important for exploring the jet properties and associated magnetic field structure (Visvanathan & Wills 1996). It has also been observed that the polarization is connected to the morphology and relativistic beaming and therefore plays an important role in the unification models for quasars (Fan et al. 2008; Zakamska et al. 2005). Mrk 421 is a well studied BL Lac object and has been considered an excellent candidate for understanding the physical mechanisms involved in blazar jets. In this work, we use the optical polarization measurements of Mrk 421 observed during the extreme outburst in the X-ray and γ-ray energy bands in February 2010 to investigate the physical processes involved in the flaring activity. The paper is organized as follows: in Section 2, we present an overview of the important results based on the multi-wavelength studies of the flare detected from Mrk 421 in February 2010. In Section 3, we describe the observations and data sets used in this work. Section 4 describes the results of the optical and radio observations of the source. In Section 5, we discuss the theoretical aspects of the synchrotron polarization and compare them with the results obtained during the outburst. Finally, we conclude with the important findings of this study in Section 6.
Overview of Mrk 421 outburst in February 2010
Mrk 421 is a very active BL Lac type blazar at redshift z = 0.031 (Sbarufatti et al. 2005). The source has been observed to undergo broadband flaring activities occasionally since its discovery at TeV γ-rays in 1991 (Punch et al. 1992). Here, we present an overview of the results from the X-ray and γ-ray observations of the Mrk 421 outburst detected around February 16-17, 2010 (MJD 55243-55244). The X-ray outburst detected by the MAXI-GSC and Swift-BAT (Burst Alert Telescope) was found to be the brightest among the previous high activity states of the source (Isobe et al. 2010). The strength of the jet magnetic field derived from the X-ray observations was found to be weaker than the earlier reported values for this blazar. In the γ-ray band, the outburst was simultaneously detected by the Fermi-LAT (Large Area Telescope) at MeV-GeV energies (Singh et al. 2012). Significant spectral hardening in both the X-ray and MeV-GeV γ-ray bands was observed irrespective of the large flux variations during the outburst (Isobe et al. 2010; Singh et al. 2012). The observed spectral variation during this outburst was different from the results of previous flaring episodes of Mrk 421, where a positive correlation between hardness and flux had been detected. At TeV γ-ray energies, the flaring activity of Mrk 421 was simultaneously detected by many ground based γ-ray telescopes operational during the period of the broadband outburst, and rapid flux variations on timescales down to a few minutes were observed on the night of February 16-17, 2010 (Tluczykont et al. 2011; Fortson et al. 2012; Shukla et al. 2012; Singh et al. 2015; Bartoli et al. 2016).
Near simultaneous broadband data on the Mrk 421 outburst observed in February 2010 have been used extensively in the literature to understand the physical processes involved in the flaring activity of the source. Zheng et al. (2014) investigated the time-dependent properties of the outburst using a single zone SSC model, assuming strong magnetic turbulence for the stochastic acceleration of electrons to relativistic energies. They concluded that the X-ray and γ-ray emissions observed during the outburst were produced via the synchrotron and SSC processes by a population of electrons whose injection into the emission region

SPOL (Smith et al. 2009) is a high throughput and moderate resolution dual beam instrument with a waveplate and Wollaston prism. This instrument contributes the measurements of the degree of linear polarization in the wavelength range λ = 500-700 nm and optical magnitudes in the V and R bands from the nightly monitoring of γ-ray bright blazars. The linear polarization is measured by using a λ/2 waveplate in the telescope. The optical magnitude/flux measurements are obtained from differential spectrophotometry. A Johnson band pass filter with an effective wavelength of 540 nm is used for the V band differential photometry, whereas the R band measurements are performed using a Kron-Cousins band pass filter with an effective wavelength of 640 nm. The optical measurements by SPOL in the V and R bands are not corrected for interstellar extinction or contamination from the host galaxy starlight. In this work, we have used the polarization data and optical (V and R bands) flux measurements from the SPOL monitoring program.

1 https://fermi.gsfc.nasa.gov/ssc/observations/multi/programs.html
Optical and Radio Light curves
The daily light curves of Mrk 421 in the optical and radio wave-bands for the period January 1, 2010 to March 31, 2010 (MJD 55197-55286) are shown in Figure 1(a-c) respectively. The time interval between the two vertical lines (MJD 55240-55246) in Figure 1 indicates the period covering the giant flaring activity of the source detected in the X-ray and γ-ray bands. Visual inspection of the optical light curves, Figure 1(a-b), suggests that the source was in a relatively high emission state well before the X-ray and γ-ray outburst. During the period of flaring activity in the high energy bands, the optical emission in both bands (V and R) is almost steady and consistent with the long term quiescent state of the source. It is important to note here that the optical fluxes reported in Figure 1(a-b) are not corrected for the contamination due to thermal emission from the host galaxy or for interstellar reddening. The host galaxy of most blazars is observed to be relatively bright in the optical bands and its contribution is very strong in the near-infrared (Nilsson et al. 2007). The host galaxy contribution to the optical emission from Mrk 421 is discussed in detail in Section 5.1. The radio observations at 15 GHz from OVRO, as reported in Figure 1(c), also indicate a steady state of the source during the whole period and do not exhibit any change in the source activity during the X-ray and γ-ray outburst. We use a χ²-test of the null hypothesis of constant emission to characterize the nature of the broadband emission during the different epochs. During the period of the outburst, the optical emission in the V and R bands is characterized by constant flux levels of (1.11 ± 0.03) × 10⁻¹⁰ erg cm⁻² s⁻¹ and (1.09 ± 0.02) × 10⁻¹⁰ erg cm⁻² s⁻¹ respectively, according to the χ²-test of the null hypothesis. The χ²-test for the fluxes measured during the period excluding the flaring episode, however, suggests that the optical emission in both bands is variable. This is consistent with the visual inspection of the optical light curves, where a high optical activity state of the source is observed prior to the X-ray and γ-ray outburst. The radio emission at 15 GHz, described by a constant flux of (6.72 ± 0.05) × 10⁻¹⁴ erg cm⁻² s⁻¹ throughout the period considered in this study, is compatible with the long term quiescent state of the blazar Mrk 421. Therefore, the optical and radio light curves indicate that there is no change in the source activity at low energies during the giant flaring activity in the X-ray and γ-ray energy bands observed in February 2010.

2 http://james.as.arizona.edu/~psmith/Fermi/
3 http://www.astro.caltech.edu/ovroblazars/data.php/
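A minimal sketch of such a χ²-test of the constant-emission null hypothesis is given below; the flux values are hypothetical, and the weighted-mean formulation is an assumption rather than the paper's stated implementation.

import numpy as np
from scipy import stats

def constant_flux_test(flux, err):
    """Chi-square test of the null hypothesis of constant emission."""
    flux, err = np.asarray(flux), np.asarray(err)
    w = 1.0 / err**2
    fbar = np.sum(w * flux) / np.sum(w)      # weighted mean flux
    chi2 = np.sum(((flux - fbar) / err)**2)
    dof = len(flux) - 1
    pval = stats.chi2.sf(chi2, dof)          # small p-value => variable source
    return fbar, chi2, dof, pval

# Hypothetical V-band fluxes (units of 1e-10 erg/cm^2/s) with errors.
f = np.array([1.10, 1.12, 1.09, 1.11, 1.13])
e = np.full_like(f, 0.03)
print(constant_flux_test(f, e))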
Observed Polarization
This section examines the degree of optical linear polarization and the corresponding polarization angle measured from the source. The linear polarization of incoherent synchrotron emission is complex, with its real and imaginary parts as observable quantities. It can be expressed as (Burn 1966; Sokoloff et al. 1998)

$P = p\,e^{2i\phi} = q + iu$,

where p and φ are the observed degree and angle of linear polarization respectively. The observable quantities p and φ are estimated in terms of the measured Stokes parameters using the relations

$p = \sqrt{q^2 + u^2}, \qquad \phi = \frac{1}{2}\arctan\left(\frac{u}{q}\right)$,

where u and q are the Stokes parameters normalized by the total synchrotron intensity (I).
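For concreteness, the conversion from normalized Stokes parameters to (p, φ) can be coded directly from the relations above; the sample q and u values are hypothetical.

import numpy as np

def stokes_to_polarization(q, u):
    """Convert normalized Stokes parameters (q = Q/I, u = U/I)
    to the degree p and angle phi (radians) of linear polarization."""
    p = np.hypot(q, u)             # p = sqrt(q^2 + u^2)
    phi = 0.5 * np.arctan2(u, q)   # arctan2 resolves the quadrant
    return p, phi

# Hypothetical measurement: q = 1.2%, u = -1.0% of the total intensity.
p, phi = stokes_to_polarization(0.012, -0.010)
print(f"p = {100*p:.2f}%  phi = {np.degrees(phi):.1f} deg")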
Variability and Correlations
The observation of highly variable and polarized radiation at optical and radio wavelengths is one of the important characteristics of the broadband emission from blazars during flaring episodes. We use three different statistical parameters to characterize the variability in the optical and radio emissions from the blazar Mrk 421 during the X-ray and γ-ray outburst observed in February 2010 (Figure 1). To quantify the intrinsic variability, we first estimate the fractional variability ($F_{var}$), which is defined as (Vaughan et al. 2003; Singh et al. 2018)

$F_{var} = \sqrt{\dfrac{S^2 - \bar{E^2}}{\bar{F}^2}}$,

and the error in $F_{var}$ is given by

$\Delta F_{var} = \sqrt{\left(\sqrt{\dfrac{1}{2N}}\,\dfrac{\bar{E^2}}{\bar{F}^2 F_{var}}\right)^2 + \left(\sqrt{\dfrac{\bar{E^2}}{N}}\,\dfrac{1}{\bar{F}}\right)^2}$,

where $S^2$ is the variance, $\bar{E^2}$ is the mean square measurement error, $\bar{F}$ is the mean and N is the number of measurements available in a given period. This parameter takes into account the uncertainties in the measurement of a physical quantity and gives the degree of intrinsic variability of the source. A value of $F_{var}$ close to zero implies no significant variability, whereas a value close to one indicates strong variability. Next, we use the amplitude of variation ($A_{mp}$) to calculate the peak-to-peak variability, which is expressed as (Heidt & Wagner 1996; Singh et al. 2018)

$A_{mp} = \sqrt{(F_{max} - F_{min})^2 - 2\sigma^2}$,

with the error in $A_{mp}$ obtained by propagating the measurement uncertainties, where $F_{max}$ and $F_{min}$ are the maximum and minimum values of the measurement with uncertainties $\Delta F_{max}$ and $\Delta F_{min}$ respectively, $\Delta F$ is the error in the mean, and σ is the average measurement error. The third parameter we estimate is the relative variability amplitude (RVA) or variability index, which is defined as (Kovalev et al. 2005; Singh et al. 2018)

$RVA = \dfrac{F_{max} - F_{min}}{F_{max} + F_{min}}$,

with its uncertainty again following from standard error propagation. The variability parameters described above have been computed for all the observables reported in Figure 1 for two epochs, namely during the flare and excluding the flare. The values of the variability parameters estimated for the two epochs are given in Tables 1 & 2 respectively. These results suggest that the degree of linear polarization is very weakly correlated with the other observables during the two epochs. This is found to be compatible with the variability analysis described above for the different physical quantities measured from the blazar Mrk 421. Very weak or no correlation has been reported between the TeV γ-ray emission and near simultaneous optical and radio flux measurements during the flaring activity of Mrk 421 detected in February 2010 (Singh et al. 2015). However, a strong correlation between the γ-ray and X-ray fluxes derived during the same period supports a one zone SSC process for the high energy emission from the blazar Mrk 421. This also suggests that the broadband emissions in the low (optical and radio) and high (X-ray and γ-ray) energy bands can be produced from different regions in the jet of Mrk 421 during the outburst.
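To make the definitions concrete, a minimal sketch of how these three parameters could be computed from a measured light curve is given below; the flux array and its errors are hypothetical, and the simple guards against negative square-root arguments are an assumption rather than the paper's exact prescription.

import numpy as np

def variability_params(flux, err):
    """Fractional variability, peak-to-peak amplitude, and RVA."""
    flux, err = np.asarray(flux), np.asarray(err)
    fbar = flux.mean()
    S2 = flux.var(ddof=1)              # sample variance
    E2 = np.mean(err**2)               # mean square measurement error
    Fvar = np.sqrt(max(S2 - E2, 0.0)) / fbar
    sigma = np.mean(err)               # average measurement error
    Amp = np.sqrt(max((flux.max() - flux.min())**2 - 2*sigma**2, 0.0))
    RVA = (flux.max() - flux.min()) / (flux.max() + flux.min())
    return Fvar, Amp, RVA

# Hypothetical daily fluxes (arbitrary units) with measurement errors.
rng = np.random.default_rng(1)
f = 1.0 + 0.1 * rng.standard_normal(30)
e = np.full(30, 0.03)
print(variability_params(f, e))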
Intrinsic Jet Polarization
The degree of polarization is quantified as the fraction of polarized flux measured from a source. The synchrotron emission from the relativistic jets of blazars is characterized by strong linear polarization in the optical and radio bands. However, several depolarization effects, such as host galaxy contamination and wavelength dependent Faraday rotation, play an important role in decreasing the degree of synchrotron polarization. The degree of linear synchrotron polarization intrinsic to the jet is given by (Carnerero et al. 2017)

$\Pi_{jet} = \Pi_{obs}\,\dfrac{F_{obs}}{F_{obs} - F_{host}}$,

where $\Pi_{obs}$ and $F_{obs}$ are the measured degree of linear polarization and flux in a given wavelength band respectively, and $F_{host}$ is the unpolarized flux contribution from the host galaxy. The polarization measurement by SPOL in the wavelength range 500-700 nm has an effective wavelength close to the R band. The host galaxy contribution due to thermal emission in the R band is estimated to be ∼ 3.68 × 10⁻¹¹ erg cm⁻² s⁻¹ (Carnerero et al. 2017). The average optical flux in the R band observed from the source during the period of the X-ray and γ-ray outburst is ∼ 1.09 × 10⁻¹⁰ erg cm⁻² s⁻¹ (Section 4.1). This indicates that the host galaxy contributes ∼ 30% of the observed flux from the blazar Mrk 421, and hence its effect on the depolarization of the synchrotron radiation from the jet is significant. The polarization intrinsic to the jet in the R band is found to be $\Pi_{jet} \approx 1.5 \times \Pi_{obs}$ (Equation 9). In the V band, the contamination of the host galaxy ($F_{host}$ ∼ 1.37 × 10⁻¹¹ erg cm⁻² s⁻¹) is relatively small compared to the R band (Nilsson et al. 2007). This implies that the host galaxy has a significant depolarization effect on the linear synchrotron polarization measured from the jet of Mrk 421 in the optical R band. Burn (1966) proposed that the degree of polarization depends on the observational wavelength λ, with Faraday dispersion depolarizing the emission as

$p(\lambda) = p_0\, e^{-2\sigma_{RM}^2 \lambda^4}$,

where $\sigma_{RM}$ is the dispersion of the rotation measure. Spectropolarimetric studies of blazars also suggest a wavelength dependence of the degree of linear polarization (Kulshrestha et al. 1987; Blinov et al. 2016). Therefore, Faraday dispersion can depolarize the synchrotron polarization in the optical band because the Faraday rotation angle scales with wavelength ($\Delta\chi = RM\,\lambda^2$). However, we are not able to explore the depolarization effect of Faraday rotation in this work due to the unavailability of near simultaneous polarization observations in different wavelength bands. According to Jones et al. (1985), the synchrotron radiation from blazars can be depolarized due to the superposition of emission from more than one region in the jet with independent orientations of the magnetic field, and this predicts smaller variability for BL Lac type blazars (characterized by more than one emission region). The emission regions in the jet can be filled with relativistic electron populations of different energy distributions (Jones et al. 1985; Björnsson 1985). Here, we consider only the depolarization effect of host galaxy contamination in the R band to estimate the intrinsic jet polarization using the observed degree of polarization in the wavelength band 500-700 nm from the SPOL observations.
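Plugging the R band numbers quoted above into Equation 9 reproduces the stated factor of ∼1.5:

$\dfrac{F_{obs}}{F_{obs} - F_{host}} = \dfrac{1.09 \times 10^{-10}}{1.09 \times 10^{-10} - 3.68 \times 10^{-11}} \approx \dfrac{1.09}{0.72} \approx 1.51$,

so $\Pi_{jet} \approx 1.5\,\Pi_{obs}$, with the host galaxy supplying roughly a third (the ∼ 30% quoted in the text) of the observed R band flux.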
Linear Synchrotron Polarization
For blazars, it is widely accepted that the non-thermal emission at radio and optical frequencies is the synchrotron radiation of ultra-relativistic leptons (electrons/positrons) in the magnetic field of localized regions which move relativistically along the jet axis. Owing to its synchrotron origin, the radiation emitted in these frequency bands is expected to be highly polarized, depending on the nature and geometry of the magnetic field in the emission region. The general formalism for estimating the degree of polarization of the radiation produced by the synchrotron process is well understood in the literature (Westfold 1959; Legg & Westfold 1968). The degree of polarization measured at radio wavelengths is observed to be less than that in the optical bands. This suggests that the optical emission is produced from a smaller region with a more uniform and ordered magnetic field in comparison to the radio emission (Jorstad et al. 2013). In the present study, we consider a simple scenario in which there are two emission regions in the jet of Mrk 421 during the flaring episode in February 2010, where the optical emission comes from one region and the X-rays and γ-rays are produced from another region. The optical emission observed in the V and R bands is attributed to the synchrotron radiation of the relativistic electrons in the emission region. The electrons are assumed to be accelerated to relativistic energies by the well known first order Fermi acceleration process before being injected into the emission region. The energy spectrum of the relativistic electrons in the emission region can be approximated by a power law of the form

$N(\gamma) \propto \gamma^{-\alpha}, \quad \gamma_{min} \leq \gamma \leq \gamma_{max}$,

where α is the electron spectral index and $\gamma_{min}$ and $\gamma_{max}$ are the minimum and maximum electron Lorentz factors respectively. The magnetic field in the emission region (B) is considered to be the superposition of an ordered component of strength $B_o$ produced by shocks and a chaotic component of strength $B_c$. The synchrotron emission produced by the above population of electrons (Equation 11) gyrating in the resultant magnetic field B (Equation 12) is described by a power law with energy spectral index

$s = \dfrac{\alpha - 1}{2}$.

The degree of linear polarization ($\Pi_{linear}$) for optically thin synchrotron radiation depends on the photon spectral index (i.e., the energy distribution of the electrons) and the structure of the magnetic field. It is expressed as a function of s and β (Westfold 1959; Björnsson 1985; Fraija et al. 2017), where β is the ratio of the ordered to chaotic magnetic field strength in the emission region.
The synchrotron radiation from blazars in low energy bands is observed to be optically thin.
The expected percentage of synchrotron polarization ($\Pi_{linear}$) as a function of the observed synchrotron spectral index (s) for different configurations of $B_o$ and $B_c$ is shown in Figure 4. It is evident from Figure 4 that the degree of linear synchrotron polarization for a given synchrotron spectral index (s) increases monotonically with increasing ratio of the two magnetic fields in the optical emission region. An increasing value of the ratio (β) implies that the ordered magnetic field dominates over the chaotic magnetic field. A completely isotropic chaotic magnetic field produces synchrotron radiation with zero linear polarization, whereas a perfectly ordered uniform magnetic field results in the maximum degree of linear synchrotron polarization of $\Pi_{linear}^{max}$ ∼ 69-75% for relativistic electrons with typical spectral indices of α ∼ 2-3. The chaotic magnetic field in the emission region can be compressed by moving shock waves. If the optical emission region is highly magnetized, a fast moving shock leads to significant variation in the polarization while leaving the flux level unchanged (Zhang et al. 2016).
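The quoted 69-75% range can be checked against the standard maximum synchrotron polarization for a perfectly uniform magnetic field, $\Pi_{linear}^{max} = (s+1)/(s+5/3) = (\alpha+1)/(\alpha+7/3)$, evaluated at the two ends of the quoted electron spectral index range:

$\alpha = 2 \;(s = 0.5): \quad \Pi_{linear}^{max} = \dfrac{0.5 + 1}{0.5 + 5/3} = \dfrac{1.5}{2.17} \approx 69\%$,

$\alpha = 3 \;(s = 1): \quad \Pi_{linear}^{max} = \dfrac{1 + 1}{1 + 5/3} = \dfrac{2}{2.67} = 75\%$.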
We compare the theoretical degree of linear polarization ($\Pi_{linear}$) shown in Figure 4 with the intrinsic jet polarization ($\Pi_{jet}$) calculated using the observed polarization ($\Pi_{obs}$) from the SPOL observations (Section 5.1). The results from the comparison of the theoretical and measured degrees of polarization during different epochs are summarized in Table 3. It is observed that the optical polarization measured by SPOL in the wavelength range 500-700 nm is broadly consistent with the theoretically expected synchrotron polarization for a spectral index of s = 0.6, corresponding to α = 2.2, with different values of the ratio of the two magnetic fields (β). If the spectral distribution of the relativistic electrons is assumed to be described by a power law with index ∼ 2.2 in the optical emission region, the change in the degree of polarization during different epochs can be attributed to the relative variation in the strength of the ordered and chaotic magnetic fields. The lowest value of the linear polarization (∼ 1.6%) was measured near the simultaneous X-ray and γ-ray peak around February 16-17, 2010 (MJD 55243-55244).
Conclusion
In this paper, we have studied the behavior of the linear polarization in the wavelength range 500-700 nm measured by the spectro-polarimeter at the Steward Observatory during the extreme flaring activity of the blazar Mrk 421 detected around February 16-17, 2010 (MJD 55243-55244). This outburst of the source was observed by many ground and space based multi-wavelength instruments worldwide. The flaring activity was detected to be dominant in the X-ray and γ-ray energy bands. A summary of the nature of the optical linear polarization obtained from the present study is given in the following points:

• (Fraija et al. 2017). This explains the long-term quiescent state of the source in optical bands.
• Variability analysis of the optical (V and R bands) and radio light curves using different parameters indicates the absence of significant variability, and the emission is consistent with the quiescent state of Mrk 421 derived from long term observations. The variability parameters estimated for the period without the flaring episode have values higher than those during the outburst, owing to the relatively high optical emission state in January 2010.
• Polarization measurements from the blazar Mrk 421 suggest an average value of ∼ 4.2% for the degree of linear polarization in the wavelength range 500-700 nm for the period without flaring activity of the source. The optical linear polarization during the February 16-17, 2010 outburst is found to be ∼ 1.6%, which is less than the average value of ∼ 2.2% for the week of the X-ray and γ-ray outburst. This indicates that the degree of optical linear polarization decreases during the flaring activity of the source in the X-ray and γ-ray energy bands. The angle of polarization has an average value of ∼ 137° during the entire period.

• Our study suggests the presence of two emission regions in the jet of Mrk 421, where the optical emission is produced from one region and the X-rays and γ-rays are produced from another region. The optical emission region can be filled with ordered and chaotic magnetic fields and a steady population of relativistic electrons. The decrease in the optical linear polarization during the X-ray and γ-ray outburst is caused by a sudden dominance of the chaotic magnetic field over the ordered magnetic field in the emission region. The X-ray and γ-ray flaring activity, in contrast, possibly originates from a sudden increase in the density of relativistic electrons in the second emission region (Singh et al. 2017).
We thank the anonymous reviewer for his/her useful suggestions to improve the contents of the manuscript. This research has made use of data from the OVRO 40-m monitoring program (Richards, J. L. et al. 2011, ApJS, 194, 29), which is supported in part by NASA grants NNX08AW31G, NNX11A043G, and NNX14AQ89G and NSF grants AST-0808050 and AST-1109911. Data from the Steward Observatory spectropolarimetric monitoring project were used. This program is supported by Fermi Guest Investigator grants NNX08AW56G, NNX09AU10G, NNX12AO93G, and NNX15AU81G. | 2019-05-13T10:18:48.000Z | 2019-05-01T00:00:00.000 | {
"year": 2019,
"sha1": "d7f785d85a9796b76e4ba325198fc3bad30e5d62",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1905.04944",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "d7f785d85a9796b76e4ba325198fc3bad30e5d62",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
243918173 | pes2o/s2orc | v3-fos-license | Dehydration Enhances Prebiotic Lipid Remodeling and Vesicle Formation in Acidic Environments
The encapsulation of genetic polymers inside lipid bilayer compartments (vesicles) is a vital step in the emergence of cell-based life. However, even though acidic conditions promote many reactions required for generating prebiotic building blocks, prebiotically relevant lipids tend to form denser aggregates at acidic pHs rather than prebiotically useful vesicles that exhibit sufficient solute encapsulation. Here, we describe how dehydration/rehydration (DR) events, a prebiotically relevant physicochemical process known to promote polymerization reactions, can remodel dense lipid aggregates into thin-walled vesicles capable of RNA encapsulation even at acidic pHs. Furthermore, DR events appear to favor the encapsulation of RNA within thin-walled vesicles over more lipid-rich vesicles, thus conferring such vesicles a selective advantage.
■ INTRODUCTION
Protocells, the hypothetical precursor to the first biological cell, likely consisted of a self-replicating genome encapsulated within a membrane vesicle. 1 The membrane would have played key roles in protocell function, such as promoting prebiotic reactions, 2 protecting protocells from parasitic genetic material, 3 and defining an individual replicating unit that could potentially become capable of growth and division, Darwinian competition, and subsequent evolution. 4,5 One commonly proposed class of lipid molecules that could have formed prebiotic membranes is fatty acids. 6 Fatty acids were abiotically available, produced from both exogenous sources such as meteorites 7 and endogenous sources such as Fischer−Tropsch-like synthesis on Earth. 8 Importantly, fatty acids spontaneously form vesicles when the pH of the solution is near the apparent pK a of the fatty acid, for example, between approximately pH 7 and 9 for decanoic acid. 6 The pH range for stability is determined by the fraction of carboxylic acid groups that are deprotonated, with complete protonation leading to the formation of a neat oil phase and complete deprotonation leading to micelle formation. 9 Of all of the self-assembled structures possible, oligolamellar or thin-walled vesicles are the most effective structures for encapsulation because they have a semipermeable membrane that delineates an internal aqueous volume. In contrast to dense lipid droplets or very multilamellar (or thick-walled) vesicles, they also most closely resemble the cells of modern organisms.
The narrow pH range of vesicle stability constrains the environments in which fatty acid vesicles can form. Multiple studies have raised the conundrum that if fatty acid vesicles could not have formed in acidic conditions, then neither could life. 10,11 Many important prebiotic reactions, including RNA polymerization 12 and nucleotide activation chemistry, 13 are optimized in acidic conditions. The Archean oceans are proposed to have been acidic. 14 While mixtures of fatty acids with their corresponding alcohol or monoglyceride are capable of withstanding more alkaline conditions or higher salt concentrations, 15 fatty acid mixtures are generally observed to form dense oil droplets in acidic solutions, 16,17 unless prebiotically implausible lipids such as sodium dodecylbenzenesulfonate, 18 or more complex lipids such as cyclophospholipids or N-acyl amino acids, are used. 10,19 Recently, Bonfio et al. 13 reported the formation of a small fraction of decanoic acid:decanol:decanal (4:1:1) vesicles alongside oil droplets and aggregates in the presence of 100 mM 4,5-dicyanoimidazole (DCI) buffer at pH 5.5. What is still unknown is whether a vesicle phase can be strongly favored at low pHs, or indeed whether vesicle formation is even possible in the absence of high DCI concentrations.
In one previous study, Milshteyn and co-workers observed a 12-carbon fatty acid/monoglyceride system form vesicles in unfiltered hot spring fluids at pH 3.3. 20 While it is a proof of concept, a major cause of hot spring pool acidity is dissolved gases (e.g., SO 2 and CO 2 ), and thus, fluid samples taken in the field can degas after collection. 21,22 Because vesicle imaging in the previous study occurred up to six months from the time of sampling and vesicle samples were heated during imaging, it is highly likely that the pH of the vesicle solution when imaged had increased from the time of measurement in the field. Furthermore, the critical effects of dissolved salts 23 and organic materials 24,25 were not controlled for, with the authors stating that "an explanation is still uncertain".
In this current study, we take a more controlled approach by monitoring the sample pH and using nonvolatile acids, while using a significantly more selective RNA dye and controlling for dissolved matter, to better understand the system. We exploit the dynamic nature of fatty acid lipid assemblies and use dehydration/rehydration (DR) events to remodel dense lipid assemblies into vesicles possessing a large aqueous lumen. DR events are significant in prebiotic research because they would have been commonplace on exposed land surfaces on the early Earth, ranging from micro-events induced by humidity 26 to larger events in daily tidal pools and even yearly weather patterns. 27 Hot springs are of particular interest as an environment capable of protocell production, as they not only display regular DR events on a range of time scales but can also capture meteor-delivered organics, concentrate prebiotically essential elements, and were likely present on the early Earth's surface when life was forming. 28,29 While DR events are known to enable encapsulation of solutes such as genetic material into bilayer vesicles, 6,30,31 the ability of DR to favor certain vesicle topologies, or to remodel membranes, has not been previously investigated. Recently, Sankar et al. explored how multiple DR events (referred to therein as wet/dry cycles) affected bulk vesicle properties such as turbidity and dye encapsulation, demonstrating that lipid systems can undergo multiple DR events and still maintain their ability to form vesicles and encapsulate solutes. 32 In this work, we focus on understanding the effect of a single DR event on remodeling prebiotically plausible lipid mixtures at the individual vesicle level, leading to encapsulation of RNA. In particular, we use microscopy to glean insight into population heterogeneity rather than the average properties of a bulk sample. By doing so, we demonstrate that a single DR event can remodel dense lipid aggregates into vesicles at acidic pH. Furthermore, by using a fluorescent dye that targets single-stranded RNA with excellent selectivity, we show that DR biases the encapsulation of RNA and other solutes into vesicles that have thinner walls (oligolamellar) rather than thicker walls (very multilamellar). Vesicles containing more encapsulated solute have been shown to grow at the expense of vesicles with lower solute loading. 4 Our work thus demonstrates that DR events have the potential not only to remodel lipids into protocells at lower pHs but also to provide a selective advantage to vesicles that have an architecture more akin to modern cell membranes.
■ RESULTS
Thin-Walled Vesicles Encapsulate RNA whereas Multilamellar Vesicles Do Not. We initially examined vesicle formation of the well-accepted prebiotic mixture of decanoic acid and glycerol monodecanoate (DA and GMD) at a pH near the apparent pK a of decanoic acid. In a system buffered with PBS (phosphate-buffered saline) at pH 7.4, vesicles spontaneously formed after slight agitation of lipid (30 mM DA:GMD, 1:1), as observed by bright-field (no phase ring) microscopy ( Figure 1A). We chose this pH to optimize vesicle formation by working near the apparent pK a of the lipid mixture (approximately 7.5 16 ). As expected, when yeast RNA (0.1 mg/mL) along with a fluorescent RNA dye QuantiFluor was included in the buffer, no biased encapsulation was observed, as indicated by uniform fluorescence across the entire image ( Figure 1B). This is because, without exposing the system to an encapsulation process, the RNA is evenly distributed throughout the sample.
We then subjected the DA:GMD and RNA aqueous system to a single DR event by evaporating to dryness via a heat bath (90 °C) and then rehydrating with Milli-Q water, to mimic a natural dehydration event by heating followed by rehydration from rainfall or a hot spring geyser.

Figure 1: (B) Fluorescence microscopy shows that no biased encapsulation of the RNA has occurred. (C) After one DR event, vesicles are still present, including thin-walled vesicles (orange arrows), multilamellar vesicles (white arrow), and thicker-walled vesicles (yellow arrow). (D) Fluorescence microscopy reveals that enhanced RNA encapsulation (orange arrows) occurs only within thin-walled vesicles. (E) After a single DR event, enhanced encapsulation occurred for some thin-walled vesicles. This graph depicts the normalized vesicle brightness, a proxy for the amount of material encapsulated, against the standard deviation of a vesicle transect in bright-field, a proxy for vesicle wall lipid density, for N = 49 vesicles (see the Experimental Section and Figure S2). The orange box outlines thin-walled vesicles (normalized transect value <0.03), and the blue box outlines multilamellar-walled vesicles (normalized transect value >0.03). Scale bar represents 10 μm.
Upon rehydration, a range of different vesicle morphologies formed, including oligolamellar thin-walled vesicles, multilamellar thick-walled vesicles, and multilamellar onion-like vesicles. However, while most vesicles exhibited minimal RNA encapsulation relative to the background, we observed that certain vesicle morphologies contained much higher concentrations of RNA ( Figure 1C,D).
To understand this uneven encapsulation better, we use two imaging modalities to investigate the observed bias. The broader encapsulation process has been reported in previous works, with Deamer and Barchfeld first proposing solute entrapment between lipid membranes during dehydration, followed by vesicles budding off during rehydration, as a method of encapsulation. 30 However, in some previous studies, cationic dyes were used to visualize the RNA and thus also labeled the anionic fatty-acid-based membranes. 20 To confirm this, we tested a commonly used RNA dye (acridine orange) at the amounts used in these previous studies and found that it readily labeled fatty acid vesicles even in the absence of RNA (Figure S1). As a result, the localization of RNA as opposed to lipid should only be inferred indirectly when acridine orange and other cationic dyes are used.
Instead, we use a recently developed RNA dye that selectively labels single-stranded RNA to better probe the system. Additionally, we exploit the fact that the variation in bright-field microscopy intensity corresponds to optical density (and by proxy, material density) to quantitatively distinguish between vesicle types in a label-free manner. Oligolamellar and multilamellar vesicles can be distinguished by measuring the intensity of light in a transect across the vesicle in bright-field microscopy and taking the standard deviation of intensity across the transect ( Figure S2).
We find that when vesicle brightness under fluorescence microscopy (a proxy for RNA encapsulation) is plotted against the standard deviation of vesicle intensity transects (a proxy for wall thickness) for each vesicle, there is a clear correlation between low membrane optical density (i.e., thin-walled vesicles) and increased RNA encapsulation ( Figure 1E). In other words, while not all oligolamellar vesicles necessarily have high solute loading, the optical micrographs show that there is a clear trend of enhanced RNA encapsulation occurring almost exclusively within oligolamellar thin-walled vesicles (as opposed to thick-walled multilamellar vesicles) ( Figure 1C,D). While previous studies have observed that DR events can lead to the formation of vesicles that exclude RNA, 20,30 our results show that some vesicles may in fact have the opposite behavior.
We also repeat our single DR event experiment with a well-known membrane label (Rhodamine B) and encapsulation marker (pyranine) instead of RNA. Again, we observe that the vesicles encapsulating the fluorescent pyranine dye are thin-walled (Figure 2A−D), as confirmed by taking transects across the fluorescence images (Figure 2E,F). Pyranine is clearly encapsulated within a lipid envelope (Figure 2E), whereas lipid-dense vesicles encapsulate very little pyranine (Figure 2F). These fluorescence microscopy results confirm that when biased encapsulation does occur, the vesicles with enhanced solute encapsulation are thin-walled vesicles rather than thick-walled ones (see also Figure S3). In other words, thin-walled vesicles are conferred an advantage by DR events.
To determine if selective encapsulation within thin-walled vesicles was exclusive to decanoic acid systems, we use a different lipid system, oleic acid (OA) and glycerol monooleate (GMO). Because the melting point of GMO is approximately 40°C, we use a 2:1 OA:GMO ratio rather than 1:1 to avoid working with a gel-phase bilayer. After a DR event in PBS buffer (adjusted to pH 8.2 using NaOH), we find similar results to the DA:GMD system, i.e., that the vesicles that exhibit biased encapsulation are also thin-walled ( Figure S4).
Dehydration/Rehydration Events Remodel Dense Lipid Aggregates into Vesicles at a pH below Their pKa. The effects of a DR event on lipid vesicles are more profound at acidic pHs. When 30 mM of total lipid (1:1 DA:GMD) is added to an unbuffered solution of 10 mM NaCl and 0.1 mg/mL yeast RNA, the resulting pH is 5.4. Whereas an abundance of vesicles was observed under bright-field in the pH 7.4 system buffered with PBS, at pH 5.4 the lipid mixture forms optically dense spheres that have a propensity to wet the glass slides, indicative of their high surface energy. Under fluorescence microscopy, no biased encapsulation is observed (Figure 3A,B). These results are consistent with previous results, where a decanoic acid/decanol mixture was not able to form vesicles at pH 5.5, 6,33 and a decanoic acid/decanal/decanol mixture was only observed to have a small fraction of vesicles present at pH 5.5. 13 However, when we expose our lipid system to a single DR event, the optically dense spheres remodel to form a diverse range of vesicles, similar to those observed in the pH 7.4 system (Figure S5). We find that the lipid system buffered at pH 3.9 with 0.01 M citrate does not form vesicles upon rehydration, indicating a lower pH limit for the remodeling phenomenon (Figure S5).
We also confirm that although the remodeled vesicles are capable of encapsulating RNA present within solution, lipid remodeling is not driven by the specific chemical or ionic nature of the encapsulated molecules. This phenomenon is reproduced using an uncharged encapsulation material (sucrose) as well as with no additional encapsulation material present. In both instances, dense lipid aggregates remodel into thin-walled vesicles after rehydration (Figures S6 and S7).
Finally, we verify that dehydration/rehydration is not dramatically shifting the equilibrium of self-assembled structures. We use the lipid probe Laurdan to reveal the polarity of the probe's environment: a single emission peak at 450 nm indicates a droplet phase for fatty acids, whereas peaks at 430 and 500 nm indicate the presence of lipid bilayer vesicles. 32 We confirm that the optically dense spheres that we see at pH 5.5 are in fact a bilayer phase ( Figure S8). An increase in Laurdan generalized polarization (GP, see the Experimental Section and Figure S8) values is also consistent with DR appearing to remodel the lipids from being in a very dense, multilamellar form, into vesicles that have larger interlamellar spacing.
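As a minimal illustration of the GP metric used here, defined in the Experimental Section as (I500 − I430)/(I500 + I430), the sketch below uses placeholder spectrum intensities rather than measured values:

```python
def laurdan_gp(i_430, i_500):
    """Generalized polarization as defined in this work:
    GP = (I500 - I430) / (I500 + I430)."""
    return (i_500 - i_430) / (i_500 + i_430)

# Placeholder intensities: a vesicle-like spectrum has peaks at both
# 430 and 500 nm, while a micelle-like one is dominated by 500 nm.
print(laurdan_gp(i_430=80.0, i_500=100.0))  # ~0.11
print(laurdan_gp(i_430=5.0, i_500=100.0))   # ~0.90, highly polar environment
```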
■ DISCUSSION
Our research presents evidence for a purely physical remodeling method to form fatty acid vesicles, enabling increased encapsulation inside oligolamellar vesicles even at acidic pHs. Because heating was used to accelerate dehydration in our experiments, we questioned whether heat itself was driving the remodeling, as previous work has noted that elevated temperatures can cause temporary phase transitions (i.e., melting) for both phospholipids 34 and fatty acids 35,36 that lead to vesicle formation at higher temperatures. However, this is a transient effect, as vesicles formed in these systems transform back into dense lipid droplets or crystals once the temperature returns to room temperature. Despite DA and GMD being solids at room temperature, a 1:1 combination of the two results in a mixture that is a liquid at room temperature (Figure S9). This eliminates lipid heating as a primary driver for vesicle remodeling in this system, because the fatty acid mixture is already a liquid at room temperature. This was confirmed experimentally: when a pH 5.4 DA:GMD solution was dried down by passive evaporation at room temperature (∼20 °C), the lipid still remodeled to form vesicles similar to those formed after drying at elevated temperatures (Figure S10). Regardless of drying temperature, the vesicles produced by our method are stable at room temperature, as demonstrated by the fact that all vesicle microscopy images were captured at ∼20 °C. In fact, this vesicle system is stable for long time periods after initial remodeling, still containing thin-walled vesicles nearly 11 months after they first formed (Figure S11). We can therefore rule out heat itself as a driver for lipid remodeling.
While heat is not a major factor in these experiments, the dispersal of lipid within an aqueous solution is crucial for lipid remodeling to occur. Lipids are commonly dissolved in nonaqueous solvents as a first step toward creating vesicles. 37,38 However, when a DA:GMD mixture was added to methanol instead of water or aqueous buffer and then dried down, only emulsion droplets were observed upon rehydration (Figure S12). Methanol is an excellent solvent for the lipids and therefore does not promote any lipid self-assembly. Removal of methanol by dehydration simply leads to amorphous oil droplets forming. By contrast, attempts at dispersing the lipids into aqueous solution result in the lipids preorganizing via hydrogen bonding and hydrophobic forces.
One additional factor is that sufficient amphiphilicity is required to ensure lipid remodeling. At pH 3.9, too few DA molecules are ionized to support bilayer formation (Figure S5). We also tried the experiment at pH 5.5 with other lipids with smaller headgroups than GMD. When this experiment was repeated with pure decanoic acid (DA), without GMD present, the DA formed a neat oil droplet at the bottom of the microcentrifuge tube after dehydration at 90 °C. Upon rehydration, the bulk solution was clear with solidified decanoic acid adhering to the side of the microcentrifuge tube, and no oligolamellar vesicles were present under phase contrast microscopy (Figure S13). No vesicles were observed when the experiment was repeated with a 4:1 mixture of decanoic acid and decanol (DO) (Figure S13). We hypothesize that this is because, at pH 5.5, DA alone and the DA:DO mixture remain an interfacially inactive oil. In other words, without ionizing the DA, the polarity of the headgroups is simply too low to support bilayer formation. The addition of GMD with its substantial headgroup is crucial in this process, as it also is to protocell thermostability. 39

Proposed Model for Membrane Remodeling. We imaged the lipid before and after a dehydration event to determine what type of restructuring takes place. Before dehydration, the DA:GMD mixture is an oil that spreads as a droplet on the microscope slide (Figure S14). Upon mixing with a simple 10 mM NaCl solution and subsequent dehydration without rehydration, complex heterogeneous structures formed. These include layered lipid structures showing birefringence under crossed polarizers and halite crystals incorporated into lipid aggregates (Figure S14). This suggests that physical restructuring and prestacking of lipid contributes to lipid remodeling upon rehydration. We propose two separate effects that promote lipid remodeling: physical prestacking of lipid membranes owing to evaporation, and osmotically driven swelling of lipids during rehydration (Figure 4).
During dehydration, solutions are concentrated, and thus the ionic strength increases with time. Interestingly, the apparent pKa of fatty-acid-based vesicles depends on ionic strength. Maeda et al. 40 reported salt dependence in the titration curve of oleic acid, with the apparent pKa decreasing by 0.7 upon an increase in ionic strength from 10 mM NaCl to 100 mM NaCl. Mele et al. 41 modeled that the apparent pKa for oleic acid decreases by 2.2 upon an increase in ionic strength from 1 to 150 mM NaCl. This behavior is predicted by the Poisson−Boltzmann equation. In brief, charged surfaces such as fatty acid bilayer membranes recruit counterions, including protons, reducing the pH at the surface. The apparent pH at the bilayer surface is thus lower than the bulk pH, leading to an increase in the apparent pKa of the fatty acid. An increase in salt concentration decreases the efficiency of proton recruitment to the interface, leading to a smaller apparent pKa shift. Thus, as the solution evaporates, we expect the apparent pKa of the lipid to decrease to lower pHs. If the pH of the sample decreases minimally or stays constant owing to buffering agents, the degree of deprotonation of the membrane is expected to increase with evaporation (Figure 4A). The negatively charged carboxylate residues can then interact more strongly with cations such as sodium ions, thereby forming preorganized lipid layers upon further dehydration (as shown in Figure S14). This physical stacking of lipid during dehydration plays a crucial role in this system, as simply increasing the salt concentration of a DA:GMD lipid system at pH 5.5 without any dehydration event does not result in vesicle formation (Figure S15). Increasing the ionic strength for an oleic acid solution at pH 8.06, however, does promote oligolamellar vesicle formation in lieu of dense aggregates (Figure S16), highlighting the complex impact of salt. Upon rehydration with water, well-separated bilayer structures may be able to form more readily from these preorganized sheets than from homogeneous oil droplets, leading to an increase in vesicles as opposed to dense lipid-rich aggregates.
Encapsulation within Oligolamellar Vesicles. There is an additional process occurring in the DA:GMD system (at both pH ranges) that is promoting encapsulation of solutes (including RNA) within oligolamellar vesicles over multilamellar vesicles. During rehydration, there is a large osmotic driving force for preorganized lipids that are colocalized with solute to be hydrated. The rate of lipid swelling and vesicle formation is then determined by water permeation across the membrane. The flux of water J across the lipid is related to the solute concentration difference across the membrane Δc by J = PΔc/n where P is the permeability of water across a single bilayer, and n is the number of bilayers across which water needs to permeate. When solute is trapped underneath a thick layer of lipid, water permeates slowly, and the lipid film swells at a slower rate. Conversely, when pockets of solute are trapped underneath a thin layer of lipid, water is able to readily permeate across the lipid layers and swell the film, leading to the formation of solute-rich oligolamellar vesicles ( Figure 4B).
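As a rough numerical illustration of the 1/n scaling in J = PΔc/n (P and Δc are arbitrary placeholder values here, not measured quantities):

```python
# Relative water flux J = P * dc / n for thin vs thick lipid films.
P = 1.0   # water permeability of a single bilayer (arbitrary units)
dc = 1.0  # solute concentration difference across the film

for n in (1, 2, 20):
    flux = P * dc / n
    print(f"{n:2d} bilayer(s): relative flux {flux:.2f}")
# A solute pocket under a single bilayer hydrates ~20x faster than one
# under a 20-bilayer stack, favoring solute-rich oligolamellar vesicles.
```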
More broadly, it appears that an increase in interlamellar spacing is an observed consequence of both dehydration/rehydration cycles for fatty acid/monoglyceride admixtures and freeze/thaw cycles for phospholipids. 42 Commonalities between the two processes include the dehydration of the lipid bilayer potentially leading to membrane fusion, and osmotic imbalances potentially leading to inhomogeneous swelling of bilayers. Both processes could have occurred in surficial systems and worked in alternate seasons to promote protocell formation and solute encapsulation.
Origin of Life Implications. This research provides new insight into the environmental conditions suitable for forming life on Earth. Our findings open up new regions of geochemical parameter space, creating the potential for prebiotically plausible vesicles to form in acidic conditions and making encapsulation accessible to chemical reactions that favor lower-pH environments (e.g., RNA polymerization). 2 Furthermore, DR appears to confer an advantage to thin-walled protocells owing to their increased encapsulation of prebiotically useful macromolecules relative to other vesicle types. These oligolamellar vesicles are closer in morphology to the unilamellar membranes that encapsulate modern cells 43 and may also be more prebiotically preferable because of their increased permeability. 44 This is because membrane permeability is an important feature in biology, with cells possessing pumps and pores to allow the exchange of material. Multilamellar thick-walled vesicles would have been disadvantageous for early protocells, restricting the exchange of food and waste with the surrounding environment. 1 A protocell formation process that biases encapsulation of solutes into thin-walled vesicles, which are relatively permeable and readily exchange material with their environment, could have been extremely advantageous. Although this study focused on the effects of a single DR event, it has been clearly shown in other studies that fatty acid vesicle systems can undergo multiple DR events and still maintain their ability to form vesicles and encapsulate solutes, 32 increasing the suitability of this system as a potential prebiotic protocell system.
Our findings provide new insight into the ongoing debate on whether surficial pools 45 or deep-sea hydrothermal vents 46 were the environment in which life formed on Earth. As both the observed fatty acid vesicle stability in acidic conditions and the selective encapsulation within thin-walled, cell-like vesicles rely on dehydration as a physical remodeling process, this research serves as further evidence against life forming in permanently submerged vents, because such vents are incapable of widespread dehydration events.
Lastly, our findings highlight that special attention should be given to the method of vesicle formation, and hence the path of lipid assembly, when comparing results from different studies. This is because, while prebiotically plausible lipids are vastly more soluble and their assemblies more dynamic than their phospholipid counterparts, prebiotic lipid systems are still capable of being kinetically trapped and are not true equilibrium systems. 47 In the origins of life field, researchers use a range of vesicle preparation methods such as titration, 48 thin film hydration, 10 wet/dry cycling with varying surfaces and solvents, 20,49 self-assembly with and without shear, 43 and extrusion. These different methods can have a substantial effect on the resulting vesicle characteristics, as they are well-known to do for phospholipids. 50 While this provides exciting opportunities regarding the variety of different membrane-bound architectures that may have been present on the early Earth, it also necessitates care when comparing vesicles produced by different methods.
■ EXPERIMENTAL SECTION
Reagents. For RNA solution (10 mg/mL, ribonucleic acid from torula yeast, Type VI, Sigma-Aldrich), 100 mg of yeast RNA was added to 10 mL of 10 mM EDTA solution (ChemSupply Australia) in Milli-Q water. The RNA solution was adjusted to pH 6 with 5 M NaOH (Lowy Solutions). The QuantiFluor RNA System (Promega) was used as the RNA dye; other dyes used include 1 mM pyranine (Sigma-Aldrich) as an encapsulation marker, 0.1 mM acridine orange as a commonly used cationic dye, and 5 μM Rhodamine B (Sigma-Aldrich) as a membrane dye. Sucrose (ChemSupply Australia) at 0.1 M was also used as a neutral encapsulation molecule. Buffers used include 1× PBS made from 10× PBS stock solution (Lowy Solutions) and 0.01 M citrate buffer (ChemSupply Australia), with pH adjusted with 5 M NaOH or HCl (Lowy Solutions). NaCl solutions (10 mM) were made by appropriate dilutions of 5 M NaCl solution (Lowy Solutions). pH was measured using an Orion Star A121 pH meter with an Orion 8103BN ROSS probe.
Vesicle Preparation. All reagents for each specific experiment, including the appropriate lipid, encapsulation solutes, and buffers, were added to the Eppendorf tube, vortexed for 15 s, and then agitated by scraping the tube three times against a microcentrifuge rack ("rumble-stripped" 51 ). Tubes selected for dehydration were then partly submerged in a heat bath (90°C) for 1 h. For analysis, dehydrated samples were rehydrated with 100 μL of Milli-Q water directly before analysis. Both samples (dehydrated and non-dehydrated) were rumble-stripped 5 times before microscope analysis to distribute vesicles through sample fluid and ensure a representative selection of vesicles. Experiments were repeated a further three times on separate days with fresh stock solutions, with consistent results recorded each time.
Imaging. Images were captured on a pco.edge 4.2 sCMOS camera mounted on a Nikon Eclipse TE-2000 inverted microscope, using a 100× Ph3 objective [Plan Fluor, numerical aperture (NA) = 1.3]. We focused on the solution phase of the sample instead of focusing on the surface of the glass slide to ensure that imaging was representative of the whole solution and to avoid imaging vesicles that are known to grow from the surfaces of glass slides. 52

Image Analysis. For Figure 1E, all vesicles larger than 5 μm visible in bright-field were analyzed, with a total of N = 49 across 12 bright-field micrographs and their 12 corresponding fluorescence micrographs.
Normalized Vesicle Brightness for Fluorescence Images. The intensity of encapsulated dye relative to the background was measured in Fiji. 53 The mean gray scale value for approximately 5 × 5 μm rectangles inside (I in ) and outside (I bg ) the vesicles was determined using the measure tool. The normalized vesicle brightness was calculated by (I in − I bg )/I bg .
Normalized Transect Standard Deviation for Bright-Field Images. Transects across vesicles T, including a background overlap on each side of the vesicle that is at least 10% of the total vesicle width, were taken using the line tool and plot profile tool in Fiji. 53 The average of the first 10 pixels at the beginning of the transect was taken as the background value B. The transect T was then normalized against background, T/B. The standard deviation σ of T/B was then reported as the normalized transect standard deviation.
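A minimal sketch of these two per-vesicle measures, assuming the fluorescence image and the bright-field transect have already been loaded as NumPy arrays and the region masks selected by hand:

```python
import numpy as np

def normalized_brightness(img_fl, inside_mask, bg_mask):
    """(I_in - I_bg) / I_bg from the mean gray values of a region
    inside the vesicle and a background region (boolean masks)."""
    i_in = img_fl[inside_mask].mean()
    i_bg = img_fl[bg_mask].mean()
    return (i_in - i_bg) / i_bg

def normalized_transect_std(transect, n_bg=10):
    """Std. dev. of a bright-field transect T normalized by the
    mean B of its first n_bg (background) pixels."""
    b = transect[:n_bg].mean()
    return np.std(transect / b)

# Per the text, vesicles with a normalized transect value < 0.03
# are classed as thin-walled, and > 0.03 as multilamellar.
```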
Laurdan GP Measurements. 100 μL of a vesicle sample and 10 μM Laurdan (Sigma-Aldrich) were premixed in a microcentrifuge tube before being loaded into a quartz cuvette (Starna) and then into a Cary Eclipse fluorescence spectrometer (Agilent). Excitation was at 360 nm, and emission was measured from 400 to 600 nm. For fatty acids, the emission spectrum peaks are approximately at 430 and 500 nm, 32 so the generalized polarization GP is defined as (I500 − I430)/(I500 + I430). By the definition used here, a highly polar environment such as micelles has GP ∼ 1. 32

Safety Statement. No unexpected or unusually high safety hazards were encountered.
Supporting Information: optical micrographs in phase contrast, bright-field, and epifluorescence mode; photographs of bulk samples; as well as details of analyses and fluorescence spectra (PDF).
ε′/ε in the standard model with hadronic matrix elements from the chiral quark model
I discuss the estimate of the CP-violating parameter ε′/ε based on hadronic matrix elements computed in the chiral quark model. This estimate suggested, before the current experimental results, that the favored value of ε′/ε in the standard model is of the order of 10⁻³. I briefly review the physical effects on which this result is based and summarize current estimates.
If we imagine being back in 1997, looking at the experimental results for the ratio ε′/ε and its theoretical estimates, we find ourselves in a rather confusing situation in which the theoretical estimates favor values of the order of 10⁻⁴ and the experiments disagree by more than 3σ of their errors and, moreover, do not rule out the super-weak scenario in which ε′/ε vanishes (for a review, see 1).
That was the situation when we decided to assess our theoretical understanding and possibly provide a new estimate. The crucial point was, and still is, that, if there is no sizable cancellation between some of the relevant effective operators, ε′/ε is bound to be of the order of 10⁻³. A simple argument for this is presented in 2. The problem is that any cancellation, or the lack thereof, among the operators depends heavily on the size of the hadronic matrix elements and, in 1997, there was no estimate of them that was free of hard-to-control assumptions.
Was it possible to improve on this situation? We wanted to estimate the hadronic matrix elements in a systematic manner without having first to solve QCD (not even by lattice simulation). To do this we needed a model that would be simple enough for us to understand its dynamics and, at the same time, not too simple, so as to still include what we thought was the relevant physics. We chose the chiral quark model 3, in which all coefficients of the relevant chiral lagrangian are parameterized in terms of just three parameters: the quark and gluon condensates and the quark constituent mass. The model makes possible a complete estimate of all matrix elements; it includes non-factorizable effects, chiral corrections, and final-state interactions, all of which we thought to be relevant. In order to determine the three free parameters of the model, the experimental CP-conserving, isospin I = 0 and 2 components of the K → ππ amplitudes, respectively A0 and A2, are fitted to obtain the values reported in 4 for the parameters. The systematic uncertainty of this approach is included by varying the fit by 30% around the experimental values of the amplitudes. Notice that the parameter values turn out to be rather close to those found by independent estimates, even though a priori they could have been any number. Moreover, the ΔI = 1/2 rule is reproduced in a natural manner (see 5 for a discussion). This rule is such a fundamental feature of kaon physics that no estimate of ε′/ε can be said to be reliable unless it also reproduces this selection rule. These results are stable under changes of the renormalization scale and γ5-scheme (see 4 for details).
Having fixed the model-dependent parameters, we can proceed and compute the ratio ε′/ε. As can be seen from fig. 2, the gluon penguin operator Q6 dominates all other operators, so that the final value of the CP-violating ratio turns out to be of the expected order of 10⁻³, and the standard model does not mimic the super-weak scenario. This is the main result of our analysis; its publication in 1997 7 correctly predicted the current experimental results. The present estimate updates the short-distance inputs and contains an improved treatment of the uncertainties. To estimate the uncertainty of our result, we can vary, according to a Gaussian distribution, all the short-distance inputs and, by a flat distribution, the model-dependent parameters, to obtain the distribution of values shown in fig. 3. Such a distribution gives a value in good agreement with the current experimental results, where the error has been inflated according to the Particle Data Group procedure to be used when averaging over experimental data with substantially different central values. In a more conservative approach, all inputs are varied with uniform probability over their whole ranges to obtain 0.9 × 10⁻³ < ε′/ε < 4.8 × 10⁻³ (3). Given the intrinsic difficulty of the computation, I do not expect smaller uncertainties in the near future. It is easy to go back into the computation and understand the final result. Chiral loops and final-state interactions both tend to enhance the A0 amplitude by making the gluon penguin contribution larger. Larger gluon penguins dominate the contribution of the electro-weak sector in ε′/ε and no effective cancellation between the two occurs. Non-factorizable (soft) gluon corrections make A2 smaller. They play an important role in the ΔI = 1/2 rule and in the determination of the model-dependent parameters, although not directly in ε′/ε, where only penguin operators enter. Most of these effects can be summarized by saying that the bag factor B6 of the gluon operator Q6 is much larger (at a given scale) than its vacuum-saturation value of 1.
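The uncertainty estimate just described amounts to a Monte Carlo propagation of the inputs. The sketch below is only schematic: the function standing in for the ε′/ε computation and the input ranges are invented placeholders, not the actual chiral-quark-model calculation:

```python
import random

def eps_prime_over_eps(short_dist, model_params):
    """Placeholder for the actual chiral-quark-model computation."""
    return 1.7e-3 * short_dist * model_params  # illustrative only

samples = []
for _ in range(100_000):
    sd = random.gauss(1.0, 0.15)   # short-distance inputs: Gaussian
    mp = random.uniform(0.8, 1.2)  # model-dependent parameters: flat
    samples.append(eps_prime_over_eps(sd, mp))

samples.sort()
median = samples[len(samples) // 2]
lo, hi = samples[int(0.16 * len(samples))], samples[int(0.84 * len(samples))]
print(f"eps'/eps = {median:.2e} (+{hi - median:.1e} / -{median - lo:.1e})")
```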
Many of the points suggested by the chiral quark model analysis have been taken up by other groups after the current experiments favored a value of ε ′ /ε of the order of 10 −3 . In particular, chiral corrections 8,9 , non-factorizable effects 10 , final-state interactions 11 and effective-model estimates 13 have been discussed recently.
In Fig. 4 current estimates 4,8,9,12,13 are summarized; the same figure shows that, nowadays, contrary to what is still too often repeated in papers and seminars, most standard model estimates agree with the experiments and with the prediction of the chiral quark model. Because of its simplicity, the chiral quark model is clearly not the final word, and it can now be abandoned, as a ladder used to climb a wall once we are on the other side, as we work toward better estimates, in particular those from lattice simulations.
"year": 2000,
"sha1": "c76c332412f82c7434008eb92dba26d990bdee38",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "44de12ad53d640e77ca39db2d3d9bbe3a470237b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
A Web Page Segmentation Approach Using Visual Semantics
SUMMARY Web page segmentation has a variety of benefits and potential web applications. Early techniques of web page segmentation are mainly based on machine learning algorithms and rule-based heuristics, which cannot be used for large-scale page segmentation. In this paper, we propose a formulated page segmentation method using visual semantics. Instead of analyzing the visual cues of web pages, this method utilizes three measures to formulate the visual semantics: layout tree is used to recognize the visual similar blocks; seam degree is used to describe how neatly the blocks are arranged; content similarity is used to describe the content coherent degree between blocks. A comparison experiment was done using the VIPS algorithm as a baseline. Experiment results show that the proposed method can divide a Web page into appropriate semantic segments.
Introduction
On the internet, web pages are often considered the smallest indivisible units of information. For example, the major commercial search engines build an index from web pages. However, web pages are not atomic units on the web. A typical web page consists of multiple segments with different functionalities, such as main content, navigation bar, menu list, and advertisements. The process of dividing a web page into visually and semantically cohesive segments is called web page segmentation.
Web page segmentation has a variety of benefits and potential web applications, such as browsing web pages on mobile devices [1]- [3], detecting duplicate web pages [4], information extraction [5]- [7].
The early techniques of web page segmentation are mainly based on machine learning algorithms [1], [4], [8], [9] and rule-based heuristics [2], [3], [5]- [7], [10]- [12], [15]. Because of the small scale of the training data sets, machine-learning-based methods can only be applied in certain fields of web pages. The heuristics-based approaches involve simple rule-based heuristics, either by interpreting the meaning of tag structures or by visual analysis. While a heuristic approach might work well on small sets of pages, it isn't suitable for large-scale sets of pages.
In this paper, we assume that a web page is made up of finite blocks and the web pages can be segmented using the visual features of blocks. The visual features can be considered as the following three parts: (1) similar visual blocks have similar semantics, e.g. in a shopping site, the product records are arranged in a similar layout; (2) relevant blocks are always neatly arranged and put visually close together; (3) blocks with different functionalities contain different types of contents, e.g. in a news site, long text may be the main content; a link list may be the related news list; a big picture may be an advertisement, etc. Due to the different visual features, humans can easily identify each of the segments without any descriptions. We call these visual features visual semantics. However, these semantics are intuitive and human friendly. In other words, they are not machine friendly and therefore difficult to be understood by computers. This issue gives rise to the question: How can these visual semantics be formulated? We use three formulated measures to represent these visual semantics: layout tree [18] is used to recognize the similar visual blocks; seam degree is used to describe how neatly the blocks are arranged; content similarity is used to describe the content coherent degree between the blocks. Based on these three measures, we proposed a web page segmentation method. The experiment results show that the proposed method can divide a web page into appropriate semantic segments.
The rest of the paper is organized as follows: Related works are reviewed in Sect. 2. Visual block and preprocessing of web pages are introduced in Sect. 3. A web page segmentation method using visual semantics is proposed in Sect. 4. Experiment analysis and results are reported in Sect. 5. Finally, conclusion and future work are given in Sect. 6.
Related Work
In the past few years, there has been plenty of work on automatic web page segmentation.
Some of the early approaches are based on machine learning algorithms [1], [4], [8], [9]. These approaches segment pages by training the clues from DOM or simple vision cues. Machine-learning-based approaches can only be applied in some certain fields of web pages, because of the limitation of the training data set. Because these algorithms need to be trained, they can be regarded as semi-automatic approaches.
In order to automatically segment web pages, heuristics-based approaches were proposed. Some of the heuristics-based approaches use HTML structure tags or the DOM tree to segment a web page [3], [11], [12], [15]. These methods also have some limitations; for example, they may falsely separate closely related contents and combine unrelated contents together. Some other heuristics-based approaches rely on visual cues from browser renderings [2], [5]- [7], [10]. Most of them focus on the location, size, or font cues of web pages. Among these, VIPS [10] is considered to be the most representative visual-cue-based algorithm. It has three steps: first, a web page is recursively divided into blocks by using a number of heuristics; second, horizontal and vertical separators are determined; third, the structure of the page is constructed. These approaches can make good use of the visual features of web pages. However, heuristics are often based on simple models that cannot be generalized. In other words, even though a heuristic approach might work well on small sets of pages, it isn't suitable for large sets of pages.
Because of the limitations of heuristics-based approaches, many non-heuristics-based approaches [13], [14], [16], [17] were proposed. X. Liu et al. [13] proposed a Gomory-Hu tree based web page segmentation algorithm. The algorithm first extracts vision and structure information from a web page to construct a weighted undirected graph, whose vertices are the leaf nodes of the DOM tree and whose edges represent the visible position relationship between vertices. Then it partitions the graph with a Gomory-Hu tree based clustering algorithm. J. Kong et al. [14] proposed the Spatial Graph Grammar (SGG) to perform the semantic grouping and interpretation of segmented screen objects. Instead of analyzing HTML source code, they applied an image processing technology to recognize atomic interface objects from the screenshot of an interface and produce a spatial graph, which records significant spatial relations among recognized objects. However, there are not enough quantified experiment results to indicate that SGG is effective for segmenting all kinds of web pages. J. Kang et al. [16] proposed a repetition-based web page segmentation method. They consider the repetitive tag patterns to be key patterns in the DOM tree structure of a page. By detecting key patterns in a page and generating virtual nodes to correctly segment nested blocks, the method can segment pages into logical blocks. However, this method is only suitable for pages that contain repetitive patterns. C. Kohlschütter et al. [17] utilized the notion of text density as a measure to identify the individual text segments of a web page. Although this method reduces the problem to solving a 1D partitioning task, it can only be used for small-scale sets of pages that have certain patterns.
Our work can be classified as a vision-based approach. Different from visual-cue-based methods such as VIPS, our work formulates the visual features as quantified measures. Based on these measures, the proposed approach can divide web pages into semantic segments.
Visual Blocks
A web page is made up of a finite number of blocks. We also call these blocks visual blocks, or blocks for short. We consider a visual block to be a visible rectangular region on a web page. The definition of a visual block is as follows: Definition 3-1: Visual block B = (Obj, Rect), where Obj is a DOM object, and Rect represents the visible rectangular region where B is displayed in the web page.
According to the W3C standard, a web page can be transformed into a DOM tree, and each DOM object has a corresponding element in the web page. If an element is visible, it will be displayed within a rectangular region in a web page. Therefore, a DOM object and its rectangular region (if it is visible) represent a block in a web page. Moreover, we define the child block and leaf block as follows: Definition 3-2: For two given visual blocks B1 = (Obj1, Rect1) and B2 = (Obj2, Rect2), if Obj1 is a child node of Obj2, then B1 is a child block of B2. Definition 3-3: If a visual block B = (Obj, Rect) does not have any child blocks, then B is a leaf block.
Pre-Processing Web Pages
According to Definition 3-1, each visual block has a corresponding DOM object. Thus, we first obtain the DOM tree of the web page.
In a DOM tree, each node is a DOM object. The DOM objects can be divided into five types: element, attribute, text, comment, and document. We further classify the element objects into two categories: visible element objects and invisible element objects. The visible element objects, whose width and height properties are not zero and whose display property is not none, can be seen through the browser. The invisible element objects contain objects whose tags are <head>, <script>, <meta>, etc., which do not have visual attributes. Moreover, we classify the visible element objects into two categories: inline objects and line-break objects. Inline objects affect the appearance of text and can be applied to a string of characters without a line break, including objects whose tags are <b>, <big>, <font>, etc. The other visible element objects are line-break objects. Obviously, only the visible element objects and text objects can be displayed in web pages. Thus we need to prune the DOM tree. The pruning rules are as follows:
Rule 1: The attribute nodes, comment nodes, and document nodes should be cut.
Rule 2: The invisible element nodes (e.g. <head>, <script>, <meta>) should be cut.
Rule 3: The visible nodes whose width and height properties are zero and display property is none should be cut.
Rule 4: If a node contains only one node whose node name is <#text>, then the <#text> node should be cut.
Rule 5: If a node only contains <#text> nodes and inline nodes, and each inline node has only one <#text> node, then all the <#text> nodes and inline nodes should be cut.
As mentioned before, Rule 1, Rule 2, and Rule 3 aim to prune the nodes that cannot be displayed in web pages. As for Rule 4, the text that appears in a web page is within a tag in HTML, such as text within a <p> tag. However, in the DOM tree, the <p> node contains a child node whose node name is <#text>. The <#text> node does not have width and height properties, and its parent node <p> also contains the text information of the <#text> node. Even if the <#text> nodes are cut, their text information will not be lost. Rule 4 is used to cut such <#text> nodes. Similarly, Rule 5 aims to prune the inline nodes with nested <#text> nodes.
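A schematic sketch of the five pruning rules follows; the node class and its fields are a simplification of ours and are not tied to any particular DOM library:

```python
from dataclasses import dataclass, field

INVISIBLE_TAGS = {"head", "script", "meta"}  # tags without visual attributes
INLINE_TAGS = {"b", "big", "font"}           # inline objects per Sect. 3

@dataclass
class DomNode:
    kind: str                  # "element", "text", "attribute", ...
    tag: str = ""
    width: int = 0
    height: int = 0
    display: str = "block"
    children: list = field(default_factory=list)

def keep(n: DomNode) -> bool:
    if n.kind in ("attribute", "comment", "document"):   # Rule 1
        return False
    if n.kind == "element" and n.tag in INVISIBLE_TAGS:  # Rule 2
        return False
    if n.kind == "element" and (
        n.width == 0 or n.height == 0 or n.display == "none"
    ):                                                   # Rule 3
        return False
    return True

def prune(n: DomNode) -> DomNode:
    n.children = [prune(c) for c in n.children if keep(c)]
    # Rule 4: a lone <#text> child is cut (its text lives in the parent).
    if len(n.children) == 1 and n.children[0].kind == "text":
        n.children = []
    # Rule 5: cut all children if each is a <#text> node or an inline
    # node wrapping a single <#text> node.
    elif n.children and all(
        c.kind == "text"
        or (c.tag in INLINE_TAGS and len(c.children) == 1
            and c.children[0].kind == "text")
        for c in n.children
    ):
        n.children = []
    return n
```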
After the DOM tree has been pruned, only a part of the visible nodes and text nodes remains in the DOM tree. It should be noted that both element nodes and text nodes have corresponding DOM objects. According to Definition 3-1, each visual block has a corresponding DOM Element object and a rectangular region. Thus, it is necessary to get the corresponding rectangular region of each Element object. The Element object does not contain the absolute coordinate of the corresponding HTML element; it only contains a coordinate relative to the parent HTML element. Fortunately, some browsers provide APIs to get the absolute coordinate easily. As for the text nodes, the width and height can be indirectly calculated by analyzing the width and height of the parent node and sibling nodes. After the rectangular regions are determined, the corresponding visual blocks are also determined.
Recognizing Similar Visual Blocks Using Layout Tree
Some pages contain similar visual blocks. For example, Fig. 1 shows a page of a shopping site, and the red rectangles indicate the similar visual blocks. Each block is a product record; thus we consider that these blocks have independent semantics and should not be divided into smaller segments. Therefore, we should recognize the similar visual blocks in advance.
In our early work, we proposed a layout tree based method to identify the similar visual blocks [18]. For a given block, if the block is not a leaf block, we can transform the block into a layout tree as shown in Fig. 2.
In Fig. 2, there are two separators S1 and S2. Each separator can divide the block into two smaller parts. A separator can be considered as the root of a tree, and the two smaller parts can be considered as the left subtree and the right subtree. Generally, if the separator is horizontal, the upper part is the left subtree and the lower part is the right subtree. If the separator is vertical, the left part is the left subtree and the right part is the right subtree. Therefore, the given block can be transformed into a tree. We call this tree a "layout tree".
For two given blocks, first they are transformed into two layout trees, respectively. Then the Tree Edit Distance (TED) algorithm [19] is used to calculate the similarity of the two layout trees. If the similarity is less than the threshold, then the two blocks are visually similar. According to our early experiments, the optimal threshold is 0.4. Using this method all the similar visual blocks can be recognized. Due to space limitations we only introduce the method roughly; see paper [18] for the detailed algorithm.
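A sketch of this similarity test using the third-party zss implementation of the Zhang−Shasha tree edit distance; the layout trees are hand-built toy examples, and normalizing the distance by the total node count is our assumption about how the similarity in [18] is scaled:

```python
# Sketch only: `pip install zss` provides the Zhang-Shasha tree edit
# distance used as TED here.
from zss import Node, simple_distance

def tree_size(t):
    """Number of nodes in a layout tree."""
    return 1 + sum(tree_size(c) for c in Node.get_children(t))

def visually_similar(tree_a, tree_b, threshold=0.4):
    """Normalized TED between two layout trees; values below the
    threshold (0.4 per our earlier experiments) mean 'similar'."""
    d = simple_distance(tree_a, tree_b)
    return d / (tree_size(tree_a) + tree_size(tree_b)) < threshold

# Two blocks, each split by one horizontal separator into two parts.
a = Node("H").addkid(Node("part")).addkid(Node("part"))
b = Node("H").addkid(Node("part")).addkid(Node("part"))
print(visually_similar(a, b))  # True: identical layout trees
```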
Calculating Seam Degree of Blocks
Because the blocks are visible rectangles in a web page, they are always arranged by certain rules. The relevant blocks are always neatly arranged and put visually close together. For any two given blocks, their arrangements can be classified into three types, as shown in Fig. 3. In Fig. 3 (a), a1 and a2 are not adjacent; thus we consider them to be visually irrelevant. In Fig. 3 (b) and (c), b1 and b2, and c1 and c2, are adjacent blocks. Intuitively, we consider c1 and c2 to be closer than b1 and b2. Suppose there is a minimum rectangle that can just cover the two blocks in Fig. 3 (b) and Fig. 3 (c). c1 and c2 can fully fill up the minimum rectangle, but b1 and b2 cannot fill it up. It is known that each segment has a corresponding rectangle appearing in the page. In other words, c1 and c2 are more likely to be a segment, but b1 and b2 cannot be considered as a segment. We utilize the seam degree to describe how closely two blocks are arranged. For two given adjacent blocks B1 and B2, the seam degree SD(B1, B2) is calculated from the heights of the blocks as in formulas (1) and (2), where Height(Bi) represents the height of Bi. SD(B1, B2) is between 0 and 1. Since the seam degree is based on the visual information of blocks, it can indicate the visual coherence degree of adjacent blocks.
If a block has child blocks, the average seam degree of adjacent child blocks can indicate the visual coherence degree of the child blocks in the block. For a given visual block B, the set of child blocks in B is Child(B) = {b1, b2, · · · , bn}. If two child blocks bi and bj are adjacent, we count 1 pair. Let us assume that there are m pairs of adjacent child blocks. The average seam degree AvgSD(B) can be calculated as in formula (3): AvgSD(B) = (1/m) Σ SD(bi, bj), where bi and bj (i ≠ j) range over the m pairs of adjacent child blocks. AvgSD(B) is also between 0 and 1. If it is closer to 0, the visual coherence degree of the child blocks is lower. If it is closer to 1, the visual coherence degree of the child blocks is higher.
Calculating Content Similarity of Blocks
Blocks with different semantics always have different types of contents. For example, a navigation bar has a list of short link text; an advertisement has a big picture; a user registration form has some text boxes, pull-down menus, buttons, etc. These different contents have different visual features. If the contents of two blocks are similar, the two blocks have a high content coherence degree. We introduce the content similarity to describe the content coherence degree. We roughly classify the contents into four categories: text content (TC), link text content (LTC), image content (IMC), and input content (INC). For a given block B, the content set of B is C = {c1, c2, · · · , cn}. First, the contents are classified into the four categories mentioned above. Then four types of content sets can be obtained, denoted TC, LTC, IMC, and INC. Obviously, TC, LTC, IMC, and INC are subsets of C. If one of the content subsets is φ, it means that B does not contain the corresponding type of contents. The contents are also elements of the web page; thus each of them has a corresponding block. We use Area(ci) to represent the area of the corresponding block of content ci. If ci is a text content or link text content, we approximately calculate the area as in formula (4), where Length(ci) represents the character byte size of the text or link text, and FontSize(ci) represents the font size of the text or link text. For a given content subset (TC, LTC, IMC, or INC), according to the area of the contents, the contents of the given subset can be sorted from large to small area. By utilizing the sorted content subsets, four content area vectors can be obtained, denoted Vtc, Vltc, Vimc, and Vinc. The values of the elements in the four vectors are the areas of the corresponding contents. After the content vectors are determined, the content similarity of two blocks can be calculated.
If the content area vectors of two given blocks are determined, the similarity of each content area vector (Vtc, Vltc, Vimc, and Vinc) can be calculated. There are many algorithms to calculate the similarity of two vectors, of which the cosine similarity is a simple and efficient one [20]. Here we take the vector of the text content as an example to explain the calculation of cosine similarity. For two given blocks B1 and B2, their text content area vectors are Vtc1 = (u1, u2, · · · , um) and Vtc2 = (v1, v2, · · · , vn). Let us assume that Vtc1 ≠ φ, Vtc2 ≠ φ, and n > m. Because the cosine similarity requires that the two vectors have the same number of elements, we need to add (n − m) elements whose values are 0 into Vtc1, denoted Vtc1 = (u1, u2, · · · , um, um+1, · · · , un). The cosine similarity of Vtc1 and Vtc2 can be calculated as in formula (5): Cos(Vtc1, Vtc2) = (Σ ui·vi) / (√(Σ ui²) · √(Σ vi²)), with the sums running from i = 1 to n. If both Vtc1 and Vtc2 are φ, Cos(Vtc1, Vtc2) is ill-defined; in this case, we define Cos(Vtc1, Vtc2) to be zero. Similarly, the cosine similarity of the other content area vectors (Vltc, Vimc, and Vinc) can also be determined.
Additionally, the four types of contents may have different weights in B1 and B2. Again, we take the text content as an example to explain the calculation of the weight. For two given blocks B1 and B2, their text content area vectors are Vtc1 = (u1, u2, · · · , um) and Vtc2 = (v1, v2, · · · , vn). The weight of the text content can be calculated as in formula (6), where Area(Bi) represents the total area of all contents in Bi. This means that the greater the area of the corresponding type of contents is, the higher its weight will be. After the cosine similarity and weight of each content area vector are determined, the content similarity CS(B1, B2) of B1 and B2 can be calculated as in formula (7): CS(B1, B2) = Σ Weighti × Cosi, where Weighti represents the weight of each of the four types of contents (TC, LTC, IMC, and INC), and Cosi represents the cosine similarity of the corresponding content area vector. CS(B1, B2) is between 0 and 1. Since the content similarity is based on the content information of blocks, it can indicate the content coherence degree of blocks. If a block has child blocks, the average content similarity of adjacent child blocks can indicate the content coherence degree of the child blocks in the block. It should be noted that only the content similarity of adjacent child blocks is considered. For a given visual block B, the set of child blocks in B is Child(B) = {b1, b2, · · · , bn}. If two child blocks bi and bj are adjacent, we count 1 pair. Let us assume that there are m pairs of adjacent child blocks. The average content similarity AvgCS(B) can be calculated as in formula (8): AvgCS(B) = (1/m) Σ CS(bi, bj), where bi and bj (i ≠ j) range over the m pairs of adjacent child blocks. AvgCS(B) is also between 0 and 1. If it is closer to 0, the content coherence degree of the child blocks is lower. If it is closer to 1, the content coherence degree of the child blocks is higher.
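The following sketch implements formulas (5) and (7) as reconstructed above. The exact weighting of formula (6) is not fully recoverable from the text, so the area-share weighting used here is our assumption:

```python
import math

def cosine(u, v):
    """Formula (5): cosine similarity of two area vectors, zero-
    padding the shorter one; defined as 0 when both are empty."""
    n = max(len(u), len(v))
    if n == 0:
        return 0.0
    u = u + [0.0] * (n - len(u))
    v = v + [0.0] * (n - len(v))
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def content_similarity(vecs1, vecs2):
    """Formula (7): CS = sum of Weight_i * Cos_i over the four content
    types. vecs1/vecs2 map 'TC', 'LTC', 'IMC', 'INC' to sorted area
    vectors; each weight is taken as that type's share of the combined
    content area of the two blocks (our reading of formula (6))."""
    total = sum(sum(vecs1[k]) + sum(vecs2[k]) for k in vecs1) or 1.0
    return sum(
        ((sum(vecs1[k]) + sum(vecs2[k])) / total) * cosine(vecs1[k], vecs2[k])
        for k in vecs1
    )
```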
Segment Web Pages Using Visual Semantics
After the pruned DOM tree is obtained, the similar visual blocks can be recognized, and the seam degree and content similarity of each block can be determined in advance. Here, we introduce the threshold α for AvgSD(B) and the threshold β for AvgCS(B). Empirically, we set α to 0.9 and β to 0.8. Our web page segmentation algorithm is a top-down method. It begins from the root node of the DOM tree, which is set to be the current node. The corresponding block of the current node is judged according to the steps shown in Table 1. If the current node should be divided, then its child blocks are judged as well. If the current node should not be divided, then it is pushed into an array of segments and its child blocks are not judged anymore. The detailed algorithm is shown in Fig. 4.

Table 1: Steps for judging a block.
Step 1: If the current block is a leaf block, then do not divide it. Otherwise go to Step 2.
Step 2: If the current block contains recurrent blocks, then divide it. Otherwise go to Step 3.
Step 3: If the current block is one of the recurrent blocks, then do not divide it. Otherwise go to Step 4.
Step 4: If the current block contains only one child block, then divide it. Otherwise go to Step 5.
Step 5: If the AvgSD(B) of the current block is less than α, then divide it. Otherwise go to Step 6.
Step 6: If the AvgCS(B) of the current block is less than β, then divide it. Otherwise go to Step 7.
Step 7: If the area of the current block is greater than half of the client area, then divide it. Otherwise go to Step 8.
Step 8: If the current block does not satisfy any of the above conditions, then do not divide it.
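The steps of Table 1 map naturally onto a recursive decision procedure. The sketch below is our reading of the algorithm; the block attributes (is_leaf, contains_recurrent, etc.) are assumed interfaces, not names from the paper.

```python
ALPHA = 0.9  # threshold for the average seam degree AvgSD(B)
BETA = 0.8   # threshold for the average content similarity AvgCS(B)

def should_divide(block):
    """Steps 1-8 of Table 1 as a single decision function (sketch)."""
    if block.is_leaf:                          # Step 1
        return False
    if block.contains_recurrent:               # Step 2
        return True
    if block.is_recurrent:                     # Step 3
        return False
    if len(block.children) == 1:               # Step 4
        return True
    if block.avg_sd < ALPHA:                   # Step 5
        return True
    if block.avg_cs < BETA:                    # Step 6
        return True
    if block.area > block.half_client_area:    # Step 7
        return True
    return False                               # Step 8

def segment(block, segments):
    """Top-down traversal: divide the block or emit it as a final segment."""
    if should_divide(block):
        for child in block.children:
            segment(child, segments)
    else:
        segments.append(block)
```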
Data Set
We randomly selected 50 query keywords from the chiebukuro category of Yahoo Japan and submitted the 50 queries to Yahoo Japan. For each query, we randomly collected 10 pages from the top-100 search results, yielding 500 web pages in total. Since 81 pages were invalid or could not be correctly displayed, we used the remaining 419 pages as experiment data, of which 348 (83.0%) are written in HTML 5 and 71 (17.0%) in HTML 4. Moreover, 207 (49.4%) pages use JavaScript to generate HTML elements automatically. The 419 pages come from 117 different web sites, so the diversity of the data set is guaranteed; this diversity can be used to test the robustness of the proposed method. Among the 419 web pages, 261 pages contain similar visual blocks.
Evaluation Method
Different from other automatic evaluation experiments, the evaluation of web page segmentation is a human-involved task. A lot of previous work manually labeled the segments of web pages in advance and compared the labeled segments with the segmentation results of their method. However, labeling the segments is a time-consuming process. Therefore, we developed an evaluation program by using the APIs of Chrome Extension (http://www.chromeextensions.org/), a small piece of software that can modify and enhance the functionality of the Chrome browser. The Chrome Extension can help the proposed method obtain the visual information (e.g., width, height, and coordinates) of blocks after the browser transforms an HTML file into a web page. Therefore, even if some HTML elements are generated by JavaScript, the information of these elements can also be obtained. This evaluation program can visualize the segmentation results as shown in Fig. 5. The colored overlay rectangles represent the segment results. When a rectangle is clicked, the area and number of the clicked rectangle are recorded by our program automatically.
Direct comparison of the proposed method with all of the related work described in Sect. 2 is difficult, since their data sets and evaluation programs are not open to the public. Therefore we implemented the VIPS [10] algorithm as the comparison baseline. VIPS is a popular page segmentation method that is often taken as a comparison baseline by other work. Our evaluation experiment contains two steps.
In the first step, we utilized our evaluation program to segment the 419 pages using VIPS and the proposed method, respectively. The badly divided segments (bad segments) were then manually checked. For each page, the program recorded the area and number of both the checked segments and all segments. Given a page P, S = {s_1, s_2, ..., s_m} is the set of segment results of P, and S' = {s'_1, s'_2, ..., s'_n} is the set of bad segments of P. We used the area rate of bad segments (Bad Area Rate, BAR) and the number rate of bad segments (Bad Number Rate, BNR) to evaluate the segment results of a page P. BAR(P) and BNR(P) can be calculated as in formula (9):

BAR(P) = (Σ_{i=1..n} Area(s'_i)) / (Σ_{i=1..m} Area(s_i)),  BNR(P) = n / m,   (9)

where Area(s_i) represents the area of segment s_i. Lower BAR(P) and BNR(P) values indicate better segmentation.

Table 2: Categories and descriptions of segment results.
Perfect: There are no bad segments.
Good: There are few bad segments, and these bad segments have little effect on the segment results.
Fair: There are some bad segments, and these bad segments have an effect on the segment results.
Bad: There are a lot of bad segments, and the segment results are not acceptable.
Too Bad: Almost all the segments are bad segments.
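The two rates can be computed directly from the recorded areas and counts; this Python fragment mirrors formula (9) with our own naming.

```python
def bad_area_rate(all_segments, bad_segments):
    """BAR(P): share of the total segment area marked as bad."""
    total_area = sum(s.area for s in all_segments)
    bad_area = sum(s.area for s in bad_segments)
    return bad_area / total_area if total_area else 0.0

def bad_number_rate(all_segments, bad_segments):
    """BNR(P): share of the segment count marked as bad."""
    return len(bad_segments) / len(all_segments) if all_segments else 0.0
```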
In the second step, we manually classified the segment results into 5 categories according to the results of the first step. The 5 categories and their descriptions are shown in Table 2. When classifying the segment results, we considered not only the number of bad segments but also their effect. The effect is associated with the area of a segment and the place where it is displayed. Imagine that there are two bad segments: if one is at an inconspicuous place and the other at an important place, then their effects on the segment results are different.
Experiment Results
Our experiments were done on a Core i5 2.6 GHz PC with 4 GB of RAM and a 12.1-inch, 1280×800 display. Table 3 presents the average BAR and BNR. In Table 3, "419 pages" represents the experiment results over all 419 pages, and "261 pages" represents the results over the 261 pages that contain similar visual blocks. From Table 3, 21.9% of the segments produced by VIPS are bad segments, and the area of the bad segments accounts for 34.2% of the total area of all segments. In contrast, only 3.55% of the segments produced by our method are bad segments, and their area accounts for only 1.65%. Moreover, in the results of VIPS, the BAR is greater than the BNR, which implies that the bad segments of VIPS have a large area; in contrast, the bad segments of the proposed method have a small area. As mentioned above, we consider that the smaller the area of the bad segments, the less effect they have. Therefore, it is clear that our method performs better than VIPS.
Because 261 pages contain recurrent blocks, we analyzed the segment results of these 261 pages in more detail. Similar to the results for all 419 pages, our method produces fewer bad segments and a smaller area of bad segments. It should also be noted that the average BAR and BNR changed little when the 261 pages were segmented by VIPS, whereas the average BAR and BNR of our method decreased considerably. This is because VIPS does not consider the case of similar visual blocks. It shows that our method can better segment web pages that contain similar visual blocks.
Besides the analysis of bad segments, we also analyzed the distribution of all 419 web pages over the 5 categories. As mentioned in the previous section, the 5 categories are perfect, good, fair, bad, and too bad; their descriptions are shown in Table 2. Among the 5 categories, we consider perfect to be the most significant one. Let us see an example. Suppose there are two page segmentation approaches (A1 and A2) that use the same data set of 10 web pages. A1 divides each page into ten segments, but every page has one bad segment, so the average rate of bad segments is 10%. A2 also divides each page into ten segments; nine of the pages have no bad segments and the tenth page has ten bad segments, so the average rate of bad segments is also 10%. Based on the average rate of bad segments, A1 and A2 have the same performance, but in real applications A2 may be the better choice: to bring the rate of bad segments to 0%, A1 has to be manually tuned for all ten pages, while A2 needs to be manually tuned for only one page. In other words, though A1 and A2 have the same rate of bad segments, A1 cannot perfectly segment any page while A2 can perfectly segment nine pages. In this case, we consider A2 better than A1. Table 4 presents the results for the five categories. The perfectly segmented pages of VIPS account for only 29.8%, while the perfectly segmented pages of our method account for 76.6%. This indicates that our method needs less manual intervention to reach 100% perfectly segmented pages. If we consider both perfect and good results as acceptable, then VIPS achieves only 63.5% acceptable results while our method achieves 93.1%.
We analyzed this 93.1% of web pages and noticed that they are typical web pages with different HTML tags (the tag sets of HTML 4 and HTML 5 are not identical), patterns, colors, fonts, and even different languages. Heuristic-based methods (e.g., VIPS) therefore cannot divide all of these web pages into appropriate segments. There are two reasons why the proposed method is suitable for these pages. First, the proposed method does not use tags to divide a web page; therefore, whether a web page is written in HTML 5 or HTML 4, the segment result does not change. Second, these pages still share some common visual features: a segment arranges its blocks in the same way while different segments have different arrangements, and a segment has the same type of contents while different segments have different types of contents. By formulating these visual features, the proposed method can handle various web pages. We also analyzed the "Bad" results and found they were due to "Text Only" web pages. In these pages there are only text contents; in this case, the content similarity is ineffective and the whole web page becomes just one segment. In many web applications, such results count as "Bad" segmentation. However, in some specific applications these results can also be regarded as acceptable, for example when web page segmentation is used for informative content extraction: since there is no uninformative content in such a page, the whole web page can be considered informative content, and this segment result will not reduce the accuracy of informative content extraction.
Conclusion and Future Work
Web page segmentation has a variety of benefits and potential web applications. However, early techniques of web page segmentation are mainly based on machine learning algorithms and rule-based heuristics, which cannot be used for large-scale page segmentation. To overcome these limitations, in this paper we proposed a formulated page segmentation method using visual semantics. Rather than relying on heuristic analysis of visual cues, this method utilizes the following three measures to formulate the visual semantics: the layout tree is used to recognize visually similar blocks; the seam degree is used to describe how neatly the blocks are arranged; and the content similarity is used to describe the content coherence of the blocks. Web pages are first transformed into a DOM tree. After pruning the invisible nodes and meaningless text nodes, the proposed method judges the DOM tree nodes top-down based on the three measures. Compared with VIPS, the experiment results show that the proposed method can divide a web page into appropriate semantic segments with few bad segments, and the proposed method perfectly segments more pages than VIPS. Finally, we can conclude that the proposed method can effectively divide various web pages into appropriate segments.
In future work, we will investigate how to improve the current measures of visual semantics and discover other meaningful visual semantics to segment web pages effectively and accurately. | 2018-04-03T01:12:14.126Z | 2014-02-01T00:00:00.000 | {
"year": 2014,
"sha1": "c8b2ffd9a1f11e011a2c7a07d4807b5e3d1a43c3",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/transinf/E97.D/2/E97.D_223/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "4e81bba7ee46e57429544aa1c679eccb4271f2e7",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
249907318 | pes2o/s2orc | v3-fos-license | Evaluation of innovative thermal insulation systems for a sustainable envelope
The dynamic building envelope is primarily responsible for improvements in energy saving. This paper addresses strategies for improving the performance of housing: the experimentation of innovative solutions involving vacuum-type systems such as VIG (vacuum insulated glazing); thermal insulation solutions using renewable materials obtained from the reuse and recycling of waste from agricultural production; and the control of thermal bridges. The waste material from the pruning of Opuntia ficus-indica (prickly pear), from widespread and luxuriant plantations in Sicily, has been transformed into a system of insulation panels and granules (Patent n. 1402131). The paper presents the results of research on thermal insulation systems for the building envelope. We present the results of computer simulations that have enabled us to verify and optimize the transmission of heat through several innovative insulation systems, such as vacuum solutions for transparent façades, or the possibilities offered by recycling the waste material of Opuntia ficus-indica pruning, which, properly treated, allowed us to obtain an insulating material (Patent n. 1402131) in the form of panels or bulk granules (thermal conductivity values of 0.071 and 0.057 W/mK).
energy performance. An important sector of architecture moves in this direction, studying thermal and acoustic insulation systems.
Post-war Italy in particular experienced a strong building expansion, especially in the residential sector.
Housing construction followed timelines and methods exclusively suited to meeting the growing demand, ignoring the architectural quality of the buildings themselves and leading to results with little care for environmental protection or for reducing energy consumption.
Most of the buildings built to date were realized according to these criteria and are responsible for high energy consumption and harmful emissions into the atmosphere, both in the production phase and in end use.
There is therefore a clear need to adapt new buildings to the demands of current regulations and to act on existing buildings, taking into account the further effort imposed by the need to adapt the available space and the previous construction techniques to energy improvement strategies.
One of the main problems is represented by the building envelope, a complex dynamic system in charge of the key relationships between internal and external spaces. Eliminating its obsolescence to increase energy performance means minimizing the energy losses through the envelope, which is made possible by optimizing the insulating capacity of its opaque and transparent components, in order to reduce the heating and cooling demand of the inner rooms and to increase occupant comfort.
In particular, from the seventies to today, different research efforts have made huge progress on improving the thermal insulation of buildings through innovative materials and techniques for envelope insulation. The use of insulating materials, from the thermal and hygrothermal point of view, allows the reduction of heat transfer in each of the technical elements, increasing their thermal inertia and reducing thermal bridges. Current lines of research in this area mainly concern three issues: • The application of innovative materials with high thermal insulation values and reduced thickness; • Environmental protection through the use of recycled or renewable materials; • The control of thermal bridges.
In every situation, the ultimate common goal, not easy to achieve, is to reduce CO2 emissions.
VACUUM INNOVATIVE INSULATED SYSTEMS
Significant progress has been made in this direction in the field of the innovative VIP (Vacuum Insulation Panel) technology, which achieves very low thermal conductivity values with reduced thickness. This latter aspect is even more appreciable in the energy retrofit of constructions, where the request for an adjustment to higher levels of energy efficiency requires the addition of insulating material layers of reduced thickness. In particular, VIPs (Figure 1) [1] consist of an open-cell structure, made with different kinds of materials, forming an evacuated chamber and an envelope to maintain the very low internal pressure (10^-5 - 10^-3 mBar) [2]. The same methodology has been applied to transparent envelope systems: starting from glassware technology, it has led to the design and manufacture of vacuum glazing, in which the gas contained in the gap is replaced with vacuum, eliminating the heat transfer between the glass plates by gaseous conduction and convection and making it more effective than any gas (Figure 2). Some manufacturers have already implemented this system, which, however, shows some limitations regarding visibility through the glazing and a cost currently too high for widespread adoption (Figure 3).
The system can be made up of different solutions that include two or three glass plates (hardened, tempered, laminated, with or without low-e treatment) sealed at the edges with glass, metal, or other organic-inorganic polymer-based materials.
The air inside the cavity is extracted through a hole in one of the outer plates, which is subsequently sealed. The vacuum generated (10 Pa) tends to pull the plates together, but the separation is maintained by an array of micro spacers generally made of ceramic materials or metals such as stainless steel.
These spacers have a diameter between 0.25 and 0.50 mm, a height varying between 0.10 and 0.20 mm, and are placed at a distance of 20-25 mm [3]. This currently limits vision through the glazing; moreover, vacuum glazing requires the integration of different aspects, such as the choice of a sealing material that is airtight but does not require melting temperatures so high as to damage the low-emissivity treatment on the inside of the glass plate. To obtain a good seal, the leak rate must be less than or equal to 10^-12 mbar l/s [4].
RECYCLING FOR INSULATION
For several years, insulation systems have been available on the housing market that combine satisfactory thermal transmittance values with the ability to protect the environment and to contain costs. In this sense, insulation systems have been developed that are formed from highly renewable materials, such as wood and its derivatives, or that use waste from other industrial processes.
The novelty of the proposed solution is represented by the type of material used, which until now has had no application in the construction sector and has instead great potential: for the ease of retrieval, which constitutes a source of renewable raw material; for its great availability in the Sicilian countryside; and for the good thermo-acoustic performance that is reached.
In particular, a new panel was produced by using the cladodes of Opuntia ficus-indica [5], a widespread plant in Sicily (Figure 4), according to the principles of sustainable and eco-friendly development.
Specifically, cladodes of this plant were properly dried, shredded, sorted and mixed with a glue to make a rigid panel. From this study, the new eco-friendly panel realized from cladodes of Opuntia ficus-indica could be used in several building applications as an insulating layer.
Below, we report some details on the development of this research.
MATERIALS AND MANUFACTURING
The prickly pear, Opuntia ficus-indica, is a member of the Cactaceae family [9]; it originated on the American continent and reached the Mediterranean countries during the 16th century.
Opuntia ficus-indica is also commonly used in traditional medicine, both for its hypoglycaemic actions and for its healing activity as a cicatrizant [10].
Pruning is done every year in spring or late summer, both to prevent contact between cladodes and to eliminate sick or damaged ones.
This produces a large amount of waste material, and one of the proposed advantages is precisely the reuse of this scrap material to obtain an environmentally friendly insulating material.
The registered patent describes the operations carried out to transform the pruning waste into insulating material in the form of granules and rigid panels [11]. The panels are manufactured with a production technology similar to those commonly used for commercially available panels and could thus be industrially produced at competitive costs.
The thermal conductivity of the samples was measured according to ISO standards by using a heat flow meter (Figure 6).
THERMAL PERFORMANCE
The Opuntia ficus-indica panel has shown an average thermal conductivity value equal to 0.071 W/mK. This result has been compared with the conductivity data of rigid panels actually used in several fields as thermal insulators. As shown in Figure 8, the thermal conductivity of the new eco-friendly panel is comparable to that of the fibre mineralized wood panel, whose performance it significantly improves upon.
On the other hand, our Opuntia ficus-indica panel shows lower insulation properties than the other commercial panels considered.
In spite of that, our panel can be considered less expensive in terms of energy consumption during the production phase compared to these commercial panels. For instance, the porous wood fibre panel shows lower conductivity (0.038 W/mK), but its production technology is more energy-intensive (energy consumption = 17.00 MJ/kg). To evaluate the influence of the glue on the insulation properties, the thermal conductivity of a reference panel, realized with only Opuntia ficus-indica granules without glue as binder and with a thickness of 21 mm, was also measured.
As expected, the presence of glue leads to a decrease in the insulating properties: the reference panel (without resin) has shown an average thermal conductivity equal to 0.057 W/mK [12].
Regarding the bulk materials commonly used as insulators, the thermal conductivity of the Opuntia ficus-indica granules is comparable to that of vermiculite and lower than those of expanded clay, pumice, granulated foam glass and cellulose granules [13][14] (Figure 7).
Moreover, the reference panel is less insulating than some commercial products but also less expensive.
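To illustrate how the measured conductivities translate into envelope performance, the following sketch computes the thermal transmittance (U-value) of a layered wall in the style of EN ISO 6946. The wall build-up, the brick conductivity, and the surface resistances are illustrative assumptions; only the conductivity values 0.071 and 0.057 W/mK come from the paper.

```python
# Standard internal/external surface resistances (m2K/W), assumed values.
R_SI, R_SE = 0.13, 0.04

def u_value(layers, r_si=R_SI, r_se=R_SE):
    """U-value of a wall; layers = [(thickness_m, conductivity_W_per_mK), ...]."""
    r_total = r_si + r_se + sum(d / lam for d, lam in layers)
    return 1.0 / r_total

# Example: a 25 cm brick wall (assumed conductivity 0.72 W/mK) with and
# without a 6 cm Opuntia ficus-indica panel (0.071 W/mK from the paper).
wall_bare = [(0.25, 0.72)]
wall_with_panel = [(0.25, 0.72), (0.06, 0.071)]
print(f"U bare wall:  {u_value(wall_bare):.2f} W/m2K")   # about 1.93
print(f"U with panel: {u_value(wall_with_panel):.2f} W/m2K")  # about 0.73
```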
THERMAL BRIDGES CONTROL
A fundamental criterion that guides contemporary design requires the elimination of all thermal bridges in order to ensure an optimal level of energy performance and minimum heat loss. The UNI EN 10211 standard defines the thermal bridge as "part of the building envelope where the thermal resistance, elsewhere uniform, significantly changes due to: total or partial interpenetration of materials with different thermal conductivity into the building envelope, the thickness variation of the construction, or differences between the area of the dispersing surface on the inner side and that of the outer side", as happens for example at the joints between wall and floor or wall and ceiling.
In these areas there is an increase of the heat flux, associated with a distortion of the flux lines and a modification of the temperature distribution.
The simulations were carried out by fixing an outdoor boundary condition of 0 °C and an indoor condition of 21 °C.
In Figure 9 it can be observed from the heat flow that the indoor insulation provides an important obstacle to the spread of the cold stream: the internally perceived temperature (to the touch) is about 15 °C along the frame and between 19 and 21 °C along the wall, effectively reducing the presence of thermal bridges.
The priority objective in the design phase, that is, to reach an optimal level of energy performance at each examined node, can be achieved through successive simulations that gradually improve the thermal insulation characteristics of the components, eliminating or reducing heat dispersion (thermal bridges) as much as possible and thus guaranteeing a considerable reduction in heating costs. In a case study, the insertion of a gypsum fibre panel of only 15 mm between the proposed insulation panels interrupted the cold heat flow, ensuring a high internal temperature and thereby an optimal level of internal comfort. From a thermal and hygrothermal point of view, the adoption of such solutions reduces the heat flows of the individual technical elements, increasing their thermal inertia.
CONCLUSIONS
The present paper provides some insights into ways of intervening to improve the insulation conditions of the building envelope.
What we have shown is not exhaustive, but through a set of studies developed under different conditions we aim to demonstrate how innovation in recent years, in various fields, can enable effective action toward energy sustainability in building. | 2022-06-22T15:06:56.885Z | 2017-05-02T00:00:00.000 | {
"year": 2017,
"sha1": "4415d21f1df99114b074804f932f98abe8ad58b7",
"oa_license": "CCBY",
"oa_url": "https://rivistatema.com/sito/wp-content/uploads/2021/10/132-1-376-1-2-20170703.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "85019187d0d87092f97bac2e619b7e47140a2cea",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": []
} |
16227876 | pes2o/s2orc | v3-fos-license | Tyrosine triple mutated AAV2-BDNF gene therapy in a rat model of transient IOP elevation.
PURPOSE
We examined the neuroprotective effects of exogenous brain-derived neurotrophic factor (BDNF), which provides protection to retinal ganglion cells (RGCs) in rodents, in a model of transient intraocular pressure (IOP) elevation using a mutant (triple Y-F) self-complementary adeno-associated virus type 2 vector encoding BDNF (tm-scAAV2-BDNF).
METHODS
The tm-scAAV2-BDNF or control vector encoding green fluorescent protein (GFP; tm-scAAV2-GFP) was intravitreally administered to rats, which were then divided into four groups: control, ischemia/reperfusion (I/R) injury only, I/R injury with tm-scAAV2-GFP, and tm-scAAV2-BDNF. I/R injury was then induced by transiently increasing IOP, after which the rats were euthanized to measure the inner retinal thickness and cell counts in the RGC layer.
RESULTS
Intravitreous injection of tm-scAAV2-BDNF resulted in high levels of BDNF expression in the neural retina. Histological analysis showed that the inner retinal thickness and cell numbers in the RGC layer were preserved after transient IOP elevation in eyes treated with tm-scAAV2-BDNF but not in the other I/R groups. Significantly reduced glial fibrillary acidic protein (GFAP) immunostaining after I/R injury in the rats that received tm-scAAV2-BDNF indicated reduced retinal stress, and electroretinogram (ERG) analysis confirmed preservation of retinal function in the tm-scAAV2-BDNF group.
CONCLUSIONS
These results demonstrate the feasibility and effectiveness of neuroprotective gene therapy using tm-scAAV2-BDNF to protect the inner retina from transiently high intraocular pressure. An in vivo gene therapeutic approach to the clinical management of retinal diseases in conditions such as glaucoma, retinal artery occlusion, hypertensive retinopathy, and diabetic retinopathy thus appears feasible.
Although injection of BDNF protein into the vitreous (used as a positive control) provided retinal protection, no such acute-phase protection was obtained through administration of an ssAAV2 vector encoding BDNF.
AAV vectors are considered optimal for ocular gene therapy because of their efficiency, persistence, and low immunogenicity [24]. Moreover, intravitreous injection of ssAAV2 leads to transduction of the RGCs lining the vitreous. Until now, however, the transduction efficacy has been low. Despite thousands of publications describing the protection of neurons in the laboratory, no beneficial treatment for human patients has yet been achieved. Therefore, we speculated that the lack of successful ocular gene therapy in humans is due to poor retinal transduction efficiency. We tested that idea in the present study by assessing the RGC neuroprotection provided by a mutant (triple Y-F) self-complementary AAV2 vector (tm-scAAV2), a vector with reportedly high transduction efficiency, in an experimental rat model of transient IOP elevation.
Animals:
All experiments were performed using 7-week-old Sprague-Dawley rats weighing 200-250 g. For most experiments, the rats were divided into four groups: control, I/R injury without treatment, I/R injury with tm-scAAV2-green fluorescent protein (GFP) and I/R injury with tm-scAAV2-BDNF (n = 6 in each group). All animals were treated in accordance with the Association for Research in Vision and Ophthalmology (ARVO) Statement for the Use of Animals in Ophthalmic and Vision Research. The studies were approved by the Animal Care and Use Committee of Nippon Medical School.
Expression pattern of tm-scAAV2-GFP in the rat control retina and the I/R injured retina: To compare the patterns of GFP expression in healthy and I/R-injured retinas, the rats were anesthetized with an intraperitoneal injection of pentobarbital (50 mg/kg), and their pupils were dilated using topical phenylephrine hydrochloride and tropicamide (Santen, Osaka, Japan). After topical application of 0.4% oxybuprocaine hydrochloride (Santen), 3 μl of tm-scAAV2-GFP were intravitreally injected using a 33-gauge Hamilton needle and syringe, as described previously [30]. Three weeks after administration, the eyes were enucleated, sectioned, and immunostained using an anti-GFP antibody, as described in the Immunohistochemistry section.
Injection of AAV vectors and induction of rat I/R injury:
The rats were anesthetized with an intraperitoneal injection of pentobarbital (50 mg/kg), and their pupils were dilated using topical phenylephrine hydrochloride and tropicamide (Santen). After topical application of 0.4% oxybuprocaine hydrochloride (Santen), 3 μl of tm-scAAV2-BDNF or tm-scAAV2-GFP were intravitreally injected using a 33-gauge Hamilton needle and syringe, as described previously [30]. The same vectors were injected into both eyes. Two weeks later, retinal I/R injury was induced essentially as described previously [31]. The anterior chamber was cannulated with a 30-gauge infusion needle connected to a reservoir containing normal saline. The IOP was then raised to 110 mmHg for 60 min by elevating the saline reservoir. Retinal ischemia was confirmed by whitening of the iris and fundus. After 60 min of ischemia, the needle was withdrawn from the anterior chamber, and the IOP was normalized. We also set up a control non-transduced group that was defined as "no vehicle." Semiquantitative real-time PCR in vivo: To measure BDNF gene expression in the rat retina (n = 6), samples were transduced in the same manner described for the enzyme-linked immunosorbent assays (ELISAs). BDNF mRNA was then measured using a previously described method [32] with primers obtained from OriGene (Cat# HK201084). As a control, GAPDH was used. PCR conditions were as follows: 95 °C for 10 s, followed by 40 cycles of 95 °C for 5 s and 60 °C for 34 s.
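As an illustration of how a target/reference expression ratio can be obtained from real-time PCR data, the sketch below uses the common 2^(-ΔCt) normalization; whether the authors used exactly this scheme is not stated in the text, and the Ct values are hypothetical.

```python
def relative_expression(ct_target, ct_reference):
    """Relative expression of a target vs. reference gene via 2^(-dCt)."""
    delta_ct = ct_target - ct_reference
    return 2.0 ** (-delta_ct)

ct_bdnf, ct_gapdh = 24.1, 18.7  # hypothetical cycle-threshold values
print(relative_expression(ct_bdnf, ct_gapdh))  # BDNF/GAPDH ratio
```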
ELISA in vitro and in vivo:
An ELISA was used to assess expression of tm-scAAV2-BDNF in vitro. C2C12 cells (ATCC ® CRL-1772™, Manassas, VA) maintained in Dulbecco's modified Eagle's medium (DMEM, Sigma-Aldrich, St. Louis, MO) supplemented with 8% fetal bovine serum were seeded to a density of 3×10 4 cells/well in a 24-well plate and allowed to adhere for 6 h, after which they were transduced with 5 μl of tm-scAAV2-BDNF. Twenty-four hours later, the cells were washed 3 times to remove any BDNF that originated from the C2C12 cells before transduction. After incubation for an additional 3 days, the supernatants were collected and preserved at −80 °C until analyzed (n = 3 in each group). The BDNF levels were determined using a human BDNF Quantikine ELISA kit according to the manufacturer's protocol (R&D Systems, Minneapolis, MN).
To measure retinal BDNF production, the sclera-choroid-RPE complexes and neurosensory retinas were isolated from the rats 7 days after the induction of I/R. Thereafter, samples of the sclera-choroid-RPE complex and neurosensory retina were placed in 200 μl of saline and homogenized. The resultant lysate was centrifuged at 45,000 ×g for 5 min at 4 °C, and the BDNF levels in 50 μl aliquots of supernatant were determined using a human BDNF Quantikine ELISA kit. In addition, the total protein concentrations were measured using a protein assay system (Bio-Rad Protein Assay Dye Reagent Concentrate; Bio-Rad Laboratories, Hercules, CA). The final results are expressed as picograms of BDNF per milligrams of total retinal protein in the in vivo experiment.
Thickness of the inner retina and cell counts in the RGC layer:
Thinning of the inner retina is apparent after retinal I/R injury [31]. To measure retinal thickness, the rats (n = 6) were anesthetized with an intraperitoneal injection of pentobarbital (50 mg/kg) 7 days after injury, after which sections were prepared as previously described [31]. In brief, the eyes were enucleated and fixed for 1 h in 1% glutaraldehyde (GA) and 4% paraformaldehyde (PFA) in 0.1 M phosphate-buffered saline (PBS; 0.1 g/l MgCl2·6H2O, 0.2 g/l KCl, 0.2 g/l KH2PO4, 8.0 g/l NaCl, 1.15 g/l Na2HPO4, pH 7.2, D5773, Sigma-Aldrich), after which the anterior segments were removed, and the corneas and lenses were discarded. The entire eye cups were then further fixed overnight in 1% GA and 4% PFA at 4 °C and then sequentially transferred in a stepwise manner to PBS with 10% sucrose for 3 h, 20% sucrose for 6 h, and 30% sucrose overnight. The eyes were then frozen in optimum cutting temperature (OCT) compound on dry ice, and 10-μm cryostat sections were cut in a plane parallel to the vertical meridian of the eye. Sections were then stained with hematoxylin and eosin (H&E). Retinal thickness, defined as the total width between the inner limiting membrane and the interface of the inner nuclear layer, was then measured. These measurements were made in an area 1 mm from the optic disc using a light microscope, and the thicknesses measured in three sections were averaged. Simultaneously with the retinal thickness, the cell density in the RGC layer was determined in three sections per eye at a final magnification of 400× using a light microscope. All measurements and cell counts were performed using image analysis software (Photoshop; Adobe Systems, San Jose, CA).
Immunohistochemistry: Eyes were enucleated 7 days after reperfusion and fixed for 1 h in 1% glutaraldehyde (GA) and 4% paraformaldehyde (PFA) in 0.1 M PBS, after which the anterior segments were removed, and the corneas and lenses were discarded. The entire eye cups were then further fixed overnight in 1% GA and 4% PFA at 4 °C and then sequentially transferred in a stepwise manner to PBS with 10% sucrose for 3 h, 20% sucrose for 6 h, and 30% sucrose overnight. The eyes were then frozen in optimum cutting temperature (OCT) compound on dry ice, and 10-μm cryostat sections were cut in a plane parallel to the vertical meridian of the eye. Three sections from each eye were analyzed immunohistochemically, as described previously [33,34]. The primary antibodies used were rabbit anti-GFP (1:1,000; Santa Cruz Biotechnology, Heidelberg, Germany), goat anti-Brn-3a (1:400; sc-31984, Santa Cruz Biotechnology), and rabbit anti-cow glial fibrillary acidic protein (GFAP; 1:500; Dako, Copenhagen, Denmark). The secondary antibodies used were Alexa Fluor 488-conjugated goat anti-rabbit IgG (1:500; Invitrogen, Life Technology Japan, Tokyo, Japan) for GFP, Alexa Fluor 568-conjugated donkey anti-goat immunoglobulin G (IgG, 1:300; Invitrogen) for Brn-3a, and biotinylated anti-rabbit IgG (Vector Laboratories, Burlingame, CA) and Cy3-conjugated streptavidin (Jackson ImmunoResearch Laboratories, West Grove, PA). Sections were then mounted using medium containing 4',6'-diamidino-2-phenylindole (DAPI, Vector Laboratories) and observed under a fluorescence microscope (Olympus DP50, Tokyo, Japan).
Electroretinograms: Six days after I/R injury, full-field electroretinogram (ERG) responses from the experimental and control eyes were simultaneously recorded using a synchronized trigger and summing amplifier (Primus; Mayo, Nagoya, Japan) with a stimulation device (LS-W; Mayo), as described in our previous report [30]. In brief, rats were dark-adapted overnight, then anesthetized with an intraperitoneal injection of ketamine (90 mg/kg) and xylazine (10 mg/kg). Using topical 0.4% oxybuprocaine hydrochloride, the cornea was anesthetized, and the pupils were dilated with 0.5% tropicamide and 0.5% phenylephrine hydrochloride. Dark-adapted ERG responses were recorded using a white light-emitting diode with built-in corneal contact-type bipolar electrodes, which were placed on each cornea. Subcutaneous needle electrodes were also placed in the forehead and earlobe as the negative and ground electrodes. Scotopic responses were then examined. ERG responses were measured according to the International Society for Clinical Electrophysiology of Vision guidelines. Scotopic-adapted standard white flash stimuli were set at 1000 cd/m² for 3 ms. At least three ERG readings were collected from each eye. Experimental and control eyes from each rat were compared at each time point to minimize the effect of individual conditions.
Intensity index of GFAP expression: Levels of GFAP expression in the four groups were analyzed using ImageJ software (Version 1.44, NIH, Bethesda, MD), as previously described [30,35]. Each image was captured using the same camera settings for gain and time (400×, 1/10 s). Data were obtained for each region of interest (ROI) based on pixel intensity and averaged across three images for each eye. Quantitation was performed in a blinded manner.
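The intensity index amounts to averaging mean pixel intensities over regions of interest. A minimal numpy sketch of this computation is shown below; the array-based data layout is an assumption, since the original analysis was done in ImageJ.

```python
import numpy as np

def intensity_index(images, roi):
    """Mean pixel intensity inside a rectangular ROI, averaged over images.

    `images` are 2D numpy arrays (grayscale GFAP channel); `roi` is
    (row0, row1, col0, col1), three images per eye as in the paper.
    """
    r0, r1, c0, c1 = roi
    return float(np.mean([img[r0:r1, c0:c1].mean() for img in images]))
```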
Statistical analysis: Morphometric data from different regions in each eye were averaged to provide one value per eye. The mean and standard deviation (SD) for these measurements were calculated for each group, and comparisons between groups were made by using the Student-Newman-Keuls (SNK) method (Excel; Microsoft, Tokyo, Japan). A p value of less than 0.05 was considered statistically significant.
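For illustration, per-group means ± SD and a pairwise comparison can be computed as follows. SciPy has no built-in Student-Newman-Keuls test, so this sketch substitutes a two-sample t-test as a rough stand-in for the pairwise step; all numbers are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical per-eye averages (e.g., inner retinal thickness in um).
bdnf_group = np.array([45.1, 44.8, 46.2, 45.9, 44.5, 45.8])
gfp_group = np.array([30.5, 29.8, 31.0, 30.1, 29.5, 30.3])

print(f"BDNF: {bdnf_group.mean():.1f} ± {bdnf_group.std(ddof=1):.1f}")
print(f"GFP:  {gfp_group.mean():.1f} ± {gfp_group.std(ddof=1):.1f}")
t, p = stats.ttest_ind(bdnf_group, gfp_group)
print(f"p = {p:.3g}")  # p < 0.05 would be considered significant
```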
Pattern of tm-scAAV2-GFP expression in normal and I/R-injured retinas:
Petrs-Silva et al. reported that the tm-scAAV2 vector exhibited high transduction efficiency following intravitreous injection [26] and that this vector enhanced the efficiency of ganglion cell transduction by >30-fold after intravitreal injection [25]. Upon intravitreal administration, tm-scAAV2-GFP transduced numerous ganglion cell layer (GCL) neurons, Müller cells, and inner nuclear layer (INL) cells in healthy and I/R-injured retinas (Figure 1). In subsequent experiments, therefore, the tm-scAAV2 vector was used to express BDNF in the rat retina.
BDNF expression: Using semiquantitative real-time PCR to analyze BDNF gene expression, we found that the BDNF/GAPDH ratio for the tm-scAAV2-BDNF group was significantly higher than in the tm-scAAV2-GFP group (Figure 2A). In addition, a specific ELISA showed that there was a corresponding increase in the levels of the BDNF protein (Figure 2B). Furthermore, the BDNF levels of the tm-scAAV2-BDNF-transduced C2C12 cells were significantly higher than those of the non-transduced cells (16.6±0.85 ng/ml versus 0.175±0.05 ng/ml; p<0.01; data not shown). The BDNF levels in the neurosensory retina from rats treated with tm-scAAV2-BDNF were also significantly higher than in retinas treated with tm-scAAV2-GFP.
Effect of AAV vector-mediated BDNF expression on histopathological changes, cell density in the RGC layer, and Brn-3a positivity in retinal cross sections: To evaluate the protective effect of tm-scAAV2-BDNF, we examined histopathologic and morphometric changes 7 days after retinal I/R injury. The retinal thickness of the normal group was 45.5±2.9 μm. In the tm-scAAV2-BDNF-treated group, the retinal structure was thicker and nearly normal, whereas retinas in the tm-scAAV2-GFP-treated group exhibited marked thinning and atrophy, as did retinas in the no vehicle I/R-injured group (26.6±3.7 μm; Figure 3A). Using quantitative morphometry, we also determined that the I/R-injured retinas treated with tm-scAAV2-BDNF (45.4±4.2 μm) were significantly thicker than the retinas treated with tm-scAAV2-GFP (30.2±3.0 μm; Figure 3B; p<0.01; n = 6 in each group).
GCL cell counts were made on cross-sectional slides stained with H&E [36,37]. No attempt was made to distinguish RGCs from displaced amacrine cells or other neuron-like cells, but morphologically distinguishable glial cells and vascular endothelial cells were excluded. The mean number of cells in the RGC layer per section in rats treated with tm-scAAV2-BDNF was significantly greater than in rats treated with tm-scAAV2-GFP (Figure 3C).
Electroretinograms: Scotopic ERG results are summarized in Figure 5. Figure 5A contains representative ERG recordings; Figure 5B shows the b-wave amplitudes. There was no statistical difference in b-waves between the normal (990.4±148.2 μV) and tm-scAAV2-BDNF (862.6±146.6 μV) groups. The tm-scAAV2-BDNF treatment provided significant protection to retinas compared to the no vehicle (467.1±256.7 μV; p<0.01) and tm-scAAV2-GFP (484.8±201.6 μV; p<0.01) treatments. In addition, the absence of a significant difference between the no vehicle and tm-scAAV2-GFP groups indicates that AAV2 transduction causes no obvious toxicity.
Transduction of retinal neurons and expression of GFAP in I/R-injured retina:
GFAP staining is a widely used molecular indicator of retinal stress [38]. We investigated the immunohistochemical changes in GFAP 7 days after retinal I/R injury. Shown in Figure 6A are representative images of normal and I/R-injured retinas. GFAP immunostaining was significantly diminished in the tm-scAAV2-BDNF (4.8±4.4) group compared to the no vehicle (15.8±8.8; p<0.01) and tm-scAAV2-GFP (12.3±0.6; p<0.01; Figure 6B) groups.
Figure 2. BDNF gene and protein expression. Relative levels of the brain-derived neurotrophic factor (BDNF) mRNA (ratio of BDNF/GAPDH) (A) and protein (B) were significantly increased in the neurosensory retinas of rats treated with the mutant (triple Y-F) self-complementary adeno-associated virus type 2 vector encoding BDNF (tm-scAAV2-BDNF). n = 6 in each group. Bars depict means ± standard deviation (SD).
Figure 3. Thickness of the inner retina and cell counts in the ganglion cell layer. A: Images of representative slices of healthy retinas and ischemia/reperfusion (I/R)-injured retinas with no vehicle, or with mutant (triple Y-F) self-complementary adeno-associated virus type 2 vector encoding green fluorescent protein (tm-scAAV2-GFP), or tm-scAAV2-brain-derived neurotrophic factor (BDNF) treatment. Thickness is defined as the total width between the inner limiting membrane and the interface of the inner nuclear layer (double arrow). Scale bar, 50 μm. B: Inner retinal thicknesses in each treatment group. n = 6 in each group. Bars depict means ± standard deviation (SD). The thickness of the I/R-injured retinas treated with the tm-scAAV2-BDNF vector was significantly greater than that of the no vehicle (**p<0.01) retinas and the retinas treated with the tm-scAAV2-GFP vector (**p<0.01). C: Number of cells in the retinal ganglion cell (RGC) layer per section. A significantly greater number of cells was retained after I/R injury in the RGC layer of retinas treated with tm-scAAV2-BDNF than in the no vehicle (*p<0.05) retinas or those treated with the tm-scAAV2-GFP vector (**p<0.01). Bars depict means ± standard deviation (SD).
Figure 5 (caption, fragment): ...vector encoding green fluorescent protein (tm-scAAV2-GFP), or tm-scAAV2-brain-derived neurotrophic factor (BDNF). B: b-wave amplitudes 6 days after ischemia. Bars depict means ± standard deviation (SD). There was no statistical difference in the b-waves between the normal retinas (n = 8) and the tm-scAAV2-BDNF-treated retinas (n = 6; p>0.05). The tm-scAAV2-BDNF-treated retinas were significantly protected compared to the no vehicle (n = 8; **p<0.01) and tm-scAAV2-GFP (n = 3; **p<0.01) groups.
DISCUSSION
In this study, we used the tm-scAAV2 vector to achieve high transduction efficiency in retinas after a single intravitreous injection. The ELISA results showed that high levels of BDNF were produced in the rat neural retina after tm-scAAV2 administration, and histological analysis demonstrated that the BDNF exerted protective effects at the inner retina resulting in the presence of a greater number of cells in the RGC layer after transient IOP elevation. Functionally, ERG analyses confirmed the preservation of retinal function after transient IOP elevation. Collectively, these results indicate that tm-scAAV2-BDNF is potentially useful for neuroprotective gene therapy.
We used the rat transient IOP elevation model, which induces retinal I/R injury and has the characteristics of retinal artery occlusion [39] and glaucoma [40]. In an earlier study [23], supplemental BDNF protein and ssAAV2-BDNF were administered 6 h after retinal ischemia/reperfusion. The supplemented BDNF protein was effective for the acute phase of retinal artery occlusion, and the slow-onset AAV-mediated BDNF expression was effective for the later phase, which is probably more similar to the gradual loss of RGCs observed in chronic glaucoma models [8,41]. In the present experiment, the tm-scAAV2-BDNF vector was administered 2 weeks before retinal ischemia-reperfusion. We reasoned that 2 weeks should be sufficient to induce gene expression using the tm-scAAV2 vector. An intravitreally delivered tm-scAAV2 vector was previously reported to have marginally higher transduction efficiency than ssAAV2 and to mediate rapid expression in the inner retinal cells [42]. Similarly, we observed tm-scAAV2-GFP expression 2 days after intravitreous injection, and the intensity of the tm-scAAV2-GFP expression gradually increased thereafter (data not shown). These results suggest that at the time of retinal I/R, the tm-scAAV2 vector was mediating sufficient BDNF expression to provide a neuroprotective effect.
To evaluate retinal damage due to I/R injury, tissue sections were immunostained for GFAP, a widely used molecular indicator of retinal stress. Rats administered tm-scAAV2-GFP showed substantial GFAP expression following I/R injury, whereas tm-scAAV2-BDNF suppressed GFAP expression, presumably reflecting amelioration of the I/R injury. In healthy retinas, the GFAP-positive cells are astrocytes. In injured retinas, Müller cells (retina-specific glial cells) react with the anti-GFAP antibody across the retinal layers [43]. Similar GFAP activation has also been observed in the neuronal retina in human glaucoma [44] and in an experimental glaucoma model [45]. Our findings suggest tm-scAAV2-BDNF reduces the retinal damage caused by transient IOP elevation.
Figure 6. Immunohistochemical analysis of GFAP. A: Representative slice showing glial fibrillary acidic protein (GFAP) expression in healthy (n = 4) and ischemia/reperfusion (I/R)-injured retinas. Scale bar, 50 μm. B: Intensity index (GFAP) calculations revealed significantly lower GFAP levels in I/R-injured retinas following mutant (triple Y-F) self-complementary adeno-associated virus type 2 vector encoding brain-derived neurotrophic factor (tm-scAAV2-BDNF) treatment (n = 4) than in the no vehicle (n = 3; **p<0.01) retinas or the retinas treated with tm-scAAV2-GFP (n = 3; **p<0.01). Bars depict means ± standard deviation (SD).
BDNF plays a key role in the central and peripheral nervous systems, supporting the survival of existing neurons and encouraging the growth and differentiation of new neurons and synapses [46,47]. Compared to the retinas of healthy mice, the retinas in mice lacking BDNF have smaller retinal ganglion cell axons, reduced hypomyelination, and lower density of GCL cells and horizontal, bipolar, and amacrine cells [48,49]. Administration of BDNF to mice lacking BDNF increased the density of the GCL, horizontal, bipolar, and amacrine cells [48]. We found that tm-scAAV2-GFP has the capacity to directly transduce GCL and Müller cells. BDNF derived from tm-scAAV2-transduced GCL cells may act directly to protect neuronal cells, as tyrosine kinase receptor B (trkB), the BDNF receptor, is expressed in GCL and Müller cells [50]. TrkB mediates the effect of BDNF, which includes neuronal differentiation and survival. In addition, it was also recently reported that TrkB plays a role in indirect neuronal protection and is indispensable for the renewal of neuronal cells [51]. It may be crucial that GCL and Müller cells are transduced with BDNF for neuronal protection to be achieved.
Although the present findings suggest strong BDNF expression could play an important role in the treatment of various retinal diseases, traditional therapeutic approaches remain essential. These therapies include reduction of IOP in glaucoma [52], thrombolytic therapy for retinal artery occlusion [53], reduction of blood pressure in hypertensive retinopathy [54], and photocoagulation in diabetic retinopathy [55]. We suggest BDNF gene therapy may be useful in combination with these traditional approaches.
In conclusion, we have shown that gene therapy with tm-scAAV2-BDNF is an effective means of preventing retinal ischemic damage in rats. This finding demonstrates the feasibility of using an in vivo gene therapeutic approach to the clinical management of retinal diseases such as glaucoma.
ACKNOWLEDGMENTS
This work was supported in part by Grant-in-Aid for Scientific Research (C; No. 23792014 and 26462650) from MEXT (Ministry of Education, Culture, Sports, Science and Technology). We thank Dr. Arun Srivastava at the University of Florida for providing the pACG2-3M (pAAV2-Y730+500+444F) packaging plasmid and scAAV-GFP vector plasmid. Author Disclosure Statement: Igarashi T, Miyake K, Takahashi T and Okada T receive research support through the Nippon Medical School from Teika Pharmaceutical Co.,Ltd. | 2018-04-03T01:44:25.587Z | 2016-07-16T00:00:00.000 | {
"year": 2016,
"sha1": "75e594e5e944ebfae0391b3ba7c638471c3ce12d",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "75e594e5e944ebfae0391b3ba7c638471c3ce12d",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
232185337 | pes2o/s2orc | v3-fos-license | Hot electron generation through near-field excitation of plasmonic nanoresonators
We theoretically study hot electron generation through the emission of a dipole source coupled to a nanoresonator on a metal surface. In our hybrid approach, we solve the time-harmonic Maxwell's equations numerically and apply a quantum model to predict the efficiency of hot electron generation. Strongly confined electromagnetic fields and the strong enhancement of hot electron generation at the metal surface are predicted and are further interpreted with the theory of quasinormal modes. In the investigated nanoresonator setup, both the emitting source and the acceptor resonator are localized in the same volume, and this configuration looks promising to achieve high efficiencies of hot electron generation. By comparing with the efficiency calculated in the absence of the plasmonic nanoresonator, that is, the dipole source is located near a flat, unstructured metal surface, we show that the effective excitation of the modes of the nanoresonator boosts the generation efficiency of energetic charge carriers. The proposed scheme can be used in tip-based spectroscopies and other optoelectronic applications.
I. INTRODUCTION
Light-matter interactions in metal nanostructures can be strongly enhanced by plasmonic resonance effects [1,2]. Hot electron generation, which attracted significant attention in recent years [3][4][5][6][7][8][9][10], is one important effect resulting from the absorption of plasmons by metal surfaces. With this effect, visible light can be harvested and its energy can be transferred to an adjacent semiconductor, where the energy can then be used for photocatalytic processes [11]. The impact of morphology and materials on local field enhancement and hot electron generation is typically investigated in setups with illumination from the far field, e.g., solar illumination and other macroscopic illumination settings [12][13][14]. However, there are also various types of localized light sources accessible, such as plasmonic tips, single molecules, quantum wells, or quantum dots [15][16][17], which have so far not been considered for the generation of excited charge carriers.
The efficiency of hot electron generation in metal nanostructures depends on the magnitude of the electric fields in the vicinity of the nanostructures [5]. Nanofabrication technologies allow the fabrication of plasmonic nanoresonators of various shapes and characteristic sizes well below 100 nm [18], which enables light confinement at the nanometre scale: the plasmonic resonances of deep-subwavelength resonators can be efficiently excited by localized emitters, resulting in highly localized electromagnetic fields at metal surfaces [19,20]. For the design and optimization of nanophotonic devices based on emitter-resonator excitations, modal approaches are a common theoretical tool [21]. The localized surface plasmon resonances of the systems, which are quasinormal modes (QNMs) [21,22], are electromagnetic field solutions to the time-harmonic source-free Maxwell's equations. The corresponding resonance problems are solved numerically [23], and the solutions allow one to obtain insights into the physical properties of the nanophotonic devices.
In this work, we investigate hot electron generation with a localized emitter placed in the near field of a metal nanostructure. In particular, we numerically study a circular nanogroove resonator on a silver surface with a characteristic size of ∼ 40 nm and compare the efficiency of hot electron generation in the presence and absence of the nanoresonator. We compute and analyze the hot electron generation with a quantum model assisted by full-wave simulations and further investigate the impact of geometrical parameters. We numerically demonstrate that the excited localized resonance of the nanoresonator leads to an enhancement of the hot electron generation efficiency of more than one order of magnitude compared to the flat surface.
A. Theoretical background and numerical methods
In nano-optics, in the steady-state regime, the electric fields E(r, ω0) ∈ C^3 resulting from a source field are solutions to the time-harmonic Maxwell's equations in second-order form,

∇ × µ(r)^{-1} ∇ × E(r, ω0) − ω0² ε(r, ω0) E(r, ω0) = iω0 J(r),   (1)

where ω0 ∈ R is the angular frequency, r is the spatial position, and J(r) ∈ C^3 is the electric current density corresponding to the source. The source field for a localized source can be modeled by a dipole source J(r) = j δ(r − r'), where δ(r − r') is the delta distribution, r' is the position of the emitter, and j is the dipole amplitude vector. In the optical regime, the permeability tensor µ typically equals the vacuum permeability µ0. The permittivity tensor ε(r, ω0) describes the spatial distribution of material and the material dispersion.

Table I (caption): Poles Ωk and amplitudes σk for the generalized Drude-Lorentz model [24], where ε0 is the vacuum permittivity, ε∞ = 0.77259, γD = 0.02228 eV, and ωp = 9.1423 eV.
We investigate a dipole emitter placed close to a nanoresonator. The nanoresonator is a circular slit on a silver surface with a depth and width of 10 nm. The structure has corner roundings with a radius of 2 nm. Figure 1 shows a sketch of the geometry of the resonant system. The dipole emitter is polarized parallel to the z direction and located on axis above the central nanocylinder at a separation distance z_de from the metal surface. To clearly separate the effect of localized resonances supported by the circular nanogroove resonator, we also investigate a second setup: a localized source is placed at z_de above a flat, unstructured silver surface. In both cases, the permittivity of the silver material is described by a generalized Drude-Lorentz model resulting from a rational fit [24,25] to experimental data [26], see Tab. I. For the investigations, we choose a spectral region in the optical regime, 200 nm ≤ λ_0 ≤ 700 nm, with the wavelength λ_0 = 2πc/ω_0.
To numerically analyze the dipole emitter interacting with the nanoresonator and with the flat surface, we use the finite element method. Scattering and resonance problems are solved by applying the solver JCMsuite [27]. The solver employs a subtraction field approach for localized sources, adaptive meshing, and higher-order polynomial ansatz functions, and it allows exploiting the rotational symmetry of the geometry [28].
B. Quasinormal mode analysis
When a localized emitter is placed close to a nanostructure, the optical properties of the system are determined by its underlying resonances. Localized surface plasmon resonances, which are QNMs of the system, are one important resonance phenomenon. Figure 1 contains a sketch of a QNM of the nanoresonator which is investigated in this study. QNMs are solutions to Eq. (1) with outgoing wave conditions and without a source field, i.e., J(r) = 0. We denote the electric and magnetic field distributions of a QNM by Ẽ(r) and H̃(r), respectively. The QNMs are characterized by complex eigenfrequencies ω̃ ∈ C with negative imaginary parts. The quality factor Q of a resonance describes its spectral confinement and quantifies the relation between the stored and the dissipated electromagnetic field energy.

[FIG. 2. Simulations of the circular nanogroove resonator supporting one dominant localized resonance in the spectral region of visible light. The associated QNM and its eigenfrequency ω̃ depend on the radius r of the nanoresonator. The permittivity model ε_metal,bulk given in Tab. I is used. (a,b) Resonance wavelength λ̃ = Re(2πc/ω̃) and quality factor Q of the dominant QNM, respectively. (c) Log-plot (a.u.) of the electric field intensity |Ẽ|² corresponding to the dominant QNM of the nanoresonator with r = 10 nm. The QNM is normalized [29] such that $\int_\Omega [\tilde{\mathbf{E}} \cdot \frac{\partial(\omega\epsilon)}{\partial\omega}\tilde{\mathbf{E}} - \mu_0\tilde{\mathbf{H}} \cdot \tilde{\mathbf{H}}]\,\mathrm{d}V = 1$, i.e., the map allows a direct estimation and visual comparison of the interaction strength of the mode with point-like unpolarized dipoles. The corresponding eigenfrequency is ω̃ = (4.330 − 0.018i) × 10^15 s^−1 and the resonance wavelength is λ̃ = 435 nm. (d) Log-plot of the electric field intensity of the normalized QNM corresponding to the circular nanogroove resonator with r = 30 nm.]

In the following section, we investigate how hot electron generation can be increased by the excitation of localized resonances. The physical intuition behind this effect is the following: When a localized source radiating at the frequency ω_0 efficiently couples to a localized resonance, i.e., it is spectrally (ω_0 ≈ Re(ω̃)) and spatially matched with the resonance, then a large electric field E(ω_0, r) around the nanoresonator can be induced by the source. At the resonance frequency ω_0 = Re(ω̃), the induced field intensity |E(ω_0, r)|² is proportional to Q², which can significantly enhance the hot electron generation. Note that |E(ω_0, r)|² is also proportional to (Re(1/Ṽ))², where Ṽ is the mode volume [29] describing the spatial confinement of the electromagnetic field of a resonance.
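As a quick arithmetic cross-check (our own addition; the relation between Q and the complex eigenfrequency is the standard one and is not spelled out in the text above), the eigenfrequency quoted in the figure caption reproduces the quality factor reported below for r = 10 nm:

$Q = \frac{\operatorname{Re}(\tilde{\omega})}{2\,|\operatorname{Im}(\tilde{\omega})|} = \frac{4.330 \times 10^{15}\,\mathrm{s}^{-1}}{2 \times 0.018 \times 10^{15}\,\mathrm{s}^{-1}} \approx 120.$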
In the optical regime, the circular nanogroove resonator sketched in Fig. 1 supports one dominant localized resonance. The resonance wavelength λ̃ = Re(2πc/ω̃) decreases with an increasing circular slit radius r, see Fig. 2(a). Figure 2(b) shows Q, depending on r, where Q = 120 can be observed for r = 10 nm. Note that, for smaller radii, due to the decreasing radiation loss, the quality factor would increase further. However, we restrict the investigations to r ≥ 10 nm. Figure 2(c) shows the electric field intensity of the dominant resonance for r = 10 nm. The resonance is strongly localized at the circular slit and is characterized by high electric field values inside and close to the metal. Figure 2(d) shows the electric field intensity of the dominant resonance for r = 30 nm. It can be observed that, in comparison to the resonance for r = 10 nm, the electric field intensity becomes smaller at the metal surface. The ratio between stored and dissipated electromagnetic field energy decreases with an increasing radius. For the following investigations, we consider the circular nanogroove resonator shown in Fig. 2(c), which has a radius of r = 10 nm and a quality factor of Q = 120.
C. Dipole emission and absorption
To quantify the interaction of the circular nanogroove resonator with a dipole emitter close to the resonator, we investigate the total power emitted by the dipole, which is also called dipole emission. The dipole emission can be computed by

$p_{\mathrm{de}}(\omega_0) = -\frac{1}{2}\,\operatorname{Re}\left[\mathbf{E}^*(\mathbf{r}', \omega_0) \cdot \mathbf{j}\right],$

where E*(r′, ω_0) is the complex conjugate of the electric field, r′ is the position of the emitter, and j is the dipole amplitude vector. The electric field E(r, ω_0) is computed by solving Eq. (1) with a dipole source. Based on the modal results from the previous subsection, we place the dipole emitter at z_de = 20 nm, which is in a spatial region of high electric field intensity of the dominant resonance shown in Fig. 2(c). In this way, the localized resonance of the circular nanogroove resonator has a significant influence on the emission properties of the dipole emitter. Figure 3(a) shows the dipole emission p_de(λ_0). In the case of the nanoresonator, the spectrum is characterized by two significant maxima, which are based on different resonance effects: The dipole emitter couples to the dominant localized resonance with the resonance wavelength λ̃ = 435 nm, and it also couples to a continuum of surface plasmons, which are propagating on the metal surface. As expected, the propagating surface plasmons occur not only in the presence of the nanoresonator, but also in the case of the flat surface. Their high density of states gives rise to a peak in the spectrum between λ_0 = 300 nm and λ_0 = 400 nm, as indicated in Fig. 3(a), where the coupling of the dipole emitter to the propagating surface plasmons is stronger in the absence of the nanoresonator.
It can be expected that, for the investigated systems, all energy that is not radiated into the upper hemisphere is absorbed by the metal. Therefore, the total absorbed energy can be computed using the expression

p_abs(ω_0) = p_de(ω_0) − p_rad(ω_0).

The dipole emission radiated into the upper hemisphere, p_rad(ω_0), is computed by a near-field to far-field transformation and an integration of the Poynting vector over the upper hemisphere. Figure 3(b) shows the absorption p_abs(λ_0) normalized by the dipole emission p_de(λ_0) for z_de = 20 nm. It can be observed that, close to the wavelength of the localized resonance, most of the energy is absorbed. As the presence of the nanoresonator increases the electromagnetic field energy in the metal, the system with the nanoresonator leads to a higher absorption efficiency than the system with the flat surface.
To summarize, the simulations in this subsection show that a localized source can efficiently excite localized resonances supported by a nanoresonator, as well as propagating surface plasmons on flat metal surfaces. In the following section, it is shown that excited localized resonances in particular can have a significant impact on the rate at which hot electrons can be generated in our model system.
A. Theoretical background
Considering quantum surface effects in plasmonics, one should start from the elegant theory developed by Feibelman to describe surface plasmon dispersion in metals [30,31]. The so-called Feibelman d-parameters characterize the dispersion and damping of the surface plasmon mode beyond the classical electromagnetic theory. Furthermore, it was discovered that the plasmon excitations in small nanoparticles experience an additional damping mechanism, the so-called surface-scattering decay [32]. In this quantum mechanism, collective plasmon excitations turn into hot electrons due to scattering at the surfaces [33-38]. A full kinetic picture of the plasmon excitation in a nanostructure involves both low-energy "Drude" electrons forming the coherent plasmon oscillation and the energetic (hot) electrons generated through the surface-assisted Kreibig's mechanism [39]. The low-energy excitations, regarded above as Drude electrons, can also be derived directly from the quasi-classical theory based on the Boltzmann equation [40,41]. Another related work, which should be mentioned here, is the theory of hot electron photocurrents generated at metal-semiconductor interfaces [42-45]. In our approach, we combine some of the quantum formalisms mentioned above [33,38,39,45] with the classical formalism of computing the electromagnetic fields at the surfaces by solving Maxwell's equations. The theoretical treatment below, which incorporates the surface-assisted generation of hot electrons, is very convenient since it allows the investigation of nanostructures with arbitrarily complex shapes, in which hot-spot and shape effects determine the formation of plasmonic modes. We note that our formalism does not include the bulk mechanism of hot electron generation due to electron-phonon scattering [46]. However, such a phonon-assisted channel should not play a dominant role in relatively small nanostructures where plasmonic mode sizes are less than 40 nm [46]. In our case, the groove size of the nanostructure is just 10 nm, and we expect that the leading mechanism is the surface-assisted hot electron generation. Another argument for the importance of the surface-generated hot electrons is that those carriers are created at the surface and, therefore, can be transferred to surface acceptor states for photochemistry or for other detection methods.
B. Quantum efficiency of hot electron generation
The rate of energy dissipation based on the generation of hot electrons at a surface is given by [47]

$p_{\mathrm{he}}(\omega_0) = \frac{1}{2\pi^2}\,\frac{e^2 E_F^2}{\hbar\,(\hbar\omega_0)^2} \int_S |E_n(\mathbf{r}, \omega_0)|^2 \,\mathrm{d}S,$   (2)

where e is the elementary charge, E_F is the Fermi energy, and ℏ is the reduced Planck constant. The normal component of the electric field E_n(r, ω_0) is integrated over the surface S. For a detailed derivation of Eq. (2), the reader is referred to ref 47. The quantum dissipation p_he(ω_0) is based on optically induced quantum transitions of electrons near the surface: The energy of photons can be transferred to the electrons because of the breaking of linear momentum conservation. This surface scattering effect can be accounted for by a phenomenological approach for metal nanostructures [34,37,38]. An additional damping mechanism with the quantum decay parameter γ_s is incorporated in the material model,

$\epsilon(\omega_0) = \epsilon_{\mathrm{metal,bulk}}(\omega_0) + \frac{\epsilon_0\,\omega_p^2}{\omega_0^2 + i\gamma_D\omega_0} - \frac{\epsilon_0\,\omega_p^2}{\omega_0^2 + i(\gamma_D + \gamma_s)\omega_0},$   (3)

where ε_metal,bulk(ω_0) is the permittivity model for the metal bulk material, and ω_p and γ_D are the plasma frequency and the damping constant from the Drude model, respectively, see Tab. I. The quantum decay parameter γ_s describes the broadening due to the scattering of electrons at the surface. For the calculation of γ_s, we consider the total absorption power in a metal nanostructure, given by $p_{\mathrm{abs}} = \operatorname{Im}(\epsilon(\omega_0))\,\frac{\omega_0}{2} \int_V \mathbf{E} \cdot \mathbf{E}^* \,\mathrm{d}V$, where ε(ω_0) is the permittivity model from Eq. (3). It is assumed that ω_0² ≫ (γ_D + γ_s)², which holds for typical cases in nanophotonics. Applying the resulting simplification Im(ε(ω_0)) ≈ Im(ε_metal,bulk(ω_0)) + ε_0 ω_p² γ_s/ω_0³ and splitting the absorption power p_abs into contributions corresponding to bulk and surface effects yield, in particular, the surface-scattering term $p_s = \epsilon_0\,\frac{\omega_p^2 \gamma_s}{2\omega_0^2} \int_V \mathbf{E} \cdot \mathbf{E}^* \,\mathrm{d}V$ [37,38]. This term can also be computed using Eq. (2). The equation p_he = p_s can be transformed and allows computing the quantum decay parameter γ_s. A corresponding numerical iterative approach is given by [38]

$\gamma_{s,n} = \frac{3}{4}\, v_F\, \frac{\int_S |E_n(\mathbf{r}, \omega_0, \gamma_{s,n-1})|^2 \,\mathrm{d}S}{\int_V \mathbf{E}(\mathbf{r}, \omega_0, \gamma_{s,n-1}) \cdot \mathbf{E}^*(\mathbf{r}, \omega_0, \gamma_{s,n-1}) \,\mathrm{d}V}, \quad n = 1, 2, \ldots,$   (4)

where γ_s,0 = 0, v_F is the Fermi velocity, and the electric fields are computed by solving Eq. (1) numerically; subsequently, they are integrated over the surface S and the volume V of the considered nanostructure. For the computation of the electric fields within the iteration, the material model given by Eq. (3) is used. Note that, for γ_s,0 = 0, we obtain ε(ω_0) = ε_metal,bulk(ω_0) as used for the calculations of the optical problem in the previous section. We further note that a formalism for γ_s,n can also be derived without the assumption ω_0² ≫ (γ_D + γ_s)² [38]. The consideration of the quantum decay parameter γ_s,n is equivalent to solving a self-consistent quantum-classical formalism which fully accounts for the change of the surface response caused by the generation of hot electrons. With this approach, the total power emitted by a dipole can be expressed as p_de(ω_0) = p_abs,bulk(ω_0) + p_he(ω_0) + p_rad(ω_0), where p_abs,bulk(ω_0) is the absorption in the metal bulk. We define the quantum efficiency of hot electron generation as the ratio η_he(ω_0) = p_he(ω_0)/p_de(ω_0). This parameter describes the fraction of the dipole energy converted into hot electrons. The efficiency of the absorption in the metal bulk and the radiation efficiency are defined as η_abs,bulk(ω_0) = p_abs,bulk(ω_0)/p_de(ω_0) and η_rad(ω_0) = p_rad(ω_0)/p_de(ω_0), respectively.
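To make the self-consistent loop concrete, the following minimal Python sketch implements the fixed-point iteration of Eq. (4). The routine solve_fields is a hypothetical wrapper around the Maxwell solver and is not part of the original workflow: given γ_s, it is assumed to solve Eq. (1) with the material model of Eq. (3) and return the surface integral of |E_n|² and the volume integral of E · E*. The 10^−2 relative tolerance mirrors the abort condition quoted in the next paragraph.

def iterate_gamma_s(solve_fields, v_fermi, tol=1e-2, max_iter=10):
    """Fixed-point iteration for the quantum decay parameter (Eq. 4).

    solve_fields(gamma_s) -> (surf_int, vol_int): assumed solver wrapper
    returning the integral over S of |E_n|^2 and over V of E . E*.
    """
    gamma_s = 0.0  # gamma_{s,0} = 0, i.e., the bulk permittivity model
    for _ in range(max_iter):
        surf_int, vol_int = solve_fields(gamma_s)
        gamma_next = 0.75 * v_fermi * surf_int / vol_int  # (3/4) v_F S/V
        # abort condition |gamma_n - gamma_{n-1}| / gamma_n < tol
        if gamma_next > 0.0 and abs(gamma_next - gamma_s) / gamma_next < tol:
            return gamma_next
        gamma_s = gamma_next
    return gamma_s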
To investigate the effect of hot electron generation for the circular nanogroove resonator, we choose, as in the previous section, the dipole-to-surface distance z_de = 20 nm, and solve Eq. (1) with the introduced permittivity model in Eq. (3). The Fermi energy and the Fermi velocity of silver are given by E_F = 5.48 eV and v_F = 1.39 × 10^6 m/s [48], respectively. The quantum decay parameter γ_s,n is obtained by the iteration in Eq. (4), where the abort condition for the iteration is |γ_s,n − γ_s,n−1|/γ_s,n < 10^−2. For all simulations, with an initial value of γ_s,0 = 0, this convergence condition can be achieved within a maximum of four iterations. The electric fields E(r, ω_0) resulting from this procedure are used to compute p_de(ω_0), p_he(ω_0), and p_rad(ω_0). To obtain the absorption in the metal bulk, we use the expression p_abs,bulk(ω_0) = p_de(ω_0) − p_he(ω_0) − p_rad(ω_0). Note that the quantum decay parameter γ_s,n and, therefore, the quantum dissipation p_he(ω_0) depend on the size of the surface S and on the size of the volume V in Eq. (4). For example, for a system radiating at the wavelength of the localized resonance shown in Fig. 2(c), p_he(λ_0 = 435 nm) changes by less than 1 % when the radius of the integration domains is doubled from 1 µm to 2 µm. We choose a fixed integration radius of 2 µm for all simulations. Figure 4(a) shows the computed efficiencies η_abs,bulk(λ_0), η_he(λ_0), and η_rad(λ_0) and the corresponding absolute values of the dipole emission p_de(λ_0). In the full spectral range, due to the small dipole-to-surface distance, a large part of the power emitted by the dipole is absorbed in the metal bulk, and only a smaller part is radiated to the upper hemisphere. The quantum efficiency of hot electron generation η_he(λ_0) is significant in the spectral regions corresponding to the localized resonance shown in Fig. 2(c) and to the propagating surface plasmons. A comparison of the results for p_de(λ_0) in Fig. 4(a) and in Fig. 3(a) shows a slight reduction of p_de(λ_0) when the quantum decay parameter γ_s,n is incorporated in the material model. However, the peaks of p_de(λ_0) are still present, which demonstrates that the optical resonance effects are the main drivers for hot electron generation in our model system. In both cases, with and without including the surface-scattering effect in the material model, the maximum of the dipole emission p_de(λ_0) is located at the resonance wavelength of the localized resonance, at λ_0 = 435 nm. Next, we compare the quantum efficiency in the presence of the nanoresonator with the quantum efficiency for a flat, unstructured surface. Figure 4(b) shows the corresponding spectra η_he(λ_0). In the case of the nanoresonator, the maximum of the quantum efficiency is located close to the resonance wavelength of the localized resonance and is given by η_he(λ_0 = 431 nm) = 0.52, which is about one order of magnitude larger than in the case of the flat surface. The propagating surface plasmons are responsible for another maximum, η_he(λ_0 = 346 nm) = 0.32. In the case of the flat surface, the quantum efficiency shows one maximum at the wavelength λ_0 = 360 nm, given by η_he(λ_0 = 360 nm) = 0.17. The spectra η_he(λ_0) demonstrate that the presence of the nanoresonator has a significant influence on the generation of energetic charge carriers.
Figure 4(c) and (d) emphasize this by showing, for the circular nanogroove resonator and the flat surface, respectively, the electric field intensities in the vicinity of the dipole emitter radiating at the wavelength λ_0 = 431 nm, where the quantum efficiency is maximal. The localized source strongly excites the localized resonance of the nanoresonator, which leads to high electric field values at the metal surface enabling enhanced hot electron generation. Note that, close to the wavelength of the localized resonance, the absolute values for the dipole emission p_de(λ_0) are more than one order of magnitude larger for the system with the nanoresonator than for the system without the nanoresonator, see also Fig. 3(a).
C. Dependence of hot electron generation on emitter placement
Localized light sources can excite resonances that cannot be excited by illumination from the far field, such as dark surface plasmon modes [19] or modes where the overlap integral with the field caused by the far-field illumination is negligible. This allows for additional degrees of freedom in tailoring the light-matter interaction. It can be expected that the position of the dipole emitter in our model system is a degree of freedom that has a significant influence on the generation of excited charge carriers. For investigating this impact, we perform simulations of the hot electron generation with various dipole-to-surface distances. The corresponding results are shown in Figure 5(a). In the full spectral range, with a decreasing dipole-to-surface distance from z_de = 500 nm to z_de = 10 nm, the quantum efficiency η_he(λ_0) strongly increases. The most significant effect can be observed at the peak in the spectrum corresponding to the localized resonance. This can be explained through the z_de-dependent overlap between localized resonance and source near fields: The resonance excitation and the resulting electromagnetic near fields increase when the dipole-to-surface distance becomes smaller. Note that, below 20 nm, the efficiency at the peak does not further increase significantly with a decrease of the distance. This can be understood by considering that, below 20 nm, almost all emitted energy has already been funneled into the localized resonance, and a further decrease of the distance does not change the electric field distribution near the metal surface. Such a saturation of the hot electron generation efficiency can only be predicted with self-consistent formulas, as given by Eqs. (1), (3), and (4).
Next, we investigate the behavior of the resonance-induced hot electron generation peak by performing a fine sampling of the dipole-to-surface distance z_de. Figure 5(b) shows the corresponding dependence of the quantum efficiency η_he. In the case of the nanoresonator, the quantum efficiency varies over one order of magnitude, from 3 % to 52 %, when the distance decreases from 150 nm to 20 nm. In the case of the flat surface, the quantum efficiency only increases from 2 % to 7 % when the distance decreases from 150 nm to 20 nm.

[FIG. 5. (caption, beginning truncated) … the material model given by Eq. (3) is used. (a) Quantum efficiency η_he as a function of emitter wavelength for various distances z_de. (b,c) Quantum efficiency η_he and quantum decay parameter γ_s,n, respectively, depending on z_de. The number n is the last step of the iteration in Eq. (4). Note that the emitter wavelength changes as z_de is varied to match the spectral position of the peak in the spectrum due to the localized resonance. The same wavelength is used for the flat surface.]
By changing the dipole-to-surface distance further, from z_de = 20 nm to z_de = 10 nm, an additional significant effect can be observed in the case of the flat surface: The quantum efficiency increases by more than one order of magnitude, up to η_he = 0.46. For such small distances, high-k surface plasmon polaritons can be excited [49]. These high-k surface plasmons have a very small skin depth, which leads to strongly confined electric fields close to the metal surface. This strong effect is not observed when the nanoresonator is present because, in this case, the response is fully dominated by the localized resonance and the energy does not funnel into high-k surface plasmons. As a result, when z_de = 10 nm, the same order of magnitude of quantum efficiency is obtained in the presence and in the absence of the nanoresonator. Figure 5(c) shows the dependence of the quantum decay parameter γ_s,n on the distance z_de. The quantum dissipation at the surface and the absorption power in the metal bulk are related to the numerator and the denominator in Eq. (4), respectively. For decreasing dipole-to-surface distances, the quantum dissipation increases faster than the absorption in the metal bulk, leading to an increase of γ_s,n.
Along with the additional broadening of the plasmon resonance described by γ_s,n, the surface-assisted hot electron generation processes create a peculiar, nonthermal energy distribution of excited electrons inside a driven plasmonic nanocrystal [38,47]. The computed shapes of nonthermal energy distributions in a nanocrystal can be found in the refs 38 and 47. The intraband hot electrons, which we study here, are generated near the surface, and their spectral generation rate has a nearly-flat distribution in the energy interval E_F < E < E_F + ℏω_0. Because of the frequent electron-electron collisions, the high-energy hot electrons experience fast energy relaxation. Therefore, the resulting numbers of hot electrons in the steady states of plasmonic nanostructures are always limited. However, those hot electrons, when generated, have a good chance to be injected into electronic acceptor states at the surface [3,6,7,10,50,51]. These electronic acceptors can be in the form of semiconductor clusters (TiO_2) [50,51] or adsorbed molecular species [7,10]. Consequently, the injected long-lived hot electrons can cause chemical reactions in a solution [6,7,10] or surface growth [52]. Such chemical and shape transformations can be observed in experiments.
Based on the above results, we expect that, in potential experimental setups using hot electron generation by localized sources and nanostructured samples, the significant spectral dependence and position dependence of the generation rate can provide strong experimental signatures and thus guidelines for settings with high-efficiency hot electron generation.
IV. CONCLUSIONS
We analyzed the hot electron generation due to the emission of light by a localized emitter placed in the near field of a metal nanoresonator with electromagnetic field calculations and an approximate quantum model. For a resonant nanostructure on the metal surface, enhanced hot electron generation was observed. This enhancement is based on a plasmonic resonance excited by the emitter. We showed that, for a specific nanoresonator on a silver surface, the quantum efficiency is about one order of magnitude larger than the quantum efficiency of hot electron generation in the case of a flat silver surface. We further demonstrated that the hot electron generation depends strongly on the emitter wavelength and on the placement of the emitter. In particular, the resonance significantly favors these effects.
The physical reason behind the efficient energy conversion in our system is that both the exciting source and the nanoresonator have the same dimensionality: They are zero-dimensional and, therefore, highly localized. Experimentally, a zero-dimensional source of radiation is the key element in the field of tip-enhanced spectroscopies, which includes scanning near field optical microscopy (SNOM) [53,54], hot electron nanoscopy [55], and hot electron tunneling settings [56]. In tip-driven spectroscopy, electromagnetic fields and the related hot electron excitation processes become strongly confined in small volumes, leading to a strong enhancement of light-matter interaction. Our approach can also be used to investigate coatings with quantum dots or other emitters on resonance-supporting surfaces. The presented study provides a theoretical background for hot electron generation with localized light sources. | 2021-03-12T02:16:29.143Z | 2021-03-11T00:00:00.000 | {
"year": 2021,
"sha1": "08ccefb8e8bd7e0281f9bad14593bbce26760896",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2103.06652",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "08ccefb8e8bd7e0281f9bad14593bbce26760896",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
264291327 | pes2o/s2orc | v3-fos-license | Novel immune-related gene signature for risk stratification and prognosis prediction in ovarian cancer
Background The immune system plays a multifaceted role in ovarian cancer (OC) and is a significant mediator of ovarian carcinogenesis. Various immune cells and immune gene products play an integrated role in OC progression, demonstrating the significance of the immune microenvironment for prognosis. Therefore, we aimed to establish and validate an immune gene prognostic signature for predicting the prognosis of OC patients. Methods Differentially expressed immune-related genes (DEIRGs) were identified in 428 OC and 77 normal ovary tissue specimens from 9 independent GEO datasets. The Cancer Genome Atlas (TCGA) cohort was used as the training cohort, and univariate Cox analysis was used to identify prognostic DEIRGs in the TCGA cohort. Then, an immune gene-based risk model for prognosis prediction was constructed using LASSO regression analysis, and the accuracy and stability of the model were validated in 374 and 93 OC patients in the TCGA training cohort and the International Cancer Genome Consortium (ICGC) validation cohort, respectively. Finally, the correlations among the risk score model, clinicopathological parameters, and immune cell infiltration were analyzed. Results Five DEIRGs were identified to establish the immune gene signature, which divided OC patients into low- and high-risk groups. In the TCGA and ICGC datasets, patients in the low-risk group showed a substantially higher survival rate than the high-risk group. Receiver operating characteristic (ROC) curves, t-distributed stochastic neighbor embedding (t-SNE) analysis and principal component analysis (PCA) showed the good performance of the risk model. Clinicopathological correlation analysis proved that the risk score model could serve as an independent prognostic factor in 2 independent datasets. Conclusions The prognostic model based on immune-related genes can function as a superior prognostic indicator for OC patients and could provide evidence for individualized treatment and clinical decision making. Supplementary Information The online version contains supplementary material available at 10.1186/s13048-023-01289-w.
Introduction
Ovarian cancer (OC) is the fifth leading cause of cancer death in women, accounting for 5% of female cancer deaths in 2021. According to statistics from the United States, there were 21,410 new cases and 13,770 deaths in 2021 [1]. The incidence of OC is considerably lower than that of the most common female malignancy, breast cancer, but its mortality rate is three times higher; even worse, the mortality rate of OC is predicted to rise significantly by 2040 [2]. The high mortality rate of OC is mainly due to the lack of effective screening methods and prognosis-evaluating tools, which results in diagnosis at advanced stages, when the disease is harder to treat [3].
Over the past decade, precision diagnostics and treatment strategies in ovarian cancer have offered opportunities to improve survival [4]. Meanwhile, advances in precision oncology strategies have increased the need to identify clinically relevant predictive biomarkers within tumours, and finding the best possible candidates for therapies has become more important [5]. Precision cancer therapies will have more room for improvement as actionable predictive biomarkers are developed.
In clinical practice, histological cell type is used as a significant prognostic factor in OC, as it is considered significantly related to the clinical outcome of OC patients [6]. Other prognostic factors include clinical factors such as age and parity, biological factors such as the expression of multiple genes, and pathologic factors such as the presence of ascites and residual disease after surgery [7]. Owing to clinical heterogeneity and some subjective factors, it is still hard to predict prognosis accurately and objectively. However, with the rapid development of sequencing technologies and bioinformatic algorithms, researchers have attempted to combine multiple molecular biomarkers into algorithms for accurate and clinically practical prognosis evaluation [8,9]. In this paper, we hope to construct a risk model based on immune genes for effective prognosis prediction and to throw light on targeted therapy.
Epithelial ovarian cancer (EOC) accounts for more than 95% of OC and is considered an immunogenic cancer, since a spontaneous anti-tumor immune response has been found in 55% of patients [10]. The strong link between OC and the immune system can thus be clearly inferred. The immune system plays a multifaceted role in OC and is a significant mediator of ovarian carcinogenesis [11]. Many reports have proved that various immune cells and immune gene products play an integrated role in OC progression and are associated with prognosis [12-14]. To better evaluate the prognosis of OC patients and achieve individualized management, we attempted in this paper to incorporate immune molecular features into the system of prognosis evaluation, in addition to histological analysis. The development of multiple immune-related prognostic markers in OC can benefit accurate prognosis prediction, the identification of new molecular targets, and personalized immune precision therapy, ultimately improving the survival rate of OC patients.
Immune-related genes (IRGs) play a significant role in the immune system. In this study, we aimed to build a novel immune gene signature based on IRGs for risk stratification and to provide therapeutic targets for OC patients. The clinical validity and stability of the immune gene-based risk model for prognosis evaluation were validated in OC patients in the TCGA training cohort and the ICGC validation cohort, respectively. Our study provides an efficient and promising method for prognosis prediction and can give valuable clues for personalized immunotherapy.
Data resources
The following 9 ovarian cancer (OC) expression chip datasets, GSE14407, GSE6008, GSE14001, GSE16708, GSE26712, GSE29450, GSE38666, GSE66957, and GSE105437, were downloaded from the Gene Expression Omnibus (GEO) database (www.ncbi.nlm.nih.gov/geo/) to compare the expression of 2498 immune-related genes (IRGs) in 428 OC and 77 normal ovary tissue specimens. The 2498 IRGs were derived from the ImmPort database (https://www.immport.org/home). The transcriptome RNA-sequencing data and corresponding clinical information of 374 and 93 OC patients were extracted from the TCGA database (https://portal.gdc.cancer.gov/) and the ICGC data portal (https://dcc.icgc.org/), respectively. The TCGA data and ICGC data were used as the prognostic model training set and an external validation set, respectively. Over 20,000 primary cancer samples and matched normal samples are included in the TCGA database, which holds over 2.5 petabytes of genomic, epigenomic, transcriptomic, and proteomic data. The ICGC Data Portal is a collaborative effort to describe genetic anomalies in 50 different cancer types.
Data processing and differentially expressed IRGs (DEIRGs) screening
Expression matrix data from the GEO database, comprising 9 datasets from different labs, were normalized with the limma R package. We also removed the batch effect between the TCGA and ICGC datasets with the sva package in R. The differentially expressed IRGs (DEIRGs) were identified in 428 OC and 77 normal ovary tissue specimens from 9 independent GEO datasets using the limma R package, with cutoffs of |log2FoldChange| (|log2FC|) > 1 and p < 0.05.
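The screening itself was done with the limma R package; purely as an illustration of the thresholding step, here is a small pandas sketch on a precomputed differential-expression table (the file name and the column names logFC and pvalue are hypothetical):

import pandas as pd

# Hypothetical input: one row per gene with limma-style statistics.
de_table = pd.read_csv("ovary_limma_results.csv", index_col="gene")

# Apply the cutoffs |log2FC| > 1 and p < 0.05 used in the paper.
degs = de_table[(de_table["logFC"].abs() > 1) & (de_table["pvalue"] < 0.05)]

up = degs[degs["logFC"] > 0]    # upregulated in OC vs. normal ovary
down = degs[degs["logFC"] < 0]  # downregulated
print(len(up), "upregulated and", len(down), "downregulated DEGs")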
Functional enrichment analysis and protein-protein interaction of DEIRGs
The biological functions of the DEIRGs were investigated using GO term enrichment analysis and KEGG pathway enrichment analysis through the "clusterProfiler" R package. The protein-protein interaction (PPI) network among all DEIRGs was determined using the STRING database (https://string-db.org/).
Construction of immune gene signature related to prognosis by DEIRGs in the TCGA training cohort
First, DEIRGs with prognostic value were screened via univariate Cox analysis in the TCGA training cohort. Then, to avoid overfitting, we performed least absolute shrinkage and selection operator (LASSO) Cox regression analysis on the identified prognostic genes using the R package "glmnet" to construct an immune gene signature for OC patients in the TCGA training cohort. The independent variables in the LASSO analysis were the standardized expression matrix of the prognostic DEIRGs identified before, and the response variables were the survival status and OS of OC patients in the TCGA training cohort. Finally, a prognostic immune gene signature for assessing the survival risk of OC patients was constructed using the standardized expression levels of the independent prognostic DEIRGs and their corresponding regression coefficients. The risk score for each patient was computed as: Risk score = Σ (i = 1, 2, …, n) [regression coefficient(gene i) × expression value(gene i)]. OC patients were divided into low- and high-risk groups using the median risk score as the threshold.
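As a minimal sketch of the scoring and stratification step (our own illustration in Python; the coefficient values shown are placeholders, since the fitted values are those reported in Table 2):

import pandas as pd

# Placeholder LASSO coefficients for the five signature genes
# (illustrative values only; the real ones are listed in Table 2).
coefs = {"ANGPT4": 0.21, "PLTP": 0.15, "A2M": -0.12, "CXCR4": -0.18, "MIF": 0.25}

# expr: patients x genes matrix of standardized expression values.
expr = pd.read_csv("tcga_ov_expression.csv", index_col="patient")

# Risk score = sum_i coefficient(gene_i) * expression(gene_i).
risk = sum(coef * expr[gene] for gene, coef in coefs.items())

# Median split into low- and high-risk groups, as in the paper.
groups = (risk > risk.median()).map({True: "high", False: "low"})
print(groups.value_counts())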
Evaluation of the immune gene signature in the TCGA training cohort and ICGC validation cohort
The immune gene-based risk model divided OC patients in the TCGA training cohort and the ICGC validation cohort into a high-risk group and a low-risk group, respectively. Kaplan-Meier survival curves were plotted with the R package "survminer" to compare the survival differences of OC patients in the different risk groups. Five-year receiver operating characteristic (ROC) curves of the risk score and clinical features were plotted via the "survivalROC" package to describe the accuracy and performance of the model in the TCGA training and ICGC validation cohorts. The larger the area under the curve (AUC) of the ROC curve, the more accurate the model. We also exhibited the relationship between the survival status of OC patients and risk scores in the training and validation cohorts, respectively. Principal component analysis (PCA) and t-SNE analysis were performed with the R packages "stats" and "Rtsne" to verify the distribution of patients in the high-risk and low-risk groups.
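The survival comparison was done with R's survminer; as a hedged equivalent, here is a Python sketch using lifelines, with hypothetical column names os_months, os_event, and group:

import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical clinical table with follow-up time (months), event flag,
# and the risk group assigned by the median split above.
df = pd.read_csv("tcga_ov_clinical_with_risk.csv")
low, high = df[df.group == "low"], df[df.group == "high"]

kmf = KaplanMeierFitter()
for name, sub in [("low risk", low), ("high risk", high)]:
    kmf.fit(sub["os_months"], event_observed=sub["os_event"], label=name)
    print(name, "median OS:", kmf.median_survival_time_)

# Log-rank test for the survival difference between the two groups.
res = logrank_test(low["os_months"], high["os_months"],
                   event_observed_A=low["os_event"],
                   event_observed_B=high["os_event"])
print("log-rank p =", res.p_value)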
Integrated analysis of the Prognostic Model and Clinical parameters of OC patients
The risk score was compared with the clinical traits to determine whether it was associated with the clinical characteristics of OC patients in both the TCGA training and ICGC validation cohorts. Age, grade, pathological stage, and overall survival (OS) time were among the OC clinical data obtained from the TCGA database. The age and OS of patients were obtained from the ICGC data portal. The relationship between the risk score and these clinicopathological indexes was evaluated. To assess the independence of our risk score signature, univariate and multivariable Cox regression analyses were performed with the R package "survival" in the TCGA cohort to identify independent prognostic indicators among the risk score and clinical factors. Age, grade, pathological stage, and risk score were included for TCGA.
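A multivariable Cox fit over the same covariates could look as follows in Python's lifelines (the authors used the R package "survival"; the table and column names here are assumptions, and grade/stage are assumed to be numerically encoded):

import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical table: os_months, os_event, age, grade, stage, risk_score.
df = pd.read_csv("tcga_ov_clinical_with_risk.csv")

cph = CoxPHFitter()
cph.fit(df[["os_months", "os_event", "age", "grade", "stage", "risk_score"]],
        duration_col="os_months", event_col="os_event")
cph.print_summary()  # hazard ratios and p-values for each covariate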
Construction of the nomogram for OC patients
A nomogram was generated using the R package "rms" to predict the probability of 1-, 3-, 5- and 10-year OS of OC patients based on the independent prognostic DEIRGs that were screened out for building the risk model.
Correlation analysis between immune cells infiltration and immune gene signature
The immune infiltration of OC patients was derived from the Tumor Immune Estimation Resource (TIMER) website (https://cistrome.shinyapps.io/timer/). The association between the abundance of 6 types of infiltrating immune cells (CD4+ T cells, dendritic cells, CD8+ T cells, B cells, macrophages, and neutrophils) and the immune gene-based risk model was analyzed using R.
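As a sketch of this correlation step (performed in R in the paper), Spearman correlations between the risk score and the TIMER infiltration estimates could be computed as follows; the file and column names are assumptions:

import pandas as pd
from scipy.stats import spearmanr

# Hypothetical merged table: risk score plus TIMER infiltration estimates.
df = pd.read_csv("tcga_ov_risk_and_timer.csv")
cells = ["B_cell", "CD4_T", "CD8_T", "neutrophil", "macrophage", "dendritic"]

for cell in cells:
    rho, p = spearmanr(df["risk_score"], df[cell])
    print(f"{cell}: rho = {rho:.2f}, p = {p:.3g}")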
Screening for DEIRGs
After quality assessment and normalization of the GEO sequencing data from 9 independent labs (Table 1, Fig. 1A), a total of 1211 genes were found to be differentially expressed (DEGs) in 428 OC tissues compared with 77 adjacent tissues, including 567 upregulated and 644 downregulated DEGs (Fig. 1B). We then analyzed the expression of the 2498 IRGs and obtained 129 differentially expressed IRGs (DEIRGs), including 79 upregulated and 50 downregulated DEIRGs (Fig. 1C).
PPI network and functional enrichment analysis of screened DEIRGs
A protein-protein interaction (PPI) network of the DEIRGs was established and visualized (Fig. 2A). Enrichment analysis of the DEIRGs showed that the primarily enriched biological processes (BP) were leukocyte migration and cell chemotaxis, whereas the main molecular functions (MF) consisted of signaling receptor activator activity and receptor ligand activity. The most enriched cellular component (CC) was the external side of the plasma membrane (Fig. 2B). KEGG pathway analysis indicated that the DEIRGs were mainly involved in Epstein-Barr virus infection and antigen processing and presentation (Fig. 2C).
Screening of prognostic DEIRGs and Construction of a risk model based on prognostic DEIRGs in the TCGA training cohort
We entered the 129 DEIRGs obtained in the previous step into the TCGA training cohort (n=374) to identify DEIRGs correlated with the overall survival (OS) of OC patients. Univariate Cox regression analysis revealed that 12 prognostic DEIRGs were significantly related to OS in OC patients (Fig. 3A). To avoid overfitting, the 12 prognostic DEIRGs were included in the least absolute shrinkage and selection operator (LASSO) analysis to construct a prognostic risk model. According to the penalty parameter (lambda) obtained in the LASSO analysis, 5 independent prognostic DEIRGs (ANGPT4, PLTP, A2M, CXCR4 and MIF) were used to establish a risk score model in the TCGA training set (Fig. 3B-C).
Verification of the Risk Score model in the TCGA training cohort and ICGC validation cohort
We calculated the risk score for the 374 OC patients in the TCGA cohort and divided the patients into low-risk (n=187) and high-risk (n=187) groups based on the median cutoff value. The 93 OC patients in the ICGC cohort were also divided into low-risk (n=57) and high-risk (n=36) groups based on the same median cutoff value.
Figure 4A shows that, among the 374 OC patients from the TCGA cohort, the OS of low-risk patients was markedly higher than that of high-risk patients. The 5-year ROC curves of the risk score and other clinical parameters were plotted to assess the reliability of the risk model; the area under the curve (AUC) of the risk score was 0.759, higher than that of any other clinical parameter (Fig. 4B). The risk score distribution of OC patients in the TCGA training cohort is shown in Fig. 4C; as the risk score increased, the number of deaths increased (Fig. 4D). The expression patterns of the 5 prognostic DEIRGs that composed the risk model in the TCGA training cohort are visualized in Fig. 4E.
The stability of the immune gene-based risk model was also examined in the ICGC validation cohort. Consistent with the TCGA training cohort, OC patients in the low-risk group showed a substantially higher survival rate (Fig. 4F). In the 5-year ROC curve of the ICGC validation cohort, the AUC value of the risk score was 0.778, and its significance for evaluating prognosis far exceeded other clinical indicators (Fig. 4G). The risk score performed well not only in the training cohort but also in the validation cohort (TCGA AUC = 0.759, ICGC AUC = 0.778). The risk score distribution and corresponding survival status of OC patients in the ICGC validation cohort are presented in Fig. 4H-I. The expression profiles of the 5 risk genes in the ICGC cohort are exhibited in Fig. 4J.
All of the above results suggested that the model had good accuracy and general applicability.
Evaluation of clinical practicality of the immune gene-based risk model
Complete clinicopathological data were extracted and integrated with the risk scores of OC patients in the TCGA and ICGC cohorts, respectively, to evaluate whether the risk score could serve as a prognostic factor independent of other clinical features. We used principal component analysis (PCA) and t-distributed stochastic neighbor embedding (t-SNE) analysis for data dimensionality reduction and found that the risk score was the only independent prognostic factor (Fig. 5C, right panel). To construct and visualize a survival prediction method for OC patients, a prognostic nomogram including the 5 risk genes that made up the risk model was developed (Fig. 5D).
Correlation between immune gene-based risk model and immune cell infiltration
To evaluate whether our risk score model could reflect the tumor immune microenvironment in OC patients, we analyzed the relationship between the risk score and immune cell infiltration in the TCGA training dataset. As shown in Fig. 6, CD4+ T cells, CD8+ T cells, B cells, dendritic cells, and neutrophils were negatively correlated with the risk score. This indicates that the level of immune cell infiltration was downregulated in high-risk OC patients.
Discussion
Ovarian cancer represents 2.5% of all female malignancies but accounts for 5% of female cancer deaths. The high mortality of OC is mainly because 80% of patients are diagnosed at an advanced stage with extensive peritoneal cavity metastases [15,16]. For these patients diagnosed at an advanced stage, surgery and chemotherapy are still the standard of care [17]. Since the responses of different patients to treatment are diverse, this reminds us to search for highly reliable prognostic biomarkers. Efficient prognostic biomarkers help distinguish patients at different levels of risk, facilitate treatment choice, and support patient counseling [18]. At present, establishing gene signatures based on specific characteristics for prognosis prediction has become a hotspot in cancer research [19,20]. Immunoediting is a process present in OC, comprising cancer cell elimination, equilibrium, and escape from immune surveillance, and is a significant element of the immune system [21]. It has been proved that the immune system plays a significant and complicated role in OC [22]. Klemi et al confirmed that T cells in colorectal cancer specimens could predict the outcome more accurately than standard prognostic factors [23]. Other studies have shown similar results [24]. These studies proved the significance of the immune response for prognosis. Although some researchers have explored the relationship between OC and the immune response from different perspectives, such as using ceRNAs that affect immune infiltration [25], or using macrophage-related genes [26] or immune-related gene pairs [27] to construct a risk model, our study, which uses immune-related genes to expound the relationship between OC and the immune response, is more direct and comprehensive. Our risk model was composed of only 5 risk genes and was verified in 2 independent cohorts; the accuracy and clinical validity of this novel immune-related risk prediction model for OC patients were verified from several aspects. Our study is a novel piece of research that constructs an immune gene signature for prognosis evaluation, and it can provide clues for targeted therapy with immune-related genes.
With the development of precision genomic medicine, researchers are committed to identifying specific and accurate prognostic factors from massive medical data sets with clinical outcomes [28]. A multigene-based model for prognosis prediction is obviously more precise and robust than a single gene [29,30]. Evaluating prognosis by the expression of 5 immune genes in OC patients is convenient, efficient, accurate, and cost-effective. We constructed an immune gene signature for OC patients for the first time. Some studies have examined the relationship between the 5 risk genes (ANGPT4 [31], PLTP [32], A2M [33], CXCR4 [34] and MIF [35]) that compose the immune gene signature and OC. However, our study is the first to construct a risk model using these 5 genes, and with the model we can predict prognosis for OC patients more accurately than with a single biomarker. According to our prognostic analysis, the risk score may offer correct risk classification as a standalone prognostic factor. Therefore, our nomogram built on the DEIRG-based risk model may facilitate individualized survival prediction. Of course, although our risk model based on immune genes can predict the prognosis of OC patients rather well, many other factors are associated with the prognosis of OC patients, including metabolism, autophagy, and so on. Therefore, further prospective studies should be implemented in multicenter clinical trials. All in all, for the first time, this study established and validated a novel immune gene-related prognostic model using strict standards. It may contribute to the development of individualized treatments and improve OC patients' OS.
Conclusions
Our research successfully constructed and validated a risk score model composed of 5 immune-related mRNAs with superior predictive capacity for the prognosis of OC patients; it can be used in clinical decision making and to guide personalized immunotherapy.
Fig. 2
Fig. 2 Protein-protein interaction (PPI) network, GO and KEGG functional enrichment analysis of the 129 DEIRGs. A PPI network. Hexagonal nodes represent interacting proteins >20, square nodes represent interacting proteins >10, diamond nodes represent interacting proteins >6. Red and green nodes denote up-regulated and down-regulated DEIRGs, respectively. The size of the nodes is negatively related to the p-value. Gene Ontology analysis (B) and KEGG pathway enrichment analysis (C) of the 129 DEIRGs
Fig. 3
Fig. 3 Construction of the OC-specific immune gene-based risk score system. A Forest plot of univariate Cox analysis showing 12 immune-related genes (IRGs) identified as prognostic factors in OC. B The optimal penalty parameter (λ) selection in the LASSO model. C LASSO coefficient profiles of the 12 survival-related immune genes
Fig. 4
Fig. 4 Internal and external validation of the prognostic immune gene signature in the TCGA and ICGC cohorts, respectively. Kaplan-Meier survival curves of OC patients in different risk groups in the TCGA cohort (A) and ICGC cohort (F). AUC values and ROC curves of the risk score and clinicopathologic characteristics predicting 5-year survival of OC patients in the TCGA cohort (B) and ICGC cohort (G). The distribution of the risk score (C), survival time and life status (D), and the expression profiles of the five prognostic IRGs that formed the risk model (E) in 374 OC patients from the TCGA cohort. H Distribution of the 93 OC patients' risk scores in the ICGC cohort. I Relationship between risk score, survival time and life status in the ICGC cohort. J The expression of the 5 risk genes in the ICGC cohort
Fig. 6
Fig. 6 Correlation of the immune gene-based prognostic model with the infiltration abundances of 6 types of immune cells
Table 1
Characteristics of the microarray data from the GEO database used for the differential expression analysis

Fig. 1 Identification of differentially expressed immune-related genes (DEIRGs) between 428 OC tissues and 77 adjacent tissues from 9 independent GEO datasets. A Box plots of the expression profile data before and after normalization. Box plots with the same color represent patients from the same GEO dataset; the left of each pair represents the 77 control tissues and the right the 428 OC tissues. B Volcano plot and heatmap of differentially expressed genes. C Volcano plot and heatmap of DEIRGs. FDR, false discovery rate. Green and blue dots represent downregulated genes; red and yellow dots represent upregulated genes
Table 2
Coefficients of the 5 independent key prognostic immune-related genes (IRGs) that formed the risk model | 2023-10-19T14:09:10.557Z | 2023-10-19T00:00:00.000 | {
"year": 2023,
"sha1": "034ffdde8fa9dd88cedbb21a6809227f990647cf",
"oa_license": "CCBY",
"oa_url": "https://ovarianresearch.biomedcentral.com/counter/pdf/10.1186/s13048-023-01289-w",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "442cc389a0124110a715e239bae24eaac27aabc3",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
226921755 | pes2o/s2orc | v3-fos-license | Street Art and Intangible Heritage: a contextualising approach to public art in Vitoria-Gasteiz
This paper presents the results of ethnographic fieldwork carried out in the city of Vitoria-Gasteiz, capital of the Basque Country, between the 4th and the 8th of December 2017. In the last decade, Vitoria-Gasteiz has become internationally known thanks to its urban gallery of public mural art. The murals of Vitoria-Gasteiz, in fact, were one of the main attractions of the city when it got the recognition of European Green Capital in 2012.1 They started being produced by the IMVG project in 2007 on the same basis of sustainability as the general agenda of the city. This cultural agenda became a world-class reference in the field of cultural heritage studies, management and the archaeology of architecture thanks to the project Abierto por Obras (Open for Works), which integrated sustainability within the research and development processes of excavating, restoring, repairing, consolidating, documenting and exhibiting the Gothic cathedral of St. Mary through cultural interpretation.2 That program became an example of good practice recognized internationally, attracting people such as Ken Follett, who presented World Without End, his sequel to The Pillars of the Earth, in the building.3 The emphasis on sustainability makes the IMVG an exceptional case study within the current Street Art world, where expressions normally tend to be more ephemeral. One of the most singular aspects of the IMVG is its working methods based on community practice, social engagement and public participation.4 The combination of these particular features makes the IMVG an exceptional case in the Iberian peninsula, where many Street Art festivals and projects have developed quickly and produced large pieces of public mural art in parallel to the IMVG since the 2000s.

1. European Commission, n.d. '2012 Vitoria-Gasteiz'. Environment. European Green Capital. (retrieved online, http://ec.europa.eu/environment/europeangreencapital/winning-cities/2012-vitoria-gasteiz/, 24-11-2018)

vol. 61, nr 5. July the 10th, 2019. ISBN: 113973654. POLIS RESEARCH CENTRE. Universitat de Barcelona. DOI: https://doi.org/10.1344/waterfront2019.61.6.6
Resum
This article presents the results of the ethnographic fieldwork carried out in the city of Vitoria-Gasteiz, capital of the Basque Country, between the 4th and the 8th of December 2017. In the last decade, Vitoria-Gasteiz has become internationally known thanks to its urban gallery of public mural art. The murals of Vitoria-Gasteiz, in fact, were one of the main attractions of the city when it obtained the recognition of European Green Capital in 2012.9 They started being produced by the IMVG project in 2007 on the same bases of sustainability as the general agenda of the city. This cultural agenda became a first-rate reference in the field of cultural heritage studies, management and the archaeology of architecture thanks to the project Abierto por Obras, which integrated sustainability into the research and development processes of excavating, restoring, repairing, consolidating, documenting and exhibiting the Gothic cathedral of Santa Maria through cultural interpretation.10 This program became an internationally recognized example of good practice, attracting people such as Ken Follett, who presented World Without End, his sequel to The Pillars of the Earth, in the building.11 The emphasis on sustainability makes the IMVG an exceptional case in the current Street Art world, where expressions usually tend to be more ephemeral. One of the most singular aspects of the IMVG is its working methods based on community practice, social participation and public engagement.12 The combination of these particularities makes the IMVG an exceptional case in the Iberian peninsula, where many Street Art festivals and projects have developed quickly and produced large public mural pieces in parallel to the IMVG since the 2000s.

Siqueiros was a pioneer in incorporating the public space of the street into the creative processes of the innovative works he produced and theorized within a well-established art discourse in 1934 in Los Angeles, Montevideo, and Buenos Aires.19 They led him to experiment with new materials and to incorporate the street context as well as other factors that scholars have considered to be defining in the creative processes of current Street Art practice.20 Between the 1930s and the mural art revival of the 1970s, there is little information within the art establishment, which does not correspond with the empirical evidence, in the sense that mural art did not stop being produced in the street.21 During that time, mural art became an expression of subaltern ethnicities in America such as the Latin, the Hispanic or the Afro. It was considered a folk expression to the point that gallerists used the term Urban Folk, rather than Street Art, to name the first exhibitions of works made by people who had been painting in the streets for a while in New York in the 1970s,22 before the latter became normalised.

13. -Iosifidis, K., 2009
By the 1970s, therefore, the practices of mural art were closely related to particular social communities, so they became the factual expression of their intangible cultural heritage.
It was not until 2003 that UNESCO established the criteria to define the intangible cultural heritage as 'a mainspring of cultural diversity and a guarantee of sustainable development', after the recommendations of 1989 and the Universal Declaration on Cultural Diversity of 2001, since it means 'the practices, representations, expressions, knowledge, skills -as well as instruments, objects, artifacts and cultural spaces associated therewith -that communities, groups and, in some cases, individuals recognize as part of their cultural heritage', which are 'transmitted from generation to generation' and 'constantly recreated by communities and groups in response to their environment, their interaction with nature and their history, and provide them with a sense of identity and continuity, thus promoting respect for cultural diversity and human creativity', indispensable criteria being their compatibility with 'existing international human rights instruments, as well as with the requirements of mutual respect among communities, groups and individuals, and of sustainable development'.

(retrieved online, http://www.muralismopublico.com/p/en/murals/vitoria-gasteiz/no-present-nor-futurewithout-memory.php, 24-11-2017); Asociación de Víctimas y familiares de Víctimas del 3 de marzo, 2014. 'El mural de Zaramaga sobre el 3 de Marzo listo para ser inaugurado'. Martxoak 3 de Marzo (retrieved online, http://www.martxoak3.org/el-mural-de-zaramaga-sobre-el-3-de-marzo-listo-para-ser-inaugurado/; https://www.gasteizhoy.com/mural-3-de-marzo-zaramaga-inauguracion/, 24-11-2017)

25. -Gibson, J.J., 1977

VW: …workshop here under similar conditions. We did an indoor collaborative mural project for a cultural association. The participants enjoyed the workshop very much but understood that the true realm of muralism is outdoors: public. The people who participated in the workshop themselves wanted to do something bigger and outdoors. It took two years. We organized the first one in 2007 with nothing, just with a little support, independently, without doing any big project. We asked permission from the neighbours of the flat's community where we wanted to paint the mural. We were lucky that it was a central facade placed in the heart of the Casco Viejo (old town), in a square that was quite degraded (FIG 1). My sister Christina and I directed the process for the participative mural with thirteen male and female volunteers. It was a success. People liked it a lot and from then on we could start to develop something larger and to communicate the concept, the foundations of the project, which are participation, collaboration and providing "non-artists" with tools in order to make public art. Everything started from there.
SV:
And what was the response? What were the strengths and weaknesses of the participative methodology? Did you have some issues with the public in the city, censorship or something like that?
VW:
The formulation of the project is quite well founded in the sense that we always believed the murals had to be participative from the beginning. They have to be collaborative and the design of the mural has to come from that process. That is to say, it is not an individual artist who designs his/her artwork and then carries it out with the people's support, but the people themselves who take part from the very conception of the idea. That means there is no previous sketch or design that can be shown when asking for permission or funding. The owners of the facade, for example, are asked to give their permission for the mural without knowing what the mural will even be. That is the first difficulty, in the sense that people have to be convinced to participate in something with an unknown result. When asking stakeholders, supporters and neighbours for something (funding, permissions, support, etc.), they inevitably reply with more questions: "What are you going to paint?" and we say "Ah.
We don't know…" "Well, but, more or less…?", they ask. "We don't know because everything is about the process.…" It was difficult at the beginning, but the people liked it once we had done several murals and we had considerably fewer problems in that sense, later on.
There were some buildings where the owners' association did not want the mural, and in those cases we never insisted. A community must commit to the process unanimously. We also had some other problems with institutions; it depended on who was in charge at the moment. If they asked for a sketch, for instance, we could not provide one and said: "If you want a commissioned mural then ask for it, but that is not this. This is not our mission."
Another important factor is the artist who directs each project. Each project is led by an artist or team of artists who behave a little like a coach or a coordinator with artistic skills; the role actually has a more social aspect, more inclusive and less aligned with the "art world". We come neither from the world of art nor from the world of Graffiti. We come from a trend which is almost artistic social work. But it is true that even though we come from that trend, they overlap.
They overlap, and the quality... one of the main aims of the project, one I personally believe in strongly, is that neither the social, collaborative or participative aspects of a project, nor working with people who are not artists, must necessarily diminish the quality of the work, right? There is the idea that there is an either/or paradigm in art: either there is participation or there is quality, but not both together. Moreover, artistic quality is subjective, anyway. Therefore, one of our goals is to strive for these two things to co-exist in each project, so that participation and quality go hand in hand. It is not always achieved, but it is a goal.
And so, what happens? Nowadays there is a tremendous quality within Street Art, even when working with people. I mean in painting. People used to say that painting was dead. When you went to museums of Modern Art in the past, you could hardly see any paintings. I think painting is coming back, and the Street Art world will provide a public showcase for many artists, which, I think, is quite good. Also, Street Art has given a lot of life, of culture, to many cities. [...] Furthermore, I note a tendency among artists working individually in the public space towards responsibility for the communities they work in, rather than the idea of the parachute artist who arrives, paints wherever and whatever he/she wants, and leaves.
SV:
Great. So this links with the last question: in which way have you integrated the processes of generating memory and identity within the project? Could you refer to any mural in particular, or some murals, that more or less incorporate this? Though you just said all of them…
VW: Yes, actually every mural talks about the place, since the first one, which was in the Plaza de las Brullerías and deals with a 13th-century fabric market (FIG 1). It is almost like a still life of drapery, but it also tells of the cultural diversity that existed there in the Middle Ages and of the one that exists nowadays. Therefore it refers to 20th- and 21st-century migration, integration, and multiculturality. And so do all the other murals: from the one about equality (FIG 2), which dealt with particular subjects of equality in the Basque Country (the matriarchy; the evolution, let's say it like that, of the fight for gender equality), and in which every figure is a portrait of people from the neighbourhood; to the one that most clearly deals with historical memory, the first one we did in Zaramaga about the 3rd of March (FIG 6).
SV: But could you also consider the matter of memory, to end with something on how you see it has evolved and how you think it will evolve?
VW: In Vitoria in particular?
SV: Yes, in the scene… It is actually a quite particular project for the case of the Spanish state...
VW: Yes
SV: ...in which normally, generally, it is more about what you said on taking a painting from the living room and setting it at the street.
VW: Yes, yes. Now it's being done, as I told you before, in Catalonia. I have just been in Sant Feliu de Llobregat and they are working on it, but with a renowned artist (Escif) who is going to work thanks to an artist's residency. 26
[...] Then the graffiti "Justice" was written on the floor right over the crime scene with the blood of one of the first people murdered (FIG 3). That very symbolic place became a memorial space for the victims from the beginning. In 1986, a memorial sculpture was installed; it is still standing today (FIG 4). [It happened] on the 10th anniversary, because anything that was placed there before was quickly removed by the police. There were people arrested during the first years because they tried to place monoliths or to build something on the place. That is why we say the conquest of public space is very important: the state, which wanted to forget and to cover up what happened, thoroughly made sure that no one brought it to light, because it might have broken the strategy, the portrait they wanted to establish. That is the reason this [the sculpture] was placed in the early morning with fast-setting concrete. [...]
The women were also involved because there were women who had started working in factories.
Obviously, under Franco's regime, the role of women was limited to the housekeeper position, etc. But the process of industrialization itself resulted in women working for companies, so they also participated in the strikes. In fact, there were some factories in Vitoria-Gasteiz in which almost the whole staff were women; for instance, Areitio, which was a zipper factory.
So the women also organized themselves in assemblies with the aim of joining the strike, and then they started participating. It is very honourable to see the women in the protests, in the demonstrations and so on. That clashed with another very interesting reality, that of the wives of the strikers, which opened up a very interesting conflict.
Then assemblies began forming all around the city in order to support the strike: neighbours, women, students. As more and more people organized, arrests started to happen and people started being fired from work as well. Therefore they [the strikers] value the small victories; but going from placing a sculpture secretly at night to [making it with] the recognition of the Council means that something has been achieved, doesn't it?
The mural of the IMVG is another great conquest (FIG 6) (FIG 7). It recreates an emblematic photo of the funerals. In front of the church there is a metal post with the portraits of the victims and an explanatory text written by the association itself, which was placed by the Council (FIG 8). [...] So these priests allowed the assemblies to take place, not only in St. Francis but in many other churches of the city. There was a network of assemblies that allowed the isolated ones to be organized into a larger assembly of assemblies. And priests were of course involved; some of them were even victims of reprisals because they took part in the process. The hierarchy was another matter, as it was one of the pillars of the dictatorship; but of course, there were priests who got involved, and that was one of the reasons the assemblies were so participative.
Conclusion
This ethnographic work provides a case study which demonstrates how Street Art may fit the UNESCO guidelines on intangible cultural heritage. In order to do so, the focus on processes is crucial. The key elements in this particular case study are many. The most evident is probably the survival of a community and participative tradition in Vitoria-Gasteiz based on dialogue and collective decision-making, which has adapted to change over time. Moreover, Vitoria reveals a persistence in the use of Street Art practices such as graffiti, installations and mural art in public spaces as an expression of the interpretation of historical events by local people. The contextualizing approach followed in this work in order to connect global and local issues shows some benefits of the methods of community mural art for the processes of enhancing and safeguarding intangible cultural heritage, as well as for producing social integration and human development. These intangible aspects are likewise related to the matter of sustainability, since community concern necessarily has a positive effect on the care of the murals. The informants of Vitoria-Gasteiz showed how these artworks are not understood as final products; more than objects, the murals themselves are perceived as processes embedded into a wider social dynamic in which aesthetic value comes out of social engagement to a larger extent than out of social acceptance, which is somehow subordinated to the former. Consequently, the IMVG sacrificed quantity of production in order to enhance quality, both from material and intangible points of view, and so aimed to emphasize sustainability. Finally, the subject matters of the IMVG murals in general, and particularly the case of No present nor future without memory and the whole local knowledge related to the 3rd of March, reveal themselves as well suited to the criteria agreed internationally on intangible cultural heritage, by which the promotion of human rights and mutual respect between communities are key factors of sustainability. Murals are like a caress to a community. | 2019-08-23T20:39:30.341Z | 2019-06-19T00:00:00.000 | {
"year": 2019,
"sha1": "2c89689a04d7d6b619b3231c6d080fc1b7082458",
"oa_license": "CCBY",
"oa_url": "https://revistes.ub.edu/index.php/waterfront/article/download/28850/29353",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e081992cfd3014d87a87800f3498b3aa8cf062d2",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Art"
]
} |
212689404 | pes2o/s2orc | v3-fos-license | High-Payload Data-Hiding Method for AMBTC Decompressed Images
Data hiding is the art of embedding data into a cover image without any perceptual distortion of the cover image. Moreover, data hiding is a crucial research topic in information security because it can be used in various applications. In this study, we proposed a high-capacity data-hiding scheme for absolute moment block truncation coding (AMBTC) decompressed images. We statistically analyzed the composition of the secret data string and developed a unique encoding and decoding dictionary search for adjusting pixel values. The dictionary is used in both the embedding and extraction stages. The dictionary provides high data-hiding capacity because the secret data is compressed using dictionary-based coding. The experimental results of this study reveal that the proposed scheme is better than the existing schemes with respect to data-hiding capacity and visual quality.
Introduction
The concealment of information within media files is commonly used in various applications. This practice originates from the hieroglyphs used in the Egyptian civilization. Other cultures, such as the Chinese culture, adopted a more physical approach to hide messages by writing them on silk or paper, rolling the material into a ball, and covering the material with wax to communicate political or military secrets. Data hiding is nearly indispensable in every aspect of our daily lives, whether used with good or evil intentions.
Due to its rapid growth, the Internet has recently become far more popular than traditional media, and data is accessible to everyone. Therefore, possessing the capability of detecting copyright violations, forgery, and fraud is crucial. Many techniques, such as steganography and cryptography, have been designed to secure digital data. The difference between steganography and cryptography is as follows: in cryptography (e.g., chaos-based encryption systems, secure pseudo-random number generators, etc. [1]), users are aware that there is an encrypted image, but they cannot efficiently decode it unless they know the proper key. In steganography, the intended recipient can easily decode the hidden message, but most people do not notice that a message is present at all. In this study, we focused on the techniques used for hiding data in images.
The schemes present for hiding data in an image can be broadly classified into two categories, irreversible data-hiding schemes [2][3][4] and reversible data-hiding schemes [5][6][7]. In the irreversible data-hiding schemes, a recipient can extract the secret information. However, the original image cannot be recovered after extracting the secret information. In the reversible data-hiding schemes, the hidden data can be extracted from the image, and the original image can be retrieved from a stego image without any distortion. Two factors affect a data-hiding scheme, i.e., visual quality and embedding payload. A high-quality data-hiding scheme should not raise any suspicions of adversaries. Therefore, this type of scheme should provide low image distortion and high payload.
To decrease the size of a digital image file or accelerate the transmission, a data-hiding scheme that employs a compressed image should be developed. Many compressed file formats have been proposed, such as JPEG and JPEG2000. Wang et al. [8] proposed a lossless data-hiding method for JPEG images by using adaptive embedding. Lee et al. [9] proposed a scheme in which a secret image was compressed using JPEG2000 and then, embedded in the cover image by using tri-way pixel value differencing. Nevertheless, both JPEG and JPEG2000 need complicated computation for image compression and decompression.
Another popular technique used for image compression is block truncation coding (BTC) [10]. Compared with the methods using JPEG and JPEG2000, BTC is a simple and efficient encoding technique that is used for image compression. Therefore, the computation cost is relatively low when a data-hiding scheme is based on BTC.
Lema and Mitchell [11] proposed the absolute moment BTC (AMBTC) technique to improve the compression performance of BTC. When AMBTC is used, the first absolute moment is preserved along with the mean. To exploit the advantages of AMBTC compression, we proposed an AMBTC decompressed image-based data-hiding scheme using a pixel adjusting strategy.
The basic idea of the proposed study is to preliminarily calculate the probability of the secret data and then select the best codebook for embedding it. The secret data are embedded into the AMBTC compressed image by modifying the pixel values according to the codebook. Experimental results reveal that the proposed scheme outperforms the current state-of-the-art method in terms of hiding capacity on almost all test images.
The remainder of the paper is organized as follows: Section 2 describes the relevant approaches, such as the BTC and AMBTC techniques for data hiding; Section 3 describes the implementation flow of the proposed data-hiding scheme; Section 4 presents several experimental results and discusses some issues; and finally, Section 5 specifies the conclusions and future work.
Related Works
Before describing the high data-hiding capacity of the proposed scheme, we review the AMBTC technique and some recently developed AMBTC-based data-hiding methods.
Absolute Moment Block Truncation Coding (AMBTC)
BTC, a simple and efficient block-based lossy image compression method, is used for grayscale images. Although the BTC method provides a low compression ratio, it is a popular image compression method because of its low complexity with respect to both computation and implementation. In the BTC algorithm, an image X, with M × N pixels, is divided into nonoverlapping blocks. Each block has n × n pixels, and the pixel values can be different. The mean and standard deviation of the pixel values of each block are calculated before conducting BTC. In general, these two statistical characteristics change from one block to another.
The hardware implementation of BTC is challenging because the square and square root functions are involved. To resolve this problem, AMBTC [11] was proposed as a type of BTC. The AMBTC uses the first absolute moment and mean values instead of using the standard deviation value. The main difference between AMBTC and BTC is that the mean and standard deviation values of a block are preserved in BTC. However, in AMBTC, the high mean and low mean values of a block are preserved.
As in BTC, an image X is divided into nonoverlapping blocks with n × n pixels also in the AMBTC encoding phase. For each block, the mean $\bar{x}$ and the absolute moment $\alpha$ of the pixel values are calculated using

$\bar{x} = \frac{1}{m}\sum_{i=1}^{m} x_i, \qquad (1)$

$\alpha = \frac{1}{m}\sum_{i=1}^{m} \left| x_i - \bar{x} \right|. \qquad (2)$

Note that m = n × n.
The pixel value $x_i$ is compared with the mean $\bar{x}$ to compose a bit plane for each pixel in the block. If the pixel value $x_i$ is greater than the mean $\bar{x}$, then $x_i$ is denoted as 1; otherwise, the pixel value is denoted as 0. The equation of bit representation is

$b_i = \begin{cases} 1, & \text{if } x_i > \bar{x}, \\ 0, & \text{otherwise.} \end{cases} \qquad (3)$

In the AMBTC-compressed block reconstruction phase, the block reconstruction is conducted using two values $L_m$ and $H_m$. The values of $L_m$ and $H_m$ are computed using

$L_m = \frac{1}{m-q}\sum_{x_i \le \bar{x}} x_i, \qquad (4)$

$H_m = \frac{1}{q}\sum_{x_i > \bar{x}} x_i. \qquad (5)$

In Equations (4) and (5), q represents the number of pixels with pixel values greater than $\bar{x}$. Thus, a compressed block has two values $L_m$ and $H_m$, where $L_m$ is the low mean value and $H_m$ is the high mean value. To reconstruct a block, the pixels that are assigned the value of 0 in the bit plane are replaced with the $L_m$ value, and the pixels assigned the value of 1 in the bit plane are replaced with the $H_m$ value by

$y_i = \begin{cases} H_m, & \text{if } b_i = 1, \\ L_m, & \text{if } b_i = 0. \end{cases} \qquad (6)$
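To make the encoding and reconstruction steps concrete, the following Python sketch (not part of the original paper; NumPy-based, with function names of our own choosing) implements Equations (1)-(6) for a single block:

```python
import numpy as np

def ambtc_compress_block(block):
    """Compress one n x n grayscale block into (L_m, H_m, bit_plane),
    following Equations (1)-(5): pixels above the mean are coded 1 and
    averaged into H_m; the remaining pixels are coded 0 and averaged
    into L_m."""
    mean = block.mean()
    bit_plane = (block > mean).astype(np.uint8)   # Equation (3)
    q = int(bit_plane.sum())                      # pixels above the mean
    m = block.size
    # Guard the degenerate case of a perfectly flat block.
    H_m = block[bit_plane == 1].mean() if q > 0 else mean
    L_m = block[bit_plane == 0].mean() if q < m else mean
    return int(round(float(L_m))), int(round(float(H_m))), bit_plane

def ambtc_decompress_block(L_m, H_m, bit_plane):
    """Reconstruct the block per Equation (6): 0 -> L_m, 1 -> H_m."""
    return np.where(bit_plane == 1, H_m, L_m).astype(np.uint8)

block = np.array([[100, 150, 160, 90],
                  [95, 155, 158, 92],
                  [98, 152, 161, 96],
                  [94, 150, 157, 99]])
L_m, H_m, plane = ambtc_compress_block(block)
recon = ambtc_decompress_block(L_m, H_m, plane)
```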
Related Work of BTC and AMBTC Based Data Hiding Schemes
BTC has significantly low complexity and requires little memory. Therefore, BTC is a good basis for data hiding. Chuang and Chang [12] proposed a data-hiding scheme for BTC-compressed images that embeds data in the bitmaps of smooth blocks to obtain improved image quality. There are two steps in the embedding process of the scheme proposed by Chuang and Chang. Initially, a cover image is compressed into blocks by using BTC, calculating the two quantized values and the bit plane corresponding to each block. Then, the secret data is embedded into the bitmaps of the predefined smooth blocks that satisfy the following condition: $H_m - L_m <$ Threshold. The smooth blocks are selected because bit replacement in these bit planes causes only slight distortion in the BTC image. In the extraction process of the scheme proposed by Chuang and Chang [12], the difference $H_m - L_m$ has to be calculated first. If $H_m - L_m <$ Threshold, then the secret bit in the bit plane $p_i$ is extracted. However, in this scheme, the stego image quality degrades significantly as the threshold value increases.
Hong et al. [13] proposed a reversible data-hiding scheme based on flipping the bit plane according to the corresponding secret bit. In the embedding process, each image block is compressed into AMBTC-compressed codes to determine whether the block is embeddable or not. If $L_m < H_m$, then the block is considered embeddable; otherwise, the block is considered non-embeddable. For each embeddable block, if the secret bit is 1, then the bit plane $p_i$ is flipped to its complement and the two quantization levels are stored in swapped order; if the secret bit is 0, then no operation is required. In the extraction process, if $L_m > H_m$ in $p_i$, then the secret bit in $p_i$ is 1; otherwise, the secret bit in $p_i$ is 0. The scheme presented by Hong et al. does not hide data in blocks with $L_m = H_m$. Therefore, Chen et al. [14] proposed a reversible data-hiding method to improve the scheme by Hong et al. An AMBTC-compressed block with $L_m = H_m$ corresponds to a smooth area, in which the bit plane information is considered unnecessary. Thus, secret bits can be embedded into all bits of the bit plane of such a block, which improves the scheme by Hong et al.
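A minimal sketch of this flipping idea (our own illustration of the mechanism described above, not code from the cited papers) might look as follows:

```python
def flip_embed(L_m, H_m, bit_plane, secret_bit):
    """Hide one bit in an embeddable block (requires L_m < H_m).
    Swapping the stored quantization levels while flipping the bit
    plane leaves the reconstructed block unchanged, but the stored
    order of the two levels now signals the secret bit."""
    if secret_bit == 1:
        return H_m, L_m, [1 - b for b in bit_plane]
    return L_m, H_m, bit_plane

def flip_extract(first_level, second_level):
    """If the first stored level exceeds the second, the hidden bit is 1."""
    return 1 if first_level > second_level else 0
```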
Li et al. [15] introduced a data-hiding scheme by using the histogram shifting technique on BTC-compressed mean tables for further improving the hiding capacity, while maintaining the quality of the BTC-compressed image. The hiding scheme comprises two main steps. The first step is based on the bit plane flipping method that hides secret bits by swapping the high mean and low mean values. In the second step, histogram shifting is conducted on the resulting mean tables after swapping. This scheme requires no additional data in the stego code stream. Therefore, very low distortion is observed in this scheme after data embedding, and the security of the embedded data is enhanced. However, this technique cannot provide a sufficient data-hiding capacity and requires overhead information to record a histogram.
Lin et al. [16] proposed a technique that explores the redundancy in a block of AMBTC-compressed images to determine whether the block is embeddable. If the combination of the secret bits and the bit plane in the block yields more than three different cases, the block is marked as an embeddable block. Four disjoint sets of embeddable blocks were created using this technique, for embedding data using different combinations of the mean value and its standard deviation.
Ou and Sun [17] proposed a data-hiding scheme with minimum distortion based on AMBTC. In this scheme, a predefined threshold is used to determine if a block of the AMBTC-compressed codes is a smooth or complex block in which data are embedded. If an AMBTC-compressed block H m − L m < Threshold, then the block is considered a smooth block. All bit planes in smooth blocks are used to embed data by replacing the bits of the block with secret data bits. The two quantization levels in the smooth block are then recalculated to reduce distortion in the image. In the complex blocks, a proportion of secret bits were concealed by exchanging the order of two quantization levels and toggling the bit plane. By performing this method, the payload can be increased without any distortion. Both smooth and complex blocks can be used to embed data in an AMBTC-compressed block. Therefore, the payload of this scheme was obviously enhanced.
Malik et al. [18] modified the AMBTC compression technique for embedding secret data. In their method, the one-bit plane is converted into two-bit planes, which can attain better image quality and high capacity. Although this scheme has high visual quality and high payload, it causes permanent distortion of the original AMBTC code and requires overhead information. Malik et al. [19] proposed an AMBTC compression-based data-hiding scheme using the pixel value adjusting strategy. In this technique, the stream of secret bits is converted to digits with a base of three. Then, the pixel values of the AMBTC-compressed block are modified, by at most one, to hide secret data. This scheme maintains a balance between the hiding capacity and the quality of the stego image.
As discussed above, data hiding using the AMBTC technique is an issue worthy of further research. In this study, we extended the work of Malik et al. [19] to embed a larger amount of secret data. In the next section, the proposed scheme is discussed. Figure 1 shows the flowchart of our application. First, a monitoring image on the unmanned aerial vehicle is compressed because the transmission capacity of the wireless network is limited. When the command post or chief's car receives the compression codes, they are decoded into the decompressed image. In addition, secret data is embedded into the reconstructed image, thereby deceiving hackers and avoiding attacks. Finally, the headquarters can extract the secret data and recover the decompressed image.
Proposed Scheme
The main aim of the study is to present a data-hiding scheme with high data-hiding capacity and high image quality. In the scheme, secret data is hidden in an AMBTC decompressed image. The AMBTC decompressed image is losslessly reconstructed and the secret data, then, is losslessly revealed from the reconstructed image. The AMBTC encoding procedures are described in Section 2.
Before embedding the secret data, the cover image must be compressed using the AMBTC algorithm. In other words, the proposed scheme uses the AMBTC decompressed image to embed the secret data.
The proposed scheme involves three stages: In the first stage, an appropriate encoding and decoding dictionary is found. The dictionary is used in the second stage to embed the data. In the third stage, the secret data is extracted. The details of the proposed scheme are presented in Figure 1.
Finding a Unique Decodable Dictionary
A binary secret sequence S comprises 0 and 1 values and is denoted as $S = \{s_1, s_2, \ldots, s_N\}$, where $s_i \in \{0, 1\}$ for $i = 1 \sim N$. Consider the dictionaries $D_1$, $D_2$, $D_3$, and $D_4$ formed using K subsets of S, that is, $S_{p1}, S_{p2}, \ldots, S_{pK}$. Different image quality is obtained due to the different dictionaries. Thus, we can calculate the probability of each symbol $S_p$ in S. The amount of information in each symbol, $I_a$, can be represented by

$I_a = -\log_2 pr(S_{pk}). \qquad (7)$
Then, the average information per symbol interval is $H(S_{pk})$ and can be represented by

$H(S_{pk}) = -\sum_{k=1}^{K} pr(S_{pk}) \log_2 pr(S_{pk}). \qquad (8)$
The average information $H(S_{pk})$ is referred to as the entropy. The dictionary with the smallest entropy H should be selected because it achieves the best encoding benefit. The following explains why the dictionary with the smallest entropy is used: assume that there is only one symbol type in the whole secret sequence; in other words, the other types never occur. In this case, the entropy is equal to 0, i.e., $H(S_{pk}) = 0$. Afterwards, the specific symbols are replaced by the absolute minimum value "0", thereby controlling the distortion level in the data embedding phase. Consequently, the proposed method selects the dictionary with the smallest entropy.
An example is used to explain the above procedure. Assume the secret sequence S = {001110111100110110010000010011010}. In dictionary $D_1$, listed in Table 1, the secret sequence is represented as S = {001, 11, 01, 11, 10, 01, 10, 11, 001, 000, 001, 001, 10, 10} for easy readability. According to $D_1$, the total amount of information is 12.4670 and the average information $H(S_{pk})$ per symbol of S is 2.1570. In dictionary $D_2$, which is listed in Table 2, the secret sequence can be represented as S = {00, 11, 10, 11, 11, 00, 11, 011, 00, 10, 00, 00, 10, 011, 010}. According to $D_2$, the total amount of information is 12.1451 and the average information $H(S_{pk})$ per symbol of S is 2.2264. The third and fourth dictionaries are constructed in the same manner, and their entropy values are listed in Tables 3 and 4, respectively. Obviously, the entropy of $D_1$ is the smallest among all the dictionaries. Therefore, we used $D_1$ to encode the secret sequence. Subsequently, the symbols in the selected dictionary are further encoded to obtain the embedded digits. According to the rule of thumb of data encoding, the $S_p$ with the maximum occurrence frequency is encoded as the value of smallest absolute value; by contrast, the $S_p$ with the lowest occurrence frequency is encoded as the value of largest absolute value. Consequently, the $S_p$ are sorted by occurrence frequency, and the sorted index is then encoded to obtain the adjusting pixel value $P_v$, i.e.,

$P_v = \begin{cases} -\left\lfloor \mathrm{Sort}_{index}/2 \right\rfloor, & \text{if } \mathrm{Sort}_{index} \text{ is an odd number,} \\ \mathrm{Sort}_{index}/2, & \text{otherwise.} \end{cases} \qquad (9)$
The following example explains how most symbols are encoded as small digits, as listed in Table 5. The occurrence frequencies of two symbols, "001" and "10", are 4, which is higher than those of the other symbols. According to Equation (9), the symbol "001" is encoded as the absolute minimum value "0". Moreover, the symbol "10" is encoded as the second smallest value "1". The remaining symbols are encoded in the same manner.
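The dictionary selection and the mapping of Equation (9) can be sketched in Python as follows. This is our own illustration: $D_1$ and $D_2$ are the prefix-code dictionaries recoverable from the worked example above (Tables 1-4 are not reproduced in this text), and the entropy values reported in the paper do not coincide exactly with a plain Shannon estimate on these token counts, so this sketch illustrates the selection rule only, not the paper's exact numbers.

```python
import math
from collections import Counter

D1 = ["000", "001", "01", "10", "11"]
D2 = ["00", "010", "011", "10", "11"]

def tokenize(secret, dictionary):
    """Split the binary secret string into dictionary symbols. A prefix
    code guarantees that at most one symbol matches at each position,
    so the left-to-right parse is unique."""
    out, i = [], 0
    while i < len(secret):
        for sym in dictionary:
            if secret.startswith(sym, i):
                out.append(sym)
                i += len(sym)
                break
        else:
            raise ValueError("secret not decodable with this dictionary")
    return out

def entropy(tokens):
    """Average information per symbol, in the spirit of Equation (8)."""
    n = len(tokens)
    return -sum(c / n * math.log2(c / n) for c in Counter(tokens).values())

def pv_map(tokens):
    """Equation (9): symbols ranked by frequency (ties broken by first
    occurrence here) are mapped to P_v values 0, 1, -1, 2, -2, ..."""
    ranked = [s for s, _ in Counter(tokens).most_common()]
    return {s: (idx // 2 + 1 if idx % 2 else -(idx // 2))
            for idx, s in enumerate(ranked)}

secret = "001110111100110110010000010011010"
best = min((D1, D2), key=lambda d: entropy(tokenize(secret, d)))
pv = pv_map(tokenize(secret, best))   # maps each symbol to its adjusting value
```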
Embedding Stage
The AMBTC decompressed blocks $b_i$ in the original AMBTC decompressed image T are sequentially scanned. If the difference between $H_m$ and $L_m$ is smaller than 4, then the block is considered non-embeddable; otherwise, the block is embeddable. In the first embeddable block, the binary representation of the ID number of the selected dictionary is embedded into the least significant bits (LSBs) of the second $H_m$ and the second $L_m$. Note that the number of dictionaries is four, so the two LSBs can effectively represent the ID number. The other blocks are then used to embed the secret data by using the pixel value adjusting strategy.
In each embeddable block, the first $H_m$ and the first $L_m$ are defined as non-embeddable pixels, which are used as the reference information for data extraction and image recovery. For the embeddable block $b_i$, each pixel $x_i$ except the first $H_m$ and the first $L_m$ is increased by the adjusting pixel value $P_v$, that is, $x'_i = x_i + P_v$. The difference between the maximum $P_v$ and the minimum $P_v$ is equal to 4, which implies that the distortion of the pixels is low. The embedding pseudocode is shown in Algorithm 1.

Figure 2a presents the appropriate dictionary D found in Section 3.1. This dictionary was used to encode the secret sequence. After looking up the dictionary D, S is divided into many subsets $S_p$, as shown in Figure 2b. These subsets are mapped to the adjusting pixel values $P_v$, which are exactly the embedded values. To embed these values, the original block must be compressed and decompressed using the AMBTC algorithm, as shown in Figure 2c. After using the AMBTC algorithm, the AMBTC decompressed block can be reconstructed using a low mean value $L_m$ of 97 and a high mean value $H_m$ of 155, as shown in Figure 2d. Both the first $L_m$ and the first $H_m$ are non-embeddable pixels and are marked in yellow for easy readability. They are used as reference information for data extraction and image recovery. For the AMBTC decompressed block, each pixel except the first $H_m$ and the first $L_m$ is increased by the adjusting pixel value $P_v$ to obtain the stego pixel. Figure 2e shows the stego block.
If the overflow or underflow problem occurs in any altered pixel of the block, then all of the pixels in the corresponding block remain unchanged. In other words, the block cannot be used to embed any secret bit. In addition, the proposed method records the ID number of the non-embeddable block to discriminate between the embeddable block and the non-embeddable block.
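Since Algorithm 1 itself is not reproduced in this text, the following Python sketch (our own reconstruction from the description above; the per-image block scan, the dictionary-ID block and overflow handling are simplified) illustrates the per-block embedding step:

```python
def embed_block(pixels, L_m, H_m, pv_stream):
    """Embed adjusting values into one embeddable block, given as the
    flat list of its decompressed pixels (each equal to L_m or H_m).
    The first pixel equal to H_m and the first equal to L_m are kept
    unchanged as reference pixels; every other pixel absorbs the next
    P_v from the iterator."""
    seen_h = seen_l = False
    stego = []
    for x in pixels:
        if not seen_h and x == H_m:
            seen_h = True
            stego.append(x)            # reference pixel, left untouched
        elif not seen_l and x == L_m:
            seen_l = True
            stego.append(x)            # reference pixel, left untouched
        else:
            stego.append(x + next(pv_stream, 0))   # pad with 0 when done
    return stego
```

For instance, with $L_m$ = 97 and $H_m$ = 155 as in Figure 2d, a stream of $P_v$ values in {-2, ..., 2} perturbs each non-reference pixel by at most 2 gray levels.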
Extraction Stage
In the extraction stage, the secret data is extracted from the stego image T . Moreover, T can be used to recover the original AMBTC decompressed image T. The details of the procedures are listed as follows: 1. Scan the stego AMBTC decompressed block b i in T sequentially. If the difference between H m and L m is smaller than 4, then this block is considered a non-embeddable block. Otherwise, it is an embeddable block.
2. Retrieve the ID number of the selected dictionary D from the first embeddable block. In the first embeddable block, the LSBs of the second $H_m$ and the second $L_m$ are extracted; together they form the binary representation of the ID number of the selected dictionary. Therefore, the proposed method can reconstruct the selected dictionary. In addition, both LSBs are restored using the first $H_m$ and the first $L_m$, thereby recovering the original decompressed pixels.
3. Calculate the adjusting pixel values by using $P_v = x'_i - H_m$ or $P_v = x'_i - L_m$ for each embeddable block $b_i$. After obtaining $P_v$, we can look up the dictionary D to obtain the symbol $S_p$. After concatenating all $S_p$, we obtain the secret sequence S and recover the original AMBTC decompressed image T. The extraction and recovery pseudocode is shown in Algorithm 2.
Figure 3 illustrates the extraction and recovery example. First, the dictionary is retrieved from the first embeddable block. Second, the adjusting pixel values are calculated as $P_v = x'_i - H_m$ or $P_v = x'_i - L_m$. Third, each $P_v$ is mapped with the dictionary values to obtain $S_p$. Finally, the $S_p$ are concatenated to obtain the secret sequence S and the AMBTC decompressed block.
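Since only a fragment of Algorithm 2 survived extraction, here is a sketch of the per-block extraction in Step 3 (again our own reconstruction; the paper does not spell out how a stego pixel is attributed to a quantization level, so the nearer level is assumed here):

```python
def extract_block(stego, L_m, H_m, inv_pv):
    """Recover the embedded symbols and the original decompressed pixels
    from one stego block. inv_pv inverts the P_v map built at embedding
    time (e.g. {0: "001", 1: "10", ...})."""
    seen_h = seen_l = False
    symbols, original = [], []
    for x in stego:
        if not seen_h and x == H_m:
            seen_h = True
            original.append(x)          # first reference pixel
        elif not seen_l and x == L_m:
            seen_l = True
            original.append(x)          # second reference pixel
        else:
            # Assumption: with H_m - L_m >= 4 and |P_v| <= 2, each stego
            # pixel is attributed to the nearer level (ties go to H_m).
            base = H_m if abs(x - H_m) <= abs(x - L_m) else L_m
            symbols.append(inv_pv[x - base])
            original.append(base)
    return symbols, original
```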
Experimental Results and Discussion
Some experimental cover images were tested to demonstrate the efficiency of the proposed scheme.
In the experiments, the proposed scheme was verified using the following six test cover images: airplane, boat, lena, mandrill, peppers, and sailboat. As shown in Figure 4, all the images had the same size of 512 × 512 pixels with 256 grayscales, and the features of the images were diverse. The block size of the image presented in the AMBTC format was 4 × 4 pixels. A random binary sequence generated using a MATLAB (R2018a) function was used in the experiments as the secret sequence, where our secret data are the same as the secret data of the related works [15][16][17][19]. Note that each bit in the sequence has equal probability of being 0 or 1.

The proposed scheme was evaluated and compared with the aforementioned schemes in terms of two performance measures, i.e., hiding capacity and peak signal-to-noise ratio (PSNR). The hiding capacity can be defined as the number of secret data bits that can be hidden in a cover image. The PSNR is an objective measure used for determining the visual quality of an image: the higher the PSNR of a stego image, the better its visual quality. The rule of thumb is that when the PSNR is higher than 30 dB, the human eye cannot easily perceive the difference between the cover image and the stego image. PSNR is defined by

$\mathrm{PSNR} = 10\log_{10}\frac{255^2}{\mathrm{MSE}}, \qquad \mathrm{MSE} = \frac{1}{M \times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(x_{ij} - x'_{ij}\right)^2,$

where $x_{ij}$ and $x'_{ij}$ are the original and stego grayscale pixel values located at (i, j), respectively.

To present the superiority of the proposed scheme, we compared our scheme with the schemes presented by Li et al. [15], Lin et al. [16], Ou and Sun [17], and Malik et al. [19], as shown in Table 6. The proposed scheme achieved the highest data-hiding capacity for all images except the airplane image. The data-hiding capacity of the pixel value adjusting strategy is determined by the number of smooth blocks: if there are many smooth zones in a cover image, non-embeddable blocks are abundant in the image. Moreover, the pixel value adjusting strategy used in the proposed scheme modifies pixel values by at most 2, whereas the strategy used in the scheme proposed by Malik et al. modifies them by at most 1. Therefore, compared with the scheme presented by Malik et al., our scheme has a higher number of non-embeddable blocks in the airplane image. Non-embeddable blocks can be observed in black in Figure 5a,b. This is the main factor that causes the hiding capacity of the scheme proposed by Malik et al. to be better than that of the proposed scheme for the airplane image. For the other five images, the hiding capacity of our scheme is better than that of the scheme presented by Malik et al., with an enhancement in the range of 10.13% to 29.89%. The hiding capacity of our scheme is also better than the schemes proposed by Li et al., Lin et al., and Ou and Sun. Thus, we conclude that our scheme is better than the existing AMBTC- and BTC-based data-hiding schemes in terms of hiding capacity.

Table 7 lists the comparison between the method by Malik et al. and the proposed method in terms of the structural similarity index (SSIM). As mentioned above, the SSIM value of the method by Malik et al. is greater than that of the proposed method because the proposed method embeds more secret data. In other words, the maximum hiding capacity of the proposed method is higher than that of the method by Malik et al.

The PSNR is the other factor for evaluating the performance of a hiding scheme. Table 6 shows that the PSNR of our scheme is better than the schemes proposed by Li et al., Lin et al., and Ou and Sun. For the airplane stego image, the proposed scheme has a better PSNR but a lower hiding capacity than the scheme proposed by Malik et al., because our scheme has a higher number of non-embeddable blocks than the scheme by Malik et al. Note that a non-embeddable block maintains the image quality but decreases the hiding capacity. For the other five stego images, the PSNR obtained using the proposed scheme is lower than that obtained using the scheme proposed by Malik et al. However, because the PSNR difference is less than 0.29 dB for these five stego images, they would not be distinguishable by human vision. By contrast, the hiding capacities are significantly increased, in the range of 10.13% to 29.89%, for these five stego images. This implies that a tradeoff exists between the PSNR and the hiding capacity when the pixel value adjusting strategy is used. The visual quality of the proposed scheme is observed to be above the average of the baseline schemes.
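For reference, the PSNR measure above corresponds to the following short sketch (ours, not the authors' code):

```python
import numpy as np

def psnr(cover, stego):
    """PSNR in dB between two 8-bit grayscale images of equal size."""
    diff = cover.astype(np.float64) - stego.astype(np.float64)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
```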
Conclusions
A high-capacity data-hiding scheme for AMBTC-compressed images was proposed in this study. The proposed scheme has more desirable properties than high capacity alone. In this scheme, dictionary-based coding and the pixel value adjusting strategy were combined to increase the hiding capacity while attaining satisfactory visual quality. Experimental results reveal that the proposed scheme is better than the existing AMBTC-based data-hiding schemes in terms of hiding capacity. Moreover, the visual quality of the proposed scheme is above the average of the baseline schemes. In the future, we plan to combine the method by Liao et al. [20] with the proposed method to discriminate image smoothness, thereby enhancing the hiding capacity. In addition, we will try to add the concept of the partition strategy [21] to the proposed method to embed more secret data into color images. | 2020-01-30T09:06:50.566Z | 2020-01-25T00:00:00.000 | {
"year": 2020,
"sha1": "9b8761f0602e6db139c5fcb73f0b0c3e01f1f4f2",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1099-4300/22/2/145/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "5edb109ddc675866edc357741be71f3c37c08ef3",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
9615292 | pes2o/s2orc | v3-fos-license | Narrator profile in translation : Workin-progress for a semi-automatic analysis of narratorial dialogistic and attitudinal positioning in translated fiction
This paper presents work-in-progress for the development of a semi-automatic methodology for the analysis of shifts in narrator profile in translated fiction. Such a methodology is developed for a comparative quantitative analysis of electronic source and target texts organized in a parallel corpus. The first and main part of this paper presents the theoretical motivation for the organization of two systems of categories focusing on the relationship between the two discursive centres involved in reported speech, narrator and character (but also quoter and quotee in other text types), by developing the proposals of dialogistic/intertextual and attitudinal positioning in Appraisal Theory. The second part of this paper analyses a selection of examples illustrative of such categories, and presents and comments on the results of the comparative quantitative analysis of Charles Dickens's Oliver Twist and eight European Portuguese translations for juvenile and adult readerships. This comparative analysis proves the methodology operative and shows evidence of two tendencies, 'levelling-out' and 'explicitation', which, although elsewhere identified as translational universals, may here be identified as norms because they correlate with the independent variable of target readership. The purpose of developing this methodology is to help describe the way interlingual translation may transform narrator profile, as well as to contribute to the formulation of translational norms.
Introduction: methodological issues
This article presents work-in-progress for the development of a semi-automatic methodology for the analysis of shifts in narrator profile in translated fiction. It is theory-motivated, since it presents a model for the study of translational norms based on the assumption (imported from Systemic Functional Grammar, Critical Linguistics and Pragmatics) that linguistic-textual forms both create and express context, by encoding communicative meaning and pragmatic and sociosemiotic value (Hatim and Mason 1990). The methodology presented here for the semi-automatic analysis of forms expressing tenor in (translated) fiction also draws on Corpus Linguistics, Corpus-Based Descriptive Translation Studies and Discourse Analysis. This research is also data-motivated, because it has been developed both based on and for a comparative quantitative analysis of an electronic parallel corpus of Dickensian Source Texts and their translations. 1 As Toury suggested (1995: 38): "The normal progression of a study is thus helical, then, rather than linear: there will always remain something to go back to and discover, with the concomitant need for more (or more elaborate) explanations." The progression of this study was also helical rather than linear, in that it was deeply motivated by the set of data used to test the model and the methodology, in search of more accurate descriptive statements, which, in turn, raise the aspiration for enhanced explanatory capacity.
Tenor in translated fiction
Previous research (Rosa 2000, 2003, 2007) has focused mainly on tenor or interpersonal relations in fictional dialogue and its translation, assuming that texts function as records of communicative transactions. The purpose of analysing fiction and its translation as communicative transaction made it necessary (1) to identify participants in translated narrative fiction, (2) to organize them in pairs of addresser-receiver/addressee and (3) to locate them at different narrative and enunciative levels. This endeavour also resulted in the hierarchical model represented in the following Figure, which was the first step to identify different sets of textual-linguistic features encoding tenor for each level identified in the model (Rosa 2003: 254).

One of the additions to previous research in TS is the fact that this model organizes participants at different enunciative and narrative levels instead of organizing them in linear succession (see Hermans 1996; Schiavi 1996; O'Sullivan 2003), since it assumes that translated fiction functions pragmatically as a hierarchy of voices orchestrated by the translator as addresser of the Target Text. 2 The model in Figure 1 starts at the lowest enunciative/narrative level, where a character says something to another character, not only because the narrator intentionally reports this transaction to a narratee, and the implied ST author conveys their transaction to the implied ST reader, but also because the Implied Translator, in turn, conveys all these transactions to the implied TT Reader. In this model, the asymmetry of power relationships in translated fiction is stressed: addressers have more power than addressees, and upper-level participants have more power than lower-level participants, whose communicative transactions are relayed. The second addition to previous research in TS, and of particular importance, is the definition of the Implied Translator as an intratextual participant, whose profile is identifiable for each translated text and here equated with translational decisions regarding the maintenance or shift of both the profiles and the relationships of participants belonging to all subordinate levels. It is our contention that the actual power possessed by addressers located in upper levels may be either explicitly expressed or camouflaged, and that this will be revealed by the analysis of textual-linguistic and narrative feature patterning. Therefore, to analyse translation as product, it is assumed that the interpersonal relations that obtain between each pair of intratextual participants within the same level, and also among participants belonging to different levels, are encoded in the text by a set of textual-linguistic and narrative features, and are as such subject to translational shifts. The aim of our ongoing research so far (part of which is presented in this article) has been to identify correlations between textual-linguistic features and functions applicable to each of the enunciative/narrative levels presented in Figure 1, so as to better understand and describe the macrostructural shifts that the translation of fiction seems to involve, namely shifts affecting the profiles of participants in translated narrative fiction.
3 This report presents research developments that make the initial model more sophisticated by focusing on additional textual-linguistic and narrative features that may prove relevant for the analysis of the narrator-narratee transaction (Addresser and Addressee 4) and its translation. Following Seymour Chatman's statement that "[i]t is less important to categorize types of narrators than to identify the features that mark their degree of audibility" (Chatman 1978: 196), the model already developed for the analysis of the communicative transaction between narrator and narratee identifies intratextual features which profile the narrator's conspicuousness or "degree of audibility" in each sentence in both ST and TT. This model considers (1) the proportion of sentences that report dialogue vs. those that do not and so conspicuously belong to the narrator's voice only, as well as (2) the most frequent categories of reported speech used to convey dialogue. 4 The present study further includes (3) the most conspicuous presence of the narrator in the textual surface by way of self-reference to himself/herself as an "I" that sometimes mentions its narrative function; and (4) the conspicuousness of the narrator's evaluation, mainly regarding the characters whose speech is quoted.
Considerations (1) and (3) are quite straightforward. Once orthographical sentences in the corpus (taken as units of analysis) are labelled as dialogue-reporting vs. not dialogue-reporting, and as including vs. not including forms of self-reference by the narrator, the quantitative results are deemed to reveal a particular fictional style regarding narratorial conspicuousness and the narrator-narratee transaction.
As for (2), since the narrator's voice is heard/read not only in narrative sentences but also in those sentences that report speech, a set of categories of reported speech were identified and organized in terms of the degree of interference or audibility of the narrator's voice. The five categories chosen were: Narrative Report of Speech Acts, Indirect Speech, Free Indirect Speech, Direct Speech and Free Direct Speech. 5 The choice of this set of five categories was the result of a necessary balance between descriptive sophistication and the need to establish distinctions that are as clear and operative as possible, so as to enable a semi-automatic analysis of an electronic parallel corpus. 6 These five categories of speech report are a set of discursive tools, and the proportion of these categories in narrative fiction may be deemed to generate a fictional style. So, their analysis was expected to reveal the degree of interference of the narrator's voice in reported speech. These categories were organized from the apparent total control of the narrator to the apparent least control of report, from Narrative Report of Speech Acts, through Indirect Speech, Free Indirect Speech, Direct Speech and Free Direct Speech, and later grouped so as to allow for a binary analysis.
Table 1: Binary analysis of reported speech categories

Greater conspicuousness of narratorial voice | Lesser conspicuousness of narratorial voice
Narrative Report of Speech Acts (NRSA)       | Direct Speech (DS)
Indirect Speech (IS)                         | Free Direct Speech (FDS)
Free Indirect Speech (FIS)                   |

The left column in Table 1 groups together the categories of reported speech that confer greater conspicuousness or audibility on the narratorial voice, in which the narrator's power over the reported speech of characters is more noticeable; the right column includes the categories that render the narrator's voice less audible and that camouflage narratorial power by conferring greater autonomy on the characters' voices, whose speech is reported.
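As a minimal illustration of how this binary grouping can feed a semi-automatic comparison (a sketch of ours; the tag names and sentence annotations are hypothetical, not the corpus's actual markup), one can compute category proportions for aligned ST and TT sentences:

```python
from collections import Counter

# Hypothetical speech-report tags per sentence, following Table 1.
MORE_AUDIBLE = {"NRSA", "IS", "FIS"}   # narratorial voice more conspicuous
LESS_AUDIBLE = {"DS", "FDS"}           # narratorial voice less conspicuous

def audibility_profile(tags):
    """Proportion of reported-speech sentences in each Table 1 group."""
    counts = Counter(t for t in tags if t in MORE_AUDIBLE | LESS_AUDIBLE)
    total = sum(counts.values())
    if total == 0:
        return {"more_audible": 0.0, "less_audible": 0.0}
    more = sum(counts[t] for t in MORE_AUDIBLE)
    return {"more_audible": more / total,
            "less_audible": (total - more) / total}

# A DS -> IS shift in translation raises the narrator's audibility:
st_tags = ["DS", "NRSA", "DS", "FIS", "DS", "FDS"]
tt_tags = ["IS", "NRSA", "DS", "IS", "DS", "IS"]
print(audibility_profile(st_tags))
print(audibility_profile(tt_tags))
```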
The most pervasive pattern in translation does not seem to be the maintenance of ST features but the opposite: shifts. As research has already suggested (van Leuven-Zwart 1989, 1990; Gullin 1998), if microstructural features are changed through translation, then, as a consequence, macrostructural levels are affected too; and narrator profile is particularly prone to shifts. Therefore, the most persistent pattern in the translation of narrative fiction is expected to be a transformation of participant profiles in general and of the narrator profile in particular, brought about by an accumulation of micro-structural shifts caused by translational procedures. As used here, the word 'transformation' may be deemed too strong. However, the examples below may account for this lexical choice, as well as for the wish to account for such shifts in narrator profile in a systematic and quantitative way, to find out more exactly how widespread they are.
Regarding consideration (4), the conspicuousness of narratorial evaluation in the Dickensian source texts included in the parallel corpus, the narrator's evaluation in reported dialogue is present in three main forms: (1) in the choice of verba dicendi or verbs of saying; (2) in adjuncts that express manner; and (3) in the forms of reference to characters.
Examples 1 and 2 illustrate the use of the most frequent verb of saying in the ST: "said". In the target texts, this fairly neutral verb of saying is replaced by "exclaimed" or "ordered". The corresponding Portuguese verbs of saying used, "exclamou" and "ordenou", are more informative, identify the speech act and, in the case of "ordered" ("ordenou"), may be said to carry an implicit negative evaluation, which results from the narrator's identification of a direct directive speech act that corresponds to an unabashed expression (and reinforcement) of an asymmetrical power relation: 7

(1) <OT S208> <p84> The board were sitting in solemn conclave, when Mr. (…)

In Example 3, we read that the doctor says something to Oliver's mother "with more kindness than might have been expected of him". In the first TT example, this kindness that is unexpected of him, as an individual, is turned into 'with more sweetness than might have been expected from his profession' ("com mais doçura do que se poderia esperar da sua profissão"), the presupposition being that no member of the medical profession can be expected to be kind. In the second TT example, the doctor's kindness turns into indifference when we read 'turning his face away with indifference' ("voltando o rosto com indiferença"). In other words, a positive and explicit evaluation present in the ST becomes negative and explicit in this TT:

(3) <OT S10> As the young woman spoke, he rose, and advancing to the bed's head, said, with more kindness than might have been expected of him: 'Oh, you must not talk about dying yet.'
[Hearing the young woman's voice, he rose and advancing towards the bed said with more sweetness than might have been expected of his profession: "You should not talk about dying now."]
<OT9 S6> <p4> -Vamos, anime-se -respondeu o cirurgião, voltando o rosto com indiferença.
["Come, cheer up," the surgeon replied, turning his face away with indifference.(Our emphasis)] The narrator of Oliver Twist very often resorts to implicature 8 and example 4 starts with an excessively positive form of reference to a character, only to make a volte-face at the end of the sentence where the clause introduced by the additive conjunction takes the reader by surprise: "The elderly female was a woman of wisdom and experience; she knew what was good for children; and she had a very accurate perception of what was good for herself."(our emphasis): (4) <OT S47> [That mean and nasty woman would rather keep the money for herself and let the children starve.(Ouremphasis)] In the four translated excerpts presented here, we see the implicature of this form of reference made explicit from the start, and the elderly female becomes 'the old woman' or even 'the hag'.Consequently, if we organize the Portuguese versions in a cline, we read: 'the old woman full of wisdom and experience' ("A velha cheia de sabedoria e experiência"), 'the old woman full of cunning and experience' ("A velha cheia de manha e de experiência"), 'a woman of little scruples' ("Uma mulher de poucos escrúpulos") or 'that mean and nasty woman' ("Essa mulher mesquinha e ruim.").In these examples, explicitation of implicature transforms implicit negative evaluation into an explicit negative stance.It is our contention that these shifts of positive and negative stance, implicit and explicit stance contribute to the transformation of both narrator and narratee profile.Presuppositions inferred are that the implied reader is not expected to be able to grasp implied meaning that is transferred to the TT as explicit evaluation, through explicitation.
Theoretical framework: narratorial evaluation
In order to account for such instances of narratorial evaluation by means of a systematic quantitative analysis, we started with Genette's notion of the testimonial function, which is defined as oriented towards the narrator (by association with Jakobson's well-known definition of the emotive function) and is present whenever the narrator expresses an affective, moral or intellectual stance towards the story or, in this case, towards the characters or the speech s/he reports (Genette 1972).
For the purpose of a more sophisticated description of instances such as the examples just mentioned, we also imported a set of categories created by Appraisal Theory. In the last fifteen years, Appraisal Theory has been developed within the framework of Systemic-Functional Grammar by a group of researchers led by James Martin and Peter R. White. Appraisal Theory focuses on the descriptive study of evaluative language use, that is, of the way language is used to evaluate or negotiate stance and interpersonal relationships, mainly through an interpersonal system of evaluative lexis.
Appraisal encompasses attitudinal positioning and dialogistic (or intertextual) positioning, the latter of which is related to Bakhtinian dialogism and heteroglossia. As we read in White's Guide to Appraisal (2001: 2): "The term 'Appraisal' is used as a cover-all term to encompass all evaluative uses of language, including those by which speakers/writers adopt particular value positions or stances and by which they negotiate these stances with either actual or potential respondents."
Under dialogistic or intertextual positioning, White (2001: 6) comprehends: "uses of language by which writers/speakers adopt evaluative positions towards what they represent as the views and statements of other speakers or writers, towards the propositions they represent as deriving from outside sources. At its most basic, intertextual positioning is brought into play when a writer/speaker chooses to quote or reference the words or thoughts of another."
This type of analysis focuses on the relationship between the two discursive centres involved in reported speech (the narrator and the character whose words are quoted), a relationship which has already been suggested to be expressed by the percentage of dialogue vs. non-dialogue sentences in each narrative text, as well as by the most frequent categories of reported speech chosen to depict dialogue (Rosa 2003, 2007). However, it is attitudinal positioning that is central to this investigation for the purpose of analysing the testimonial function. It is meant to encompass the use of "meanings by which writers/speakers indicate either a positive or negative assessment of people, places, things, happenings and states of affairs" (White 2001: 2) and will be used as the basis for the analysis of the narrator's evaluation of the story's characters.
Regarding attitudinal positioning, and so as to keep the analysis as simple as possible (irrespective of its being emotional, ethical or aesthetic, as suggested by Appraisal Theory), we only considered whether (1) the narrator's stance was evaluative or neutral; (2) positive (Endorsement) or negative (Disendorsement); and (3) explicitly marked by the use of euphoric or dysphoric lexis or implicitly expressed through implicature. 9

To sum up, the first part of this article has presented the theoretical and data-driven motivation for the organization of two systems of categories focusing on the relationship between the two discursive centres involved in reported speech, narrator and character (quoter and quotee in other text types).
The first system organizes in a cline a set of descriptive categories of reported speech considered expressive of different power relations towards what the narrator represents as speech by other speakers, and thus of different types of dialogistic or intertextual positioning. It includes:

• the proportion of dialogue vs. non-dialogue reporting sentences in fiction, as expressive of the power relations that the narrator establishes with the characters quoted, with the narratee and ultimately with the reader,
• the presence or absence of forms of self-reference by the narrator, and
• two categories of forms of reported speech considered expressive of different power relations towards what the narrator represents as speech by other speakers.

The second system organizes categories expressive of the narrator's positive or negative evaluation of characters, and thus of attitudinal positioning, as proposed by Appraisal Theory (White 2001). It covers the narrator's:

• evaluative vs. neutral stance,
• positive or negative evaluation, mainly of characters that intervene in the story, as well as
• the classification of this evaluation as explicit or implicit.

A minimal sketch of how both systems could be encoded for a semi-automatic analysis is given after this list.
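The sketch below is our own illustration of such an encoding, one record per orthographic sentence; all names and the structure are assumptions, not the tool actually used in the study:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Speech(Enum):
    """Reported-speech categories, ordered from greatest to least
    narratorial control (system 1: dialogistic/intertextual positioning)."""
    NRSA = 1
    IS = 2
    FIS = 3
    DS = 4
    FDS = 5

class Stance(Enum):
    """Attitudinal positioning (system 2)."""
    NEUTRAL = 0
    ENDORSEMENT = 1       # positive evaluation
    DISENDORSEMENT = 2    # negative evaluation

@dataclass
class SentenceAnnotation:
    reports_dialogue: bool               # dialogue vs. narrative-only sentence
    narrator_self_reference: bool        # first-person narratorial forms present?
    speech_category: Optional[Speech]    # None for purely narrative sentences
    stance: Stance
    stance_explicit: Optional[bool]      # explicit lexis vs. implicature; None if neutral

# Example: a Direct Speech sentence carrying implicit negative evaluation.
s = SentenceAnnotation(True, False, Speech.DS, Stance.DISENDORSEMENT, False)
print(s.speech_category.value <= 3)  # True would indicate greater narratorial control
```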
The second part of this paper presents and comments on the results of the comparative quantitative analysis of Charles Dickens's Oliver Twist and eight European Portuguese translations for juvenile and adult readerships. 10 The aim of developing this methodology for a semi-automatic quantitative and qualitative analysis of translated narrative fiction is to describe how interlingual translation may transform the narrator profile in terms of dialogistic/intertextual and attitudinal positioning, as well as to contribute to the description of translational regularities by correlating them with contextual variables (such as implied readership), the ultimate purpose being the formulation of translational norms (Toury 1995).
The parallel corpus
So as to set up a parallel corpus including samples of approximately 500 sentences, the first four chapters of Oliver Twist were selected for the non-translated subcorpus; the translated subcorpus includes the corresponding samples of eight target texts published after the 1940s (see Table 2). The main contextual independent variable selected for this study was intended readership. As shown in Table 2, the translated subcorpus includes four texts for an adult readership (1942, 1952, 1980 and 1981) and four texts for teenagers or children (1968, 1972, 1988 and 1993). The chronological scope selected resulted from the fact that, considering all 17 textually different translations published in European Portuguese (1876-1993), it is only in the second half of the 20th century that Oliver Twist was translated for teenagers and children, and from the sixties onwards mainly retranslated for this younger readership.
Narrative vs. dialogue reporting sentences
Let us explore the distinction we expect to find encoded in the TTs as a correlate of the contextual independent variable, intended readership, by comparing the quantitative results of the semi-automatic analysis of the TTs published for adults against those of the TTs translated for teenagers or children. Table 3 shows the ratios between dialogue and non-dialogue sentences. In the first four chapters of Oliver Twist, 57% of sentences report the characters' speech and depict their dialogue; the remaining 43% can be identified as belonging to the narrator's voice. All TTs for adults, except the one published in 1952, increase by an average of 3% the predominance of dialogue reporting sentences already present in the ST (57%), and in this way contribute to making the narrator's voice quantitatively less audible than in the ST. However, as also shown in Table 3, all TTs for teenagers and children, except the 1972 version, decrease the predominance of dialogue reporting sentences by an average of 4%, and so make the narrator's voice, power and control more visible. This proportion seems to be pertinent and operative, as it reveals opposite tendencies in the two subcorpora: in TTs for adult readers the narrator's voice is heard less, since dialogue reporting sentences are more predominant than they already were in the ST; in TTs for teenage/child readers the narrator's voice is heard more than in the source text. It is possible that the higher degree of condensation shown by TTs for teenage/child readers (Table 2) was obtained by omitting dialogue sentences; only an analysis of an aligned parallel corpus will tell. Nevertheless, the comparative quantitative analysis shows that the narratorial power of giving or not giving the floor is more strongly felt in TTs for teenage/child readers. Moreover, the dialogue vs. narrative sentence ratio present in the source text (57/43, a gap of 14 points) is reduced in the TTs for teenage/child readers (53.5/46.5, a gap of 7), which may be related to a "levelling-out" universal, whereas it increases in TTs for adults (59.5/40.5, a gap of 19). Therefore, TTs for adults decrease the conspicuousness of narratorial power and control, as shown by the number of sentences attributed to the narrator's voice. This is contrary to a general levelling-out tendency and may be contextually motivated by the influence of modernist narrative techniques upon literary and translational norms in 20th-century Portugal. The greater value attached to showing instead of telling, by means of covert or transparent narration (Chatman 1978: 254), may motivate shifts in the modern translations of the 19th-century Dickensian narrator profile, which seem to imply a translator who presupposes a lower level of tolerance for a domineering narratorial voice among 20th-century adult readers.
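The ratio comparison underlying Table 3 amounts to a few lines of arithmetic; the sketch below uses only the mean percentages quoted in the text (the per-text figures of Table 3 itself are not reproduced here):

```python
def split(dialogue_pct):
    """Return (dialogue %, narrative %) and the gap between them."""
    narrative_pct = 100.0 - dialogue_pct
    return dialogue_pct, narrative_pct, dialogue_pct - narrative_pct

for label, pct in [("ST", 57.0),
                   ("TTs for adults (mean)", 59.5),
                   ("TTs for teenagers/children (mean)", 53.5)]:
    d, n, gap = split(pct)
    print(f"{label}: {d}/{n}, gap {gap:.0f} points")
```

Run as written, this reproduces the three ratio/gap pairs cited above: 57/43 (14), 59.5/40.5 (19) and 53.5/46.5 (7).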
Presence vs. absence of forms of self-reference
As to the presence or absence of forms of self-reference to the narrator using first-person markers, all four target texts translated for adults maintain the low level of conspicuous presence of the narrator who refers to himself; all target texts translated for teenagers or children, except for the 1972 text, exclude each and every sentence in which the narrator is most conspicuous through forms of self-reference. Therefore, the I-presence of the narrator differs in the two subcorpora analysed: TTs for adults show a predominant slight decrease in the number of sentences in which the narrator's voice is most conspicuously heard through forms of self-reference, whereas TTs for teenagers or children show a predominant tendency to abolish such sentences. This tendency contrasts with the one regarding narrative vs. dialogue sentences, because the narrator's self-reference is more conspicuous in TTs for adults than in TTs for teenagers and children, where it is nearly absent.
This omission of forms of self-reference in the TTs for teenagers or children, which also show a high degree of condensation, might be related to the fact that such forms may be ambiguous when read out loud by parents to children who cannot read.
Reported speech categories
Moving on to the binary analysis of reported speech categories, the results show that the ST mainly contains forms that confer more autonomy on the characters' speech, in which the narrator's voice, power and interference are less audible (92%). This is indeed a characteristic of Dickensian fiction, in which the narrator mainly resorts to Direct Speech to report dialogue between characters. When we compare the ST percentages with those of the TTs, we see that the prominence given to forms of reported speech that show less narratorial control is maintained, although with fluctuations. On average it decreases only 1% in TTs for adults and 3% in TTs for teenagers or children (ST: 92%; TTs for adults: 91%; TTs for teenagers: 89%). Therefore, TTs for adults as well as those for children and teenagers show a tendency to slightly increase forms of reported speech that give more prominence to the narrator's control and power. As expected, this tendency is stronger in TTs for children and teenagers, whose reading thus seems to be more controlled by the narrator's voice. This yields a more powerful narrator profile and implies a reader who is presupposed to be less competent and, therefore, to need stronger guidance from the narrator's voice. 12
Attitudinal positioning: evaluative vs. neutral stance
If we consider the percentage of sentences that mark the evaluative stance of the narrator against those from which it is absent, there is an overall tendency in the TTs to maintain the prominence of the opinionated Dickensian narrator. However, all TTs but one for adults (published in 1981) decrease the narrator's evaluative stance. A predominant increase of the narrator's neutrality is, therefore, also noticeable here, raising the question of whether this should be attributed to a levelling-out tendency (a universal) or to literary norms that might influence this general tendency to decrease the conspicuousness of the evaluative stance of the Dickensian narrator in translations produced in the second half of the 20th century (a norm). When we move on to a comparative analysis of the two TT subcorpora for different readerships, the results reveal an average decrease in the narrator's evaluative stance of 4% in the subcorpus for adults and of almost 10% in the subcorpus for teenagers. Therefore, this overall tendency to decrease the narrator's evaluative stance is quantitatively weaker in the subcorpus for adults (with an average of 61.2% of sentences in which the evaluative stance of the narrator is present, against only 55.6% in the subcorpus of TTs for a teenage/child readership). Although we would perhaps expect the narrator to guide the reader in a more conspicuous and powerfully evaluative way in TTs for teenagers and children, the findings contradict those expectations. This patterning may again be due to the higher degree of condensation shown by TTs for a younger readership and may require a more sophisticated analysis of an aligned parallel corpus.
But let us move on to analyse this evaluative stance that, despite the decrease, still remains predominant in the TTs.
Attitudinal positioning: positive vs. negative stance (Endorsement vs. Disendorsement)
The predominantly negative evaluative stance of the narrator in the ST remains predominant in the TTs, too, although all TTs but one tend to decrease it. Comparing the two translated subcorpora, the narrator's negative evaluative stance is on average quantitatively less present in the TTs for teenagers and children (73.4%) than in TTs for adults (75.3%), showing decreases of 4.2% and 2.3%, respectively. Only one TT for adults increases the number of sentences that in the ST mark the narrator's negative evaluation. The ratio between positive and negative stance decreases in the translated corpus, which may again be interpreted as a result of the "levelling-out" universal; however, the readership-profile variable may motivate norm-governed behaviour that is more strongly felt in the subcorpus for teenagers and children, in which this levelling-out tendency again appears to be stronger. Therefore, TTs for teenage or child readers show a less negative narratorial stance than the ST and also than TTs for adult readers. Interestingly, this does not correspond to the impression left by reading all the target texts, because those for teenagers and children seem more negative in the depiction of characters, actions and places when compared with both the ST and the TTs for adult readers. This effect may be explained when we consider whether the evaluative stance of the narrator is marked explicitly or implicitly.
Attitudinal positioning: explicit vs. implicit stance
The Dickensian narrator is usually associated with irony, because he predominantly implies a negative evaluation by stating the opposite of what he means, thus relying heavily on implicature. The latter is retrieved either through the narrator's own contrastive words and sentences or through contrasts between his words and the characters' words or actions he chooses to portray. Fairclough mentions "setting" as "the extent to which and ways in which reader/listener interpretation of secondary [quoted] discourse is controlled by placing it in a particular textual context (or 'cotext')" (Fairclough 1995: 60). The control of interpretation through implicature presupposes an addressee competent enough to identify ambivalence and to come to his own conclusions. These conclusions are still controlled by the narrator, but in a subtler way: by offering the narratee (and ultimately the reader) more interpretative leeway, the narrator camouflages his actual power and control, which are felt as less forceful. This may pose translational problems and be particularly prone to explicitation and simplification procedures. Especially when the implied readership is teenagers or children, the implied translator's interference is expected to be more strongly felt. As expected, implicit evaluative stance decreases in all TTs analysed, from 87.4% of sentences in the ST to an average of 72.9% in all target texts, thus showing a considerable degree of explicitation, disambiguation and levelling-out. This general tendency is much stronger in TTs for children, where the percentage of implicit evaluative stance decreases to 61.8%, whereas in TTs for adults it is kept at 84.1%, i.e. much closer to the ST's 87.4%. On average, TTs for children show a dramatic 25.6% decrease of implicit evaluative stance, whereas in TTs for adults it decreases by only 3.3%. Therefore, although narratorial negative stance decreases more strongly in TTs for teenage or child readers, the fact that these TTs seem more sombre appears to be due to explicitation procedures, which turn 25.6% of implicit stance into explicit stance. In line with our suggestion to posit the existence of an intratextual profile of the implied translator, these shifts may be interpreted as drawing this profile. The presuppositions of the implied translator regarding a teenage reader seem to lead him to expect an inability to interpret the narrator's implicature. Teenage or child TT readers are presupposed to need disambiguation or explicitation procedures, which result from a solidary move by the implied translator to turn implied meaning explicit and thereby accessible. The implied translator's presuppositions regarding adult TT readers differ, since these implied TT readers are deemed more competent and are therefore offered the opportunity to judge for themselves, to be surprised and to enjoy the (almost) full extent of the narratorial irony and humour of the narrator's implicit negative stance. Regarding implicit stance, the Dickensian narrator profile is therefore dramatically different when we compare TTs for adults with those produced for a teenage or child reader.
Final Remarks
The results of the quantitative analysis presented above may be summarized as follows, regarding each intratextual feature analysed.
• The ST's predominance of dialogue sentences increases in TTs for adults, but the opposite happens in TTs for a younger reader, where it decreases.
• The scarce I-presence of the narrator in the ST slightly decreases in most TTs for adults and nearly disappears in TTs for a younger reader.
• The ST's predominance of Direct Speech decreases in TTs for adults and somewhat more in TTs for teenagers or children.
• The ST's predominance of evaluative sentences shows an overall decrease that is twice as strong in TTs for a younger audience as in TTs for adults.
• The ST's predominance of negative appraisal shows a slight decrease that is stronger in TTs for a younger audience.
• As for the ST's predominance of implicit appraisal, it generally decreases, but shows a dramatic decrease of nearly 26% in TTs for a younger audience.
If we accept these intratextual features as expressive (among others) of the narrator profile, we may conclude that this profile is indeed subject to transformation in the Portuguese translations of Oliver Twist. Explicitation is the procedure that shows the most dramatic figures. Levelling-out procedures are apparent regarding reported speech categories, evaluative vs. neutral stance, positive vs. negative stance and implicit vs. explicit stance, since the binary analysis carried out shows percentages for opposite categories that become closer in the TTs. The only exceptions are the forms of self-reference, which tend to disappear, and the predominance of dialogue reporting sentences in TTs for adults. These procedures are usually accounted for as universals. However, here they have been shown to correlate with the contextual independent variable chosen for analysis (target readership age) and are consequently interpreted as a result of translational norms. Shifts are indeed the rule, but they are quantitatively higher in TTs for teenagers and children, where translators appear to grant themselves a stronger power of intervention, probably also as a consequence of a higher degree of condensation. Levelling-out appears stronger the higher the degree of condensation. The shifts analysed do not all cohere around a clear and globally identifiable strategy for each translated subcorpus; however, this may be due to the limited dimensions of the corpus under analysis.
There are some other tendencies worth mentioning at this point. TTs for teenage and child readers create a narrator profile that is more audible in terms of the number of dialogue sentences and the selection of reported speech categories, but mostly in terms of explicit stance. In these TTs, narratorial power is stronger regarding these features. This happens because the translator's intervention is also stronger and the number of shifts higher (when compared with TTs for adult readers), since the younger intended reader profile seems to justify a more visible translator who reconfigures the source text for a reader deemed less competent. The mediating power of the translator is more strongly felt, in a clearly solidary move towards a young reader. In these TTs, the narrator is less evaluative, less negative (than in both the ST and the TTs for an adult reader) and refers less to himself as addresser; this apparently opposite patterning may be attributed to the equally higher degree of condensation shown by these TTs.
TTs for adult readers recreate a narrator profile that differs less from that of the ST. These TTs profile a narrator who is less audible because he more often gives the floor to character voices in dialogue, and because he is less evaluative and less negative, though more explicit, than in the ST. One may assume that these shifts, though fewer in number and weaker in effect, may be motivated by a consideration of both the TTs' intended adult reader profile and the literary norms in force in the target context, which favour a less audible narrator. Particularly interesting, because contrary to the predominant tendency of levelling-out, is the increase in the number of dialogue reporting sentences, which renders narratorial power less conspicuous. The higher percentage of dialogue reporting sentences allows the characters' voices to take on an even stronger predominance than in the ST, since fewer sentences are attributed to the narrator's voice alone and the great majority of dialogue reporting resorts to Direct Speech.
In TTs for teenagers and children, the narrator appears more overbearing, more audible and more explicit, also because the implied translator intervenes more strongly in the ST, in a more solidary (or patronising) move towards an intended readership which, because it is deemed less competent, is offered less interpretative leeway. The transformation of the narrator profile is therefore more dramatic in TTs for a younger readership, especially, as the analysis shows, through explicitation of the narrator's favourite discursive strategy of implicature.
In TTs for adult readers the translator's power is less evident, both regarding the ST and regarding the intended readership. However, shifts seem to be motivated by the consideration of a more competent intended reader, both in terms of the inferred capability to interpret implicature and in terms of a presupposed literary competence built on an acquaintance with 20th-century literary norms that devalue the overbearing narrator profile still acceptable in 19th-century fiction.
Target texts
secondary discourse."Within the current predominant ideological context, the choice of "order" as a very explicit expression of actual power will probably be interpreted negatively.The implied translator's shift from ST "said" to TT "ordered" may even be interpreted as denouncing the quotee, Mrs. Sowerberry, as a bully and thereby controlling the implied reader's response.On the other hand, it may also be interpreted as a marker of solidarity with the teenage/child implied reader, whose positioning may be anticipated in this choice of "ordered" (see Fairclough 1995: 62).Whether this shift is attributable to the Portuguese interlingual translator or to the intralingual translator who produced the English language version used as ST is yet another line of inquiry that can be followed in future research. 8As stated by Brown and Yule (1983: 31): "The term 'implicature' is used by Grice (1975) to account for what a speaker can imply, suggest or mean, as distinct from what the speaker literally says."As Brown and Levinson also suggest ([1978] 1987: 57) indirect expressions, equated with implicature, also tend to be used to express criticism, as is the case of negative evaluative and implicit stance so often characteristic of Dickensian narrators. 9Based on White (2001), we suggest a set of descriptive categories for the study of source and target texts focussing on the relationship between the two discursive centres involved in reported speech, expressed in terms of (1) Endorsement, defined as an evaluative stance that can either be classified as (1a) Endorsement, if the narrator evaluates positively what he quotes, or (1b) Disendorsement, if the narrator evaluates negatively what he quotes; or (2) neutral, if no type of evaluation can be ascertained from the analysis. 10Translation is here interpreted in the broad sense, all target text versions were included, irrespective of their labels ('translation', 'full text translation', 'adaptation' or 'condensation'). 11The degree of condensation of these TT versions was assessed by comparing the number of sentences in the ST with the number of sentences in each TT regarding the same four initial chapters.In this corpus, TTs for adult readership recreate on average 91.41% of ST sentences whereas TTs for teenagers or children condensate more and recreate on average 40.42% of the number of ST sentences. 12Against this predominant background, the increase of DS and FDS in the seventies and eighties, that is the increase in forms that make the narrator's power and control less visible (in one text for adults published in 1980 and one text for teenagers published in 1972), becomes noteworthy.Despite this verification for the seventies and early eighties, these results to not correlate very consistently either with date of publication or with the degree of condensation.It is very often the case that coherence is lacking when we analyse translation but this may also be due to the restricted scope of this corpus and calls for further studies.
Table 5: Forms of speech report (adult readership and teenage/child readership) | 2017-10-16T22:57:31.674Z | 2021-10-25T00:00:00.000 | {
"year": 2021,
"sha1": "bb96e249137e9b61aec6d53d372720a2aa35f206",
"oa_license": "CCBY",
"oa_url": "https://lans-tts.uantwerpen.be/index.php/LANS-TTS/article/download/217/141",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "bb96e249137e9b61aec6d53d372720a2aa35f206",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Sociology"
]
} |
18157371 | pes2o/s2orc | v3-fos-license | Comparison of Turkish and US haemodialysis patient mortality rates: an observational cohort study
Background There are significant differences between countries in the mortality rates of haemodialysis (HD) patients. The extent of these differences and possible contributing factors are worthy of investigation. Methods As of March 2009, all patients undergoing HD or haemodiafiltration for >3 months (n = 4041) in the Turkish clinics of the NephroCare network were enrolled. Data were prospectively collected for 2 years through the European Clinical Dialysis Database. Mean age ± standard deviation was 58.7 ± 14.7 years, 45.9% were female and 22.9% were diabetic. Comparison with US data was performed by applying an indirect standardization technique, using specific mortality rates for patients on HD by age, gender, race and primary diagnosis as provided by the 2012 US Renal Data System Annual Data Report as reference. Results The crude mortality rate in Turkey was 95.1 per 1000 patient-years. Compared with the US reference population, the annual mortality rate for Turkey was significantly lower, irrespective of gender, age and diabetes. After adjustments for age, gender and diabetes, the mortality risk in the Turkish cohort was 50% lower than US whites [95% confidence interval (CI) 0.46–0.54, P < 0.001], 44% lower than US African-Americans (95% CI 0.52–0.61, P < 0.001) and 20% lower than Asian-Americans (95% CI 0.74–0.86, P < 0.05). Conclusions The annual mortality rate of prevalent HD patients was found to be significantly lower in the studied Turkish cohort compared with that published by the US Renal Data System Annual Data Report. Differences in practice patterns may contribute to the divergence.
Introduction
Despite improvements in dialysis treatment, patients on maintenance haemodialysis (HD) have a markedly higher mortality rate compared with the general population. According to the 2012 US Renal Data System (USRDS) report, the expected life span is 8 years for incident HD patients aged 40-44 years and 4.5 years for those aged 60-64 years [1]. The major cause of death is cardiovascular disease, accounting for 50% of deaths.
There are significant inter-country differences in annual mortality rates of HD patients. The HD patient mortality rate is significantly higher in the USA than in Europe and Japan. This disparity may partially be explained by the differences in mortality rates of the general population in the various countries [2]. Additionally, variations in patient age, prevalence of comorbid diseases, underlying renal disease and racial/genetic status all contribute to survival of dialysis patients [3,4]. Finally, differences in practice patterns also impact survival rates, e.g. weekly dialysis duration, vascular access type, physician care and management of hypertension, hyperphosphataemia and anaemia [5][6][7][8][9][10][11].
The Turkish National Registry reports a crude mortality rate in prevalent HD patients as low as 10.0/100 patient-years [12], although the validity of this low mortality rate might be affected by the retrospective and questionnaire-based nature of the data collection. Another report with higher-quality data obtained from 1074 prevalent HD patients receiving dialysis from a single provider chain in Turkey found a 9.6/100 patient-years crude mortality rate in Turkish prevalent HD patients. Both values are much lower than what is reported for US white patients (23.6/100 patient-years in 2009) and also for European patients (13.3-18.6/100 patient-years) [13][14][15]. However, it is possible that the low mortality rate observed in Turkish HD patients is due to the calculation method and to inadequate or no adjustment for gender, age and diabetes. In fact, the Turkish HD population is much younger and has a relatively lower diabetes prevalence compared with US patients [12].
The aim of this study was to determine the mortality rate in a relatively large Turkish HD cohort using prospectively recorded data and to compare the mortality rate adjusted by age, gender, race and diabetes with that of the US HD population, as obtained from the 2012 USRDS annual data report.
Materials and methods
Patients receiving maintenance HD at 41 HD centres operated by Fresenius Medical Care in Turkey were enrolled in this observational cohort study. The study's start date was March 2009, and follow-up time was 2 years. Inclusion criteria were age 18 years and older and initiation of dialysis more than 3 months before baseline (i.e. the study population comprised prevalent dialysis patients only, to facilitate alignment with the US cohort). Baseline and follow-up data, including time of death, were collected from the European Clinical Dialysis Database (EuCliD) in Turkey, which has been validated and used since 2001 [13,16]. The patients were censored at the time of transfer to other dialysis facilities or to other renal replacement modalities. The data from the patients who transferred to another treatment modality or to other dialysis centres were recorded until premature termination and were included in all analyses.
All patients were dialysed with polysulfone membranes (94% high-flux) and bicarbonate-based dialysate. Dialysate sodium concentration was prescribed as 138 mEq/L in all patients; potassium and calcium concentrations were prescribed according to individual needs.
All biochemical analyses were measured using an Abbott Architect c8000 autoanalyzer (Abbott Diagnostics, Chicago, IL, USA) in the same central laboratory. All blood samples were taken before a mid-week HD session.
All Turkish HD patients were Caucasian. US mortality data were extracted from the USRDS database, 2012 USRDS Annual Data Report. The USRDS is a national data system that provides information about chronic kidney disease and end-stage renal disease (ESRD) in the USA. A central goal is to describe the prevalence and incidence of ESRD and to provide data sets and samples of national data to support research by other research bodies. The data used in this analysis were provided in Table H.8.1 of the 2012 USRDS Annual Data Report and refer to over 375 000 prevalent HD patients with breakdown information according to age, gender, race and primary diagnosis.
Ethics approval and consent to participate
The study was conducted in accordance with the ethical principles of the Declaration of Helsinki and compliance with Good Clinical Practice Guidelines; all patients provided written informed consent.
Statistical analysis
Comparison between Turkish and US mortality data was performed on the basis of standardized mortality ratio (SMR) calculations, applying the indirect standardization technique and using as reference the specific mortality rates for patients on HD by age, gender, race and primary diagnosis provided in Table H.8.1 of the 2012 USRDS Annual Data Report. The following steps were taken during the analysis: (i) for each subgroup that is homogeneous in terms of age, gender, race and primary diagnosis, the mortality rate from the USRDS table (per 1000 patient-years) was multiplied by the patient time at risk in the corresponding Turkish cohort. Since a specific Turkish reference population group was not available in the USRDS database, we performed separate comparisons with US whites, African-Americans and Asian-Americans. This yields the expected number of deaths according to USRDS-specific mortality rates. (ii) The observed deaths for the same homogeneous subgroups were summed in the Turkish cohort. (iii) The SMR was calculated as the ratio of observed to expected mortality. Values lower than one mean that the mortality rate in the evaluated population of Turkish patients is lower than that in the US reference population [17].
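In code, the indirect standardization above is a short weighted sum. The sketch below is a minimal illustration with invented stratum figures, not the actual USRDS rates or Turkish person-time:

```python
def smr(strata):
    """Standardized mortality ratio via indirect standardization.

    strata: list of (reference_rate_per_1000_py, turkish_person_years,
                     observed_turkish_deaths) tuples, one per subgroup
    homogeneous in age, gender, race and primary diagnosis."""
    expected = sum(rate / 1000.0 * py for rate, py, _ in strata)
    observed = sum(deaths for _, _, deaths in strata)
    return observed / expected

# Hypothetical example: two strata where Turkish mortality is roughly half
# the reference rate, yielding an SMR near 0.5.
print(round(smr([(200.0, 300.0, 30), (150.0, 400.0, 30)]), 2))  # -> 0.5
```

Pooling observed and expected deaths across strata before taking the ratio is what distinguishes indirect from direct standardization, and is why only stratum-level reference rates (not the reference population's person-time) are needed.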
Statistical significance was defined as P < 0.05. All analyses were performed using SPSS software version 13.0 (SPSS, Inc., Chicago, IL, USA). Data were expressed as mean ± standard deviation.
Baseline characteristics of the Turkish patient cohort 2009-2011
After excluding patients younger than 18 years (n = 17) and those who were on dialysis for <3 months (n = 266), 4041 Turkish patients were entered into the final analysis. Patient characteristics are shown in Table 1. Mean age was 58.7 ± 14.6 years, and median time on HD was 48.2 months. Fifty-five per cent of patients were male. Twenty-five per cent of patients had diabetes.
Vascular access was arteriovenous (AV) fistula in 86.1% of the cases. Mean length of sessions was 238 ± 6 min, and mean eKt/V was 1.46 ± 0.23; 90.3% of the patients were predominantly treated with conventional HD (>50% of the sessions during follow-up) and 9.7% were predominantly treated by post-dilution on-line haemodiafiltration.
Hypertension (systolic blood pressure >140 mmHg and/or diastolic blood pressure >90 mmHg) was present in 24.1% of the patients. Approximately 6% of the patients had an interdialytic weight gain (IDWG) of >5.7% (corresponding to 4 kg IDWG in a patient weighing 70 kg).
There was no difference in characteristics between the patients who prematurely terminated the study because of transfer to another centre and those who remained in the study. The patients who were transplanted during the study period were younger and had less frequency of diabetes compared with the patients who remained in the study (age: 57.7 ± 14.2 versus 42.3 ± 11.1 years, P < 0.001 and diabetes: 12.7 versus 26.2%, P = 0.04, respectively).
Crude mortality rate was 95.1 per 1000 patient-years. Comparison of specific unadjusted mortality rates of Turkish patients with US patients is reported in Table 2. The Turkish annual mortality rate was significantly lower than that published by the USRDS for the US population, irrespective of gender, age or diabetes.
Crude mortality rates of the Turkish and USRDS cohorts and the expected mortality rate of the Turkish cohort adjusted by race are displayed in Figure 1. Gender, age group and cause of ESRD in the Turkish cohort were standardized against the white, Asian-American and African-American cohorts, respectively. SMRs of the Turkish sample were calculated using USRDS data as reference. After adjustment for age, gender and diabetes, the mortality risk in the Turkish cohort was 50% lower than that of US whites, 44% lower than that of US African-Americans and 20% lower than that of Asian-Americans (Table 3).
After exclusion of patients predominantly on haemodiafiltration, the crude mortality rate was still lower in Turkish HD patients than in the US white HD patients (103 per 1000 patientyears). Compared with the US whites, adjusted relative risk for overall mortality was 0.53 (95% confidence interval 0.49-0.57, P < 0.001).
Discussion
The mortality rate of prevalent HD patients was found to be lower in the studied Turkish cohort compared with that published by the USRDS. Racial differences, younger age and lower prevalence of diabetes may be postulated to contribute to this discrepancy. However, in this study, a survival advantage persisted after adjustment for age, race, gender and diabetes such that the mortality risk in the Turkish cohort was 50% lower than US whites, 44% lower than US African-Americans and 20% lower than Asian-Americans. Mortality rates were not different in men and women (58 and 61% relative risk reductions, respectively). Regarding age, the lower mortality rate of Turkish patients was much more prominent in younger patients. The survival advantage observed in this cohort over US patients was blunted in diabetics: risk reduction was 37% in diabetics, while it was 59% in the whole population.
In 2003, DOPPS data reported a crude annual mortality of 6.6, 15.6 and 21.7% in Japan, Europe and the USA, respectively [3]. Some evidence suggests that this is in part due to country-specific differences in general population mortality rates (particularly due to cardiovascular disease). However, as the life expectancy of the general population in Turkey is lower than in the USA (United Nations World Population Prospects: 2012 Revision), this cannot explain the higher survival we observed in this Turkish cohort of HD patients. Comparisons of mortality rates in HD patient populations may be flawed by the use of different calculation methods. A recent study, similar to ours, compared survival rates in a group of Chinese patients with USRDS data using the same method applied in the USRDS calculations [18]. The annual mortality for Beijing HD patients was found to be lower than that of their USRDS counterparts in adjusted analyses. The authors speculated that differences in race or practice patterns might be responsible for the lower mortality rate in their cohort. In our study, the survival advantage we found for Turkish HD patients was evident in comparisons with all race groups.
Practice patterns differ significantly between countries and could affect outcomes. There was a large difference between the two cohorts in the proportion of patients dialysed with an AV fistula, which is associated with better survival rates than AV grafts or catheters: 86% in this cohort versus 55% in the USRDS report [1]. The mean duration of HD sessions was 238 min in the Turkish cohort, while it is 214 min for US patients according to DOPPS data [7]. More strikingly, 33.1% of sessions in the USA were shorter than 200 min, compared with only 0.3% in the Turkish cohort [7]. A 30-min decrease in dialysis session duration from 240 min is associated with a 19% increase in mortality risk, and the relative risk for mortality is 1.34 in patients treated with sessions shorter than 211 min [6], independent of body size [5].
Both hypertension and overhydration, which is the major cause of hypertension, have been found to be independent predictors of mortality in HD patients [19,20]. Removal of excess volume by appropriate ultrafiltration and strict dietary salt restriction has been shown to normalize blood pressure in the majority of dialysis patients [21]. The prevalence of hypertension is substantially lower in this Turkish cohort (24.1%) compared with the prevalence (69%) reported in DOPPS North America data, probably reflecting better dry weight management in patients [20].
A further difference in practice patterns between Turkey and the USA is the frequency and duration of doctor visits. The presence of a dialysis physician during treatments in HD clinics is obligatory according to Turkish legal regulations. In the USA, HD patients are usually seen by nephrologists weekly or less frequently [8]. A recent DOPPS analysis reported that each 5-min shorter duration of patient-doctor contact was associated with a 5% higher risk for death [8].
This study has several limitations. The major limitation is the lack of comorbidity data in the survival analyses. The relatively younger age and the lower frequency of diabetes in the Turkish cohort are likely to contribute to the low mortality rate, although the survival advantage persisted even after adjusting for age and diabetes. The lower mortality rates observed in the Turkish cohort were also present in the older age group and in diabetics, though less pronounced in those subgroups. Secondly, studying (…) The low renal transplantation rate in Turkey compared with the USA may contribute to better survival; as expected, patients transplanted were younger, and the frequency of diabetes was lower. Finally, it is questionable whether the study cohort is representative of the total Turkish HD population. Although demographics and primary diagnosis are very similar to what is reported by the Turkish Registry for the total HD population, treatment characteristics may differ in our study population.
In conclusion, we reported that the annual mortality for the studied Turkish cohort was lower than that of their USRDS counterparts and that this difference persisted after adjusting for baseline demographics. The reason for the significant disparity is unclear, but significant differences in practice patterns may play a contributory role. | 2018-04-03T01:39:28.255Z | 2016-05-04T00:00:00.000 | {
"year": 2016,
"sha1": "fdcd7cd7d72649244fd88ca3c837d52f1a1f5f58",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/ckj/article-pdf/9/3/476/9573166/sfw027.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fdcd7cd7d72649244fd88ca3c837d52f1a1f5f58",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
18437505 | pes2o/s2orc | v3-fos-license | High-Resolution Genotyping via Whole Genome Hybridizations to Microarrays Containing Long Oligonucleotide Probes
To date, microarray-based genotyping of large, complex plant genomes has been complicated by the need to perform genome complexity reduction to obtain sufficiently strong hybridization signals. Genome complexity reduction techniques are, however, tedious and can introduce unwanted variables into genotyping assays. Here, we report a microarray-based genotyping technology for complex genomes (such as the 2.3 GB maize genome) that does not require genome complexity reduction prior to hybridization. Approximately 200,000 long oligonucleotide probes were identified as being polymorphic between the inbred parents of a mapping population and used to genotype two recombinant inbred lines. While multiple hybridization replicates provided ∼97% accuracy, even a single replicate provided ∼95% accuracy. Genotyping accuracy was further increased to >99% by utilizing information from adjacent probes. This microarray-based method provides a simple, high-density genotyping approach for large, complex genomes.
Introduction
The ability to rapidly determine genotypes at many loci in numerous individuals is critical to furthering our understanding of the inheritance of complex traits and for developing improved strategies for plant breeding. The use of molecular markers based on isozymes, RFLPs (restriction fragment length polymorphisms), SSRs (simple sequence repeats) and CAPS (cleaved amplified polymorphic sequences) allowed for the construction of early genetic maps. However, these initial genotyping technologies were of relatively low throughput and required significant effort per data point.
A number of technologies have been developed for highthroughput genotyping (reviewed by [1,2,3,4]). There are additional approaches that combine the use of microarrays and restriction digests such as diversity array technology (DArT) [5] and restriction site associated DNA (RAD) tags [6] to assay up to several thousand markers. These high-throughput approaches vary substantially in number of markers, amount of information required for development, accuracy, ease of application and data analysis. In particular, there are limitations on the application of some of these methods to species with large, complex genomes.
The widespread availability of genomic and EST sequences in many species has led to the development of markers based on SNPs (single nucleotide polymorphisms). Several companies have developed high-throughput technologies that can genotype up to several hundred thousand SNPs in a single reaction [7,8]. Flibotte et al. [9] reported the detection of SNPs in C. elegans (genome size 0.1 GB) using whole genome hybridization to arrays containing oligo probes designed based on the sequences of known SNPs (SNP-CGH). SNPs can be extremely valuable molecular markers, but effort is required for the discovery and validation of SNPs, as well as for assay development. Alternative genotyping approaches that do not require prior knowledge of SNPs have also been developed. For example, RAD tags can be sequenced to discover and map SNPs [10]. Alternatively, SNPs and SFPs (single feature polymorphisms) can be detected by hybridizing genomic DNA or RNA to microarrays that contain short (~25-mer) oligonucleotides [11,12,13,14,15,16,17]. Longer oligonucleotide probes have also been used to detect SFPs caused by indel (insertion/deletion) polymorphisms [18,19,20]. Although this process is quite efficient in organisms with relatively small genomes, it has proven less successful for detecting polymorphisms in genomic DNA from organisms with larger genomes, such as maize (2.3 GB, [21]), due to a lack of sufficient signal strength. To obtain sufficient signal strengths in such species it is necessary to utilize RNA or reduced-complexity (e.g., high C0t or methylation-filtered) DNA. Unfortunately, these approaches introduce variables, such as expression level or filtration efficiency, that can complicate genotyping efforts.
Previously we developed a custom long oligonucleotide microarray that yielded strong signals from whole genome hybridizations. This array was used to assess structural variation between the two maize inbreds B73 and Mo17 [22]. In that study, ~200,000 probes were identified that exhibited highly differential and discriminatory hybridization signals between B73 and Mo17 genomic DNA. These hybridization differences were typically caused by the presence of multiple SNPs, small IDPs (InDel Polymorphisms) and CNVs (copy number variants), including PAVs (presence/absence variants), rather than by single SNPs [22]. To demonstrate the utility of these polymorphic probes as molecular markers for whole-genome genotyping in a large, complex genome, in this study two recombinant inbred lines (RILs) from the maize IBM mapping population [23] were randomly selected and analyzed via array comparative genomic hybridization (aCGH). In this report, we demonstrate the utility of long oligonucleotide microarrays for high-throughput mapping in maize without the need to apply complexity reduction methods. The attractive features of this system are, first, that it does not require prior knowledge of polymorphisms; second, that the genotyping results are highly accurate; third, that the high probe density allows for fine-mapping of recombination breakpoints; and fourth, that data analysis can be relatively simple.
Identification of a large number of CGH-based polymorphisms
The first objective was to identify polymorphic probes that could be scored in the RILs. To identify such probes we used data derived from hybridization of the two parental genotypes (B73 and Mo17) to the microarray (all microarray data were deposited in GEO under series GSE16938). The microarray platform contained 1,262,421 probes that could each be unambiguously mapped to a single location (i.e., uniquely mapped) in the maize genome and that we therefore concluded are non-repetitive [22]. Following normalization and linear modeling (see [22] for full details), we identified 225,867 probes that exhibited significant differences in hybridization between B73 and Mo17 genomic DNA at a false discovery rate (FDR) cut-off of <0.0001 (Table 1). The vast majority (91%) of these probes have higher hybridization signal intensities in B73 than in Mo17 and are referred to as B>M probes. Because the microarray was designed based on the reference B73 genomic sequence, this observation is an expected consequence of ascertainment bias. B>M probes may arise in any of three ways: from polymorphisms within the probe sequence between the two genotypes, from the presence of more copies of the probe sequence in B73 than in Mo17, or from a deletion of the probe sequence from the Mo17 genome. Because all probes were designed based on the B73 reference genome sequence, probes that exhibit M>B hybridization ratios are expected to be present at higher copy number in the Mo17 genome than in the B73 genome. An additional filter was applied to the derived probe list to retain only the most informative probes. This filter identified 173,122 probes that exhibited a minimum fold change of 2 (Table 1); it removed ~20% of the B>M probes and nearly 60% of the M>B probes.
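In practice, this two-step probe selection (FDR < 0.0001, then a minimum two-fold change) reduces to a pair of table filters. A sketch with invented column names and toy values (the actual study used the linear-model pipeline described in [22]):

```python
import pandas as pd

# probes: one row per uniquely mapped probe, with a log2(B73/Mo17) ratio
# ('log2fc') and an FDR-adjusted p-value ('fdr') from the linear model.
probes = pd.DataFrame({
    "probe_id": ["p1", "p2", "p3", "p4"],
    "log2fc":   [2.3, -1.5, 0.4, 1.1],
    "fdr":      [1e-6, 5e-5, 0.2, 5e-5],
})

polymorphic = probes[probes["fdr"] < 1e-4]    # 225,867 probes passed in the study
filtered = polymorphic[polymorphic["log2fc"].abs() >= 1]  # |fold change| >= 2
filtered = filtered.assign(
    direction=lambda d: d["log2fc"].map(lambda x: "B>M" if x > 0 else "M>B"))
print(filtered)  # p1 and p4 are B>M; p2 is M>B; p3 fails the FDR cut-off
```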
The remaining 173,122 probes were annotated based on their genomic map positions relative to the B73 reference genome [21] and their conservation in the Mo17 genome. Each probe sequence was compared with an ~5X whole-genome shotgun (WGS) sequence of Mo17 generated by the Joint Genome Institute using 454 sequencing technology (pre-publication access to these sequences was kindly provided by Dan Rokhsar). Each probe was assigned a value of perfect match (100% identity and coverage), conserved (>90% coverage and identity) or "no match" (<90% coverage and/or identity). The majority (59%) of the B>M probes had no match in the collection of Mo17 WGS sequence reads, while only 1% had a perfect match (Table S1). Hence, as expected, most (99%) of the B>M probe sequences are either absent from Mo17 or are polymorphic relative to B73. In contrast, the majority (51%) of the M>B probes had a perfect match in the collection of Mo17 WGS sequence reads, while only 13% did not have a match. Collectively, this set of polymorphic probes consisted of 173,122 probes, including at least 12,000 probes for each of the 10 maize chromosomes (Table S2).
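The three-way annotation against the Mo17 WGS reads is a simple threshold rule over each probe's best alignment; a sketch under the assumption that the alignments themselves have already been computed by a separate tool:

```python
def classify_probe(best_hit):
    """Classify a probe's conservation in Mo17.

    best_hit: (percent_identity, percent_coverage) of the probe's best
    alignment to Mo17 WGS reads, or None if no alignment was found."""
    if best_hit is None:
        return "no match"
    identity, coverage = best_hit
    if identity == 100.0 and coverage == 100.0:
        return "perfect match"
    if identity > 90.0 and coverage > 90.0:
        return "conserved"
    return "no match"   # <90% coverage and/or identity

print(classify_probe((100.0, 100.0)))  # perfect match
print(classify_probe((95.0, 92.0)))    # conserved
print(classify_probe((80.0, 60.0)))    # no match
```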
Assessment of data analysis and subsets of polymorphic probes
The potential of these polymorphic probes for genetic mapping was evaluated using two B73xMo17 recombinant inbred lines (IBM RILs; [23]), both of which we and others had previously genotyped using ~10,000 markers [24,25]. To enhance the utility of microarray-based genotyping we assessed the importance of hybridization replication, compared various methodologies for data analysis, and determined the effects of polymorphism types upon the accuracy of genotype determinations. Comparisons of the number of markers and the accuracy of several different analytical approaches, including linear modeling of replicates, simple assessment of relative signals from a single replicate, and BAC-based genotyping, were considered most germane. A visualization of the results obtained for chromosome 1 mapping was made for each of these approaches and is depicted in Figure 1.
The first analytical approach (Method I) involved the use of normalization methodology and subsequent estimation of the errors accounted for by dye and genotype effects upon the signal as determined via a linear model. This approach allowed for statistical contrasting of RIL vs. B73 and RIL vs. Mo17 at each probe using two hybridizations (see Methods for details). q-values were obtained for each of these contrasting comparisons and each probe was assigned to one of four classes in each RIL. Probes that were significantly different (q<0.05) from Mo17 but not from B73 in the RIL hybridizations were assigned a genotype of B (Class I) and probes that were different from B73 but not from Mo17 were assigned a genotype of M (Class II). Some probes exhibited significant differences as compared to both parental lines (Class III) or were not significant in either of the two comparisons (Class IV). These latter two classes may reflect non-polymorphic probes, residual heterozygosity or complex genome arrangements of gene families. Based on the broad genomic distribution of these probes (black dots in Figure 1) it is unlikely that residual heterozygosity is a major cause. Method I was able to assign genotypes for 93-95% of B>M probes in both RILs and was unaffected by the use of filtering based on a fold change (Table 1). However, substantially fewer of the M>B probes (74-86%) could be assigned genotypes (Table 1). Previously obtained genotyping results (from [25]) were used to validate the array-based genotyping calls (see Methods for details). As expected, consistency rates were substantially higher for the B>M probes than for the M>B probes, and the use of the filtered probe set provided only a slight improvement in the validation (consistency) rate. While Method I provides robust results, it requires substantial bioinformatics expertise, as well as replication of hybridizations, factors that could discourage the broad adoption of this microarray-based genotyping platform.
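The four-class decision rule of Method I can be written as a small function; this is a sketch of the logic only, with the q-values assumed to come from the LIMMA-based linear model described in the Methods.

```python
def method1_call(q_vs_b73: float, q_vs_mo17: float, alpha: float = 0.05) -> str:
    """Genotype one probe from its two RIL-vs-parent contrasts (Method I)."""
    differs_from_b73 = q_vs_b73 < alpha
    differs_from_mo17 = q_vs_mo17 < alpha
    if differs_from_mo17 and not differs_from_b73:
        return "B73"            # Class I: RIL resembles B73
    if differs_from_b73 and not differs_from_mo17:
        return "Mo17"           # Class II: RIL resembles Mo17
    if differs_from_b73 and differs_from_mo17:
        return "unassigned"     # Class III: differs from both parents
    return "unassigned"         # Class IV: differs from neither parent
```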
Therefore, a more streamlined analytic method (Method II) was considered. This method employed a single hybridization and thus vastly reduced the complexity of the required computational analyses. In Method II, spatially normalized data from a single array were analyzed and hybridization contrasts were considered without applying statistical methods (see Methods for details). Our goal was to assess the relative loss of accuracy and information incurred by using this relatively simple method of analysis and a single hybridization as compared to the robust Method I. The genotype for each probe was assigned by calculating the hybridization difference of the RIL and B73 relative to Mo17 and B73 [(RIL-B73)/(Mo17-B73)]. Probes with values near zero have hybridization intensities that are more similar to B73 than to Mo17, while values near 1 have hybridization intensities more similar to Mo17 than to B73. All probes with values less than 0.33 were assigned a genotype of B73 and probes with values greater than 0.66 were assigned a genotype of Mo17. The remaining probes were not classified (visualization provided in Figure 1). This approach assigned genotypes to slightly fewer probes than did the linear model (Method I) and had a slightly lower validation rate. Even so, this less complex analytic method still provided genotyping calls for >90% of the B>M probes and these calls were ~95% consistent with independently determined genotypes. Note that the filtered set of B>M probes provided substantially more benefit for Method II than for Method I. Consequently, rigorous filtering of probes is more critical when using a single hybridization (Method II) than when data from multiple hybridizations (Method I) are available. Based on a comparison of genotyping calls between two replicates (only the two M0023 Cy3 replicates were used to enhance the consistency of analyses), the majority of the genotype assignments determined using B>M probes were consistent between pairs of replicates and only 2-4% of probes were called as different genotypes in the two replicates. As expected, the performance for the M>B probes was substantially lower for this approach. Only 40-60% of the M>B probes were consistently assigned to the same genotype across independent replicates and the rate of inconsistent calls between the replicates was higher (Table 1).
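Method II is essentially a one-line computation per probe. A minimal sketch, assuming the three spatially normalized intensities are available for a probe (and that the parental signals differ, as is guaranteed for the pre-selected polymorphic probes), is:

```python
def method2_call(ril: float, b73: float, mo17: float,
                 lo: float = 0.33, hi: float = 0.66):
    """Single-array genotype call: scale the RIL signal between the two
    parental signals; values near 0 resemble B73, values near 1 resemble
    Mo17 (cut-offs from the text)."""
    frac = (ril - b73) / (mo17 - b73)
    if frac < lo:
        return "B73"
    if frac > hi:
        return "Mo17"
    return None  # between the cut-offs: not classified
```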
Next, we investigated the utility of assigning genotypes to RILs based on a series of probes that were closely linked and that exhibited similar genotyping calls. The physical map of the maize genome is quite accurate at a resolution of single BACs [21]. However, the order and orientation of DNA sequences within BACs is often not known. This can lead to incorrect fine-scale arrangements in the order of probes in our genotyping data.
Assigning each BAC a genotype in each RIL alleviates this problem. BAC-level genotyping is also expected to increase the accuracy of genotyping assignments because it allows a genotyping assignment to be made using multiple probes located on the same BAC. In addition, doing so simplifies data visualization by reducing the number of data points. For this analysis we used only those B>M probes that had a minimum of a 2-fold change, because this set exhibited the greatest accuracy in both Methods I and II. To be assigned a genotype a BAC had to have at least 5 probes that were assigned a genotype of B73 or Mo17, and these probes had to exhibit at least 80% genotype agreement within the BAC. Using this approach, genotypes could be assigned to over 95% of the 8,497 BACs that contain at least 5 polymorphic probes (Method III, Table 2). The different methods of analysis were able to assign a genotype for slightly different sets of BACs (Table 2), but for BACs that were assigned genotypes with both methods there was 100% agreement of the genotyping calls made by the different approaches. By comparing the genotyping assignments with the genotyping data of Liu et al. [25] we could demonstrate >99% accuracy for each of these approaches. This approach of assigning a genotype for each BAC in each RIL allows for simple visualization of the genotyping calls (Figures 1 and 2).
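The BAC-level aggregation rule (at least 5 classified probes and at least 80% agreement among them) can be sketched as follows; probe_calls is assumed to be the list of Method I or Method II calls for one BAC.

```python
from collections import Counter

def bac_genotype(probe_calls, min_probes: int = 5, min_agreement: float = 0.8):
    """Collapse per-probe calls within one BAC into a single genotype."""
    calls = [c for c in probe_calls if c in ("B73", "Mo17")]
    if len(calls) < min_probes:
        return None                       # too few informative probes
    top, n = Counter(calls).most_common(1)[0]
    return top if n / len(calls) >= min_agreement else None

print(bac_genotype(["B73"] * 9 + ["Mo17"]))  # 90% agreement -> "B73"
```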
We assessed the resolution of the genotyping data that was generated by CGH. While some recombination break-points can only be resolved to within ~100 kb due to a lack of polymorphic probes in the region of the recombination event, other cross-overs can be resolved at quite high resolution. Figure 3 provides five examples of highly resolved cross-overs in the M0022 RIL. The exact location of these five cross-overs could be identified within 2,450 to 6,042 base pairs. The ability to pinpoint the location of recombination events was influenced by the number of polymorphic probes within the region.
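Once probes (or BACs) along a chromosome carry genotype calls, a cross-over interval is bounded by the last probe of one parental genotype and the first probe of the other. A sketch of that scan, assuming position-sorted inputs with unclassified probes removed:

```python
def breakpoint_interval(positions, calls):
    """Return (left, right, width_bp) for the first genotype switch, or
    None if the segment shows no cross-over."""
    for i in range(1, len(calls)):
        if calls[i] != calls[i - 1]:
            left, right = positions[i - 1], positions[i]
            return left, right, right - left
    return None

# Denser probe coverage around the switch shrinks the interval, which is
# why some cross-overs resolve to a few kb and others only to ~100 kb.
```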
Discussion
This report documents the feasibility of genotyping complex genomes via a microarray-based method that does not require the use of methods to reduce genome complexity prior to hybridization. To do so, we used long oligonucleotides, which increased our ability to detect signal from genomic DNA and leveraged the abundance of frequent and widely distributed differences in DNA sequences between haplotypes as molecular markers. This approach is particularly valuable because it allows for mapping experiments even in the absence of any prior knowledge of polymorphisms between the parents of a mapping population, does not require extensive laboratory manipulation and can be performed using a single hybridization replicate. Additional experiments (data not shown) have demonstrated that this
Comparison of linear model (Method I) and single array (Method II) analyses
Careful comparisons enabled us to determine the numbers of markers that could be genotyped, and their validation rates, using replicated data analyzed with a linear model (Method I) as well as a more simplified analysis (Method II) conducted on non-replicated data. Replicates did provide slightly higher numbers of markers that could be scored and yielded genotyping data that was validated at slightly higher rates. However, when probes were filtered to use only those that exhibited at least a 2-fold change between the parental lines, the overall validation rates for Methods I and II were quite similar. We found that assigning genotypes to each BAC based on the occurrence of multiple polymorphic probes within the same BAC provided highly accurate genotyping scores. Indeed, this method resulted in nearly perfect validation rates (>99%). If it is desired to perform fine-scale mapping of recombination break-points at the highest possible resolution, it would be possible to first conduct a BAC-based analysis to map recombination breakpoints to a BAC-level resolution. Subsequently, the analysis of individual probes within those BACs near the recombination breakpoint could further define its position.
Genotyping accuracy of different classes of probes
Because this method relies on long oligonucleotide probes, many of the detected polymorphisms are likely to be structural variants. Because our probes were designed based upon the sequence of the B73 reference genome [21], each exhibits a perfect match to B73. Therefore, all probes with higher hybridization intensities in Mo17 than in B73 (M>B probes) are expected to represent sequences that are present in more copies in Mo17 than in B73 (CNVs). In contrast, the sequences detected by probes having higher hybridization intensities in B73 than in Mo17 (B>M probes) are likely to: 1) exhibit multiple sequence differences (SNPs and/or IDPs) between B73 and Mo17; 2) be absent from the Mo17 genome (PAVs); or 3) exist in higher copy number in B73 than in Mo17 (CNVs). Because only probes that had a single match to the B73 reference genome were used in this analysis, it is likely that most of the B>M probes are from the first two classes. Consistent with this view, a comparison of the B>M probes with a collection of Mo17 WGS sequences (~5X coverage) revealed that many do not have a 90% identical sequence. Although all probes used in this analysis are single copy in the B73 reference genome, it is, however, conceivable that some of the B>M probes match duplicated regions of the actual B73 genome that were either not sequenced or that were inadvertently collapsed into single copies during genome assembly. Hence, a small fraction of the B>M probe sequences may in fact exist at higher copy numbers in the B73 genome than in the Mo17 genome. Consistent with this possibility, a small fraction (1%) of the B>M probes had a perfect match to Mo17. This result is unexpected if indeed these probe sequences exist as single-copy sequences in the B73 genome. The 13% of the M>B probes that did not have matches in the collection of Mo17 WGS sequence reads could be the result of inadequate sampling of the Mo17 genome. Although a small fraction of probes exhibit hybridization patterns that differ from expectations for various reasons, the overall mapping accuracy and resolution generated using this technology is high.
It is noteworthy that the proportion of M>B probes that could be called, and their validation rates, were much lower than for the B>M probes. This likely reflects the fact that copy number variants (CNVs) can be in either the cis or trans configuration. If the multiple copies of a probe sequence are not closely linked in the Mo17 genome (i.e., are in a trans configuration), they would be expected to segregate among RILs, potentially yielding novel hybridization signals (i.e., not similar to either of the parental signals).
Mapping strategy
In the experiments reported here we used an array that contained ~2.1 M probes. Roche NimbleGen also offers a customizable 12-plex suite of arrays, which contains 12 sets of the same 135,000 probes on a standard glass slide. Using the data obtained from a single dye-channel, such a 12-plex array can be used to genotype 24 lines; these arrays have significant advantages from the perspectives of cost and efficiency. We have considered how best to modify our mapping strategy to accommodate the fact that each genotype will be analyzed with fewer probes (135,000 vs. 2.1 M). As a consequence of the high degree of sequence polymorphism in maize, ~10% of the probes on our 2.1 M array (i.e., ~200,000) proved to be polymorphic between B73 and Mo17, even though our custom array was designed based on the B73 haplotype without reference to the sequence of the Mo17 haplotype. We expect quantitatively similar results would be obtained when comparing B73 to any other inbred that is not closely related to B73. Indeed, this was observed in comparisons of the inbreds Hp301 and Tx303 to B73 (unpublished observation). But importantly for the design of a mapping strategy, the same probes are not likely to be polymorphic in all comparisons or mapping populations. We therefore recommend a two-step mapping strategy. In the first step, a survey array containing ~2.1 M probes sampled from the low-copy, genic regions of the genome of interest would be used to identify probes that are informative in a given population via hybridizations to the parents of the mapping population. In the second step, based on the results of these hybridizations, ~135,000 of the most informative polymorphic probes would be selected and used to construct 12-plex arrays for genotyping members of a mapping population.
For routine genotyping experiments we recommend using probes that exhibit higher hybridization intensities from the reference genome (e.g., B>M probes) for initial mapping applications, because they have higher validation rates. Subsequent analyses could use probes having higher hybridization intensities in the non-reference genome to estimate the relative rates of tandem and dispersed duplications.
Plant materials
Genomic DNA was isolated from two-week-old seedlings of the inbreds B73 and Mo17 as well as from two IBM RILs: M0022 and M0023. According to previous genotyping results [24] the genomes of these two RILs are ~56% identical. 1 μg of DNA was labeled using either 5′ Cy3- or Cy5-labeled Random Nonamers (TriLink Biotechnologies). DNA was incubated for 2 hours at 37°C with 100 units (exo-) Klenow fragment (NEB) and dNTP mix (6 mM each in TE; Invitrogen). Labeled samples were then precipitated with NaCl and isopropanol and rehydrated in 25 μl of VWR H2O. 34 μg of test and reference samples were combined in a 1.5 ml tube and dried down using a SpeedVac. Samples were resuspended in 12.3 μl of H2O and 31.7 μl of NimbleGen Hybridization Buffer (Roche NimbleGen Inc.) and incubated at 95°C. The combined and resuspended samples were then hybridized to the array for 60-72 hours at 42°C with mixing. Arrays were washed using the NimbleGen Wash Buffer System and dried using a NimbleGen Microarray Dryer (Roche NimbleGen, Inc). Arrays were scanned at 5 μm resolution using a GenePix4000B scanner (Axon Instruments). Data were extracted from scanned images using NimbleScan 2.4 extraction software (Roche NimbleGen, Inc.), which allows for automated grid alignment, extraction and generation of data files. For this experiment, five hybridizations were performed and the samples hybridized to each array were as follows: Array 1 M0023 (Cy3)/B73 (Cy5); Array 2 M0022 (Cy3)/B73 (Cy5); Array 3 Mo17 (Cy3)/M0023 (Cy5); Array 4 Mo17 (Cy3)/M0022 (Cy5); Array 5 M0023 (Cy3)/B73 (Cy5).
CGH data analyses
The probes were mapped to the B73 RefGen_v1 genome sequence [21] with 100% identity and coverage [22], and only probes with a single perfect match were used for this analysis. The integrated genetic and physical map of maize [25] was used to determine the physical location of each genetic marker on B73 RefGen_v1. The hybridization intensity of each mapped probe was estimated within each genotype using LIMMA [26] according to [22]. When applying a q<0.0001 cutoff [27], a total of 225,867 probes that exhibited significantly different hybridization signals between B73 and Mo17 were deemed to be polymorphic. This set was further divided and filtered based on which genotype exhibited a higher signal and whether there was at least a 2-fold change in signal intensity between B73 and Mo17 (Table 1).
CGH-based Genotyping
Linear model. A linear model was used to calculate a q-value to estimate the false-discovery-corrected probability that a particular probe was different from B73 or from Mo17. For each RIL, a probe was assigned a value of "B73" if it was significantly different from Mo17 (q<0.05) but not from B73, and was assigned a value of "Mo17" if it was significantly different from B73 (q<0.05) but not from Mo17. Probes that were significantly different from both parental lines, or from neither, were not assigned a genotype.
Single array based model. A simple model was employed to assign genotype calls using a single replicate of data. The spatially normalized data extracted for each array using the NimbleScan software were imported into Excel. For each probe the value of [(RIL-B73)/(Mo17-B73)] was calculated. Cut-off values of 0.33 and 0.66 were arbitrarily selected for the purpose of this analysis. All probes having values of less than 0.33 were assigned a genotype of B73, while probes having values greater than 0.66 were assigned a genotype of Mo17. Probes with values between 0.33 and 0.66 were not classified. The values for different replicates were subsequently compared to determine the number of genotype assignments that were shared or conflicting for each of the hybridizations.
BAC based genotyping. Genotypes of each BAC were determined by comparing the calls for all polymorphic probes within a BAC. Genotypes were assigned only to BACs having at least five polymorphic probes classified as B73 or Mo17, and only when at least 80% of those probe genotypes agreed within the BAC. "Consensus" genotyping assignments were made when a BAC was assigned the same genotype in each of the replicates for a RIL.
Validation of genotype assignments
The genotype scores for each of the two RILs were collected from a total of 10,143 markers [25], including IDP markers [24], TIDP markers [28], SNP markers [29] and other markers downloaded from MaizeGDB (http://www.maizegdb.org). If one of these markers was located within 5,000 bp of a probe, the genotype obtained from this marker was treated as the "true" genotype for this probe. The proportions of the genotyping calls for probes that were supported by these other markers were then determined.
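The validation rule, taking the nearest anchor marker within 5,000 bp of a probe as its "true" genotype, can be sketched with a sorted marker list; marker_pos and marker_calls below are hypothetical parallel arrays of marker positions and genotypes.

```python
import bisect

def true_genotype(probe_pos, marker_pos, marker_calls, window=5000):
    """Genotype of the nearest marker within `window` bp of the probe,
    else None. marker_pos must be sorted in ascending order."""
    i = bisect.bisect_left(marker_pos, probe_pos)
    best = None
    for j in (i - 1, i):                  # the two markers flanking probe_pos
        if 0 <= j < len(marker_pos) and abs(marker_pos[j] - probe_pos) <= window:
            if best is None or abs(marker_pos[j] - probe_pos) < abs(marker_pos[best] - probe_pos):
                best = j
    return marker_calls[best] if best is not None else None
```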
"year": 2010,
"sha1": "7051fbc4e334565e75b4424457b075bd866a26b7",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0014178&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "df3bf78e98191753c5656a2b294fd5166ccc7e13",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
Swimming training attenuates the decrease of calcium responsiveness in female infarcted rats
Aim: To evaluate the influence of swimming training on the calcium responsiveness of the myocardium of rats with different myocardial infarction (MI) sizes. Method: Female Wistar rats, sedentary sham (SS = 14), sedentary moderate MI (SMI = 8) and sedentary large MI (SLI = 10), were compared to trained sham (TS = 16), trained moderate MI (TMI = 9) and trained large MI (TLI = 10). After 4 weeks of MI, the animals swam for 60 min/day, 5 days/week, for an additional 8 weeks. Papillary muscles of the left ventricle were subjected to different concentrations of extracellular calcium. Inotropism was evaluated through the developed tension (DT), the maximum positive value of the first temporal derivative (+dT/dt) and the time to peak tension (TPT). Lusitropism was evaluated by the maximum negative value of the first temporal derivative (−dT/dt) and the time to 50% relaxation (50%TR). Statistical significance was determined using multivariate analysis of variance and a Hotelling T2 test for the absolute power values of all four extracellular calcium concentrations (p < 0.05). Results: MI depressed inotropism (by 17% to 51%) and lusitropism (by 22% to 54%) in the sedentary rats, but exercise attenuated the losses, especially regarding +dT/dt, TPT, −dT/dt and 50%TR. Exercise attenuated the decrease in myocardial responsiveness proportionally to the size of the MI. Conclusion: Myocardial calcium responsiveness is favorably affected in animals with moderate and large MI after swimming exercise.
Introduction
Physical exercise is one of the most important nonpharmacological therapies after myocardial infarction (MI), as it attenuates cardiac and myocardial remodeling, preserving systolic and diastolic functions (Bowles et al., 1992; Orenstein et al., 1995; Wisloff et al., 2002; Serra et al., 2008; Serra et al., 2010; Andrews Portes et al., 2009). The postulated mechanisms for these benefits include the attenuation of fetal β-myosin isoform expression (Orenstein et al., 1995), reduction of reactive oxygen species (Frederico et al., 2009), reduction of pro-inflammatory cytokines (Serra et al., 2010), and improved calcium sensitivity and kinetics of cardiomyocytes from infarcted hearts (Wisloff et al., 2002). Little is known about the effects of physical exercise on myocardial responsiveness, especially for hearts with different MI sizes (Bowles et al., 1992). Bowles et al. (1992) subjected hearts from treadmill-trained male rats to 25 min of ischemia followed by 30 min of reperfusion and noticed better recovery of systolic function, greater cardiac work and less diastolic stiffness, associated with greater responsiveness to extracellular calcium, compared to non-trained male rats. On the other hand, Nutter et al. (1981) found that exercise was detrimental to the contractile function of papillary muscles of the heart of healthy male rats, and this impairment was associated with depression of calcium responsiveness.
Since myocardial responsiveness to calcium is an important physiological determinant of the contractile mechanism, and since MI impairs this response (Fellenius et al., 1985; Sandmann et al., 1999), the present study evaluated the influence of aerobic physical exercise on papillary muscles of hearts from female rats with moderate and large MI. This information will broaden the understanding of the effects of aerobic exercise on the myocardium of infarcted hearts. Our hypothesis was that aerobic swimming exercise favorably affects myocardial responsiveness to calcium.
Animals, myocardial infarction induction and experimental groups
Female Wistar rats weighing 170-190 g were housed under a 12/12 h dark/light cycle, at a temperature of 22°C-23°C and humidity of 54%-55%. The rats had free access to water and to a pellet rodent diet. MI was induced according to the procedure described by Andrews Portes et al. (2009): the rats were anesthetized (Ketamine, 90 mg/kg and Xylazine, 10 mg/kg; intraperitoneally), intubated and ventilated (model 683, Harvard Apparatus, 2.0 ml, 80 strokes/min). A left thoracotomy was performed, and a 6.0 silk thread was permanently tied around the left anterior descending coronary artery. In the sham rats, coronary occlusion was not undertaken. After the heart was quickly returned to the thorax, a purse-string suture allowed chest closure, and the rats remained sedentary for 4 weeks.
After 4 weeks, a transthoracic Doppler echocardiography evaluation was performed under the same anesthesia using a 12-MHz transducer (Sonos-5500, Hewlett-Packard, Andover, Massachusetts) to determine infarct size, as described elsewhere (Cury et al., 2005). Rats with MI smaller than 20% were excluded. According to the MI size, the MI rats were grouped as: sedentary moderate infarct (SMI: n = 8), trained moderate infarct (TMI: n = 9), sedentary large infarct (SLI: n = 10) and trained large infarct (TLI: n = 8). A moderate infarction was considered for rats presenting an MI scar occupying 20%-39% of the LV, and a large MI for rats presenting an MI scar equal to or larger than 40% of the LV. In addition, two sham groups were studied: sedentary sham (SS: n = 14) and trained sham (TS: n = 15). After the protocol and heart dissection, infarct size was confirmed by planimetry. The left ventricle (LV) was isolated and unrolled, and straight incisions allowed the dome-like LV shape to lie flat when placed over a thin glass plate. Using transillumination, the contours of the infarcted area and of the entire left ventricle were traced on a transparent acetate plate, and the areas were measured using Sigma Scan Pro 5.0 (Systat Software Inc., Richmond, California, United States). Infarct sizes were expressed as a percentage of the LV area.
The rats were cared for in compliance with the "Principles of Laboratory Animal Care" formulated by the National Institutes of Health (National Institutes of Health publication no. 96-23, revised, 1996).
Exercise training protocol
The exercise protocol was in conformity with the "American Physiological Society: Resource Book for the Design of Animal Exercise Protocols" (Kregel, 2006). Swimming training was initiated 4 weeks after coronary occlusion and was performed in a container (depth of 80 cm) filled with tap water kept at 32°C-34°C by a feedback-controlled electric heating coil. The water was maintained in continuous turbulence to provide continuous exercise. For adaptation, training was limited to 10 min on the first day and increased by 10 min each day until day 6. Training was then continued for a total period of 8 weeks, 60 min/day and 5 days/week. Ten to 12 rats swam simultaneously. Rats were toweled dry after each swimming session before they were returned to their cages. Rats randomized to sedentary conditions did not swim during the 8 weeks. This exercise protocol corresponds to about 75% of the maximal VO2 (McArdle, 1967) and has already been shown to be
Myocardial studies by papillary muscles
After 13 weeks of the MI or sham procedures (4 weeks after surgery, 1 week for adaptation and 8 weeks for swimming training), under anesthesia, the hearts were quickly removed and placed in oxygenated Krebs-Henseleit solution at 30°C. A papillary muscle was carefully dissected from the LV, mounted between two spring clips, and placed vertically in a chamber containing Krebs-Henseleit solution at 28°C, oxygenated with 100% O2, at pH 7.40 ± 0.02. The composition of the Krebs-Henseleit solution was as follows (in mM): 132 NaCl, 4.69 KCl, 1.5 CaCl2, 1.16 MgSO4, 1.18 KH2PO4, 5.50 C6H12O6, 20 HEPES, pH 7.40. The lower spring clip was attached to the bottom of the chamber and the upper spring clip was connected by a thin steel wire to an isometric transducer (GRASS model FT03E) connected to a micrometer for adjustment of muscle length. The preparations were stimulated 12 times/min with 5 ms square-wave pulses through parallel platinum electrodes, at voltages ~10% greater than the minimum stimulus required to produce a maximal mechanical response. After a 60 min period, during which the preparation was permitted to contract isotonically under light loading conditions (0.4 g), the papillary muscle was loaded to contract isometrically for 15 min and, thereafter, was stretched to the apex of its length-tension curve (Lmax). The mechanical behavior of the papillary muscles was evaluated at the baseline calcium concentration (1.5 mmol/L) and at three other extracellular calcium concentrations [(Ca2+)o, in mmol/L]: 0.5, 1.0 and 2.0. The 1.5 mmol/L concentration was resumed after each (Ca2+)o. The contractile myocardial parameters were determined for each (Ca2+)o after a stable period of 10 min. The following parameters were measured during the isometric contractions: peak developed tension (DT, g/mm2), resting tension (RT, g/mm2), maximum rate of tension development (+dT/dt), maximum rate of tension decline (−dT/dt), time to peak tension (TPT), and time from peak tension to 50% relaxation (TR50%). At the end of each experiment, muscle length at Lmax was measured and the muscle between the two clips was blotted dry and weighed. The cross-sectional area was calculated from the muscle weight and length by assuming cylindrical uniformity and a specific gravity of 1.04. All experiments were carried out at 28°C (Andrews Portes et al., 2009).
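As a numerical aside, the cross-sectional area used to normalize the tension measurements follows directly from the stated assumptions (cylindrical uniformity, specific gravity 1.04); a minimal sketch with illustrative values:

```python
def cross_sectional_area(weight_mg: float, length_mm: float,
                         density_mg_per_mm3: float = 1.04) -> float:
    """Papillary-muscle cross-sectional area in mm^2: volume (weight /
    density) divided by length, assuming a uniform cylinder."""
    return (weight_mg / density_mg_per_mm3) / length_mm

csa = cross_sectional_area(weight_mg=5.2, length_mm=6.0)  # illustrative values
dt_normalized = 4.8 / csa   # peak force in g -> developed tension in g/mm^2
print(round(csa, 3), round(dt_normalized, 2))
```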
Statistical analysis
Statistical significance between groups was determined using multivariate analysis of variance (MANOVA) and a Hotelling T2 test for the absolute power values of all four extracellular calcium concentrations [(Ca2+)o]. Calcium responsiveness was determined from a linear regression analysis between the increase in the papillary muscle contractile parameters and the (Ca2+)o. The slope angles of the linear regression curves of each animal (slope) were analyzed by the Kruskal-Wallis test, followed by Dunn's post-test. Differences between the groups of animals in relation to the values of the slopes of the curves were considered statistically significant if p < 0.05. Statistical analysis was performed using SigmaStat version 3.5 for Windows.
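The responsiveness index is simply the per-animal regression slope of a contractile parameter on (Ca2+)o. A sketch, with illustrative DT values standing in for real measurements:

```python
import numpy as np
from scipy import stats

ca = np.array([0.5, 1.0, 1.5, 2.0])      # extracellular Ca2+, mmol/L
dt = np.array([2.1, 3.4, 4.2, 4.9])      # illustrative DT values, g/mm^2

# Slope of the linear regression = calcium responsiveness for this animal;
# the per-animal slopes are then compared across groups (Kruskal-Wallis).
fit = stats.linregress(ca, dt)
print(fit.slope, fit.rvalue**2)
```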
Results
The data (means ± SD) on contractile function are shown in Figure 1.
There was an increase (p < 0.001) in DT and +dT/dt as a function of the increase in (Ca2+)o, both in sedentary and trained animals (Figures 1A-F). Physical exercise attenuated the MI-related losses in +dT/dt (p < 0.05) in TMI and TLI animals. TPT was shortened in all exercised groups (p < 0.001) in relation to the respective sedentary ones (Figures 1G-I). These results indicate benefits of physical exercise in relation to myocardial inotropism, regardless of the size of the infarction.
Responsiveness was evaluated by the slope of the curves (Figure 3) for DT, +dT/dt and −dT/dt. The greater the slope, the greater the myocardial responsiveness. Impairments in myocardial responsiveness triggered by MI were clearly attenuated by physical exercise (p < 0.05).
Discussion
The present study adds strength to the evidence on the benefits of aerobic exercise for the hearts of rats with heart failure due to MI, providing evidence that improvements in systolic and diastolic function are related to the maintenance of myocardial responsiveness to increased (Ca2+)o in cardiomyocytes, even in hearts with large MI (≥45% of the left ventricle).
Swimming is added here to other forms of aerobic exercise or physical activity, such as treadmill exercise (Molé, 1978; Schaible and Scheuer, 1979; Wisloff et al., 2002), with some advantages. For instance, in some studies, the myocardial hypertrophy produced by treadmill exercise was related to a reduction in body weight rather than to an increase in cardiac mass (Molé, 1978), or to a true increase in cardiac mass (Schaible and Scheuer, 1979), while others (Wisloff et al., 2002) verified true myocardial hypertrophy, with an increase in the length and width of cardiomyocytes. With or without myocardial hypertrophy, treadmill exercise resulted in improvement in cardiac or myocardial function.
There are concerns about the possibility that during swimming exercise the animals experience episodes of hypoxia, aspirate water, or even drown, which would increase the risk of pulmonary congestion, aggravating the effects of heart failure resulting from MI. Some studies, in fact, observed an increase in pulmonary water content in animals with heart failure due to MI that were exercised on a treadmill (Jain et al., 2000; Helwig et al., 2003), indicating that physical exercise worsened the effects of heart failure due to MI. However, data from our group (…).
The use of female rats was due to previous data from our group indicating that morphological changes related to MI size and diastolic and systolic diameters, as well as functional data related to systolic (change in fractional area) and diastolic (E and A waves, and E/A ratio) functions, caused by acute myocardial infarction, do not differ according to the animal's gender (Antonio et al., 2015), with the advantage of virtually excluding the effects of testosterone on muscle mass and exercise performance.
Another important aspect to be considered refers to the different MI sizes. It has been widely documented (Pfeffer et al., 1979; Fletcher et al., 1981; Pfeffer et al., 1991) that rats with small MI (≥5% to <30%), moderate MI (≥30% to <45%) and large MI (≥45%) exhibit progressive and proportional impairment of systolic and diastolic function. While animals with small MI do not exhibit discernible hemodynamic impairments related to cardiac pumping capacity and pressure generation, animals with moderate MI exhibit reduced flow and pressure generation indices, and animals with large MI exhibit congestive heart failure associated with high diastolic filling pressures, reduced cardiac output, and minimal ability to respond to increased preload and afterload (Pfeffer et al., 1979). Data from our group agree with previous studies in that animals with small MI (4% to <30%), medium MI (≥30% to <40%) and large MI (≥40%) also exhibited left ventricular dilatation and reduced systolic and diastolic function (Nozawa et al., 2006). Our group also observed that with increasing MI size there was worsening of pulmonary congestion, hypertrophy of the right and left ventricles, and damage to the positive and negative derivatives and to the times for peak tension and relaxation of the papillary muscles (Andrews Portes et al., 2009). For all these reasons, the expectation in the present study was that myocardial responsiveness to calcium would be depressed proportionally to the size of the MI and that physical training by swimming would attenuate these losses. However, previous information on physical exercise and myocardial responsiveness is contradictory.
Nutter et al. (1981), for example, studied healthy animals and found impairments in the contractile function of papillary muscles of hearts submitted to physical training on a treadmill, in terms of calcium responsiveness, norepinephrine responsiveness and the Frank-Starling mechanism. The authors were unable to explain these findings, as there was a lack of evidence of pathological structural changes, edema, and left ventricular connective tissue hyperplasia. Bowles et al. (1992) evaluated the calcium responsiveness of treadmill-exercised rat hearts in healthy animals using the Langendorff model, before ischemia, after 25 min of ischemia and after 30 min of reperfusion. Sedentary and trained animals did not differ in cardiac and hemodynamic function in the pre-ischemia phase, indicating no influence of physical training, except that the maximum rate of ventricular pressure change (+dP/dt) of the trained group was worse than that of the sedentary one. Still in the pre-ischemia phase, as a function of increasing calcium concentrations, hearts from trained rats exhibited worse cardiac output and maximal systolic pressure than sedentary rats at lower concentrations (0.50-0.75 mM), and no difference at concentrations of 1.5 mM to 3.0 mM. In the post-ischemia phase, coronary flow, aortic flow and cardiac output of trained rats were better than those of sedentary rats, but not aortic pressure, systolic pressure, diastolic pressure, +dP/dt and −dP/dt. Still regarding the post-ischemia phase, only systolic pressure was significantly higher in the trained rats as a function of the increase in (Ca2+)o, and there were no differences in calcium sensitivity between sedentary and trained rats. Bowles et al. (1992) attributed the slight effects of training on the calcium responsiveness of systolic pressure to the preservation of phosphocreatine, ATP and total nucleotide levels, and a lower rate of AMP. Fellenius et al. (1985) were among the first to show that MI impairs cardiac responsiveness to calcium. After coronary occlusion, infarct sizes ranged from 20% to 25% of the left ventricle and were therefore considered small. Both control and MI hearts exhibited an increase in systolic pressure as a function of the increase in (Ca2+)o, but the MI hearts exhibited peak systolic pressure at each (Ca2+)o reduced by almost 50% when compared to controls. These authors did not provide a direct explanation for the decrease in cardiac responsiveness to the increase in (Ca2+)o, but they attributed the losses to the decrease in sarcolemmal calcium channels and/or the reduction in calcium affinity resulting from MI. Wisloff et al. (2002) evaluated isolated cardiomyocytes from infarcted female rats exercised on a treadmill and observed that, with increasing stimulation frequency, the maximum shortening of the trained cells occurred with intracellular calcium levels approximately 50% lower than in their respective sedentary controls. The authors interpreted this phenomenon as an increase in calcium sensitivity (between 5% and 35% greater sensitivity) induced by physical exercise. They also noticed a slight lusitropic effect of physical exercise, with a 50% reduction in the intracellular calcium decay time. This increased sensitivity of cardiomyocytes to calcium was associated with increased expression of the Na+/Ca2+ exchanger (NCX), SERCA2a, and phospholamban phosphorylation.
Such changes would explain the faster removal of cytosolic calcium, improving relaxation and contraction (Zhang et al., 2000; Medeiros et al., 2004; Rolim et al., 2007). Additionally, the improvement in metabolic profile with training could explain the increase in the rate of troponin I phosphorylation, resulting in a higher rate of uncoupling between myosin and actin, increasing the rate of relaxation (Belin et al., 2006).
The present study confirms that MI impairs myocardial responsiveness to calcium (Fellenius et al., 1985; Wisloff et al., 2002), but clearly demonstrates that physical exercise attenuates these impairments (Wisloff et al., 2002). Additionally, it indicates that the myocardium of hearts with infarctions greater than 45% of the left ventricle benefits from aerobic exercise, as indicated by the greater responsiveness to calcium in inotropic and lusitropic maneuvers. These benefits would explain, in part, the positive effects of physical exercise on hearts with MI.
The main strengths of the present study are related to the use of multicellular preparations (papillary muscles), which allows study of the intrinsic mechanics of the myocardium and its responses to variations in (Ca2+)o. Another strength is the use of swimming as the form of physical exercise. This form of aerobic exercise mobilizes a large muscle mass, corresponds to an intensity of approximately 75% of maximum oxygen consumption (McArdle, 1967), and eliminates the stress caused by the electrical stimulation used to keep animals active on a treadmill.
The present study was limited to assessing mechanical function. In the future, it would be desirable to associate myocardial alterations with the various proteins related to intracellular calcium and the calcium transient across the sarcolemma.
The present study was also limited in not evaluating the physical capacity of the animals, since, based on previous studies (McArdle, 1967; Schaible and Scheuer, 1979; Musch et al., 1988), swimming without weights added to the body represents moderate to vigorous intensity, requiring between 60% and 75% of VO2 max. It is also known that swimming does not allow fine adjustments in exercise intensity as the treadmill does, unless weights are added to the animal's body. Nevertheless, this form of exercise is very suitable for the purpose of providing aerobic stimulation to animals with cardiovascular diseases, such as congestive heart failure resulting from MI.
Conclusion
MI impairs myocardial responsiveness to calcium proportionally to MI size. Aerobic exercise attenuates this damage, largely preserving the inotropic and lusitropic response of the myocardium. Even in hearts with MI close to 50%, the benefits of physical exercise were identified, indicating a potential mechanism by which hearts with large MI can still meet systemic demands, contributing to the increased survival of these animals.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The animal study was reviewed and approved by the Institutional Ethics Committee of Escola Paulista de Medicina, Federal University of Sao Paulo, Sao Paulo, Brazil (protocol #16/2003).
"year": 2022,
"sha1": "604240de528aef9ee4dde09da8d02c23e80abc0f",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "604240de528aef9ee4dde09da8d02c23e80abc0f",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": []
} |
Modeling Mechanical and Electrical Uncertain Systems using Functions of Robust Control MATLAB Toolbox®3
Uncertainty is an inherent property of all real-life control systems, because practically nothing is constant; all parameters change under certain environmental circumstances, so control engineers must not ignore this variation, since it can affect the behavior and performance of the system. In this paper, a method for modeling uncertain systems is demonstrated with the use of built-in Robust Control MATLAB Toolbox®3 functions. Good results were obtained for testing the stability of interval linear time-invariant systems. Finally, uncertain mechanical and electrical systems were implemented as practical examples to validate the treatment of uncertainty. Keywords—uncertainty; interval; robust stability; system response; Nyquist criteria; root bounds
INTRODUCTION
Robustness is of crucial importance in control-system design because real engineering systems are vulnerable to external disturbance and measurement noise and there are always differences between mathematical models used for design and the actual system. Typically, a control engineer is required to design a controller that will stabilize a plant, if it is not stable originally, and satisfy certain performance levels in the presence of disturbance signals, noise interference, unmodeled plant dynamics and plant-parameter variations.
In general, there are two categories of control systems: open-loop systems and closed-loop systems. An open-loop system uses a controller or control actuator to obtain the desired response.
A closed-loop control system uses sensors to measure the actual output and adjusts the input in order to achieve the desired output.
In this paper, the building of uncertain system models using the functions of Robust Control Toolbox®3 is presented. Modeling and analyzing such systems is an important and essential step towards robust control system design. The corresponding functions of Robust Control Toolbox®3 facilitate the process of building different uncertainty models and make it easy to analyze the properties of such models. Various functions of Robust Control Toolbox®3 were then used to create models of systems with structured (real) uncertainties. The usage of these functions is illustrated for the simple case of a second-order mass-damper-spring system and an RLC electrical circuit. It is shown how to investigate several properties of uncertain models in the time domain and the frequency domain.
A. LTI Models
This section deals with developing and manipulating models of linear time-invariant (LTI) systems in MATLAB®.
Creation of LTI models of multivariable systems is done by the following commands:
• ss — State-space models (SS objects)
• tf — Transfer function matrices (TF objects)
• zpk — Zero-pole-gain models (ZPK objects)
• frd — Frequency response data models (FRD objects)
B. Literature Review
The problem of interval matrices was first presented in 1966 by Ramon E. Moore, who defined an interval number to be an ordered pair of real numbers [a,b], with a ≤ b [1]-[2].
II. METHODOLOGY AND SIMULATION
In this research, the design and evaluation of robust stability for three dynamic electrical and mechanical systems are presented.
Based on Moore's four basic interval arithmetic operations, all possible corner matrices of the interval (uncertain) state matrix A of the system state-space model are computed. Step responses and Bode diagrams are plotted for each resulting matrix, which yields an envelope with upper and lower bounds; all characteristic polynomials of the matrix family are found in order to compute and plot the convex hull of the system; and finally the Nyquist plot and the root bounds of the interval system are plotted.
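A minimal sketch of the corner-matrix enumeration is given below, in Python rather than MATLAB; the helper is generic for any interval matrix [A_lo, A_hi] and the demonstration bounds are hypothetical, not the paper's actual code or values.

```python
import itertools
import numpy as np

def corner_matrices(A_lo, A_hi):
    """Yield the 2**p corner matrices of the interval matrix [A_lo, A_hi],
    where p is the number of entries whose lower and upper bounds differ."""
    A_lo = np.asarray(A_lo, dtype=float)
    A_hi = np.asarray(A_hi, dtype=float)
    varying = list(zip(*np.nonzero(A_lo != A_hi)))   # interval-valued entries
    for choice in itertools.product((0, 1), repeat=len(varying)):
        A = A_lo.copy()
        for (r, c), take_hi in zip(varying, choice):
            if take_hi:
                A[r, c] = A_hi[r, c]
        yield A

# Two uncertain entries -> 2^2 = 4 corner matrices.
A_lo = [[0.0, 1.0], [-2.2, -1.1]]                    # hypothetical bounds
A_hi = [[0.0, 1.0], [-1.8, -0.9]]
print(len(list(corner_matrices(A_lo, A_hi))))        # -> 4
```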
MATLAB 2013 software is used, with some of its robust control functions and commands, to design and analyze system stability and to obtain the convex hull and eigenvalue bound plots. This paper therefore continues and extends the author's previous work dealing with the robust stability of interval (uncertain) systems, as an efficient and helpful tool for control systems engineers [8][9][10][11][12][13][14][15]. The following three different engineering examples are used to validate and demonstrate the methodology and the technique used.
III. EXAMPLE 1: MASS-SPRING-DAMPER SYSTEM
The following example, shown in Figure 1, presents a mass-spring-damper as a mechanical system whose parameters suffer from uncertainty and hence deviate from their nominal values, due to conditions such as ageing, temperature or other disturbances. The free-body diagram for this system is illustrated in Fig. 2 and yields the equation of motion m·ẍ(t) + c·ẋ(t) + k·x(t) = F(t). To determine the state-space representation of the mass-spring-damper system, the position and velocity are selected as the state variables and the state-space model is derived from the system differential equation. The system parameters are shown below. Using MATLAB, 2² = 4 sub-matrices can be generated from the interval A-matrix, and their corresponding four characteristic polynomials were computed. The analysis of the open- and closed-loop step responses for the mass-spring-damper system is shown in Fig. 3 and Fig. 4, respectively. In Fig. 7 the convex hull is presented and then used to find the root bounds of the interval matrix, as shown in Fig. 8; using the convex hull reduces the level of computation involved in such problems, as many points can be ignored as long as they are located inside the hull. It can also be noticed that the system is stable, since the symmetric bounds of the eigenvalues are located in the left half of the complex plane.
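A self-contained Python sketch of this analysis follows; the interval bounds on k and c are hypothetical placeholders (the paper's actual values appear in its parameter listing), and the hull of the corner eigenvalues corresponds to the root-bound region of Figs. 7-8.

```python
import itertools
import numpy as np
from scipy.spatial import ConvexHull

m = 1.0                       # mass (assumed nominal value)
k_lo, k_hi = 1.8, 2.2         # hypothetical spring-constant interval
c_lo, c_hi = 0.9, 1.1         # hypothetical damping interval

# State x = [position, velocity]; A = [[0, 1], [-k/m, -c/m]].
eig_pts = []
for k, c in itertools.product((k_lo, k_hi), (c_lo, c_hi)):   # 2^2 corners
    A = np.array([[0.0, 1.0], [-k / m, -c / m]])
    for ev in np.linalg.eigvals(A):
        # Stability requires every eigenvalue in the open left half-plane.
        assert ev.real < 0
        eig_pts.append([ev.real, ev.imag])

# Convex hull of the corner eigenvalues: points inside it can be ignored,
# which is the computational saving noted in the text.
hull = ConvexHull(np.array(eig_pts))
print(np.array(eig_pts)[hull.vertices])
```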
IV. EXAMPLE 2: RLC CIRCUIT
An RLC circuit is an electrical circuit consisting of a resistor, an inductor, and a capacitor, connected in series or in parallel. The RLC part of the name is due to those letters being the usual electrical symbols for resistance, inductance and capacitance, respectively. The circuit forms a harmonic oscillator for current and resonates in a similar way to an LC circuit. The main difference made by the presence of the resistor is that any oscillation induced in the circuit dies away over time if it is not sustained by a source. This effect of the resistor is called damping. The presence of the resistance also somewhat reduces the peak resonant frequency. The three circuit elements can be combined in a number of different topologies, and our case is as shown in Fig. 9. The symmetrical eigenvalue bounds obtained for this circuit clearly confirm the stability of the electrical interval system.
V. CONCLUSION AND FUTURE WORK
In this paper, the stability behavior of mechanical and electrical systems with uncertain parameters was modeled with the Robust Control MATLAB Toolbox®3. Good results were obtained, as demonstrated in the uncertain mechanical and electrical examples. The computational time and effort for determining the stability of interval problems (uncertain parameters) are very large; therefore, as future work, parallel algorithms and supercomputers are highly recommended for handling such problems. It is also hoped that this paper can be extended and used as a foundation for other applications such as solar, thermal and wind systems, as they suffer from disturbances and uncertain circumstances.
"year": 2015,
"sha1": "04801d8e3a7c872238c7b3e42c9ec409e829fecf",
"oa_license": "CCBY",
"oa_url": "http://thesai.org/Downloads/Volume6No4/Paper_11-Modeling_Mechanical_and_Electrical_Uncertain_Systems_using_Functions.pdf",
"oa_status": "HYBRID",
"pdf_src": "Crawler",
"pdf_hash": "04801d8e3a7c872238c7b3e42c9ec409e829fecf",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
An Analysis of Student Engagement for Online Microeconomics Class Based on ELED
This study aims to analyze the factors affecting student engagement in Microeconomics online classes based on E-Learning Engagement Design (ELED). This study applied a mixed method with a sequential model. The study population included all students of the Department of Economics Education UNNES who took online courses in Microeconomics and Microeconomics 1, both regular and international classes, with a total of 320 students, and the 4 lecturers who handled these classes. The results showed that situational interest, personal significance, mastery of self-talk and mastery of self-talk for performance had a positive effect on student engagement in Microeconomics online classes. Meanwhile, mastery of self-talk to avoid negativity, environmental control, self-consequences, and setting proximal goals did not significantly influence student engagement in Microeconomics online classes. The concept of E-Learning Engagement Design (ELED) has been applied to all components. However, it is necessary to standardize the learning-environment components to ensure there is no gap among Microeconomics classes, which would lead to less optimal academic services.
INTRODUCTION
The outbreak of Coronavirus Disease 2019 in early 2020 created a significant transformation in education systems around the world. Many schools and universities shut down face-to-face teaching and learning activities and transformed them into virtual classes. One of them is Universitas Negeri Semarang (UNNES) in Indonesia, which encouraged lecturers to apply online learning during the pandemic. However, this rapid change led to some confusion among lecturers. Some lecturers have different opinions about what and how to teach, the teaching and learning environment, the workload of teachers and students, and the implications for educational equity (Zhang, Wang, Yang & Wang, 2020).
On the other hand, UNNES was equipped with a specific Learning Management System (LMS) named Elena (Electronic Learning Aid), based on MOODLE version 2.0, to support academic activities. The problem that emerged was that only a few lecturers used this LMS, due to the complexity of the system and the lack of a campaign on how to use it, especially before the pandemic. Many lecturers preferred to use other platforms to conduct their online class activities. The existence of Zoom, Google Classroom, Google Meet, Skype, and other applications as learning media has provided alternatives for lecturers and students to conduct their academic activities. All platforms supporting online learning have to be optimized to deliver knowledge and provide a means of discussion between lecturers and students during the pandemic.
The absence of face-to-face meetings may, under certain conditions, allow students to express their opinions more freely. This is because a different learning environment, where the internet acts as an intermediary and information related to the subjects being studied is easily accessed, makes students more interested and supports their capacity to think critically. It also gives students who are too shy to voice their arguments in face-to-face meetings a better opportunity to convey their opinions indirectly. Another positive effect of online learning is that students gain new learning experiences that can be applied in the future, especially students majoring in education who will later become teachers.
However, based on survey data conducted by UNNES, which was released in April 2020, the online learning process was less interactive, which made learning objectives difficult to achieve. Online classes require stable, high effort and persistence in learning inside and outside the classroom, asking questions, and so on. Cognitive involvement can be defined as perceptions of motivation and strategy use, seriousness, desire, coping attitudes, and discipline. Meanwhile, emotional involvement reflects interests, values and emotions about what is being learned, for example, having respect for other people's opinions, a willingness to treat peers and teachers properly, a sense of belonging, and high motivation.
Microeconomics is one of the basic courses that must be taken by students of the Department of Economics Education. Microeconomics is a branch of economics that studies consumer and firm behavior. It covers how market prices are determined, the quantity of input factors, and the number of goods and services being traded. It also studies how certain decisions can affect the supply of and demand for goods and services so as to create a balance. This balance is represented by different curves, so in delivering the material it is necessary to encourage interactive online learning strategies that highlight the cause-and-effect relationships underlying a given economic phenomenon.
Online learning has a key role in the process of transferring knowledge in the Microeconomics course. The absence of face-to-face or physical class activities is a challenge that must be faced in order to maintain students' enthusiasm for learning. The positive side is that interaction between teachers and students is not limited to the physical classroom, and learning can still be carried out from different places (Rahman, et al., 2015). Therefore, appropriate online learning media and teaching strategies are needed to maintain student engagement in the Microeconomics class. This is expected to be a solution for increasing the cognitive, behavioral, and emotional involvement of students. This research is needed to design a better online class that synergizes with the characteristics of the Microeconomics course, so that the course objectives can be achieved even though the teaching and learning process has been disrupted by the pandemic and must be conducted online.
Student engagement in online learning is influenced by various factors. Lee, Hae, and Ah Jeong (2019) show that in the context of online learning, student engagement consists of six factors: (1) psychological motivation; (2) peer collaboration; (3) cognitive problem solving; (4) interaction with instructors; (5) community support; and (6) learning management. First, psychological motivation represents students' thoughts and feelings, such as motivation, expectations, and interests related to what is learned in online learning. Second, peer collaboration refers to activities in which students discuss the knowledge learned with their peers. Third, cognitive problem solving is defined as the process of acquiring, understanding, and utilizing knowledge. Activities such as analyzing and applying knowledge strongly support students in improving their learning achievement.
Fourth, interaction with instructors or teachers is related to behavioral involvement, in which students communicate with their teacher (in this case, the lecturer) using online platforms. This interaction affects student engagement: the more regular the interactions between students and teachers, the higher the sense of being involved in learning. Fifth, community support factors are related to the psychological state of students, such as the sense of fellowship that emerges among students in the class. This sense of belonging affects student involvement even in the online class. Therefore, it can be concluded that successfully building student engagement in online learning depends on various factors that must be synergized by teachers and students for learning to succeed.
Zimmerman (2008) explains that students who are able to self-regulate can enjoy the independent learning process and are ultimately able to proactively transform their mental abilities into performance skills. Independent learners also possess the cognitive ability to complete different academic assignments well (Wolters, 2003). The level of student motivation is phenomenologically seen as a product, while the tools that control their choices, efforts, and persistence are seen as processes. Considering these two characteristics, motivation regulation refers to individual actions aimed at initiating, maintaining, or increasing one's motivation to complete certain academic activities (Wolters, 2003).
The purpose of Motivational Regulation Strategies (MRS) is to improve students' learning efforts in the learning process (Schwinger & Stiensmeier-Pelster, 2012). MRS are important for online students because they positively affect how much students become involved in learning activities (Smit, De Brabander, Boekaerts, & Martens, 2017). Therefore, in order to investigate the quality of motivation regulation outcomes, it is very important to examine how students' regulation of motivation affects their learning engagement. Extending Wolters' model, Schwinger, Steinmayr, and Spinath (2012) suggest eight indicators of MRS: (a) increasing situational interest, turning a tiring task into a more attractive one through imaginative modification; (b) increasing personal significance, establishing connections between tasks and personal interests and preferences; (c) mastering self-talk, accentuating the goal of enlarging one's competence and mastering challenging tasks; (d) mastering self-talk for performance, aiming at better exam scores than classmates; (e) mastering self-talk to avoid negativity, avoiding being made fun of by peers for one's performance; (f) environmental control, deliberately eliminating possible distractions when having an online class; (g) self-consequences, self-managed gratification for achieving a goal; and (h) establishing proximal goals, breaking the learning material down into smaller, manageable chunks so as to experience success more frequently. In its implementation, online learning during this pandemic has run into many obstacles. This study applied the E-Learning Engagement Design (ELED) framework to evaluate the stages of online learning in the Microeconomics course, which are directed at encouraging student engagement in online learning according to the feedback provided by students.
METHODS
This study combines quantitative and qualitative methods in a sequential explanatory design. The mixed-methods sequential explanatory design consists of two distinct phases: a quantitative phase followed by a qualitative phase (Creswell et al., 2003). In the first phase, the quantitative method was applied using regression analysis. In the second phase (the qualitative method), narrative inquiry was carried out through in-depth interviews and Focus Group Discussions (FGD). The validity of the qualitative data was ensured using triangulation. The interviews and FGDs were conducted in separate student and lecturer sessions to capture different perspectives, and the results were then compared. The measurement results from the first and second phases were then triangulated and integrated to obtain a holistic understanding as a basis for making improvements.
Respondents were third-semester students of the Department of Economics Education, batch 2019, which consists of three concentrations, namely: Accounting Education, Cooperative Economic Education, and Office Administration Education. Each concentration consists of three classes, namely Regular Class A, Regular Class B, and the International Class (International Undergraduate Program - IUP), for a total of 320 students. In addition, 4 lecturers participated in the qualitative phase.
Primary and secondary data were used in this study. Primary data were obtained from a questionnaire, while secondary data were obtained from literature studies related to the research problems. The data collection techniques used in this study were questionnaires, literature studies, and online focus group discussions. The FGD was conducted with student representatives and lecturers to collect qualitative data in the form of student and lecturer feedback on each stage of the ELED.
Descriptive analysis results
For the first variable, student engagement, a minimum value of 65 and a maximum value of 128 were obtained, indicating that the engagement level of the Economics Education students at Semarang State University lies between 65 and 128. The standard deviation is 12.752, which means the deviation from the average value is quite small (slight data variation), and the average value of 99.13 indicates that the students' engagement level is high. The independent variables of this study fall into varied categories: increasing situational interest, mastering self-talk, mastering self-talk for performance, mastering self-talk to avoid negativity, and establishing proximal goals reach the high category, while increasing personal significance, environmental control, and self-consequences fall into the moderate category.
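As a quick illustration of how such descriptive figures are obtained, the sketch below computes the minimum, maximum, mean, and standard deviation from raw questionnaire totals. The file name scores.csv and the column name engagement_score are hypothetical placeholders, not part of the study's materials.

import pandas as pd

# Hypothetical input: one row per respondent, with the summed
# engagement questionnaire score in a single column.
df = pd.read_csv("scores.csv")

print(df["engagement_score"].agg(["min", "max", "mean", "std"]))
# The study reports min = 65, max = 128, mean = 99.13 and std = 12.752;
# a standard deviation this small relative to the mean indicates
# only slight variation in the data.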
Multiple regression analysis results
The results of the multiple linear regression analysis are used to explain the degree of influence of the independent variables on the dependent variable. These results are shown in Table 2 and Table 3.
Based on the results of the ANOVA test (F test), the F value is 47.647 with a probability level of 0.000. Since this probability is less than 0.05, it can be concluded that the variables of increasing situational interest, increasing personal significance, mastering self-talk, self-talk for performance, self-talk to avoid negativity, environmental control, self-consequences, and proximal goal setting simultaneously affect students' engagement with the online classes in Microeconomics.
Based on Table 3, the R² value is 0.551, or 55.1%. This means that 55.1% of online student engagement is explained by increasing situational interest, increasing personal significance, mastering self-talk, self-talk for performance, self-talk to avoid negativity, environmental control, self-consequences, and proximal goal setting; the remaining 44.9% is explained by other factors outside the model. This indicates a moderate relationship between these eight strategies and students' engagement with the Microeconomics online class.
Based on Table 4, column B states a constant value of 29.937; the value of increasing situational interest is 1.483, increasing personal significance 1.124, mastering self-talk 0.875, self-talk for performance 1.854, self-talk to avoid negativity -0.872, environmental control 0.366, self-consequences 0.387, and proximal goal setting 0.230. The multiple linear regression equation can therefore be written as follows: Y = 29.937 + 1.483 X1 + 1.124 X2 + 0.875 X3 + 1.854 X4 - 0.872 X5 + 0.366 X6 + 0.387 X7 + 0.230 X8. The partial hypothesis test (t test) was used to test how increasing situational interest, increasing personal significance, mastering self-talk, self-talk for performance, self-talk to avoid negativity, environmental control, self-consequences, and proximal goal setting individually (partially) affect the engagement of the Economics Education students of Semarang State University, each independent variable being considered separately against the dependent variable. The decision rule is as follows: first, if the significance value is smaller than α, or t-count > t-table, then H0 is rejected and Ha is accepted, which means that, partially, increasing situational interest (X1), increasing personal significance (X2), mastering self-talk (X3), self-talk for performance (X4), self-talk to avoid negativity (X5), environmental control (X6), self-consequences (X7), and proximal goal setting (X8) have an effect on student engagement (Y).
Second, if the significance value is greater than α, or t-count < t-table, then Ha is rejected and H0 is accepted, which means that, partially, the eight strategies above have no effect on student engagement (Y). The results showed that increasing situational interest, increasing personal significance, mastering self-talk, and mastering self-talk for performance had a positive effect on student engagement in the Microeconomics online classes. Meanwhile, mastering self-talk to avoid negativity, environmental control, self-consequences, and proximal goal setting did not significantly influence student engagement in the Microeconomics online classes.
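To sketch how the reported model could be reproduced, the snippet below fits an eight-predictor ordinary least squares regression with statsmodels. The data file and the column names X1-X8 and Y are placeholders for the eight MRS scores and the engagement score; the reported figures (F = 47.647, R² = 0.551, the coefficients in the equation above, and the per-predictor t tests) would be read off the printed summary.

import pandas as pd
import statsmodels.api as sm

# Hypothetical data: one row per student, eight MRS strategy scores
# (X1..X8) and the total engagement score (Y).
df = pd.read_csv("mrs_scores.csv")

X = sm.add_constant(df[["X1", "X2", "X3", "X4", "X5", "X6", "X7", "X8"]])
model = sm.OLS(df["Y"], X).fit()

print(model.summary())       # F statistic, R-squared, coefficients, t tests
print(model.pvalues < 0.05)  # which predictors are significant at alpha = 0.05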
Qualitative analysis results
The Focus Group Discussions (FGD) were held online to assess the implementation of E-Learning Engagement Design (ELED) in the Microeconomics classes. The FGD participants were student representatives of the Microeconomics classes, 31 students in total, and 5 lecturers who teach those classes. According to the E-Learning Engagement Design (ELED) framework designed by Czerkawski, Betul C. & Eugene W. Lyman (2016), online learning preparation consists of several components, namely: (1) instructional needs, (2) instructional objectives, (3) learning environment, and (4) summative assessment. This section discusses the evaluation of these four components based on the results of the FGD.
Instructional Needs
The results of the online FGD with student representatives of each Microeconomics class revealed that, in general, the lecturers had understood and analyzed students' needs. However, there were also complaints, especially about lecturers who relied only on a WhatsApp group and did not use other media or platforms in delivering the Microeconomics lectures. Student representatives from one of the classes complained about their discomfort and low understanding of the material presented.
"The learning activities with Whatsapp Group are less efficient" (Student Representative of Economics Education Cooperative B 2019).
The conclusion that can be drawn from this FGD is that students in the Microeconomics classes feel enthusiastic about learning when the lecturer combines various learning media rather than relying on a single digital platform. In addition, students are more satisfied when the lecturer provides opportunities to interact through discussion and question-and-answer sessions, rather than merely providing material and assignments.
Instructional Objectives
From the FGD conducted with the lecturers who taught the Microeconomics classes, it emerged that the instructional objectives and professional standards to be achieved by students taking the Microeconomics classes were clearly detailed in the Lesson Plan uploaded to the LMS at the beginning of the semester. This lesson plan was accessible to students from the start of the semester, before the first meeting of the course.
In fact, each lecturer who teaches the Microeconomics classes generally walks through the Lesson Plan at the beginning of the first meeting, before starting on the first material of the lecture. Unfortunately, not all students pay attention to the lesson plan; only a few download it from the LMS, which ultimately leads to confusion about what to study next and when periodic evaluations will take place.
"Because there are many students who do not open the lesson plan. It makes me always have to remind them about what material to study next week and what aspects need a greater focus." (Lecturer L)
"So far, students seem to be less independent and always ask about lecture techniques that can actually be studied on their own at the lesson plan which has been uploaded at the beginning of the semester." (Lecturer J) The conclusion that can be taken in this FGD is that basically the instructional objectives and professional standards of learning have been clearly formulated and uploaded in the learning management system so that they can be accessed by students, but there is a missing link that causes not many students to pay attention to this so that instructional goals and professional standards the material is not conveyed properly.
Learning Environment
In the FGD attended by student representatives of the Microeconomics classes, there were several complaints about the assignments, and the evaluation of assignments, given by the lecturers teaching the Microeconomics classes. Some students objected to assignments being given at almost every meeting while discussion of them was minimal. The FGD therefore suggested that not every meeting should come with a task to be submitted within a week or more. The FGD participants would prefer a quiz at the beginning or end of the meeting, with a few questions and a short duration (taking little time within the lecture schedule), discussed immediately afterwards, so that they better understand the material in question and can immediately verify their answers.
Interaction in the form of question-and-answer sessions and discussion was an important point according to the FGD participants, because through this interaction they felt they received feedback suited to their respective needs. Most of the participants acknowledged that the lecturers had facilitated discussions and opened question-and-answer forums. Even so, some students still complained about their lack of understanding of the material because lectures were conducted online.
"I still don't understand, the trouble is we have to study full using online system. (Student A, Representative of Cooperative Economic Education B 2019). "Honestly, I get dizzy, because I am a typical person who learns from listening, so this kind of online doesn't stay in my brain, not to mention when my mom suddenly asks me to tell me what will break up my brain." (Student B, Representative of Cooperative Economic Education B 2019).
These complaints came from student representatives whose lecturers only used the Google Classroom and WhatsApp group applications during lectures, whereas for students whose lecturers used varied media and evaluated assignments, the comments were positive. The conclusion on this point is that some lecturers have carried out formative assessment well, evaluating each assignment given to students, interacting through discussions, and helping students learn using various media, so that students are able to analyze the material easily. However, there are still lecturers who rely on only one or two digital platforms without giving students the opportunity to discuss the assignments given.
Summative Assessment
During the pandemic, summative assessment by all lecturers was carried out through the LMS developed by UNNES, namely Elena. Elena allows lecturers to deliver Mid-Semester and Final Semester Examinations in various formats, from multiple choice to submission of assignments as files. The lecturers teaching the Microeconomics courses coordinated the mid-term and final examinations through Elena using a multiple-choice format. This question model is considered very efficient, especially in cutting the time needed to mark students' answers.
In addition, Elena allows students to find out the correct answers after finishing the test (or at whatever time the teaching lecturer configures). However, this evaluation system still has a weakness: students can still cooperate by sending photos of the questions and discussing the answers in student groups, because the system has no camera surveillance.
Therefore, it is suggested that future summative assessment be carried out using a recently introduced application, E-Ujian. Unlike Elena, which is a comprehensive application covering learning resources and the running of online learning, E-Ujian focuses on providing services for administering exams. The E-Ujian application is equipped with a question-package feature, voice recording (if the lecturer wants an oral exam), and recording of the examination process through each test taker's laptop camera.
Interest ultimately develops from situational interest into individual interest through four distinct phases (Hidi & Renninger, 2006). Situational interest needs to be stimulated and sustained before it can be promoted into individual interest (Hidi & Renninger, 2006). Hence, situational interests are considered less developed interests, whereas individual interests are considered more developed ones. Students with individual interests therefore tend to engage with learning content independently, whereas students with less developed interests may or may not engage in learning without external support (Renninger & Bachrach, 2015). However, although situational interest is said to be a temporary interest that may not develop further, students' situational interest in studying the online Microeconomics course material is considered in this study to be the initial capital that "links" students with online learning.
Situational interest in this study falls into the moderate category because different lecturer teams used different approaches in implementing online learning: there were classes with high situational interest and classes with low situational interest, namely classes where the lecturers were more passive and did not develop various online learning media. The findings of this study indicate that increasing situational interest has an effect on students' engagement with the online Microeconomics course.
Based on the in-depth interviews conducted after the Focus Group Discussion (FGD), this situational interest can be traced to several things, including interest in the lecturer as a figure, interest in the media used by the teaching lecturer, and the demand to submit assignments, which means that, willingly or not, students have to try to understand the material. The findings in this study also explain that when attending the online Microeconomics course, students are cognitively involved, which increases the personal significance of learning activities more than it increases situational interest.
One interesting finding is that mastery of self-talk, including self-talk to improve performance, has a positive and significant effect on student engagement with the online Microeconomics classes. This strategy is particularly concerned with orientation towards achieving learning goals and mastering challenging learning tasks. The results of this study indicate that students are emotionally engaged when they focus on mastery of the material rather than comparing their learning outcomes with others. This is in line with Huang's (2011) study of achievement goals and academic emotions, which showed a significant positive correlation. However, the relationship between academic emotion and negative avoidance in Huang's (2011) study showed a negative, non-significant correlation. Huang's findings are reinforced by the results of this study, where self-talk to avoid negative performance is not shown to have a significant effect.
Cognitive engagement is the use of both cognitive strategies and self-regulated approaches in learning (Wang & Eccles, 2011). When using the self-talk approach to avoid negative performance, students tend to be involved cognitively in learning but participate less in activities during the learning session, because they feel anxious and afraid that their participation will expose their weaknesses, for example by answering the lecturer's questions incorrectly or expressing opinions they feel are imprecise. Although some students considered the strategy positive because it placed them in a safe position, students who adopted it appeared less motivated to turn their cognitive efforts into actions. This condition does not benefit the development of students' emotional intelligence.
The lecturer should anticipate students' low participation by improving the learning strategies. In accordance with the concept of "Merdeka Belajar, Kampus Merdeka" (Freedom to Learn, Independent Campus) launched by the Ministry of Education and Culture, it is suggested to apply student-centered learning approaches, such as project-based learning and case studies, in order to increase students' participation during class. Several studies have confirmed that project-based learning can effectively enhance students' learning motivation, problem-solving competence, and learning achievement (Hung, Hwang & Huang, 2011); improve creativity, encourage research, and provide permanent learning (Genc, 2014); and increase critical thinking ability (Anazifa & Djukri, 2017). Meanwhile, the case study method allows in-depth, multi-faceted explorations of complex issues in real-life settings (Crowe et al., 2011), promotes active learning, and develops critical thinking skills (Popil, 2011).

Kuhl (as quoted in Keller, 2008) stated that environmental control is an active control strategy to support the implementation and maintenance of intended actions. Kuhl illustrated that environmental control strategies can be managed to liberate oneself from irresistible distraction and to socialize commitments (in this case, the commitment to attend online lectures well) by telling others about one's online lecture plans and having them support the planned actions. However, the results of this study indicate that environmental control has no effect on student engagement in the Microeconomics online classes.
During the FGD, it was revealed that not all individuals (including the students themselves) and their families understood the concept of online lectures and the burdens that students must bear in attending lectures online. This was revealed in the statement of one of the FGD participants, as follows.
"Honestly, I get dizzy, because I am a typical person who learns from listening, so this kind of online doesn't stay in my brain, not to mention when my mom suddenly asks me to tell me what will break up my brain.
" (Representative of Regular Economics Education Student B 2019)
The family's lack of understanding of the concept of online lectures causes parents to misunderstand students who are studying online. Parents assume their children are just playing with their smartphones all day, so they tend to try to get them to put the smartphone down by asking them to do other activities unrelated to the online lectures, or even interrupt ongoing online lectures. This also explains why the self-consequences factor and proximal goal setting do not affect student engagement with the Microeconomics online classes in this study.
The various findings of this study converge on the conclusion that engagement with the online Microeconomics classes works best when students have strong motivation within themselves. One way to maintain that motivation is self-talk; however, it must prioritize self-talk that improves performance in online learning, not self-talk aimed at avoidance and staying passive in a safe position in the online classroom. On the other hand, the family environment sometimes does not support online lectures and inhibits the cognitive and emotional ties between students and the online classes they take. In this case, good and continuous communication is needed so that families can understand the conditions of online lectures during a pandemic like the present one.
CONCLUSION
The abrupt transformation from conventional to online classes during the pandemic creates a great deal of extra work for lecturers in maintaining student engagement and achieving the learning goals. More distractions occur in online learning activities, so students need a supportive environment to maintain their focus. In the case of the Microeconomics online classes in this study, the supportive environment consists of the internal condition of the students themselves, examined here through the Motivational Regulation Strategies (MRS); the online class infrastructure and materials; the lecturers' strategies; and the neighborhood where students live. All these factors have to work together to support the online learning activities. The ELED framework was implemented fairly well during the Microeconomics online classes; however, the assessment aspect still has to be improved.
This study needs a more thorough evaluation of the ELED framework in order to design better ELED practice in online Microeconomics classes, instead of merely describing its implementation. Furthermore, future studies should focus not only on the MRS but also on other factors in students' online learning ecosystem. It is important to shift the researcher's point of view: student engagement is not merely a cause of successful learning, but a result created by the supportive collaboration of the various factors in students' environments that sustain their online learning activities. | 2021-07-26T00:05:05.991Z | 2021-07-01T00:00:00.000 | {
"year": 2021,
"sha1": "0fab773e956dff20c2992d494842b336e2e56070",
"oa_license": "CCBY",
"oa_url": "https://journal.unnes.ac.id/nju/index.php/DP/article/download/29568/11794",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "58b5ac566b5e3e95ff05809379353d32d46eddd7",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
16465345 | pes2o/s2orc | v3-fos-license | Remarks on an inequality involving the normal scalar curvature
We study a pointwise inequality for submanifolds in real space forms involving the scalar curvature, the normal scalar curvature and the mean curvature. We translate it into an algebraic problem, allowing us to prove a slightly weaker version of it. We also prove the conjecture for certain types of submanifolds of $\mathbb C^n$.
Introduction
In 1983, Guadalupe and Rodriguez proved the inequality conjectured below for the case of surfaces in real space forms. Remark that this is an extension of the well-known inequality K ≤ |H|² for surfaces in E^3.
In [9] the following was conjectured as a generalization of the previous theorem.*

Conjecture 1.1 ([9]). Let M^n be a submanifold of a real space form M^{n+m}(c) of constant sectional curvature c. Denote by ρ the normalized scalar curvature, by H the mean curvature vector and by ρ⊥ the normalized normal scalar curvature. Then

ρ + ρ⊥ ≤ |H|² + c.   (1)

The conjecture was proved for m = 2 in [9], where also some classification results were obtained in case equality holds in (1) at every point.

*The second author is Research Assistant of the Fund for Scientific Research - Flanders (Belgium) (FWO).
Remark 1.1 (Added remark on recent developments). Nowadays, this conjecture is known as the DDVV conjecture. Recently the conjecture was proved for n = 3 in [6] and for m = 3 in [13]. In a private communication [14], Z. Lu announced a proof of the general case. There has also recently been substantial progress in the study of submanifolds attaining equality: see [7] and [17]. All these results were obtained after the completion of this paper.
For normally flat submanifolds, in particular for hypersurfaces, inequality (1) follows from a more general result of Chen ([2]). In particular, for any submanifold M^n of a real space form M^{n+m}(c) we have

ρ ≤ |H|² + c.   (2)

For immersions which are invariant with respect to the standard Kählerian and Sasakian structures on E^{2k} and S^{2k+1}(1) the conjecture was proved in [8], and for immersions which are totally real with respect to the nearly Kähler structure on S^6(1) in [10]. In Section 3 we will translate the conjecture into an algebraic problem involving symmetric matrices, followed by a proof of a weaker version. In Section 4 we will prove the conjecture for H-umbilical Lagrangian submanifolds of C^n ≅ E^{2n}, for minimal Lagrangian submanifolds of C^3 ≅ E^6 and for ultra-minimal Lagrangian submanifolds of C^4 ≅ E^8. We remark that some of these results have been generalized in the meantime by A. Mihai in [15]; see [16] in the present volume. The reader should be warned, however, that the notations in [16] and in this paper are not always consistent.
Preliminaries
Let M^n be a Riemannian manifold of dimension n with Riemann-Christoffel curvature tensor R. If {e₁, . . . , e_n} is an orthonormal basis for T_pM, then we define the normalized scalar curvature of M^n at p by

ρ = (2/(n(n−1))) Σ_{1≤i<j≤n} ⟨R(e_i, e_j)e_j, e_i⟩.   (3)

Now let M^{n+m} be another Riemannian manifold with Riemann-Christoffel curvature tensor R̃ and let f : M^n → M^{n+m} be an isometric immersion. If h is the second fundamental form, A_U the shape operator associated to a normal vector field U, and R⊥ the curvature tensor of the normal connection, then the equations of Gauss and Ricci are given by

⟨R(X,Y)Z,T⟩ = ⟨R̃(X,Y)Z,T⟩ + ⟨h(X,T), h(Y,Z)⟩ − ⟨h(X,Z), h(Y,T)⟩,   (4)
⟨R⊥(X,Y)U,V⟩ = ⟨R̃(X,Y)U,V⟩ + ⟨[A_V, A_U]X, Y⟩,   (5)

for tangent vectors X, Y, Z and T and normal vectors U and V.
Let {e₁, . . . , e_n} be as above and suppose that {u₁, . . . , u_m} is an orthonormal basis for T⊥_pM. Then we define the normalized normal scalar curvature of M^n at p by

ρ⊥ = (2/(n(n−1))) √( Σ_{1≤i<j≤n} Σ_{1≤α<β≤m} ⟨R⊥(e_i, e_j)u_α, u_β⟩² ),

which corresponds to the definition proposed in [9]. Another extrinsic curvature invariant that we will use is the mean curvature vector of the submanifold at p:

H = (1/n) Σ_{i=1}^{n} h(e_i, e_i).
A translation of the problem
From now on, we use the following convention: if A and B are (n × n)-matrices, we define ⟨A, B⟩ = tr(AᵗB). The associated norm is then given by ‖A‖ = √⟨A, A⟩. The scalar product, and hence the norm, are preserved by orthogonal transformations.
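The invariance claim is easy to check numerically. The following sketch (purely illustrative, with randomly generated matrices) verifies that ⟨A, B⟩ = tr(AᵗB) is unchanged when both matrices are conjugated by the same orthogonal matrix.

import numpy as np

def inner(A, B):
    """Matrix scalar product <A, B> = tr(A^t B)."""
    return np.trace(A.T @ B)

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# A random orthogonal matrix, obtained from a QR decomposition.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))

# <Q^t A Q, Q^t B Q> equals <A, B>, since the trace is invariant
# under conjugation and Q^t Q is the identity.
print(np.isclose(inner(Q.T @ A @ Q, Q.T @ B @ Q), inner(A, B)))  # True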
The translation
The following theorem reduces the conjecture to an inequality involving symmetric (n × n)-matrices.
Proof. Let M^n be a submanifold of M^{n+m}(c). Take p ∈ M^n and suppose that {e₁, . . . , e_n} is an orthonormal basis for T_pM and that {u₁, . . . , u_m} is an orthonormal basis for T⊥_pM. In summations, Latin indices will always range from 1 to n, whereas Greek indices range from 1 to m. Further, we use the notations introduced in the previous section. We define a symmetric (1, 2)-tensor b, taking normal values, by the formula (9). Using the equation of Gauss (4) and (9), we find expressions for ρ and |H| in terms of b, and from the equation of Ricci (5) and (10), we obtain the corresponding expression for ρ⊥. We conclude that inequality (1) is equivalent to the matrix inequality stated in the theorem.

Remark 3.1. By proving Theorem 2 for m = 2, we obtain a simple proof of the conjecture for codimension 2 submanifolds, where the second inequality is due to Chern, do Carmo and Kobayashi [5]; see Lemma 3.1 below.
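The translation can be illustrated numerically. The sketch below (an illustration only, with randomly generated matrices and the arbitrary choice n = 3, m = 4) treats a family of symmetric matrices B_α as the shape operators of a submanifold of Euclidean space (so c = 0), computes ρ, ρ⊥ and |H|² with the normalizations introduced in the Preliminaries, and tests inequality (1).

import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n, m = 3, 4  # dimension and codimension, chosen arbitrarily

# Random symmetric matrices B_alpha, playing the role of the shape
# operators A_{u_alpha} with respect to an orthonormal normal frame.
B = [(M + M.T) / 2 for M in rng.standard_normal((m, n, n))]

def norm2(A):
    """Squared norm <A, A> = tr(A^t A)."""
    return np.trace(A.T @ A)

H2 = sum(np.trace(Ba) ** 2 for Ba in B) / n**2  # |H|^2
h2 = sum(norm2(Ba) for Ba in B)                 # |h|^2
rho = (n**2 * H2 - h2) / (n * (n - 1))          # from the Gauss equation, c = 0
rho_perp = np.sqrt(2 * sum(norm2(Ba @ Bb - Bb @ Ba)
                           for Ba, Bb in combinations(B, 2))) / (n * (n - 1))

# Inequality (1) with c = 0; the conjecture predicts True for any choice.
print(rho + rho_perp <= H2)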
Proof of a weaker version of the inequality
First, we recall two inequalities.
Lemma 3.1 ([5]). If B₁ and B₂ are symmetric (n × n)-matrices, then

‖[B₁, B₂]‖² ≤ 2 ‖B₁‖² ‖B₂‖²,

with equality if and only if B₁ = B₂ = 0 or, after a suitable orthogonal transformation, B₁ and B₂ are scalar multiples of the matrices whose only non-zero entries form the upper-left (2 × 2)-blocks

[0 1; 1 0]   and   [1 0; 0 −1],

respectively. We will use these inequalities to prove the following, weaker version of Conjecture 1.1.

Proof. Define the matrices B_α as in the proof of Theorem 3.1. After a suitable orthogonal transformation, we may assume that ⟨B_α, B_β⟩ = ‖B_α‖² δ_{αβ}. The inequality of Cauchy-Schwarz, together with Lemma 3.1 and Theorem 3.2, then yields the first inequality stated in the theorem. To prove the second one, remark that we may replace m by the dimension of the image of b. The result follows from the observation dim(im(b)) ≤ n(n+1)/2 − 1.
Lagrangian submanifolds
In this section, we prove the conjecture for three families of Lagrangian submanifolds, namely for H-umbilical Lagrangian submanifolds of C^n ≅ E^{2n}, for minimal Lagrangian submanifolds of C^3 ≅ E^6 and for ultra-minimal Lagrangian submanifolds of C^4 ≅ E^8.
Recall that a submanifold M of a Kählerian manifold M̃^{2n} is called Lagrangian if at every point p the almost complex structure J of M̃^{2n} induces an isomorphism between T_pM and T⊥_pM. In particular dim(M) = n. The second fundamental form satisfies the following symmetry property:

⟨h(X, Y), JZ⟩ = ⟨h(Y, Z), JX⟩ = ⟨h(Z, X), JY⟩   (13)

for X, Y, Z ∈ T_pM.
H-umbilical Lagrangian immersions in C^n
It was proven in [4] that there are no totally umbilical Lagrangian submanifolds in complex space forms, except totally geodesic ones. H-umbilical Lagrangian submanifolds were introduced in [3] as the 'simplest' Lagrangian submanifolds next to totally geodesic ones. Their second fundamental form satisfies

h(E₁, E₁) = λ J E₁,   h(E₁, E_j) = μ J E_j,
h(E_j, E_j) = μ J E₁,   h(E_j, E_k) = 0   (j ≠ k; j, k = 2, . . . , n),   (14)

for some suitable functions λ and μ and a suitable orthonormal local frame field {E₁, . . . , E_n} on M^n.
We prove the following.

Theorem 4.1. Inequality (1) holds for H-umbilical Lagrangian submanifolds of C^n.

Proof. From (14) the form of the shape operators is easily deduced. We now use Theorem 3.1: defining the matrices B_α as in the proof of that theorem, the sums over α, β = 1, . . . , n appearing there can be computed explicitly. The last inequality is satisfied for every λ and μ, since the bilinear form 2n x² − (4/n) xy + ((n−1)/n²) y² is positive definite.
Minimal Lagrangian submanifolds of C^3
Theorem 4.2. Inequality (1) holds for minimal Lagrangian submanifolds of C^3, with equality precisely when the shape operators take the form (15) with respect to a suitable basis. If equality holds at every point of a minimal Lagrangian submanifold of C^3, then M^3 is either a cylinder on a complex curve in C^2 (with respect to a different complex structure) or a "twisted special Lagrangian cone", both in the sense of [1].
Proof. Let M^3 be a minimal Lagrangian submanifold of C^3. Take p ∈ M^3 and consider the function f(v) = ⟨h(v, v), Jv⟩ on the unit tangent sphere at p. Take e₁ ∈ T_pM such that f attains its maximum value in e₁. Then ⟨h(e₁, e₁), JY⟩ = 0 for every Y ⊥ e₁. Using (13), this implies that e₁ is an eigenvector of A_{Je₁}. Choosing e₂ and e₃ such that {e₁, e₂, e₃} is an orthonormal basis for T_pM which diagonalizes A_{Je₁}, the shape operators take the form (15). We then compute ρ using the equation of Gauss; the computation of ρ⊥ using the equation of Ricci is completely analogous to that in [10]. Using the same argument as in [10], we obtain that 9(ρ⊥)² ≤ (3ρ)², which implies the inequality stated in the theorem, since ρ ≤ 0 by (2). Equality holds if and only if c = d = 0 and ab = 0. By changing the roles of e₂ and e₃ if necessary, we obtain the result.
For the statement on the equality case, it suffices to remark that when the shape operators have the form (15), the cubic form ⟨h(X, Y), JZ⟩ has S₃-symmetry in the sense of [1], and the classification therefore follows from the classification in [1].
We can extend the previous theorem to 3-dimensional Lagrangian submanifolds of complex space forms. For a complex space form of constant holomorphic sectional curvature 4c, the curvature tensor takes the form

R̃(X,Y)Z = c(⟨Y,Z⟩X − ⟨X,Z⟩Y + ⟨JY,Z⟩JX − ⟨JX,Z⟩JY + 2⟨X,JY⟩JZ).

This implies that for a Lagrangian immersion in such a space, the equations of Gauss and Ricci read respectively

⟨R(X,Y)Z,T⟩ = c(⟨Y,Z⟩⟨X,T⟩ − ⟨X,Z⟩⟨Y,T⟩) + ⟨h(X,T), h(Y,Z)⟩ − ⟨h(X,Z), h(Y,T)⟩,
⟨R⊥(X,Y)JZ,JT⟩ = c(⟨Y,Z⟩⟨X,T⟩ − ⟨X,Z⟩⟨Y,T⟩) + ⟨[A_{JT}, A_{JZ}]X, Y⟩.

An analogous computation as in the proof of the previous theorem now yields a corresponding inequality relating ρ, ρ⊥ and c, with equality precisely when the shape operators take the form (15) with respect to a suitable basis.
In [8] an analogous inequality relating ρ and ρ⊥ is obtained for complex submanifolds of complex space forms.
Ultra-minimal Lagrangian submanifolds of C^4
A submanifold M^n of a Riemannian manifold M^{n+m} is called ultra-minimal if around each point p ∈ M^n there exist a local orthonormal tangent frame and a local orthonormal normal frame such that the shape operators take the block-diagonal form

A_{u_α} = diag(A^α_1, . . . , A^α_k),

where A^α_j is a symmetric (n_j × n_j)-matrix with tr(A^α_j) = 0, and n₁ + · · · + n_k = n. We prove the following.

Theorem 4.3. Inequality (1) holds for ultra-minimal Lagrangian submanifolds of C^4.

Proof. Since M^4 is ultra-minimal, there are two cases to consider, namely n₁ = n₂ = 2 and n₁ = 3, n₂ = 1. In the first case, using the symmetry conditions for Lagrangian immersions, we obtain an explicit form for the shape operators. In the second case, the ultra-minimality condition yields that A^α_2 = 0 for α = 1, 2, 3, 4, and hence the problem reduces to the one solved in Theorem 4.2. We obtain ρ ≤ −ρ⊥, with equality if and only if the shape operators take the form (17), with b = 0.
"year": 2006,
"sha1": "c900a16a888eac0341ba5450167091cd75c22905",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "1784653b45eaa7123fc01ec43b98df6d46d55b98",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
257397030 | pes2o/s2orc | v3-fos-license | Impact of Communication Quality in Facilitating Citizen Participation in Urban Planning
Urban planning procedures are difficult for individuals to understand due to their complexity, which stems from the large number of parties involved, the existence of legal and political processes, and the presence of bureaucracy. Despite towns' efforts to include more residents in shaping their communities, participation remains low, and citizens' involvement often occurs late in the design process, when changes are more difficult to implement. That is not because people are not interested; rather, it is because plans have not been conveyed well enough for people to understand their repercussions before it is too late to have an impact. Citizens' comprehension, involvement, and ownership of plan proposals may all benefit from more opportunities for public participation in the planning process. As an alternative to data based only on technical and statistical understanding, citizen engagement may augment analysis with useful information at a human level. In that regard, this study investigates how towns might enhance communication quality by making information more readily available and presenting it in a style and tone that are more likely to encourage debate and collaboration among their constituents.
structures and public spaces. Robert Moses and Le Corbusier began from scratch, whereas Georges-Eugène Haussmann in Paris, Lúcio Costa in Brasília, Daniel Burnham in Chicago, and Pierre Charles L'Enfant in Washington all renovated and remade cities and villages to match their ideals of urban planning [3].
Cities and towns from the 3rd millennium BC were planned and built in Mesopotamia, the Indus Valley, Crete, and Egypt, and their remains may still be seen today. Archaeologists digging in these areas have unearthed the remnants of paved roadways laid out in a grid layout. A well-organized city center developed over time as the concept was adopted by many cultures. Beginning in the eighth century BCE, orthogonal (grid-like) patterns became the norm for Greek city-states. The "founder of European urban planning", Hippodamus of Miletus (498 to 408 BC), was a Greek urban planner and architect famous for developing the "Hippodamian plan" (grid plan) for city design.
In imitation of the Greeks, the ancient Romans adopted orthogonal city layouts. Ancient Roman city planning prioritized security and practicality, and the Roman Empire's expansion facilitated the dissemination of new concepts in city design. These ideas gradually faded away after the collapse of the Roman Empire; even so, the Roman city center was often preserved in numerous European towns. Cities in Europe expanded spontaneously and sometimes chaotically between the ninth and fourteenth centuries, but after the Renaissance many new towns were built up with well-planned constructions. More information on urban planning and the individuals who made it happen is available from the 15th century onwards [4]. It is during this time that the first theoretical treatises on urban planning and architecture appear, detailing and illustrating the designs of towns and cities while addressing theoretical questions such as how best to plan the main lines and how to ensure that plans meet the requirements of a particular population. Several European monarchs of the Enlightenment era made grandiose attempts to remodel their respective capitals. To make Paris more suitable as a contemporary capital, Baron Georges-Eugène Haussmann was tasked by Napoleon III with remodeling the city during the Second French Empire, giving it new long, straight, and broad boulevards.
A new paradigm emerged at the start of the twentieth century in the fields of planning and architecture. Rapid expansion was a hallmark of the 19th-century industrial metropolis, and as time went on, people started paying more attention to the plight of the urban working poor. The laissez-faire approach to government economic management that was popular for much of the Victorian period was giving way to a New Liberalism, which favored government engagement on behalf of the poor and oppressed [5]. Around 1900, theorists started designing urban planning models to help populations, particularly factory employees, cope with the negative effects of the industrial period. A centrally planned approach to urban planning would therefore dominate the next century all over the world, though this was not necessarily an improvement.
According to Crăciun, Ion Mincu [6], planning cities and towns was not always seen as a distinct field, but that started to change around the turn of the twentieth century. The Town and Country Planning Association was established in 1899, and in 1909 the University of Liverpool introduced the first urban planning course in British higher education. The modernist and uniformist ideals that emerged in urban planning in the 1920s persisted into the 1970s. The Radiant City, proposed by Le Corbusier in 1933, is a vertical metropolis meant to alleviate environmental hazards and population congestion through efficient use of space. However, a sizable number of urban planners began to suspect that crime and other social issues would increase if modernist principles were implemented, and in the second half of the 20th century urban planning began to emphasize uniqueness and variety.
The planners of large cities face a unique set of challenges and opportunities due to cities' inherent variety of human communities. One of the benefits of cities is that the high concentration of people makes them a natural center for variety and a good location for cooperation, which lays the groundwork for public input into urban planning. The majority of modern urban planning bodies, however, only include the public as much as is required by law. Some players see citizen engagement as a chore, something that must be accomplished in order to move forward, rather than an opportunity to obtain insightful feedback. The available evidence suggests that developers in particular tend to hold this view, whereas designers and politicians place a higher value on involvement in order to guarantee democratic procedures and the equal representation of all voices in society.
This article argues that public engagement at early stages may provide useful input for planning processes, but that planning authorities must do a better job of enabling this participation than they currently do. The article clarifies some of the key concepts of urban planning, explores the literature's perspectives on public engagement in planning processes, and examines works on improving the quality of communication so that its meaning may be grasped by those without specialized training. What urban architects can learn from the field of design, where user input is valued and human-centered processes are standard, is also explored. Based on this study, we discuss some of the current barriers to engagement and provide some ideas for overcoming them. The remainder of the article is organized as follows: Section II gives an introductory review of urban planning. Section III discusses the relevant urban planning stakeholders, while Section IV focuses on the concept of urban planning and citizen participation. Challenges of participatory city planning are critically evaluated in Section V. Section VI focuses on the breadth and actualization of participation, while Section VII provides an insight into human-centered design. Section VIII presents an in-depth analysis of the development of urban planning, urban forms and participatory planning. Lastly, Section IX draws final remarks.
II. URBAN PLANNING
With both the present and the future in mind, urban planners create blueprints for how cities will grow and change. Community development plans must account for land use, mobility, constructions, landscapes, open spaces, infrastructure, socioeconomic investments, employment, and enterprises. In Norway, the regulation of urban plans proceeds through the following steps: (1) initiation; (2) the start-up meeting; (3) public announcement of the new plan initiative; (4) information gathering; (5) the plan draft; (6) first treatment by the political planning committee; (7) open examination by the general public (public review); (8) second treatment by the planning committee; and (9) the close.
During the mandatory, behind-closed-doors kickoff meeting (step 2), stakeholders including politicians, planners, landowners, and developers review the project's technical specifications. The plan consultant is responsible for informing other stakeholders of their rights and for proposing ways to ensure their involvement in advance of the kickoff meeting. Public-sector stakeholders, such as infrastructure and government agencies, as well as neighbors and interest groups, are notified at the project's initiation. The public notice must be posted on a government-run website and published in at least one regional newspaper. The reader needs to know what will happen if the plan is implemented, who is responsible, and where to find further details. A further letter of notice, written in language understandable to those without technical training, must be sent to landowners, neighbors, and government stakeholders.
Information such as geotechnical and historical data is gathered in the fourth step, before a design draft is created. The draft is then handled by the democratically elected political planning committee. Citizens then have at least 6 weeks of public review in which to publicly provide recommendations or criticisms of the proposal. The next step is a fresh hearing before the planning committee, where it is determined whether or not to go through with the original design and, if so, what modifications are necessary. Alterations made in steps 6-9 can restart the process from step 4, and the costs and delays of making a course correction at this late stage of the project can be substantial. Without public buy-in, opposition to the proposal might grow, exacerbating the problem that prompted the need for revisions in the first place. For this reason, this study concentrates on maximizing citizen involvement in steps 1-4, when the plan is most open to change and can be shaped more effectively by listening to the public's feedback.
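To make the cost-of-change argument concrete, the toy sketch below (entirely illustrative; the stage names paraphrase the nine steps above) models the process and flags the window, steps 1-4, in which citizen input can still reshape the plan cheaply.

from dataclasses import dataclass

@dataclass
class Stage:
    number: int
    name: str
    change_is_cheap: bool  # can citizen input still reshape the plan cheaply?

# Paraphrased from the nine-step Norwegian regulation process above.
STAGES = [
    Stage(1, "Initiation", True),
    Stage(2, "Start-up meeting", True),
    Stage(3, "Public announcement of the plan initiative", True),
    Stage(4, "Information gathering", True),
    Stage(5, "Plan draft", False),
    Stage(6, "First treatment by the planning committee", False),
    Stage(7, "Public review", False),
    Stage(8, "Second treatment by the planning committee", False),
    Stage(9, "Final decision", False),
]

# The stages where participation has the greatest leverage.
print([s.name for s in STAGES if s.change_is_cheap])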
III. URBAN PLANNING STAKEHOLDERS
According to Fageha and Aibinu [7], individuals or organizations with a direct interest in the project's outcome and the ability to influence it for better or worse are considered stakeholders. Citizens are not the only ones with a vested interest in urban planning; developers, planners, politicians, and community groups have one too. When attempting to enhance the quality of interaction between stakeholders, it is important to take into account the nature of the existing relationships among them and the current flow of information and communication. IFC's "Stakeholders Interaction" guidebook outlines eight factors necessary for effective stakeholder engagement: 1) stakeholder identification and analysis; 2) dissemination of knowledge; 3) involvement of relevant parties in decision making; 4) collaboration and negotiation; 5) handling complaints; 6) involvement of project stakeholders in ongoing monitoring; 7) informing relevant parties; and 8) administrative duties. By managing these components, planners can work toward stakeholder alignment, in which diverse actors collaborate toward a single purpose rather than pursuing their own interests.
IV. URBAN PLANNING AND CITIZEN PARTICIPATION
According to the Norwegian Plan and Building Act (Section 5-1: Participation), anyone who prepares a plan proposal shall facilitate participation. It is the responsibility of the municipality to guarantee that these criteria are met in planning procedures carried out by other governmental bodies or commercial entities. It is the obligation of local governments to facilitate the full and equal involvement of all citizens, including groups with special needs such as children and teenagers. Opportunities to take part must be made available in other ways to those who cannot participate directly.
According to the Ministry of Local Government and Modernization [8], "participation" is the freedom of people and organizations to have a say in policymaking. The objective is to ensure that local communities have access to democratic forums where citizens' demands may be heard and where innovation and participation are encouraged. There are two key reasons citizens should become involved. The first is normative and connected to direct democracy: giving people a platform to have their opinions heard increases faith in the legitimacy of government decision-making, can give people more say in government choices, and helps to balance the power dynamic.
Second, one of the instrumental purposes of participatory planning is to improve the quality and efficiency of the planning process. When individuals have a voice in policymaking, they feel more invested in its outcome, and the resulting decisions are typically better grounded and more transparent. Issues and difficulties should be laid bare rather than concealed, so that a wide range of interested parties can contribute to finding solutions. The possible result is fewer implementation issues and less pushback. It could also make urban renewal more visible, which might inspire people to get involved in city planning. The Ministry considers that when people take part in planning, it may improve the quality of the plans, foster a sense of shared ownership, and provide a learning opportunity for both residents and planners. Citizens have the finest understanding of their communities, and when they share that understanding with planners, it fosters local democracy and creates the foundation for a vibrant, inclusive society.
V. CHALLENGES OF PARTICIPATORY CITY PLANNING

Including everyone in city planning is difficult for a variety of reasons. The complexity of urban planning, the difficulty of achieving the intended degree of engagement, and the quality of communication will be discussed in detail below.
Complexity
The complexity of urban planning presents the greatest barrier to the success of participatory planning. A willingness among planners, developers, and legislators to enable participatory procedures is necessary to ensure that the requirements of many stakeholders are satisfied and that all people have the chance to participate and voice their views. It also requires resources, both monetary and human, which may be hard to justify given the unpredictability of their long-term effects and the impact on short-term budgets. It is also challenging to maintain a healthy distribution of power and influence among the many interested parties. Despite efforts to achieve this, people will not have decision-making authority, and landowners, politicians, and planners will nearly always be in a better position to affect outcomes than citizens.
Citizens need to be informed of this so that they know what their rights are and how much sway they have over proposed changes. There has to be a level playing field for all citizens if they are to have their voices heard, yet in today's complicated world many people give up or are too overwhelmed to get involved. A common suggestion for untangling complexity in large, intricate systems is to divide tasks among specialized teams led by experts in the subject, a setup described as a functional organization structure. However, this organizational setup can bring on a silo mentality, with a lack of immediate ownership of the whole project, an absence of cross-functional collaboration, and sluggish communication across departments.
Top-down techniques, in which specialists make decisions with little input from the public, are also used to deal with complexity. According to Blanding and Kilic [9], both top-down approaches, where choices are made by municipalities, planners, or experts, and bottom-up approaches, where citizens' input influences decisions, have their proponents and detractors. Regional issues should be handled by planning authorities and specialists, while bottom-up approaches should be incorporated into local decision making. In a democratic system, voters must have faith that their government will make choices that devote resources to societal objectives, but citizens cannot be indispensable to every process.
Not Attaining Intent of Participation
It is clear from the discussions that have arisen and the attention given to urban planning in the local media that residents are interested in this topic; the problem is that their interest is channeled at the wrong points in the process. Citizens are seldom involved until Stage 5 of the planning process, when a more complete and understandable plan proposal is available. Since so much effort has already been put into the plan by then, and since the political planning committee has already voted on and approved it, making changes at this point is practically impossible. That is to say, the goal of involvement is not being achieved. The people of Norway live in a highly democratic and open society: all relevant material and information is disclosed publicly, the process is open to public examination, and the laws are written with the intention of, and a duty of, public involvement in mind. The issue, however, is not a lack of communication, but the difficulty of gaining perspective and understanding the point of plans, particularly at the outset.
Another difficulty is the widespread pessimism, particularly among developers. In the opinion of some, public involvement will merely bog down the proceedings rather than add anything to them. By imposing only the minimal required conditions on participation, it is possible to restrict access to some information or eliminate it entirely. This is an example of minimizing resistance to a plan, also known as gatekeeping. If individuals become aware of it, they may become suspicious, mistrustful, and resigned to the system; if citizens have to take on powerful interests in order to participate, they may become confused and frustrated. Even if the government intends to enable involvement, it will not go beyond the bare minimum if private-sector developers and planners do not see the benefit.
According to [10], when residents are brought into the planning process too late, the resulting involvement is often less than beneficial, marked by resentment and animosity towards the planning authority and the plans. When planning authorities associate involvement with unfavorable outcomes, their attitude against it can harden. This is further justification for starting the engagement process early. It will take more time and money, but it helps get everyone on the same page by laying the groundwork for a solid base of information, and if people are included in the process from the start, fewer resources may be needed overall.
Lack of Quality in Communication
According to Gossel [11], the term "communicate" means "to make public, share, or educate". To be effective, communication, whether verbal, nonverbal, or through a tangible medium, must be clear and brief, focused on the context, and respectful of the diversity of perspectives among listeners or readers. The significance of good communication is underlined by the fact that poor communication is a leading cause of project failure. The fundamentals of communication consist of a sender conveying information through some medium to a receiver, who then deciphers the information. It is important that sender and receiver share an emotional reference point in the tone of voice, in order to prevent misunderstandings. Two-way communication is characterized by a fluid exchange of roles between sender and recipient. One of the problems with communication in urban planning processes, however, is that information is frequently delivered without any effort to open a conversation, leaving people without a way to respond, voice their views, or even know that the message has been conveyed.
The second issue is that urban planning communication tends to be formal and bureaucratic.Statutes, rules, and a formal letter announcing something are all examples.In order to facilitate a shared knowledge of the underlying mechanics, formal communication tends to be strict and orderly.Citizens may be lost in the weeds of technicalities, nevertheless.However, in informal settings, it is the meaning and the connections between people that matter.It is very contextual, and the dynamic between individuals plays a big role.Both a lack of formal and informal communication may lead to confusion.
As a result of both uncertainty and ambiguity, individuals may lose faith in the planning processes.The public has to trust that the planning authorities are serious, thoughtful, and competent.One must also take into account the medium in which data is conveyed.Citizens are a diverse and nuanced demographic.Although it is almost impossible to communicate with everyone, it is crucial to choose appropriate channels if a significant portion of the population has to be reached.In today's highly digitized environment, there are many methods for getting the word out.Streaming and internet services are taking readers and listeners away from traditional media like newspapers (both in print and online) and simplified television and radio.Planners and municipalities can reach more people if they share information via the channels where those people already get their news and allow those people to react in the ways that are most natural to them.
VI. BREADTH AND ACTUALIZATION OF PARTICIPATION
The majority of today's residents who take part in planning procedures are either highly educated or resourceful, which presents a unique set of difficulties.Similarly, engagements are announced much too late.Both stem in part from insufficiently explained procedures, making it hard to understand how to make a meaningful contribution.Anyone with a stake in the plan's success should be consulted.It is important to explore the fundamentals of motivation and specific strategies for motivation since early engagement may be accomplished by encouraging people for involvement sooner.Addressing visual communication as a means of presenting the complexities of urban planning to residents is one of the finest ways to simplify the presentation of complex systems and processes.In addition, this will be elaborated upon further.
Motivation for Participation
Both internal and external factors may influence a person's level of motivation.To be intrinsically motivated, one must have an emotional connection to the task at hand, whether it via curiosity, delight, or a desire for a personal challenge.Extrinsic motivation is driven by factors outside of oneself, such as incentives, criticism, or public acclaim.To achieve the extrinsic aim, for instance, achieving the intrinsic goal may serve as a means.It is very uncommon for both to have a role in propelling an individual to success in a given endeavor, although some studies have shown that intrinsic motivation elements largely are more conducive for creativity.
Intrinsic motivation is typically the basis for public participation in urban planning.People care about the growth of their communities because of semantics, or their own set of beliefs.With a few notable exceptions, however, it seems that when extrinsic drive is combined with intrinsic motivation, the results are even more favourable for employee engagement.The first is if you are highly motivated on your own and are rewarded for going above and beyond.Second, 'information extrinsic motivators' such as positive reinforcement for a job well done and constructive criticism for areas of improvement tend to be more effective than others.The third factor is time.However, at the conception stage of a project, while looking for validation, extrinsic incentive might assist assess whether or not an idea is acceptable.
In most cases, it is the extrinsic incentive aspects that serve as the first spark that ignites the process.By considering extraneous variables, planners might expand their target demographic to include younger people and children.In certain cases, adding a little of fun to the procedure might help.The term "gamification" is the practice of incorporating game aspects into non-game settings to motivate and engage users in doing an unrelated activity.When executed well, it may help lighten the mood while yet maintaining the subject's seriousness.Giving people a way to explore planning procedures where prizes, challenges, or explanatory visuals play a large and balanced part might also assist untangle the knottiness of planning safely and interestingly for them.
People are motivated by gamification because it promotes mastery via problem solving, which in turn causes physiological responses.This may motivate individuals to do tasks that they normally would avoid.Various hormones and signal substances, such as dopamine, oxytocin, serotonin, and endorphins, are released during gaming, making players feel good about themselves because of the rewards they receive, the bonds they form with other players while working together to complete a task, the success they have achieved, and the pain they have overcome.Fun and gratifying entry points for individuals to become involved in urban planning include things like awards for participation, acknowledgement for their efforts, and even an upgrade to their "citizen status".
Information Visualization
According to Vázquez-Ingelmo, García-Peñalvo, and Therón, "MetaViz [12], information visualizations may relay a message by connecting secondary and primary data, allowing the recipient to understand without resorting to extensive research of the issue; this is preferable to reading the bureaucratic writing, which is often delivered in a tone of voice that people cannot relate to.The intangible may often be better understood via visual representations.The government's emphasis on digitizing to streamline operations means that user experience design for digital interfaces will become more crucial.Effectiveness, openness, and uniformity in planning procedures may all benefit from its use.Visualizations are useful because they make complex information readily available and understandable, illuminating not just the plan's goals but also its likely outcomes.Rather than just displaying images, effective visualizations should spark new ideas and provide the groundwork for learning.
Curiosity may be piqued and attention captured with visualization only by the use of interesting visuals.Collectively, this may help people think more clearly, leading to more well-formed viewpoints and more informed participation in debates and discussions, even among those who are not experts on the subject.The presentation of symbols plays a crucial role in the ways in which visualizations aid learning.Sensory symbols are universal and do not need learning; they are the symbols most often used to describe visuals.The right use of arbitrary symbols, like mathematical symbols, needs study and is easily forgotten.This is important to keep in mind while explaining urban planning to residents who are not professionals in the topic.
It is important to make visualizations more accessible via the use of sensory symbols.Structured connections may be shown using visual aids, while more nuanced and intricate reasoning can be explained in simple English.In many cases, a mix of the two is the most effective way to promote more in-depth learning.Careful planning is required before engaging in any kind of visualisation.Making something reliable and believable is an art form that needs training and experience.It has to be "ergonomic," or useful in a way that makes it easy to read.Regardless of whether it is a public or private ICT, all of Norway's systems must comply with universal design laws.While this provides some relief, it is nevertheless advised that experienced graphic designers, information architects, text composers, or anyone with experience creating visualizations be consulted for assistance.
By displaying data visually, visualizations may enhance the efficacy of communication in urban planning processes and increase its accessibility to the general public.By replacing bureaucratic language with illustrations of the process, it may make the structure more accessible to people and provide them the information and resources they need to learn more about it.Citizens should be given timely, contextual information, with options to go further if necessary.DESIGN Warnke,Bratan,and Wunderle [13] argue that participatory approaches are desirable because they provide voice to the public and ground the planning they underpin in reality.Therefore, it is instructive to examine what planners may learn from other domains of practice, such as human-centered design, where involvement is crucial, and which are more used to it as a key factor in problem-solving.In order to avoid relying only on the opinion of specialists and academics, human centered design (HDC) is a catch-all term for a variety of design methodologies that aim to incorporate end-users, for instance citizens, and other important stakeholders in the problem-solving process.Some examples of HDC methodologies that are also participatory design methods are ethnographic research, empathic design, co-design, contextual design, and the lead user approach.Some of these techniques are geared at coping with the here-and-now, while others are more futuristically inclined.Methods also differ in how much weight is given to the opinions of researchers and designers vs. those of the end users.
VII. HUMAN CENTERED
Similar to ethnographic research, which focuses on the cultural practices of a specific group of people, HCD involves listening to individuals in order to learn about their unspoken wants and needs.Second, contextual design, in which research is carried out in a genuine situation, is a great way to learn about people's actual requirements by seeing their actions.What individuals create may be studied at a deeper level, revealing user strategies and hidden requirements.Codesign is an example of such an approach.Co-design is a method of design with a strong emphasis on user agency achieved via user participation in the solution-generation process.Among these advantages include a sense of personal investment in the outcome, more productivity, and more original thought.
Co-design needs a significant amount of time and facilitation, thus it may not be required to get to that stage in every planning process.The early phases might also benefit from other HCD approaches like ethnographic research, contextual design, and emphatic design to learn about people, their culture, and their preferences, thereby adding a human dimension to analysis, particularly if the methodologies are triangulated.A more complete picture is painted as a result of the fact that evidence from several sources might corroborate one another.Even while experts will still make the ultimate decisions, the public will have more of a say in the process, and the resulting plan proposal will be grounded not just in technical analysis but also in the priorities and ideals of the community at large.
VIII. DISCUSSION
Planning, designing, and regulating the applications of space inside cities with an eye on the built environment, economic operations, and social effects of the city's many activities.A result of its multifaceted character, urban planning may be approached as either an academic discipline or as a technical profession needing political will and public involvement.In order to create additional open space ("greenfields areas") and revitalize already existing parts of the city, urban planning requires goal-setting, data collection and evaluation, forecast, design, innovative planning, and public engagement.In order to better map the present urban structure and forecast the impacts of changes, geographic information systems (GIS) have grown in popularity [14].In the latter part of the twentieth century, the term "sustainable development" began to be used interchangeably with "best possible outcome" when referring to planning objectives.Our Common Future (1987), a report commissioned by the United Nations, defines sustainable development as "development that fulfills the requirements of the contemporary without compromising the capacity of future generation to satisfy their own needs."While everyone agrees on the overarching aim, they may not always see eye to eye on the specifics when it comes to planning.
Planning for cities as we know them now began as a social movement in the late 19 th century in reaction to the instability of the industrial hub.Many of the leading minds of the period envisioned creating a utopian metropolis with no room for improvement, but they were also driven to plan by the pressing need to improve infrastructure like public health and safety, transportation, and public amenities.Modern planners make an effort to balance several objectives, such as those related to the economy, the environment, social equity, and aesthetics.The results of a strategic planning might take the shape of a formalized plan for a city or cosmopolitan area, a plan for a specific neighbourhood or project, or a set of policy options.Despite attempts to separate planning from politics, planners and their sponsors still need entrepreneurial spirit and political savvy to see plans through to fruition.Although planning is traditionally a government function, "public-private partnerships" increasingly include contributions from the business sector.
In the early 20 th century, urban planning evolved as a distinct academic field.The University of Liverpool in the UK launched the first academic planning program in 1909, while Harvard University in the USA founded the first such program in North America in 1924 [15].The curriculum varies greatly from institution to university, although most classes are at the graduate level.While some schools stick to the more conventional curriculum focused on architecture and urban planning, others, notably those awarding doctorates, place more of an emphasis on the social sciences.Because of its fluid nature, the theoretical heart of the field is better characterized by the problems it attempts to solve than by any one guiding paradigm or set of guidelines.Among the most prominent concerns are questions of who has the authority to act in the public interest, how that authority should be exercised, what the cultural and psychosocial characteristics of the ideal city should be, whether or not change can be achieved in accordance with conscientiously determined goals, how far consensus on goals can be achieved through communication, who the city's decision-makers should be (its citizens, state officials, or private investors), and whether or not quantitative methods are the best way to go about achieving change (discussed below).Courses on environmental policy, transportation planning, and infrastructure and social economic growth are typical in urban planning degree courses.
The development of urban planning Early history
Ancient city centers all across the world have been excavated, revealing artifacts that witness to well-planned urban layouts.These centers include Central and South America, the Mediterranean, Asia Minor, Egypt, India, and China.The building of rectilinear and, sometimes, radial street patterns, the division of a city into several functional sections, the establishment of imposing central sites for palaces, monastery, and civic institutions, and the execution of sophisticated security, water system, and sewage systems are examples of early efforts at structured urban development.Most of it may be found in colonial-era minor cities, which were constructed rapidly.It was not uncommon for ancient nations' capital cities to expand significantly before centralized governments were established and equipped to impose order.In Europe, city-building slowed to a trickle for centuries during the Middle Ages.Over time, towns developed into political, commercial, cultural, and religious hubs.Overcrowding, a lack of fresh air and light, and appalling sanitary conditions resulted as cities were more hemmed in by walls and fortresses to contain their exploding populations.As is the case in various modern environments in the developing countries, some areas of the metropolis were confined to particular nations, classes, or trades.
Cities throughout the middle Ages and the Renaissance typically adopted the shape of a village, expanding out along a highway or a junction in a haphazard, circular pattern rather than the more typical rectangular layout seen in newer towns.Most European cities didn't have paved streets until the 12th century (1184 in Paris, 1300 in Lübeck and 1235 in Florence), and even then they were mostly just walkways used more for communication than transportation.A city's walls would be stretched to accommodate its growing population, although at the time, very few cities were longer than a mile.Cities such as Lübeck moved to new locations as their populations grew, and numerous new cities sprung up, usually within a day's walk of one another.Cities' populations varied widely, from a few hundred to as high as 40,000 (London in the 14 th century; the city's population peaked at about 80,000 just before the Black Death struck).Cities such as Venice and Paris stood out, with populations of over 100 thousand.
During the Renaissance, Europeans once again made concerted efforts to organize urban space.The primary goal of these initiatives was frequently the exaltation of a state or ruler, despite the fact that they did help with circulation and military defense.Many magnificent cities were planned and constructed during the 16th and 18th centuries.The end effect may have inspired and thrilled the populace, but it did nothing to improve their health, their standard of living, or the efficacy of production, distribution, and marketing.
The ideas of European absolutism about planning were only partially adopted by the New World.This shift was highlighted by Pierre L'Enfant's ambitious design for Washington, D.C. ( 1791), and by subsequent City Beautiful initiatives that prioritized the aesthetics of public building placement at the expense of the practicality of residential, commercial, and industrial growth [16].The strict grid design that William Penn created for Philadelphia, however, had a much more significant effect on the growth of urban planning in the U.S. (1682).Because it was the most straightforward strategy for partitioning surveyed land, this design was taken west by the pioneers.Despite its disregard for terrain, it helped build land markets by creating uniformly proportioned lots that could be purchased and sold without physically seeing the property beforehand.
The notion of a town square or other centrally positioned public space was fundamental to the design of many cities throughout the globe.In contrast, the plans' recommendations for residential construction were somewhat different from one another.The New England town's commons served as the centerpiece of community life and was home to the town's conference center, tavern, blacksmith, and stores; this design was later imitated by other American towns.The detached single-family home, common in today's big cities, was also a tradition that had its beginnings in this New England village.In European city layouts, the plaza, place, or square in the center had a similar function.However, the attached home predominated in European domestic architecture, in contrast to the detached house that was typical of American residential growth; while in other parts of the globe, markets or bazaars, rather than open spaces, served as the focal point of urban life.The Mediterranean was known for its courtyard-style homes, while many African and Asian communities were made up of enclaves of modest dwellings separated from the street by fences.
The Era of Industrialization
Rapid population growth, unrestrained economic activity, high speculative gains, and political failings in regulating the unexpected physical repercussions of development characterized the mid-to mid-19 th century, when industrialisation thrived in both both the United States and Europe.Massive, ever-expanding cities sprung up during this time, showcasing the era's stark socioeconomic disparity.City planning was an important part of the Progressive movement, which arose in response to the pervasive corruption and abuse of power at the period.The response to the slums, congestion, disorganization, ugliness, and danger of illness was to call for an increase in cleanliness.Engineering advancements in water supply and sewerage is crucial to the continued rise of urban populations, significantly improved public health.The first major housing laws were changed in the latter part of the century.Minimum requirements for housing quality were established by early regulatory legislation (such the Tenement House Act of 1879 in New York and Public Health Act of 1848 in Britain).However, implementation was delayed since neither government nor the low incomes of slum residents provided incentives for landlords to renovate their structures.Nonetheless, advancements were achieved in the field of housing as new structures were built and new legislation continued to boost standards, often in response to the exposing of inspectors and activists such as Charles Booth in England and Jacob Riis in the U.S.
The Progressive Era, which lasted until the early 20 th century, saw the development of initiatives to enhance the quality of life in urban areas in response to the growing demand for recreation.Parks were created so that people may get some fresh air and enjoy a peaceful environment to play or unwind in a healthy way.Playgrounds were developed out of congested areas, and sports and recreation centers were constructed for both adults and children as work hours were reduced in the decades that followed [17].Those who advocated for the establishment of public parks reasoned that giving the working classes access to green spaces would have a civilizing influence on a population that was otherwise confined to substandard living conditions and hazardous environments.In the 1850s, architects Calvert Vaux and Frederick Law Olmsted had the idea for what would become Central Park in New York City.It helped by separating foot traffic from car traffic, by creating a picturesque setting in the middle of the city, and by proving that the addition of parks could significantly raise property prices in the region.For more on this, see "landscape design." The European continent has a long history of valuing urban beauty, as seen by the imperial legacy of courts and palaces, as well as the large plazas and grand monuments of the state and church.Large, symmetrical layouts that featured straight arterial cobblestone streets, favorable vistas, and a systematic grid of squares and highways propelled Georges-Eugène, Baron Haussmann to notoriety as the most prominent urban designer of his day during the Second Empire (1852-1870) in Paris.The new urban planning concept was quickly replicated throughout the rest of mainland Europe.Haussmann's work, however, was about more than just making the city seem nicer; he also removed many of the obstacles to trade that had existed in medieval Paris, making it easier to move products and soldiers about the city quickly and efficiently.His plans called for the removal of low-income residents from central locations, the destruction of dilapidated tenements to make way for more upscale apartment buildings, and the creation of transit corridors and commercial space that cut through and separated neighborhoods.Throughout the majority of the 20 th century, Haussmann's methods were used as a template for urban rehabilitation projects in United States and Europe, and their influence eventually reached most of the developing world.
American designer Daniel Burnham created the standards for the City Beautiful movement, and the World's Columbian Exhibition of 1893 in Chicago, which was the movement's finest accomplishment, reflected those standards.The exposition's architectural design created a pattern that was copied by countless other communities.Civic districts and boulevards around the country were thus built in the City Beautiful style, which is characterized by large malls and heroically positioned state buildings in Greco-Roman architecture, as a juxtaposition to and revolt against the surroundings disorder and ugliness.Spreading the City Beautiful paradigm here in the United States proved challenging due to the concept's low potential for increasing business profitability and the government's much weaker position here.
Haussmann's method had a greater impact on the architecture of American civic centers and residential neighborhoods in Europe than the utopian idea of the garden city, which was first introduced by Pease and Severens [18].With its lowrise residences on backstreets and cul-de-sacs, its separation of business and residential districts, and its ample open space brimming with flora, Howard's garden city had a form that was basically suburban in nature.Howard advocated for a "cooperative commonwealth" in which residents would split any increases in property value, public spaces would be owned collectively, and commercial and industrial zones would be compactly located near neighborhoods.The residential style established in the two additional towns built during Howard's life (Letchworth & Welwyn Garden City) was maintained by his successors, who, although rejecting his socialist ideas, modeled their own projects after the green city's winding avenues and verdant parks.
More than anything else, shifting transportation patterns have influenced the design of modern urban centers.As a result of the shift from human to industrial transportation, cities quickly expanded.Because of the rapid movement of commodities from the site of production to the market, workers were not restricted to living close to their workplaces.However, traffic congestion caused by cars and buses spread quickly across the city's historic districts.They emphasized the need of creating new forms of well-organized traffic flow by threatening to suffocate it.Towards the start of the twentieth century, when subway systems were being constructed in New York, London, and Paris, transportation networks inevitably became a focus of planning.In order to accommodate the increase in traffic, communities have invested vast sums of money into constructing and widening roads.
Multiple local governments established planning divisions during the initial three decades of the twenty-first century.A number of significant events occurred in 1909 that formalized urban planning as a modern governmental duty.These included the approval of Britain's first town-planning law, the hosting of the first national convention on town planning in the U. S., the publication of Burnham's plan for Chicago, and the formation of Chicago's Plan Commission (Nevertheless, in 1907, Hartford, Connecticut established the first official U.S. planning organization).Planning administration and legislation were also created at this period in European nations like Germany and Sweden.
European ideas on urban planning were imported by colonial empires and implemented in developing-world metropolises.As a consequence, it was not uncommon for a brand new city to spring up next to an older, uncontrolled community, both of which suffered from the same problems that plagued medieval European cities despite having been designed according to Western ideas of beauty and division of purposes.The Indian capital city of New Delhi is a prime example of this trend in urbanization.It was designed by British architects Herbert Baker and Edwin Lutyens and constructed right next to the maze of Old Delhi's streets [19].While the new city provided conveniences and amenities better suited to modern living, the old city afforded its residents a feeling of community, functionality and historical continuity.Salisbury, Southern Rhodesia and Nairobi, Kenya are only two examples of capital cities in British-ruled Africa that were planned specifically to meet the needs of their white colonial masters.French planners also inserted spacious boulevards and European-style homes into colonial homes, despite the fact that the ornamental elements launched by France in its colonial capital signified a rather different aesthetic sense.
Urban Forms Sub-division and Zoning Controls
During the first half of the twentieth century, Western industrial towns quickly developed, resulting in the fast encroachment of industries into residential districts, the crowding in of tenements amid tiny dwellings, and the eventual overshadowing of smaller structures by skyscrapers.In order to keep property values stable and to accomplish efficiency and economy in the planning and operation of the city, authorities recognized a necessity for sort out contradictory activities, set some constraints upon building height, and protect existing districts from despoilment.The projected patterns of traffic, population size, and infrastructural developments were all laid out in detail.Zoning regulations, first implemented in the early twentieth century, were the major mechanism for achieving these objectives.Importantly for the purposes of urban planning, zoning regulations separated various uses of urban space by establishing maximums for building width and height within delineated regions (zones).
As a result, the city's residential, industrial, and commercial districts were separated.In addition to shielding residents from potentially unpleasant nearby uses, zoning also exacerbated traffic and congestion and limited activity in some areas of cities to specific times of the day.Disagreements arose as a result of some zoning regulations.Zoning laws in the United States have been challenged in court because of their need of big single-family houses on vast lots, which makes it difficult to provide affordable housing for low-income families.As a result of judicial overturns, certain states have implemented laws to address the problems caused by exclusionary zoning.The original layout of undeveloped property in the United States is now subject to public regulation thanks to the rise of subdivision rules, which developed in tandem with zoning.These rules dictated how future buildings should look and made sure that new roads fit in with the general layout of the city.Depending on the local zoning laws, developers may be responsible for providing the land for public amenities like roadways, playgrounds, and school grounds, as well as footing the bill for their construction.
New towns
Many European nations, including the Soviet Union, Germany, the Netherlands, and France, built new towns (whole new communities outside of city centers) as postwar government projects.Governments, worried about what they saw as too much congestion inside metropolitan areas, built these new towns to absorb the population expansion from the suburbs and move it into planned communities.Some British cities, like Milton Keynes, were able to attract both industry and people inside low-rise conurbations, although this was unusual outside of the Soviet Union.The Swedish government has successfully built high-rise, mixed-income neighborhoods for people of varying economic levels.The Tapiola low-rise ensemble in metropolitan Helsinki, Finland, incorporated many of Howard's groundbreaking concepts and design principles.
For the most part, however, new town development in France, Spain, Belgium and Italy resulted in enormous, uninviting high-rise residential constructions for the working and middle class on the urban periphery.Seaside, California, Irvine, Maryland, Columbia, Virginia, and Reston are all examples of post-war American new towns that depended significantly on private initiative.However, some tiny privately planned suburbs existed before these initiatives were undertaken.These included Riverside, Illinois, a planned city west of Chicago created by Frederick Law Olmsted in 1868-1869.Although they are widely dispersed, some of the world's best examples of modern, well-planned towns may be found in unexpected regions like India, South America and the Middle East.
Large, highly populated, and sometimes gridlocked megacities emerged throughout Asia after World War II as a result of the region's newfound ability to sustain an industrial economy.Numerous Asian governments met the problems of rapid expansion by launching massive construction projects, such as skyscraping office buildings, shopping malls, apartments and restaurants, and new airports.Shanghai's Pudong New Area, spanning both sides of the Huangpu River from the city's historic center, was developed by the Chinese government in a little over a decade.However, many developing countries are still preoccupied with political and economic matters, and as a result, they have made little advance in supporting sustainable organizational strategies that may assist them avoid the unclean conditions that afflicted Urban centers in the late nineteenth century.
Participatory Planning
Participatory planning approaches face a number of obstacles, but they may provide significant advantages if they are implemented properly.The intricacy of modern planning systems leaves the average person confused.It is tough to make a positive contribution if one does not know what to do.Positive community development, particularly at the local level, may result from planners gaining insight into not just the technical needs, but also the quantitative requirements rooted in residents via participation in the planning process.
At the outset, we need to ensure that the general public has a deeper familiarity with and appreciation for the planning process.To do this, the planning authorities must maintain open lines of communication.When people do not know what is going on, there is confusion and maybe even distrust of the procedures.People need to know what is in it for them, how much sway their input will provide them, and how far the plan will go.The purpose of citizen engagement is not to displace professionals but rather to strengthen the evidence basis upon which they may make judgments.Opposition to change and conflicts are likely to reduce if people' recommendations are respectfully accepted.Planners need to be receptive to feedback and suggestions at every stage.The secret to successful group issue solving may lie in being open, honest, and cooperative.
Information must be readily available and presented in a manner that does not presume prior understanding of complex planning procedures on the part of the general public.Citizens who take part nowadays tend to be self-reliant or highly educated, which suggests that average citizens are not yet at ease with participation.Technical jargons have their place, but they should be left at the office and forgotten about while conversing with friends.However, the message must be conveyed clearly and concisely, preferably supported by illustrative visuals that map out the process, improve comprehension, and pique individuals' interest.Last but not least, the knowledge must be disseminated through citizenfriendly channels of communication and made available for discussion between residents and planners [20].
Further, public participation has to be encouraged at an earlier stage, ideally before a formal plan proposal is submitted.In order to do this, planning processes may make use of the tools made available by human-centered design approaches, which are developed from the design process itself.The use of gamification is another approach that, by offering extrinsic motivating elements, may help bring about a more rapid onset of participation.This may be especially helpful for local planning, where it seems to be good to support bottom-up procedures.There will always be a debate about whether or not the recommended attempts to develop participatory processes are worthwhile in light of the resources spent on it and the gains made.There are signs that it is worthwhile, but further research is needed before drawing any firm conclusions.
IX. CONCLUSION Engaging citizens early in the process (while plans are still dynamic and open to change) is optimal since locals know the place best and may give useful insights and information.It seems that effective communication is crucial for this to occur.Since no two planning processes are the same, it is unrealistic to expect them to adhere strictly to a single blueprint.Instead, planners may benefit from feedback on which tools and techniques work best for them.It is essential to get insight from previous procedures, familiarize oneself with the local populace, and adjust procedures appropriately.This might help find strategies to solve the problem that do not force planners to waste money on unneeded resources but instead allow them to reorganize their funds so that they can better serve the process.If they invest more time and energy into planning from the outset, they may save themselves time and energy in the long run when they are not forced to defend their ideas so vigorously.Citizens feel more invested in initiatives when they are able to contribute to them.By including the public in planning for their neighborhoods, crucial details will not be missed and can be discussed openly from the start, which should lead to better results for everyone involved. | 2023-03-08T16:20:35.888Z | 2022-07-30T00:00:00.000 | {
"year": 2022,
"sha1": "6d381d4e546e20c769f6c0a28011fba5dc518e52",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.53759/aist/978-9914-9946-0-5_5",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "bc51e5a6d2ba9f0d22cd3dfa587a98472fb8fb4e",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": []
} |
119208723 | pes2o/s2orc | v3-fos-license | Algebraic Geometry Approach in Gravity Theory and New Relations between the Parameters in Type I Low-Energy String Theory Action in Theories with Extra Dimensions
On the base of the distinction between covariant and contravariant metric tensor components, a new (multivariable) cubic algebraic equation for reparametrization invariance of the gravitational Lagrangian has been derived and parametrized with complicated non - elliptic functions, depending on the (elliptic) Weierstrass function and its derivative. This is different from standard algebraic geometry, where only two-dimensional cubic equations are parametrized with elliptic functions and not multivariable ones. Physical applications of the approach have been considered in reference to theories with extra dimensions. The s.c."length function"l(x) has been introduced and found as a solution of quasilinear differential equations in partial derivatives for two different cases of"compactification + rescaling"and"rescaling + compactification". New physically important relations (inequalities) between the parameters in the action are established, which cannot be derived in the case $l=1$ of the standard gravitational theory, but should be fulfilled also for that case.
Introduction
Inhomogeneous cosmological models have been intensively studied in the past in reference to colliding gravitational waves [1] or singularity structure and generalizations of the Bondi -Tolman and Eardley-Liang-Sachs metrics [2,3]. In these models the inhomogeneous metric is called the Szafron-Szekeres metric [4][5][6][7]. In [7], after an integration of one of the components -G 0 1 of the Einstein's equations, a solution in terms of an elliptic function is obtained. This is important since valuable cosmological characteristics for observational cosmology such as the Hubble's constant H(t) = . R(t) R(t) and the deceleration parameter q = − ..
R(t)R(t)
. R 2 (t) may be expressed in terms of the Jacobi's theta function and of the Weierstrass elliptic function respectively [8]. Also in [7], the expression for the metric in the Szafron-Szekeres approach has been obtained in terms of the Weierstrass elliptic function after reducing the component G 0 1 of the Einstein's equations [7,8] to the nonlinear differential equation ∂Φ ∂t 2 = −K(z) + 2M (z)Φ −1 + 1 3 ΛΦ 2 . Then by introducing some notations this equation can be brought to the two-dimensional cubic algebraic equation y 2 = 4x 3 − g 2 x − g 3 , which according to the standard algebraic geometry prescription (see [9] for a contemporary introduction into algebraic geometry) can be parametrized as where ρ(z) is the well-known Weierstrass elliptic function and the summation is over the poles in the complex plane. According to the standard definition of an elliptic curve [9], such a parametrization is possible if the functions g 2 and g 3 are equal to the s.c. "Eisenstein series" (definite complex numbers) g 2 = 60 ω⊂Γ 1 ω 4 ; g 3 = 140 ω⊂Γ 1 ω 6 . The main goal of the present paper and of the preceeding ones [10,11] is to propose a new algebraic geometry approach for finding new solutions of the Einstein's equations by representing them in an algebraic form. The approach is based essentially on the s.c. gravitational theory with covariant and contravariant metrics and connections (GTCCMC) [12], which makes a clear distinction between covariant g ij and contravariant metric tensor components g is . This means that g is should not be considered to be the inverse ones to the covariant components g ij , consequently g is g im ≡ f s m (x). In the special case when f s m (x) = l(x)δ s m , important new relations in the form of inequalities will be found between the parameters in the type I low energy string theory action -the string coupling constant λ, the string scale m s (which in these theories is identified with m grav. ) and the electromagnetic coupling constant g 4 .
New Algebraic Geometry Approach in Gravity Theory. Embedded Sequence of Cubic Algebraic Equations.
In the framework of the GTCCMC and the distinction between covariant and contravariant metric tensor components, we shall assume that the contravariant metric tensor can be represented in the form of the factorized product g ij = dX i dX j , where the differentials dX i remain in the tangent space T X of the defined on the initially given manifold generalized coordinates The existence of different from g ij contravariant metric tensor components g ij means that another connection Γ s kl ≡ g is Γ i;kl = g is g im Γ m kl = 1 2 g is (g ik,l + g il,k − g kl,i ), not consistent with the initial metric g ij , can be introduced. By substituting Γ s kl in the expression for the "tilda" Ricci tensor R ij and requiring the equality of the "tilda" scalar curvature R with the usual one R, i.e. R = R (assuming also that R ij = R ij ), one can obtain the s.c. "cubic algebraic equation for reparametrization invariance of the gravitational Lagrangian" [10,11] In the same way, assuming the contravariant metric tensor components to be equal to the "tilda" ones, the Einstein's equations in vacuum were derived in the general case for arbitrary g ij , when the assumption g ij = dX i dX j is no longer implemented [11]. Now we shall briefly explain the essence of the s.c. method of "embedded sequence of cubic algebraic equations", proposed for the first time in the paper [11] and enabling to find solutions of multivariable cubic equations in terms of (non-elliptic) functions, depending on the Weierstrass function and its derivative. The method is based on representing (the three-dimensional case is taken as a model example) the initial cubic algebraic equation (2.1) as a cubic equation with respect to the variable dX 3 only and applying with respect to it the linear -fractional transformation. Thus a cubic algebraic equation with respect to the two-dimensional algebraic variety of the (remaining) variables dX 1 and dX 2 is derived (further α, β, α dX α + 2p a3 c3 3 Γ r 33 g 3r = 0. From the last equation the solutions of the initially given multivariable equation (2.1) (called "the embedding equation of the preceeding one) are found to be [11] The found solutions do not represent elliptic functions, since they cannot be represented in the form dX 1 = K 1 (ρ) + ρ ′ (z)K 2 (ρ), where ρ is the Weierstrass elliptic function (1.1). Also, since the solution dX 2 contains in itself dX 1 , it is called "the embedding solution" of dX 1 [11]. Similarly, dX 3 is the embedding solution of dX 1 and dX 2 .
The standard approach in type I string theory in ten dimensions is based on the low -energy action [14,15,16] where L is the expression inside the bracket. After compactification to 4 dimensions on a manifold of volume V 6 , one can identify the resulting coefficients in front of the R and 1 4 F 2 terms with M 2 (4) and 1 (2π) 7 . The essence of the proposed new approach in the paper [13] is that the operation of compactification is "supplemented" by the additional operation of "rescaling" of the contravariant metric tensor components in the sense, clarified in Section 2. This means that since the contraction of the covariant metric tensor g ij with the contravariant one g jk = dX j dX k gives exactly (when i = k) the length interval l = ds 2 = g ij dX j dX i , then naturally for i = k the contraction will give a (mixed) tensor l k i = g ij dX j dX k , which can be interpreted as a "tensor" length scale for the different directions. Further the case of general contravariant tensor components g is had been assumed when g is g im ≡ f s m (x) := l s m := lδ s m , from where g is and the "rescaled" scalar quantities R and F 2 can easily be expressed [13]. In the following one can discern two cases: 1st case -"compactification + rescaling". One starts from the "unrescaled" ten -dimensional action (3.1), then performs a compactification to a five -dimensional manifold and afterwards a transition to the usual "rescaled" scalar quantities R and F 2 . Then it is required that the "unrescaled" ten -dimensional effective action (3.1) is equivalent to the five -dimensional effective action after compactification, but in terms of the rescaled quantities R and F 2 in the right-hand side (R. H. S.) of (3.1). This can be expressed as follows Note also that since R (5) = R (4) ( R (5) means the curvature of the 5D−spacetime), the compactification is in fact to four dimensions and consequently the integration in the R. H. S. of (3.2) is over a 4D−volume.
Expressing the tilda (rescaled) quantities R and F 2 in the right -hand side of (3.2) through the unrescaled ones R and F 2 by means of the relation g is = l g is and identifying the expressions in front of the "unrescaled" scalar quantities F 2 and R in both sides of (3.2), one obtains an algebraic relation and a quasilinear differential equation in partial derivatives with respect to the length function l(x) [13]. 2nd case -"rescaling + compactification". This case is just the opposite to the previous one in the sense that the "rescaled" components become unrescaled ones and vice versa. In an analogous way, an algebraic relation can be obtained again after comparing the coefficient functions [13], from where after introducing the notation β ≡ , i.e. β ≪ 1, one can express the length scale l(x) as l 2 = In the last expression P denotes the term with the second partial derivatives of the metric tensor, i.e P := (g AD,BC + g BC,AD − g AC,BD − g BD,AC ). For l = 1 (when g is = g is and g is g im ≡ δ s m ), as should be expected, we can obtain the usual relation M 2 (4) = (2π) 7 may be attributed to deviations from the usual scale l = 1 for the standard gravity theory.
3rd case -simultaneous fulfillment of "rescaling + compactification" and "compactification + rescaling". This means that it does not matter in what sequence the two operations are performed, i.e. the process of compactification is accompanied by rescaling. From the simultaneous fulfillment of the two differential equations one obtains a cubic algebraic equation with respect to l 2 , from where under the assumption about the positivity of the square of the length function l(x) (consequently -positive roots of the cubic equation), the following two inequalities (written for compactness as one -the upper and lower signs in ≷ and ± mean two separate cases), relating the parameters in the low energy type I string theory action are obtained where a ≡ a 2 − [g AC g BD P (2π) 7 g 4 | 2009-11-03T20:10:59.000Z | 2009-11-03T00:00:00.000 | {
"year": 2009,
"sha1": "13e6da4901f43138218ccc8ad0f87ccdce05d87d",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0911.0659",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "13e6da4901f43138218ccc8ad0f87ccdce05d87d",
"s2fieldsofstudy": [
"Physics",
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
55157968 | pes2o/s2orc | v3-fos-license | Nano-and Submicron Particles Emission during Gas Tungsten Arc Welding ( GTAW ) of Steel : Differences between Automatic and Manual Process
Welding operations originate micro and nanoparticles represented by metal oxides, unoxidized metals and compounds, such as fluorides and chlorides. Welding fumes exposure is associated to lung cancer, chronic bronchitis, asthma and early Parkinson disease. Ultrafine (nanosized) particles in welding fumes are considered a risk factor in terms of occupational exposure: when inhaled, they are efficiently deposited in all regions of the respiratory tract and can translocate to other target organs as brain and systemic circulation. The study of nanoparticles emissions during welding can help to understand effects related also to new-engineered nanoparticles exposure. In our study two real sources of Gas Tungsten Arc Welding (GTAW) fume particles, collected in an automotive plant, were characterized by means of a transmission electron microscope coupled with an energy-dispersive X-ray analytical system (TEM-EDS) and compared to a zone of the plant far from the two sources used as a reference background. The particles sampled during the automatic GTAW process were mainly constituted by iron/manganese oxide with a mean diameter of 47 nm, followed by smaller iron oxide nanoparticles (21 nm). During the manual welding process mostly aggregates with larger diameters that showed an X-ray spectrum characteristic of different kinds of silicates were found. Iron and cobalt oxides nanoparticles were present only inside bigger aggregates mainly composed of aluminum and titanium oxides. This study confirms that welders are exposed to nanoand submicron particles and that iron/manganese oxide nanoparticles are the most representative in automatic process, despite the low concentration of manganese in welding wires (1–2%). Our results help to understand hazard related to welding fumes exposure and possible effects of nanoparticles on lung, brain and systemic circulation.
INTRODUCTION
In the working environment, several sources of metal nanoparticles with relevant toxicological effects can be found.Among these sources, the welding fumes are probably the most interesting one both from a chemical and toxicological point of view (Berlinger et al., 2008).The high temperatures used in welding operations originate fine and Gas Tungsten Arc Welding (GTAW) is the most important technique in terms of occupational exposure because it causes the formation of smaller nanoparticles compared with other welding techniques (Lehnert et al., 2012;Brand et al., 2013;Miettinen et al., 2016).In addition, GTAW welding has become one of the most popular welding methods in various industrial settings as the automotive one (Buonanno et al., 2011) because, as reported by Kou (2003), this technique grants the operator greater control over the weld than other welding processes, allowing for very clean, strong and higher quality welds.
According to the mechanism of their formation, welding fume particles can be divided in three categories: (i) ultrafine particles (diameter < 0.1 µm), also called primary particles, formed by condensation from the gas phase; (ii) particles (diameter between 0.1 and 1 µm) which are mainly agglomerates, that are those particles made up of primary particles that adhere together because of electrostatic or van der Waals forces, and aggregates that are clumps of primary particles that have fused together; and (iii) coarse fume particles (diameter > 1 µm) formed by mechanical forces (Jenkins and Eagar, 2005).
A number of health problems are attributed to occupational exposure to welding fumes such as metal fume fever, chronic bronchitis, asthma, lung cancer and manganism (Antonini, 2003).Ultrafine particles in welding fumes are considered as a risk factor because they are characterized by a large surface-to-volume ratio.In contrast to larger-sized particles, ultrafine particles, when inhaled, are efficiently deposited in all regions of the respiratory tract and, evading specific defense mechanisms, they can translocate out of the respiratory tract and reach blood circulation and other internal organ or central nervous system via nose route causing an increase in cardiovascular diseases and neurological effects (Oberdörster et al., 2005).
Characterizing dimensions, shape and composition of welding fume particles is important to better understand their toxicity and helps to clarify the possible effects of engineered nanoparticles.The comparison with known exposures, such as to welding fumes, permits to bridge effects from traditional to new "nano" exposure.
A transmission electron microscope coupled with an energy-dispersive X-ray analytical system (TEM-EDS) is an effective tool for the analysis of aerosol particles.It can provide detailed morphochemical characterization (size distributions, shapes, microchemical data and structural information) of welding fume particles.
Even if various methods have been proposed for direct and indirect particle collection on TEM grids, such as thermophoretic precipitation (Bang et al., 2003), electrostatic precipitation (Miller et al., 2010), and deposition onto a TEM grid of a dissolved part of the filter used to collect the particles (Moroni and Viti, 2009), the most used direct method for sampling welding fume particles on TEM grids involves impactors (cascade and electrical low pressure impactors).
The particle size distribution, morphology and chemical composition of the welding fume particles seem to be related to the welding process typology and the welding alloy (Zimmer and Biswas, 2001).An interesting study of Lehnert et al. (2012) found that GTAW generated smaller particles than GMAW, FCAW, and SMAW which generated mainly agglomerates with higher dimensions.The highest mass concentrations were found in FCAW, followed by GMAW and SMAW, whereas mass concentrations for GTAW were frequently not determinable because too low to be detected by weighing filter samples.Although GTAW appeared with the lowest concentrations in terms of particle mass, larger numbers of small-sized particles, including ultrafine particles, were observed.Brand et al. (2013) reported that GTAW generates a majority of particles at the nanoscale.
Since few studies focus on GTAW fumes particles, even though they are potentially the most hazardous ones in terms of occupational exposure, in this study two different real sources of GTAW fumes particles, collected in an automotive plant, were characterized by means of TEM-EDS and compared to a zone of the plant far from the two sources used as a reference background.
Fume Collection Procedure
The welding fumes produced during the GTAW process were sampled in a factory operating in the automotive sector.Three indoor sampling zones were set (see Fig. 1): (i) in proximity of the automatic welder arm (A-GTAW); (ii) next to the operator performing manual welding (M-GTAW); (iii) in a zone of the factory approximately 500 m far from the exposure source, near the offices, used as a reference background (BKD).
In each sampling zone, at a distance of 0.5 m to the welding arc for A-GTAW and M-GTAW zones, three air sampler pumps (GilAir Plus, Sensidyne) were set to a flow of 2.2 L min -1 and connected each one to a personal SKC Plastic Cyclone sampler (model no.225-69) for respirable dust (50% cut point for 4 µm aerodynamic diameter if set to a flow rate of 2.2 L min -1 ) containing two TEM supports (200 mesh copper grids coated with a 20 nm carbon support film, Media System Lab, Italy) to collect particles.TEM grids were placed between the filter support grid and a Mixed Cellulose Ester (MCE) filter membrane (Whatman GmbH, Germany) with 5 µm pore size for two main reasons: the first was to reduce the mobility of TEM grids due to the air flow, the second was to prevent the deposition of particles with size larger than 5 µm that could be collected by the cyclone, even if with an efficiency less than 50%, saturating the TEM grids.Therefore, in each sampling zone, six TEM grids were located.The sampling procedure lasted 20 minutes in each sampling zone.
Chemical Analysis of Welding Wires
The chemical composition of mild steel welding wires used during the automatic and manual operations was specified by the producer in the datasheet (see Table 1).
Inductively coupled plasma atomic emission spectroscopy (ICP-AES) was used to verify the chemical composition of wires.The instrument used for analysis was a PerkinElmer Optima 8000 equipped with autosampler S10.
Three independent pieces of each type of welding wire were taken from the batch used during the sampling of fumes.
A mixture of 1:3 (v/v) HNO3:HCl acids and HF was added to about 40 mg (5 mm) of each welding wire sample, heated until complete dissolution, and then diluted to 25 mL with MilliQ water. The obtained solutions were analyzed for total iron, manganese, silicon, chromium and copper concentration. Analyses were conducted using a calibration curve, obtained by dilution (range: 0-10 mg L⁻¹) of iron, manganese, silicon, chromium and copper standard solutions for ICP-AES analyses. The limit of detection (LOD) at the operative wavelength for each element was: 0.020 mg L⁻¹ for Fe at 238.204 nm, 0.020 mg L⁻¹ for Mn at 257.610 nm, 0.050 mg L⁻¹ for Si at 251.611 nm, 0.050 mg L⁻¹ for Cr at 267.716 nm, and 0.020 mg L⁻¹ for Cu at 327.393 nm. The precision of the measurements, expressed as relative standard deviation (RSD%), was always less than 5%.
Table 1. Chemical composition of welding wires used during the automatic and manual operations, as specified in the datasheet and verified by means of ICP-AES analysis.
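To illustrate the quantification step, the sketch below (Python) fits a linear calibration curve to hypothetical standard readings and converts a measured solution concentration back to the element's weight fraction in the wire, using the 25 mL final volume and ~40 mg sample mass stated above; the intensity values are invented for illustration:

import numpy as np

# Hypothetical calibration standards for one element (e.g., Mn)
std_conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])   # mg L-1
std_signal = np.array([0.0, 1.9, 4.1, 6.0, 8.2, 9.9])  # instrument intensity (a.u.)

slope, intercept = np.polyfit(std_conc, std_signal, 1)  # linear calibration

def concentration(signal):
    """Invert the calibration line: intensity -> mg L-1."""
    return (signal - intercept) / slope

sample_signal = 5.0                # hypothetical reading for a wire solution
c = concentration(sample_signal)   # mg L-1 in the 25 mL solution
mass_element_mg = c * 0.025        # mg of element contained in 25 mL
wire_mass_mg = 40.0                # dissolved wire mass
wt_percent = 100 * mass_element_mg / wire_mass_mg
print(f"{c:.2f} mg/L -> {wt_percent:.2f} wt% in the wire")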
TEM-EDS Investigations
The morphological characteristics, dimensions, crystallinity and chemical composition of the particles were acquired using a transmission electron microscope (TEM Philips CM12) working at 120 kV, equipped with a LaB6 cathode, a double tilt holder, and a 622 SC CCD YAG Gatan camera, coupled with an energy dispersive X-ray analyzer (EDS, EDAX Genesis 2000, Si(Li) detector, TEM QUANT software). The chemical data were processed with the TEM QUANT software system using default K factors.
Two TEM grids from each of the three personal cyclone samplers placed in each of the three sampling zones, for a total of eighteen grids, were investigated by means of TEM.
During observation of the TEM grids, the dimensions of individual particles and aggregation states (agglomerates and aggregates) were recorded, for a total of more than 750 data points. As regards the identification of the aggregation state, in agglomerates the boundaries between the primary particles forming the agglomerate are clearly distinguishable in TEM images, while aggregates are clumps of primary particles that have fused together, so their outlines are not clearly defined. TEM images were recorded only for the most significant observations and, in those cases, the dimensions of particles, agglomerates and aggregates were measured by means of a Vernier scale (without an image processing tool). At the same time, over 500 EDS chemical analyses were performed. Compositional data and morphological shapes were used to identify the compound type. In addition, the obtained EDS spectra were compared with an EDS/SEM database previously built using samples characterized in detail by other techniques (Fornero et al., 2009).
Data analysis was performed using Excel (Microsoft Office Professional Plus 2010).
In this study the particles observed were grouped according to the chemical composition and aggregation state.
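A minimal sketch of this grouping step, assuming the per-particle records were tabulated as (composition, aggregation state, Feret diameter) tuples — the records below are invented for illustration:

import pandas as pd

# Hypothetical particle records: (composition, aggregation state, Feret diameter in nm)
records = [
    ("Fe/Mn oxide", "agglomerate", 47.0),
    ("Fe/Mn oxide", "agglomerate", 35.0),
    ("Fe oxide",    "agglomerate", 21.0),
    ("Cr oxide",    "single",      1276.0),
    ("silicate",    "aggregate",   791.0),
]
df = pd.DataFrame(records, columns=["composition", "state", "feret_nm"])

# Counts and mean Feret diameter per (composition, state) group,
# mirroring the structure of Table 2
summary = df.groupby(["composition", "state"])["feret_nm"].agg(["count", "mean"])
print(summary)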
Results of Chemical Analysis of Welding Wires
The results of the ICP-AES analysis confirmed the chemical composition of the welding wires stated in the datasheet, as reported in Table 1.
Results of TEM-EDS Investigations
During TEM-EDS investigations, the chemical composition of the grid must be taken into account. The characteristic peaks due to the material of the grid (C, Cu, Ni and Zn) are present in each acquired spectrum. Sometimes, when the chemical analysis was performed close to the border of the grid, a Pt peak due to X-ray emissions from the microscope diaphragm appeared. A typical energy dispersive X-ray spectrum of the background of the grid in this case study is reported in Fig. 2.
Apart from the peaks related to the elements that compose the material of the grids, a typical spectrum of the grid background showed peaks related to O and Si. It revealed the presence of silica, which was ubiquitous on all the grids collected after the fume sampling procedure, whereas the signals of O and Si were absent in a typical spectrum of a TEM grid not exposed to the fumes.
The particles observed were grouped according to their chemical composition (Table 2) because of its significance from an occupational exposure point of view: metal oxides such as iron, manganese, cobalt, titanium, aluminum and chromium oxides, and silicates.
Fig. 2. Typical EDS spectrum of the background of TEM grids in the case study.
Table 2. Characterization of the particles found on A-GTAW, M-GTAW and BKD samples: composition, aggregation state and dimensions revealed by TEM-EDS investigations for automatic and manual gas tungsten arc welding (A-GTAW, M-GTAW) samples and samples collected far from the welding fumes source (BKD). The list of main chemical elements is indicated in decreasing abundance order. * Feret diameter.
Silica particles have been demonstrated to persist in the lungs, and this greater pulmonary persistence may contribute to the chronic lung disease (silicosis) that it causes (Brody et al., 1982). From the literature it is also known that welding-related nanoparticles (such as iron, chromium, and manganese oxides) could be responsible, at least in part, for the pulmonary inflammation observed in welders (Andujar et al., 2014).
On the grids collected in the zone in which automatic welding was performed (A-GTAW), particles, agglomerates and aggregates of a wide dimensional range were observed. The most representative particles consisted of iron/manganese oxide. Nanoparticles with this chemical composition formed chainlike agglomerates, as shown in Figs. 3 and 4. The equivalent diameter (Feret diameter) (Merkus, 2009) of about 300 particles was measured, and the average diameter was 47 nm. The second most represented group on the grids consisted of chainlike agglomerates of iron oxide nanoparticles (Fig. 5), with a mean diameter of 21 nm. Very few single chromium oxide particles were observed, with an average diameter of 1.276 µm. Agglomerates and aggregates of different kinds of silicates were found, with an average diameter of 0.791 µm.
On the grids collected in the zone in which manual welding was performed (M-GTAW), three kinds of aggregates were observed. Fig. 6 shows an example of the first kind of super-aggregates, mainly composed of aluminum and titanium oxides (X-ray spectrum in blue), with an average diameter of 0.349 µm. Inside these super-aggregates, some iron and cobalt oxide nanoparticles (X-ray spectrum in red) with an average diameter of 22 nm were observed. The second kind of aggregates (Figs. 7 and 8) was composed of chromium oxides and had an average diameter of 1.434 µm. The third kind of aggregates had an average diameter of 0.924 µm and showed an X-ray spectrum characteristic of different kinds of silicates.
On the grids collected in the zone far from the welding fume source (BKD), particles, agglomerates and aggregates were found. Figs. 9 and 10 show two examples of aggregates that were probably aluminosilicates (Fig. 9) and phyllosilicates (Fig. 10), amorphous and in some cases crystalline. The average diameter of these particles was 0.537 µm.
DISCUSSION
During the GTAW process the aerosol generated is mainly nanosized: Miettinen et al. (2016) reported a geometric mean diameter of 46 nm in the middle of the workshop and smaller sizes near the breathing zone. Similar results were found by Berlinger et al. (2011), Lehnert et al. (2012) and Brand et al. (2013). In our study we investigated in depth the shape and chemical composition of nano- and submicron particles emitted during both automatic and manual GTAW of steel, to clarify their characteristics and provide new data that can help to understand the link between exposure and toxicity for exposed workers.
For automatic GTAW, the TEM observations of the size and shape of the collected particles are substantially in agreement with previous studies (Berlinger et al., 2011; Graczyk et al., 2016). No individual primary particles in the nanoscale were found. Particles in the ultrafine range were predominantly spherical, in some cases polyhedral, with clearly distinguishable boundaries, and formed chainlike agglomerates with fractal geometry. The majority of emitted particles in the respirable range were chainlike agglomerates constituted by iron/manganese oxide nanoparticles, with sizes ranging from 14 to 208 nm and a mean value of 47 nm. Despite the low percentage of manganese in the wires (< 2%), the smallest, and thus most dangerous, particles emitted during the welding process contained manganese. About 90% of the iron/manganese oxide particles observed were smaller than 100 nm: about 67% were in the range 0-50 nm, 24% in the range 50-100 nm, and only 9% in the range 100-210 nm. This result is highly significant from a toxicological point of view: it is possible that, after inhalation, the van der Waals and electrostatic forces holding agglomerates together are at least partially disrupted, breaking agglomerates into their primary particle constituents or smaller agglomerates (Richman et al., 2011). It is already reported in the literature (Zimmer et al., 2002) that arc welding activities represent a source of exposure to high amounts of fumes containing manganese, also in nanoparticle sizes. In a study population of welders, Racette et al. (2001) showed that some of them developed Parkinsonism 17 years earlier than the general population, raising the question of the penetration pathway of the metal into the inner brain structures and, in particular, into the basal nuclei, the central nervous system regions most involved in the genesis of the disease. In nano-form this metal can translocate through the olfactory tract to the brain, where it can cause a neurological disease called manganism, with Parkinson-like symptoms (Yu et al., 2000; McMillian, 2005).
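The size-fraction percentages quoted above can be reproduced by simple binning of the measured Feret diameters; a minimal sketch in Python, where the diameter array is a synthetic placeholder for the ~300 measured values:

import numpy as np

# Placeholder for the ~300 measured Feret diameters (nm) of Fe/Mn oxide particles
diameters_nm = np.random.default_rng(0).lognormal(mean=3.7, sigma=0.5, size=300)

bins = [0, 50, 100, 210]                        # nm, as in the text
counts, _ = np.histogram(diameters_nm, bins=bins)
percentages = 100 * counts / counts.sum()       # percentages over binned particles

for (lo, hi), p in zip(zip(bins[:-1], bins[1:]), percentages):
    print(f"{lo}-{hi} nm: {p:.0f}%")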
The manganese enrichment in A-GTAW samples, compared to M-GTAW ones, is in agreement with Moroni and Viti (2009) and Jenkins (2003), who report that it is a function of decreasing magnetite-like particle size in welding fumes. For this reason, even if the time of direct exposure of workers to the fumes generated during the automatic process in the restricted area is brief, effective local exhaust ventilation is necessary.
Another metal that can play an important role in lung toxicity is chromium, but in the case of automatic GTAW the chromium particles were larger, reaching an average diameter of 1.276 µm. No chromium nanoparticles were found in the aerosol sampled during the welding process.
In manual GTAW the pattern of particles changed: the particles observed on TEM grids were mostly amorphous and irregular-shaped aggregates. In some cases, these aggregates appear to be formed by thin, irregular plate-like particles overlapping and fused together. The most representative nanoparticles consisted of iron/cobalt oxide, with an average diameter of 22 nm, and were present inside aggregates of larger dimensions. As in automatic welding, chromium was present only in larger aggregates, while no manganese oxide nanoparticles were found on the TEM grids.
In our study, particles, agglomerates and aggregates composed of silicates were ubiquitous. No fibres were found on any of the collected TEM grids. The presence of silicates on A-GTAW and M-GTAW samples partly reflects the chemical composition of the welding wires (silicon was a component of both kinds of wire) and is partly due to atmospheric particulate matter of natural origin, which was also found in BKD samples. Aluminosilicates and phyllosilicates are widely used, for example, in construction materials such as plasters and paints. The fact that metal oxide particles, such as iron, manganese, cobalt and chromium oxides, were not observed in BKD samples confirms that the only source of metal oxides was the welding process.
Particles produced in both welding processes (automatic and manual) are respirable according to the ISO-CEN (International Organization for Standardization - European Committee for Standardization) definition (CEN, 1993), but the most significant difference between the two investigated sources of welding fumes is the aggregation state of the particles found on the TEM grids. The particles sampled during the automatic process had the smallest diameters in comparison with those sampled in the M-GTAW zone, were mostly nanosized and formed chainlike agglomerates. The particles found in the M-GTAW zone, instead, showed larger diameters and were mostly aggregates. These results generate the hypothesis that even the same kind of welding process produces different classes of particles depending on the lifetime of the welding process. As a matter of fact, automatic welding is a fast process in which the contact between the tungsten electrode, the filler wire and the material to be welded is brief. Moreover, automatic welding is performed inside a restricted area that workers cannot enter until the end of the welding process, when they pick up the welded pieces to complete and refine them manually. In the manual process the fume was generated by the welder who completed and refined the pieces treated in the automatic process (the operator welds a defined spot for a longer period of time with respect to the automatic welding arm).
Our results on the size and morphology of M-GTAW particles seem to be in contrast with Graczyk et al. (2016), who conducted an exposure assessment of apprentice welders during manual GTAW of aluminum in a ventilated exposure cabin, i.e., a controlled setting. As regards the particle size distribution evaluated by means of particle sizers, they reported that, at the breathing zone, 92% of the particles had a geometric mean diameter below 100 nm and 50% of the particles were below 41 nm. Moreover, the particles collected on TEM grids formed chainlike agglomerates of primary particles in the nanoscale.
On the contrary, in the M-GTAW zone of our study we found mostly aggregates of particles that were not in the nanoscale. These differences can be due to the prevalence of the accumulation mechanism of particle formation over the nucleation mechanism, and to the influence of other industrial processes during our manual welding activity, which was not performed in a controlled setting. In these terms, our results are substantially in agreement with Moroni and Viti (2009), who characterized GMAW fumes and pointed out differences in aerosol composition in terms of particle size at variable distances from the welding chambers. The influence of the industrial setting resulted in greater particle size variability and larger particle sizes of welding fumes compared to a controlled (laboratory) and/or confined setting. Moreover, Dasch and D'Arcy (2008) reported a size distribution mode for aluminum GMAW fumes in an automotive plant at 0.8 µm, compared to 0.2-0.4 µm reported in the literature for controlled settings.
As concerns the risk assessment during M-GTAW, it should be noted that our sampling was performed at a distance of 0.5 m from the welding task, not in the breathing zone, where the particle number concentration could actually be higher. Graczyk et al. (2016) found that the mean particle number concentration at the breathing zone (inside the welding helmet) was 54% higher (1.69E+06 particles cm⁻³) than at the near-field location (0.6 m away from the welding task), confirming that exposure increases as the distance from the source decreases and that a non-ventilated welding helmet is not a sufficient protection measure. For this reason, the particle number concentration, not only particle size and morphology, has to be taken into account during manual welding.
CONCLUSIONS
Characterizing the dimensions, shape and composition of welding fume particles is important to better understand how they can give rise to occupational diseases in welders, with special attention to ultrafine particles.
Since few studies in the literature focus on GTAW fume particles, even though they are potentially the most hazardous ones in terms of occupational exposure, in this study two different real sources of GTAW fume particles in an automotive plant were characterized by means of TEM-EDS and compared to a zone of the plant far from the two sources, used as a reference background.
All particles observed in this study fall within the class of respirable particulate matter. The particles sampled during the automatic welding process had the smallest diameters in comparison with those sampled in the M-GTAW zone, were mostly nanosized, consisted of iron/manganese oxide and formed chainlike agglomerates. The particles found in the M-GTAW zone, instead, showed larger diameters and were mostly aggregates with an X-ray spectrum characteristic of different kinds of silicates. Iron and cobalt oxide nanoparticles were present only inside larger aggregates mainly composed of aluminum and titanium oxides.
These results generate the hypothesis that even the same kind of welding process produces different classes of particles depending on factors such as the lifetime of the welding process. It was also demonstrated that the chemical composition of the particles reflected the composition of the mild steel used during the welding processes, whereas only particulate matter related to building construction (such as plaster) was found in the BKD samples.
Fig. 1. Schematic representation of the sampling points in the three zones.
Fig. 3. Representative TEM image and EDS spectrum of an agglomerate found on an A-GTAW sample. Note: scale bar: 0.19 µm. The colored X in the figure represents the center of the microanalysis spot. The spectrum refers to iron/manganese oxide.
Fig. 4. Representative TEM image and EDS spectrum of an agglomerate found on an A-GTAW sample. Note: scale bar: 50 nm. The colored X in the figure represents the center of the microanalysis spot. The spectrum refers to iron/manganese oxide.
Fig. 5. Representative TEM image and EDS spectrum of an agglomerate found on an A-GTAW sample. Note: scale bar: 50 nm. The colored X in the figure represents the center of the microanalysis spot. The spectrum refers to iron oxide.
Fig. 6. Representative TEM image and EDS spectra of a super-aggregate found on an M-GTAW sample. Note: scale bar: 0.16 µm. The colored Xs in the figure represent the centers of the microanalysis spots. The red spectrum refers to iron and cobalt oxides, the blue one to aluminum and titanium oxides.
Fig. 7. Representative TEM image and EDS spectrum of an aggregate found on an M-GTAW sample. Note: scale bar: 0.27 µm. The colored X in the figure represents the center of the microanalysis spot. The spectrum refers to chromium oxide.
Fig. 8. Representative TEM image and EDS spectrum of an aggregate found on an M-GTAW sample. Note: scale bar: 1.25 µm. The colored X in the figure represents the center of the microanalysis spot. The spectrum refers to chromium oxide.
Fig. 9. Representative TEM image and EDS spectrum of an aggregate found on a BKD sample. Note: scale bar: 0.20 µm. The colored X in the figure represents the center of the microanalysis spot. The spectrum refers to an aluminosilicate.
Fig. 10. Representative TEM image and EDS spectrum of an aggregate found on a BKD sample. Note: scale bar: 0.30 µm. The colored X in the figure represents the center of the microanalysis spot. The spectrum refers to a phyllosilicate. | 2018-12-11T02:49:34.867Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "df36d6ec26d2dee951113c89698b6bb0a3ae3b22",
"oa_license": "CCBY",
"oa_url": "http://www.aaqr.org/article/download?articleId=6574&path=/files/article/6574/2_AAQR-17-07-OA-0226_579-589.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "df36d6ec26d2dee951113c89698b6bb0a3ae3b22",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
216494306 | pes2o/s2orc | v3-fos-license | EVALUATION OF TOTAL POLYPHENOLS AND ANTIOXIDANT CAPACITY IN MUSHROOM EXTRACTS PLEUROTUS OSTREATUS AND LENTINULA EDODES
Objective: This research was aimed at assessing the concentration of total polyphenols in ethanolic and methanolic extracts of Pleurotus ostreatus and Lentinula edodes mushrooms and their antioxidant activity. Methods: Polyphenols were determined by the Folin-Ciocalteu method, using lyophilized mushroom samples for the preparation of extracts, and antioxidant activity was determined by the TBARS method. Results: The extracts prepared from the mushrooms showed appreciable polyphenol values; the ethanolic extracts of Pleurotus ostreatus and Lentinula edodes gave values of 102.78 and 81.83 mg of gallic acid/100 g of sample, respectively, comparable to those obtained in some fruits. For the methanolic extracts, values of 100.45 and 78.92 mg of gallic acid/100 g of sample were obtained. Polyphenol concentration values were higher for Pleurotus in both types of extract and lower for Lentinula edodes. Conclusion: When evaluating the antioxidant activity, high antioxidant activity was found for both types of mushroom, Pleurotus ostreatus and Lentinula edodes, with peroxidation inhibition values of 88.04 and 89.49%, respectively.
INTRODUCTION
Mushrooms are present in all habitats due to their adaptability to almost any substrate and climate. There are an estimated 200,000 species, of which only 7,000 are known. World production of cultivated mushrooms exceeds 6.2 million tons, with a value close to 30 billion dollars. The growth rate is 11%, driven by research on their medicinal and nutritional properties, which is the reason for the high demand for edible mushroom-derived products [1].
The Orellana mushroom (Pleurotus ostreatus) is one of the edible mushrooms with the highest productivity growth during the last ten years, owing to its nutritional properties and high protein content, which can replace proteins of animal origin [2]. The mushroom is normally produced on organic matter and is considered an alternative for the low-cost use of agro-industrial waste [3].
These mushrooms form a large group of very diverse species, differentiated by color (yellow, white, gray, brown, pink), shape, flavor and technical requirements [4]. Lentinula edodes is one of the most important edible mushrooms in the world in terms of production and is one of the most popular cultivated mushrooms [5].
It is known that the extracts of some mushrooms have antioxidant activity and can inhibit the natural aging process that results from the action of the free radicals produced by metabolism; the study of such antioxidant activity has focused mainly on the kingdom Plantae [6].
The research focused on studying the fruiting body of Pleurotus ostreatus and Lentinula edodes in order to assess the presence of polyphenols and to determine their antioxidant activity, since these compounds could be responsible for such activity in the extracts of these mushrooms.
The objective of this research work was to evaluate the total polyphenols and antioxidant capacity in extracts of Pleurotus ostreatus and Lentinula edodes mushrooms.
MATERIALS AND METHODS
This work was carried out in the Molecular Biology laboratory of the Research Department of Bolivar State University, in an air-conditioned room with temperature and humidity control. Strains of the edible mushrooms Pleurotus ostreatus (716/12) and Lentinula edodes (strain L-SSC) were used.
Experimental measurements
The following physical analyses were performed on the powdered mushrooms (Pleurotus ostreatus and Lentinula edodes): humidity, according to the international standard AOAC 925.10; ash, using the technique of international standard AOAC 923.03; and elemental analysis of carbon and nitrogen, using an elemental analyzer (vario MACRO cube/1922261/120 V, USA) according to the Dumas methodology.
Extracts preparation
To obtain extracts rich in phenolic compounds, extraction was carried out using two types of solvents, methanol and ethanol, chosen for their polarity, in a block design with an AxB factorial arrangement (Table 1). For the process, previously weighed mushroom samples (3 g), dehydrated and pulverized by lyophilization to a humidity of 4-6%, were placed in amber glass bottles, and 25 ml of each of the solvents (80%) was added. Each dilution was stirred in a Thermo shaker (YVIMEN TR100-G, USA) for 15 min and then stored for 24 h. At the end of this stage, stirring was repeated to facilitate extraction, for 10 min at 25 °C, in an ultrasonic cell chamber with moderate agitation. Finally, the extracts were centrifuged for 12 min at 6000 rpm at 10 °C, the supernatant was filtered through Whatman #1 filter paper, and the extracts were stored.
Statistical analysis
An analysis of variance (ANOVA) was applied to establish the differences between treatments; in addition, to determine the differences between treatment means, the 5% Tukey test was applied for the averages and factors under study.
Determination of the concentration of total polyphenols
The concentration of total polyphenols in the extracts was measured by spectrophotometry, based on a colorimetric oxidation-reduction reaction. The oxidizing agent used was the Folin-Ciocalteu reagent.
The calibration curve was prepared using a standard solution of gallic acid (0.1 mg/ml) in volumes from 0 µl to 160 µl at 20 µl intervals. For the determination of total polyphenols, 250 µl of 1 N Folin-Ciocalteu reagent was added to each of the previously prepared standards and samples, which were then sonicated for 5 min. Subsequently, 250 µl of 7.5% Na₂CO₃ was added and the mixtures were allowed to stand for 1 h. The absorbance was measured at 750 nm. Three grams of mushrooms were used for each treatment, with three repetitions.
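A minimal sketch of the calibration and quantification steps in Python; the absorbance readings are invented, and the conversion to mg of gallic acid per 100 g of sample assumes the 25 ml extract volume and 3 g sample mass given above (any additional dilution factors used in the assay would scale the result):

import numpy as np

# Hypothetical gallic acid calibration points
std_conc_mg_ml = np.array([0.000, 0.004, 0.008, 0.012, 0.016])  # mg/ml in cuvette
std_abs = np.array([0.02, 0.18, 0.35, 0.51, 0.69])              # absorbance at 750 nm

slope, intercept = np.polyfit(std_conc_mg_ml, std_abs, 1)       # linear fit

sample_abs = 0.42                                 # hypothetical extract reading
conc_mg_ml = (sample_abs - intercept) / slope     # mg gallic acid per ml extract

extract_volume_ml = 25.0
sample_mass_g = 3.0
gae_per_100g = conc_mg_ml * extract_volume_ml / sample_mass_g * 100
print(f"{gae_per_100g:.1f} mg gallic acid / 100 g sample")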
Determination of antioxidant activity
The antioxidant activity was determined following the methodology described by Rojano et al. [7], with modifications. Different concentrations of the samples (100, 200, 500 and 1000 μg/ml) were prepared; from the prepared dilutions, 500 μl were taken and mixed with 500 μl of olive oil (previously oxidized using the TBARS method) in 2 ml Eppendorf tubes. In addition, BHT (butylhydroxytoluene) standards were prepared at the same concentrations as the samples and similarly mixed with 500 µl of oxidized oil. Samples and standards were kept at 28 °C for 8 h in a micro-incubator with constant agitation at 400 rpm. After this incubation process, 1 ml of 1% thiobarbituric acid was added to each sample. The prepared samples were then kept in the micro-incubator at 95 °C for 60 min at 400 rpm, cooled, and measured in the spectrophotometer at a wavelength of 532 nm. The antioxidant activity was expressed as % oxidation inhibition by the following equation:
% inhibition = [(Ac - At) / Ac] × 100 (Ec. 1)
where At is the sample absorbance and Ac is the control absorbance.
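The same computation as a tiny Python helper, directly implementing Ec. 1; the absorbance values are illustrative:

def inhibition_percent(a_sample: float, a_control: float) -> float:
    """% oxidation inhibition per Ec. 1: ((Ac - At) / Ac) * 100."""
    return (a_control - a_sample) / a_control * 100

# Hypothetical absorbances at 532 nm
print(f"{inhibition_percent(0.11, 1.05):.1f}% inhibition")  # ~89.5%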
RESULTS AND DISCUSSION
Physical-chemical characterization of lyophilized and powdered mushrooms
Table 2 shows the results of the physical-chemical characterization, presenting the average values of the lyophilized and powdered mushroom samples.
Values in parentheses represent the standard deviation.
The moisture content of the two types of mushroom is low as an effect of lyophilization: 4% for Pleurotus ostreatus and 3.13% for Lentinula edodes, a difference of 0.87% between the two. Gómez et al. [8] state that the humidity of lyophilized mushrooms should not exceed 5%. Atehortúa [9] considers that mushrooms grown on vegetable waste have an ash content between 3 and 5%, with an increased content of macroelements (sodium, potassium, calcium and phosphorus) and microelements (iron, iodine, copper, zinc). In our study, Pleurotus presented a value of 4.28% and Lentinula 3.47%; thus, both mushrooms comply with the established parameters. Regarding nitrogen content, Pleurotus presented the highest percentage, 5.33%, followed by Lentinula with 4.87%. Vega and Franco [10] consider that the nitrogen content of an edible mushroom must be greater than 3.5%, because this minimum content allows the formation of amino acids, proteins and fiber; the values found here are within the established ranges. Likewise, the same authors consider that the carbon content must be greater than 30%; in this case, the Lentinula mushroom had the highest percentage, 38.87%. Cortez et al. [11] establish that freeze-dried and powdered mushrooms must have carbon/nitrogen ratio values greater than 6.5%; as shown in our work, both types of mushroom have values higher than those mentioned in the bibliography.
Polyphenol concentration
The results obtained show that the polyphenol content was highest in the a1b1 treatment, corresponding to the Pleurotus mushroom with ethanolic extraction, while the lowest value corresponded to treatment 4 (78.92 mg/100 g) (Table 3). According to Radice et al. [12], the polyphenol content of a food should exceed 92 mg/100 g of sample, which shows that the Pleurotus mushroom meets this parameter; this is very favorable for the consumer, mainly in the treatment of cardiovascular diseases, since polyphenols have vasodilatory effects.
The analysis of variance (Table 4) shows highly significant differences in the experimental response of total polyphenols due to the effect of factor A (type of mushroom) and the effect of factor B (type of solvent). There is no interaction between these two study variables; that is, they act independently in producing highly significant differences in polyphenol content. High antioxidant power and inhibition of lipid peroxidation were found in the mushroom Lentinula edodes by Liu et al. [15], who reported values between 80.32 and 93.73%, while Jin et al. [16] found high antioxidant activity in the fruiting bodies of Pleurotus ostreatus.
Analysis of antioxidant activity
The antioxidant activity of the mushrooms was evaluated by the TBARS method; the capacity of the mushroom Pleurotus ostreatus stands out since, having the highest value, 89.41%, it inhibits oxidative degradation thanks to its ability to react with free radicals (Table 6). These values agree with Guzmán et al. [14], who determined that the percentage of peroxidation inhibition of a mushroom must be greater than 85%, since in this way the product would promote the activity of antioxidant enzymes, preventing the mushrooms from deteriorating. | 2020-03-26T10:40:51.840Z | 2020-03-17T00:00:00.000 | {
"year": 2020,
"sha1": "5f18e8df8be568e806e6e4036da93fd52eea42fe",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.22159/ijcpr.2020v12i2.37499",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "20797906edf2093e72c95345733bee92892145a9",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
55151029 | pes2o/s2orc | v3-fos-license
Corrosion Behavior of Ti and Ti6Al4V in Citrate Buffers Containing Fluoride Ions
The effect of fluoride ion concentration on the electrochemical behavior of Ti grade 2 and Ti6Al4V in citrate buffers was studied. Open circuit potential (OCP) measurements and voltammetric studies of the samples in the fluoride-containing citrate buffers revealed a dissolution process when the pH falls below 5.0 and the NaF content is higher than 0.01 M. However, in citrate at pH 7.6 the materials showed passive behavior even in 0.1 M NaF. Some micrographs of Ti grade 2 obtained after longer immersion times in citrate pH 5.0 with 0.01 M NaF showed surface attack. EIS (electrochemical impedance spectroscopy) data obtained at the OCP revealed that the film resistance decreases with increasing immersion time at pH 5.0 containing 0.1 M NaF. In citrate at pH 7.6 the EIS data indicated a two-layer model of the oxide film, consisting of a more compact inner layer and a porous outer layer. On the other hand, the EIS results in citrate at pH 4.0 change significantly when the fluoride ion concentration increases from 0.01 to 0.05 M. The electrochemical data revealed that the corrosion behavior of Ti grade 2 and Ti6Al4V in the citrate buffers depends on the pH, the fluoride content and the exposure time.
Introduction
Ti and its alloys are extensively used in medicine and dentistry due to their high resistance to corrosion and biocompatibility with living tissues of the human body. It is well known that Ti undergoes an osseointegration process when placed in contact with bone 1,2. However, the corrosion resistance of Ti decreases when fluoride ions are present in the medium. Solutions containing more than 20 ppm of fluoride ions may attack Ti surfaces when the pH falls below 6.0 [3].
It is well known that the corrosion and pitting processes on Ti-based alloys depend on the fluoride content and the pH of the medium [4-10]. Previous works reported that dissolution of the passive oxide film occurs on Ti grade 2 and Ti6Al4V in fluoride-containing citric acid 4. According to Reclaru 5, pitting was detected on some Ti alloys in saliva containing 0.1% NaF at pH < 4.0. Schiff 6 showed that the corrosion resistance of the Ti6Al4V alloy in artificial saliva decreased at pH 2.5. Further, Frateur 7 observed TiO2 dissolution at a 0.2 M fluoride ion concentration and pH 2.0. According to Huang 8, an increase in fluoride ion concentration leads to a decrease in the corrosion resistance of the Ti6Al4V alloy in acidic artificial saliva at pH 5.0 and 37 °C. Moreover, in acidic solutions the fluoride ions form HF, and a concentration over 30 ppm results in destruction of the TiO2 passive film 9,10.
NaF and other fluoride compounds are commonly employed in dental treatments. In fact, most toothpastes and gels used to remove stains from enamel contain fluoride concentrations of about 1 and 2%, respectively, with a pH between 3.5 and neutral. Despite the benefits of fluoride ions, their infiltration into dental implants may cause corrosive attack of Ti if the pH is below neutral. Furthermore, citric acid and citrates are commonly employed in juices and some industrialized foods, so they can come into contact with Ti alloy dental prostheses, which demonstrates the importance of studying the corrosion processes of these materials. The aim of this work is to investigate the corrosion behavior of Ti grade 2 and Ti6Al4V in citrate buffers containing different fluoride ion concentrations by electrochemical techniques.
Experimental
A conventional three-electrode cell was used for the electrochemical experiments, in which the working electrodes were commercially pure Ti grade 2 [11] or Ti6Al4V 12 rods (Table 1) inserted into a Teflon holder with exposed geometric areas of 0.0177 or 0.0314 cm², respectively. The samples were acquired from Camacan® Industrial Ltda, Brazil. These electrodes were polished with 600 and 1200 emery papers, degreased with acetone and rinsed in pure water before each measurement. The reference electrode was the saturated calomel electrode (SCE), to which all potentials are referred, and a Pt wire was used as the counter electrode.
Ti grade 2 sheets (1 cm²) were examined by scanning electron microscopy (SEM) using a JEOL® JSM5800 instrument. The samples were polished with 600 and 1000 emery papers, degreased with acetone and rinsed in pure water before the immersion tests.
The citrate buffers were prepared from 0.1 M citric acid aqueous solution (pH 2.0), and the pH was adjusted to 4.0 or 5.0 with 0.1 M sodium citrate aqueous solution (pH 7.6). NaF was added to the citrate buffer at pH 4.0 at concentrations of 0.01, 0.025, 0.05 or 0.1 M to investigate the effect of the fluoride content. All electrolytes were naturally aerated at room temperature (20 ± 1 °C).
Electrochemical impedance spectra (EIS) were obtained using an Autolab PGSTAT-30 device in a frequency range from 10⁵ to 10⁻³ Hz, with a sinusoidal signal amplitude of 10 mV. The experimental data were evaluated using a least-squares fitting procedure and were well fitted by the transfer functions of the proposed equivalent circuits (EC), with an error of less than 10%.
The voltammetric curves were obtained using the same device with a sweep rate of 0.05 V/s. The tests were carried out three times to ensure reproducibility.
Results and Discussion
Figure 1 presents the variation of the OCP of Ti6Al4V with immersion time in the citrate buffers containing 0.1 M NaF. The OCP in citric acid (pH 2.0) was found to decrease to -1.0 V in the first 5 minutes, indicating dissolution of the primary oxide film. It is known that the reaction of NaF with H⁺ produces HF, which dissolves the oxide film on titanium surfaces 13. In the citrate buffer at pH 5.0 the OCP shifts to -0.6 V and remains at this potential after one hour of immersion. According to Kelsall 13, at this potential and pH the lower oxide Ti2O3 forms in equilibrium with dissolved Ti, producing Ti3+ ions even in the presence of fluoride. The OCP in citrate buffer at pH 7.6 shifts to -0.5 V and increases with immersion time to values related to the formation of a passive oxide film, probably TiO2 [13]. Ti grade 2 presented the same behavior (data not shown). These results indicate that the onset of the dissolution process occurs at pH < 5.0, which is in good agreement with previous data reported in the literature 5,9,10.
Surface analysis of Ti grade 2 by SEM showed significant surface destruction after 4 days of immersion in citric acid containing 0.1 M NaF (Figure 2a). Corrosion was also observed on the Ti surface after 4 days of immersion in citrate buffer pH 5.0 with 0.1 M NaF (Figure 2b). However, the micrograph of Ti grade 2 in citrate buffer pH 7.6 + 0.1 M NaF did not reveal any attack on the metal surface (Figure 2c). These findings indicate that the corrosion process on Ti in the fluoride-containing citrate buffers starts at pH 5.0 after longer immersion times. The EDS analysis detected only the presence of Ti. According to Nakagawa 9,10, there are limits of fluoride content and pH values at which the corrosion behavior of Ti changes drastically.
The OCP of Ti6Al4V after one hour of immersion in citrate at pH 4.0 decreases from -0.55 to -0.83 V when the NaF concentration increases from 0.01 to 0.025 M. However, the micrograph of Ti grade 2 after 2 days of immersion in citric acid containing 0.01 M NaF shows surface attack (Figure 3). This indicates the onset of the corrosion process at a 0.01 M fluoride ion concentration after longer immersion times.
The voltammetric curve of Ti grade 2 in citric acid containing 0.1 M NaF shows an anodic peak around -0.8 V with current densities of about 6000 µA cm⁻², indicating a dissolution process followed by a passive region (Figure 4); however, anodic currents are still observed on the reverse scan, and the same anodic peak appears again with current densities of about 2000 µA cm⁻². Further, current oscillations can be seen on the reverse scan. This kind of oscillation has also been reported in the literature 6,14 and was attributed to the competition between film formation and dissolution. Ti6Al4V presented the same behavior; however, the higher anodic currents detected on the reverse scan (5000 µA cm⁻²) indicate an enhanced dissolution rate. These features reveal that the film is unstable and probably contains some pores and/or defects on its surface, as dissolution processes can be observed on the reverse scan. Similar results were obtained by other authors 6,9,10 for Ti6Al4V in salivas acidified with NaF at pH ≅ 4.0, but the reverse scans were not presented. According to Shiff 6, the passive region observed at anodic potentials showed that the film became stable. The major difference in our work is that the anodic currents on the reverse scan show that the film is not stable in the media evaluated.
When the pH of the buffer was increased to 5.0, the voltammetric curve of Ti6Al4V (Figure 5) shows an active-to-passive transition with current densities of about 300 µA cm⁻², in accordance with what has been reported for Ti6Al4V in fluoride-containing salivas at pH 5.0 [8-10]. The current densities decrease to about 100 µA cm⁻² at pH 7.6, showing passive behavior with an almost constant current indicating film growth, and a small cathodic peak appears around -0.8 V. These facts are in good agreement with the literature indicating that Ti and its alloys are passive in fluoride-containing media at pH > 5.0 [5,6,9].
Figure 6a shows the effect of fluoride concentration on the voltammetric curves of Ti grade 2 in citrate buffer at pH 4.0. The curves revealed an active-to-passive behavior, and the current densities increase from 250 to 600 µA cm⁻² on increasing the fluoride concentration from 0.01 to 0.05 M. For Ti6Al4V the same behavior was observed in 0.01 M NaF, but a dissolution process is noticed at potentials close to -1.0 V at the higher concentration, with current densities of about 2000 µA cm⁻² (Figure 6b); however, the anodic peak does not appear on the reverse scan. It is evident that Ti6Al4V is more susceptible to corrosion in this medium, which is attributed to the presence of Al and V causing the formation of a more defective film. In fact, Al and V improve the alloy strength 3, but, on the other hand, they increase the anodic dissolution of the alloy 15.
EIS studies
Figure 7a presents the Nyquist plot of Ti6Al4V obtained after one day of immersion in citrate buffer at pH 7.6 containing 0.1 M NaF. The spectrum shows a near-capacitive behavior typical of passive materials, which does not change on increasing the exposure time, although the film resistance decreases after 7 days of immersion. These findings differ from those observed in Ringer serum containing an equal fluoride concentration, in which the oxide film stability was enhanced after 7 days of immersion 16. This difference may possibly be related to the presence of citric acid in the buffer producing a more defective film. These facts are in good agreement with what has been reported in the literature for Ti-based materials [17-22]. The equivalent circuit (EC) used to describe the experimental data was based on a two-layer model consisting of a barrier inner layer and a porous outer layer [16-21] (Figure 7b). Rs is the solution resistance, R1 and Q1 are the resistance and capacitance of the outer layer, and R2 and Q2 are the resistance and capacitance of the inner layer. According to Rammelt 23, the dispersive behavior observed at rough electrodes can be described by a constant phase element (CPE), whose impedance is given by Equation 1 [23,24]:
Z(CPE) = 1 / [Q(jω)^n] (Equation 1)
The value of n is associated with the non-uniform distribution of current as a result of roughness and surface defects 20. The CPE describes an ideal capacitor for n = 1, an ideal resistor for n = 0, and a pure inductor for n = -1 [19]. The appearance of a CPE is due to the presence of inhomogeneities in the electrode, and it can be described in terms of a distribution of relaxation times, or it may arise from non-uniform diffusion 19. The simulated data are given in Table 2.
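To make the two-layer model concrete, the sketch below (Python/NumPy) computes the complex impedance of the circuit Rs(Q1[R1(Q2R2)]) — Rs in series with Q1 in parallel with [R1 in series with (Q2 parallel to R2)] — one common nesting for porous-layer models; the topology and all parameter values here are illustrative assumptions, not the fitted values of Table 2:

import numpy as np

def z_cpe(q, n, omega):
    """Constant phase element impedance, Z = 1 / (Q (j*omega)^n)."""
    return 1.0 / (q * (1j * omega) ** n)

def z_two_layer(omega, rs, q1, n1, r1, q2, n2, r2):
    """Rs in series with Q1 || (R1 in series with Q2 || R2)."""
    z_inner = 1.0 / (1.0 / z_cpe(q2, n2, omega) + 1.0 / r2)   # inner (barrier) layer
    z_branch = r1 + z_inner                                    # outer-layer branch
    return rs + 1.0 / (1.0 / z_cpe(q1, n1, omega) + 1.0 / z_branch)

# Illustrative parameters only (ohm cm2 for resistances)
f = np.logspace(5, -3, 200)                 # 10^5 to 10^-3 Hz, as in the experiments
omega = 2 * np.pi * f
z = z_two_layer(omega, rs=20, q1=4e-5, n1=0.9, r1=1e3, q2=1e-5, n2=0.85, r2=1e6)
print(z.real[:3], -z.imag[:3])              # Nyquist coordinates (Z', -Z'')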
Significant changes are observed in the Nyquist plots of Ti6Al4V in the citrate buffer at pH 5.0 containing 0.1 M NaF (Figure 8). The diagram obtained after 1 day of immersion shows a depressed capacitive response; however, a low-frequency inductive loop emerges after 7 days of immersion. Similar impedance spectra have been observed by Scully 17 for a β-Ti alloy in 5 M HCl solution. The origin of this loop is not clear, but it may indicate changes in the properties of the oxide film. According to the literature, this loop may be attributed to a diffusion-controlled process within the oxide 25 or to a surface charge at the metal-oxide interface 26. The equivalent circuit proposed to describe the EIS data after 1 day of immersion is the same as that of Figure 7b. For longer exposure, the EC used is Rs(C1[R1 L1]), where Rs, C1, R1 and L1 represent the resistance of the solution, the total system capacitance, the polarization resistance (Rp) and the inductive element, respectively (Figure 9). The results indicate that the spectra obtained in citrate buffer at pH 4.0 change significantly with the fluoride concentration. The EC used to describe the EIS spectrum in 0.01 M NaF (Figure 10a) is the same as that of Figure 7b. On increasing the NaF concentration to 0.1 M (Figure 10b), the diagram shows two capacitive time constants at high and middle frequencies, followed by an inductive loop in the lower frequency range. This inductive time constant has often been attributed to surface or bulk relaxation of species in the oxide layer and related to active dissolution 25,26. The experimental data were described by the EC of Figure 11, where Rs represents the resistance of the solution; Q1 and R1 the capacitance and resistance at high frequencies; Q2 and R2 the capacitance and resistance at lower frequencies; and L the inductive element. The polarization resistance Rp is the sum of R1 and R2.
Conclusions
OCP measurements and voltammetric studies revealed that Ti and Ti6Al4V undergo a corrosion process when the pH falls below 5.0 and the fluoride concentration is higher than 0.01 M. Both materials were passive in citrate buffer at pH 7.6, even in the presence of 0.1 M NaF; however, in the buffer at pH 5.0 and in the 0.01 M NaF-containing buffer at pH 4.0, corrosive attack was observed in the micrographs obtained after longer immersion times.
The EIS studies confirm the passive behavior of Ti6Al4V in citrate buffer at pH 7.6 as well as the corrosion process in the citrate buffers at pH 4.0 and 5.0. Further, metal dissolution is enhanced on increasing the fluoride content in the citrate buffer at pH 4.0.
The results reported here demonstrate that the corrosion behavior of Ti and Ti6Al4V depends on the fluoride concentration, the pH of the citrate buffers and the immersion time.
Figure 4. Voltammetric curves of Ti and Ti6Al4V in 0.1 M NaF-containing citric acid solution.
Figure 5. Voltammetric curves of Ti6Al4V in solutions at pH 5.0 and pH 7.6 containing 0.1 M NaF.
Table 2. Fitting parameters used to simulate the EIS of Ti6Al4V in citrate buffer at pH 7.6 | 2018-12-06T01:00:48.352Z | 2010-03-01T00:00:00.000 | {
"year": 2010,
"sha1": "cbc41d4542af6487bcd9a00e3d5da770d7382478",
"oa_license": "CCBY",
"oa_url": "https://www.scielo.br/j/mr/a/HW7M7NMCyrdSmSVYZpQBgFK/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "cbc41d4542af6487bcd9a00e3d5da770d7382478",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
133688580 | pes2o/s2orc | v3-fos-license
Studies on Linseed (Linum usitatissimum L.) based Intercropping Systems as Influenced by Integrated Nutrient Management on Yield and Economics under Moisture Scarce Condition
Introduction
With a growing population, the demand for water for various purposes is ever increasing. On the other hand, the availability of water resources is limited in space and time, so systematic and scientific planning for its optimal utilization is highly imperative. The scarcity of water, regarded as one of the most important factors in crop production, is usually limiting in semi-arid regions. To meet future food demands amid competing demands for water among various sectors, more efficient use of water will be essential (Bhatt et al., 2016).
Integrated nutrient management is now gaining importance because of the present negative nutrient balance; neither chemical fertilizers alone nor potential alternative nutrient sources can achieve sustainable production from soils and crops under intensive cultivation.
Under such conditions, the integration of indigenously available organic sources of nutrients with inorganic sources is of vital significance for sustaining the productivity and fertility of soil (Sharma and Saroa, 2017). Vermicompost is a good organic source of plant nutrients and growth hormones, which enhance plant growth and the microbial population (Awasthi et al., 2011).
Inoculation with biofertilizers (PSB) reduces the use of inorganic fertilizers with a view to attaining an eco-friendly environment. Significant advantages in land use efficiency, crop productivity and monetary returns from intercropping, as compared with sole cropping of the component crops, have been recorded under varied agroclimatic conditions (Singh et al., 2008; Rehman et al., 2009 and Singh et al., 2011).
Henceforth this study was undertaken to evaluate the suitable cropping pattern, nitrogen fertilization and standardization of vermicompost doses in the region besides their optimal combination of the monetary and non-monetary inputs on productivity of rainfed crops.
Materials and Methods
A field experiment was conducted during the rabi seasons of 2015-16 and 2016-17 at the Soil Conservation and Water Management Farm of C S Azad University of Agriculture and Technology, Kanpur, on alluvial soil under rainfed conditions. The soil of the experimental field was sandy loam in texture and slightly calcareous, having organic carbon 0.32%, total nitrogen 0.03%, available P2O5 16.0 kg ha⁻¹, available K2O 155 kg ha⁻¹, pH 7.7, electrical conductivity 0.37 dS m⁻¹, wilting point 6.2%, field capacity 18.4%, water holding capacity 29.6%, bulk density 1.46 Mg m⁻³, particle density 2.56 Mg m⁻³ and porosity 42.9%. The field experiment was laid out in a split-plot design with three replications, keeping cropping systems in the main plots and INM in the subplots. The treatments comprised 9 cropping systems, viz. C1: linseed sole, C2: lentil sole, C3: barley sole, C4: linseed + lentil (3:1), C5: linseed + barley (3:1), C6: linseed + lentil (4:1), C7: linseed + barley (4:1), C8: linseed + lentil (5:1) and C9: linseed + barley (5:1), and 3 integrated nutrient management treatments, viz. N1: RDN, N2: 75% RDN through inorganic + 25% RDN through vermicompost, and N3: 75% RDN through inorganic + 25% RDN through vermicompost + biofertilizer (seed coating) + PSB @ 2.5 kg ha⁻¹ in soil. The cost of cultivation was calculated taking into account the prevailing prices of inputs. The minimum support prices of linseed, lentil and barley grains over the years were used for computing the linseed equivalent yield, as per Willey (1979). The economic efficiency, in terms of Rs ha⁻¹ day⁻¹, was worked out by dividing the total net monetary returns by the total duration of the crops. The economics of the various cropping systems was also worked out to assess the most viable and remunerative cropping systems under rainfed conditions.
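A minimal sketch (Python) of the derived indices described above — linseed equivalent yield, land equivalent ratio (LER, in the sense of Willey, 1979) and economic efficiency; all yields, prices, returns and durations below are hypothetical placeholders, not the study's data:

# Hypothetical inputs for one intercropping treatment (linseed + lentil)
linseed_yield = 0.80      # t/ha in intercrop
lentil_yield = 0.30       # t/ha in intercrop
linseed_sole = 1.00       # t/ha sole-crop yield
lentil_sole = 0.90        # t/ha sole-crop yield
price_linseed = 3900      # Rs/quintal (minimum support price, placeholder)
price_lentil = 3950       # Rs/quintal (placeholder)

# Linseed equivalent yield: intercrop yield converted via the price ratio
leq_yield = linseed_yield + lentil_yield * price_lentil / price_linseed

# Land equivalent ratio (Willey, 1979): sum of relative yields
ler = linseed_yield / linseed_sole + lentil_yield / lentil_sole

# Economic efficiency: net returns divided by crop duration
net_returns = 22000       # Rs/ha (placeholder)
duration_days = 125       # days (placeholder)
econ_efficiency = net_returns / duration_days   # Rs/ha/day

print(f"LEY = {leq_yield:.2f} t/ha, LER = {ler:.2f}, "
      f"economic efficiency = {econ_efficiency:.2f} Rs/ha/day")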
Results and Discussion
The data on seed yields of linseed, lentil and barley, as well as linseed equivalent yield, indicated that seed yield was significantly influenced by the different treatments over the periods of experimentation (Table 1). Sole cropping showed significantly higher yields than the intercropping treatments. However, linseed equivalent yield was significantly highest under linseed + lentil (3:1), followed by linseed + lentil (4:1), whereas the lowest equivalent yield was obtained in the linseed + barley (5:1) treatment among the different cropping systems in both years. Application of 75% RDN through inorganic + 25% RDN through vermicompost + biofertilizer (seed coating) + PSB @ 2.5 kg ha⁻¹ in soil brought about the significantly highest seed yield and linseed equivalent yield, with the lowest values under RDN; this might be due to the integrated application of fertilizers and organic sources, as also reported by Dubey et al. (2015).
The land equivalent ratio (LER) of all the intercropping treatments was more than unity (Table 2), owing to the beneficial effect of intercropping on the productivity of the component crops. The linseed + lentil (3:1) treatment attained the highest values (1.32 and 1.33) among the cropping systems during both years, which might be due to the comparatively higher productivity of the system. Better utilization of land and growth resources by the crops in intercropping systems has also been reported by Sharma and Goswami (2010) and Nikam et al. (2008). The economic efficiency of the different treatments further proved the potential of the different cropping systems as well as of the INM application. The data clearly indicate that economic efficiency was maximum (Rs 178.61 ha⁻¹ day⁻¹) in the linseed + lentil (3:1) treatment and minimum (Rs 139.80 ha⁻¹ day⁻¹) under linseed + barley (5:1) over the two years.
Based on the two years of experimentation, it may be inferred that linseed + lentil (3:1) supplemented with 75% RDN through inorganic + 25% RDN through vermicompost + biofertilizer (seed coating) + PSB @ 2.5 kg ha⁻¹ in soil showed good potential for sustainable production and proved to be quite remunerative in the rainfed alluvial tract of Uttar Pradesh. | 2019-04-27T13:07:53.246Z | 2017-11-20T00:00:00.000 | {
"year": 2017,
"sha1": "2fba1f132b72602e4013e03bb7e204c49b61ef34",
"oa_license": null,
"oa_url": "https://www.ijcmas.com/6-11-2017/Amar%20Kant%20Verma,%20et%20al.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "66d8f096dc4755b10ce00e39e25c3f835ad6daba",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
14220358 | pes2o/s2orc | v3-fos-license | Flow-Volume Parameters in COPD Related to Extended Measurements of Lung Volume, Diffusion, and Resistance
Classification of COPD into different GOLD stages is based on forced expiratory volume in 1 s (FEV1) and forced vital capacity (FVC) but has been shown to be of limited value. The aim of the study was to relate spirometry values to more advanced measures of lung function in COPD patients compared to healthy smokers. The lung function of 65 COPD patients and 34 healthy smokers was investigated using flow-volume spirometry, body plethysmography, single breath helium dilution with CO-diffusion, and impulse oscillometry. All lung function parameters, measured by body plethysmography, CO-diffusion, and impulse oscillometry, were increasingly affected through increasing GOLD stage but did not correlate with FEV1 within any GOLD stage. In contrast, they correlated fairly well with FVC%p, FEV1/FVC, and inspiratory capacity. Residual volume (RV) measured by body plethysmography increased through the GOLD stages, while RV measured by helium dilution decreased. The difference between these RV values provided valuable additional information and correlated with most other lung function parameters measured by body plethysmography and CO-diffusion. Airway resistance measured by body plethysmography and impulse oscillometry correlated within COPD stages. Different lung function parameters are of importance in COPD, and a thorough patient characterization is important to understand the disease.
Introduction
Spirometry and body plethysmography are the most commonly used methods to diagnose, characterize, and assess chronic obstructive pulmonary disease (COPD). The Global Initiative for Chronic Obstructive Lung Disease (GOLD) classification of COPD [1] is acknowledged around the globe and is recommended by both the American Thoracic Society and the European Respiratory Society. It has long been based on spirometry and health status alone; however, a new version from 2011 proposes considering exacerbation frequency and assessing the severity of breathlessness, using the modified Medical Research Council questionnaire (mMRC), in the classification of COPD. For practical purposes, flow-volume spirometry is used to characterize lung function in COPD patients. It is easy to use, and the measurements yield reproducible data. Forced expiratory volume in 1 s (FEV1) is most commonly used but is of limited value in relation to functional ability and quality of life when used alone [2,3]. On the other hand, spirometry also provides data on forced vital capacity (FVC) and inspiratory capacity (IC), which are the tools of choice for most population surveys.
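For reference, the spirometric GOLD criteria mentioned above reduce to a few threshold checks: airflow obstruction is defined by FEV1/FVC < 0.70, and severity is graded by FEV1 as a percentage of the predicted value. A minimal sketch in Python (the example values are invented):

def gold_stage(fev1_l: float, fvc_l: float, fev1_pct_pred: float) -> str:
    """Spirometric GOLD staging: obstruction if FEV1/FVC < 0.70,
    then severity graded by FEV1 % of predicted."""
    if fev1_l / fvc_l >= 0.70:
        return "no spirometric obstruction"
    if fev1_pct_pred >= 80:
        return "GOLD 1 (mild)"
    if fev1_pct_pred >= 50:
        return "GOLD 2 (moderate)"
    if fev1_pct_pred >= 30:
        return "GOLD 3 (severe)"
    return "GOLD 4 (very severe)"

print(gold_stage(fev1_l=1.8, fvc_l=3.2, fev1_pct_pred=62))  # -> GOLD 2 (moderate)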
It has long been known that spirometry measures mostly the proximal parts of the airway, while COPD is mostly a disease of the distal airways [4]. Akamatsu et al. screened patients from a nonrespiratory section of the hospital, including smokers, former smokers, and never-smokers [5]. They found that 25 out of 288 patients had COPD according to the GOLD standard (21 patients GOLD1, 4 patients GOLD2), but 52% of these patients still claimed to have no respiratory symptoms at all. This suggests that the symptoms of COPD can develop later in the disease stage. It is important to diagnose the patients at an early stage since the disease is progressive and irreversible. Since no treatment is available to stop the progression in the early stage, it is of great importance to identify patients in this stage to evaluate novel therapies for disease progression.
It is therefore important to use suitable lung function measurements for a satisfactory diagnosis and monitoring of COPD. Body plethysmography and single breath helium dilution with carbon monoxide (CO) diffusion are two commonly used techniques to evaluate lung volumes in order to assess hyperinflation that is not reflected by spirometry. However, the helium dilution method is known to underestimate lung volumes, while body plethysmography measures increased lung volumes in obstructive patients [6]. After administration of tiotropium for two weeks in obstructive patients with hyperinflation, lung volumes such as residual volume (RV) and functional residual capacity (FRC) measured with body plethysmography decreased, while RV and FRC measured by the helium dilution method increased [7].
Impulse oscillometry (IOS) can detect distal airway malfunctions that are not measured with normal spirometry. COPD patients have higher total resistance (R5) and peripheral resistance (R5-R20), and a more negative reactance at 5 Hz (X5), than healthy never-smokers [8]. An increased effect on R5, R5-R20, and X5 was seen with increased disease severity. However, none of the IOS parameters could separate healthy never-smokers from GOLD1 [8]. Interestingly, subgroups of COPD patients showed normal IOS values, as some patients with low reactance area (AX) displayed low FEV 1 , and patients with abnormal R5 showed less emphysema [9]. Several studies have shown a correlation between several IOS parameters and FEV 1 [8,10,11], CT scans, dyspnea, and health status [12]. Frantz et al. recently showed that patients with self-reported chronic bronchitis, emphysema, or COPD have higher resistance and lower reactance than patients without self-reported disease, independent of spirometry-based diagnosis [13]. This suggests that IOS could be used to detect pathological changes in COPD earlier than spirometry. In contrast, it has been shown that commonly used pulmonary function tests were more sensitive in detecting COPD than was IOS but had the same specificity in excluding COPD [14].
The aim of the present study was to relate established flow-volume spirometry values to other, more advanced measures of lung function using body plethysmography, single breath helium dilution with CO-diffusion, and IOS in COPD patients at different stages and in healthy smokers who have not developed COPD. A secondary aim was to achieve a better characterization of the lung function impairments of importance in different degrees of COPD. We hope to expand the characterization of COPD patients using parameters beyond the normally used flow-volume measurements to obtain an extended picture of the lung physiology in different COPD phenotypes.
Study Design.
The study was approved by the Regional Ethical Review Board in Lund (431/2008), and all study participants signed written informed consent. A physical examination was performed before the start of the study. All subjects performed IOS (Jaeger MasterScreen, Erich Jaeger GmbH, Würzburg, Germany), body plethysmography together with flow-volume spirometry (MasterScreen Body, Jaeger), and single breath helium dilution with CO-diffusion test (MasterScreen Diffusion, Jaeger), in the given order. FEV 1 and FVC were measured using established flow-volume spirometry, and FEV 1 /FVC was calculated. From body plethysmography (BP), inspiratory resistance (R in ), expiratory resistance (R ex ), IC, RV BP , total lung capacity (TLC BP ), and FRC BP were recorded. The single breath helium dilution with CO-diffusion technique (SB) estimates lung volumes such as RV SB , TLC SB , and FRC SB ; diffusing capacity of the lung for carbon monoxide (DLCO) and alveolar volume (VA) were measured, and DLCO/VA was calculated. Resistance at 5 Hz (R5; total resistance) and 20 Hz (R20; central resistance), resonance frequency (Fres), reactance at 5 Hz (X5), and reactance area (AX) were measured by IOS, and R5-R20 (peripheral resistance) was subsequently calculated. All lung function measurements were made according to ERS/ATS standardizations [15][16][17]. Reference values established by Crapo were used [18]. Information about COPD symptoms was documented in a self-completed Clinical COPD Questionnaire (CCQ) [19].
Statistics.
Nonparametric unpaired data were analyzed first using the Kruskal-Wallis test for trend analyses between several groups and thereafter with the Mann-Whitney test between two groups (with correction for ties). Paired data were analyzed using the Wilcoxon test. Correlations were analyzed using Spearman's nonparametric correlation test. All statistical analyses were done using SPSS 20.0 for Windows (SPSS, Inc., Chicago, IL, USA), and a P value <0.05 was considered significant. All data are presented as median (interquartile range).
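As a concrete illustration of this testing sequence, the sketch below (Python with scipy; group names and numbers are invented for illustration and are not study data) runs the Kruskal-Wallis trend test, follows up with pairwise Mann-Whitney tests, and computes a Spearman correlation:

```python
from scipy import stats

def compare_groups(groups, alpha=0.05):
    """groups: dict mapping group name -> list of measurements."""
    # Overall nonparametric trend test across several groups.
    _, p_overall = stats.kruskal(*groups.values())
    results = {"kruskal_p": p_overall, "pairwise": {}}
    if p_overall < alpha:
        names = list(groups)
        # Follow-up pairwise tests between two groups at a time.
        for i in range(len(names)):
            for j in range(i + 1, len(names)):
                _, p = stats.mannwhitneyu(groups[names[i]], groups[names[j]])
                results["pairwise"][(names[i], names[j])] = p
    return results

example = {"controls": [2.1, 2.4, 2.0, 2.6],
           "GOLD2": [3.0, 3.4, 2.9, 3.8],
           "GOLD3": [4.1, 4.6, 3.9, 5.0]}
print(compare_groups(example))

# Spearman's nonparametric correlation between two lung function parameters.
rho, p_rho = stats.spearmanr([1.2, 1.5, 2.0, 2.4], [0.9, 1.1, 1.6, 2.1])
```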
Patient Characteristics.
There were no significant differences in sex or body mass index between healthy smokers and COPD patients (Table 1). All subjects were of similar age (except for patients with GOLD2, who were younger than healthy controls) and had similar pack years (except for patients with GOLD3, who had more pack years). The CCQ value increased with increasing GOLD stage and was higher in GOLD stages 2-4 compared to healthy smokers (Table 1). One healthy smoker, three patients with GOLD2, and one patient with GOLD4 had low levels of alpha 1 antitrypsin (<0.86 g/L for men and <0.94 g/L for women). According to patient classification, FEV 1 /FVC differed significantly between healthy smokers and GOLD1 and continued to decrease with increasing GOLD stage. An interesting increase in FVC%p was seen in GOLD1 compared to healthy smokers; thereafter FVC%p decreased with increasing GOLD stage.
[Table 1 footnote: * significant difference compared to healthy smokers, † significant difference compared to GOLD1, ‡ significant difference compared to GOLD2, # significant difference compared to GOLD3; one symbol flagging P < 0.05, two symbols flagging P < 0.01, and three symbols flagging P < 0.001. SABA: short acting beta agonist, LAMA: long acting muscarinic antagonist, LABA: long acting beta agonist, ICS: inhaled corticosteroids, O 2 : oxygen therapy. All data are presented as median (interquartile range) unless otherwise stated.]
Body Plethysmography.
The Kruskal-Wallis test showed an overall increasing trend among the groups for both R in and R ex (P < 0.001). Both R in and R ex measured with body plethysmography were increased in GOLD2-4 compared to healthy smokers (Figures 1(a) and 1(b)). IC was decreased, but only in later stages of the disease (GOLD3-4) (Table 2).
Increase in Lung Volume Measured by Body Plethysmography and Single Breath Helium Dilution with CO-Diffusion Already in GOLD1.
An increasing trend among all the groups was seen for TLC%p BP (P < 0.01), RV%p BP (P < 0.001), and VA%p SB (P < 0.001) using the Kruskal-Wallis test. Interestingly, both TLC%p BP and FRC%p BP measured with body plethysmography were already significantly increased in GOLD1 (Table 2). In conjunction with this, the alveolar volume (VA%p) measured by single breath helium dilution with CO-diffusion was increased in GOLD1 and decreased in GOLD2-4 compared to healthy smokers (Figure 2).
Diffusing Capacity Decreased with Increasing GOLD Stage.
An overall difference between the groups regarding diffusing capacity was detected using the Kruskal-Wallis test. The diffusing capacity (DLCO%p) was decreased in GOLD2-4 compared to healthy smokers. When divided by the alveolar volume (DLCO/VA), a decrease was already seen from GOLD1, due to the early increase in VA%p seen in GOLD1, and extended to GOLD4 (Figure 2, Table 2).
Difference in RV and TLC Measured by Body Plethysmography and Single Breath Helium Dilution with CO-Diffusion.
RV measured with body plethysmography (RV%p BP ) was increased only in later stages of the disease (GOLD3-4, Table 2). In contrast, a parallel decrease in RV measured by single breath helium dilution with CO-diffusion (RV%p SB ) was seen (Figure 3(a)), decreasing with advancing GOLD stages. This indicates increased air trapping. To emphasize the outcome on individual RV, the difference between RV measured with body plethysmography and by single breath helium dilution with CO-diffusion was calculated (RV%p BP−SB ). A clear increasing pattern in RV%p BP−SB was seen with increasing GOLD stage (Figure 3(c)), already from GOLD2.
A similar pattern was seen for TLC, but not as pronounced as for RV. An increase in TLC%p BP was seen in GOLD3-4, together with a decrease in TLC%p SB (Figure 3(b)) in GOLD2-4. Individual differences in TLC%p (TLC%p BP−SB ) show a clear increasing pattern through the GOLD stages already from GOLD2 (Figure 3(d)).
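A minimal sketch of how the BP−SB difference parameters described above are computed (illustrative values only, not the authors' code):

```python
def bp_sb_difference(pct_pred_bp: float, pct_pred_sb: float) -> float:
    """Difference between body plethysmography and helium dilution values
    of the same volume, both in percent of predicted (e.g. RV%p or TLC%p).
    Positive values indicate gas seen by the plethysmograph that the helium
    tracer does not reach, i.e. trapped air."""
    return pct_pred_bp - pct_pred_sb

rv_diff = bp_sb_difference(145.0, 95.0)   # RV%p BP-SB, illustrative values
tlc_diff = bp_sb_difference(115.0, 90.0)  # TLC%p BP-SB, illustrative values
```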
IOS Parameters Increased with Increasing GOLD Stage.
Trends of difference between groups were detected by the Kruskal-Wallis test, and all IOS parameters showed similar patterns, with no difference between healthy smokers and GOLD1, but increasing significantly from GOLD2 (except for R20) to GOLD4 (Figure 4, Table 3).
[Figure 1: R in (a) and R ex (b) measured by body plethysmography in controls (healthy smokers) and COPD patients with GOLD stage 1-4. * Significant difference compared to healthy smokers, † significant difference compared to GOLD1, ‡ significant difference compared to GOLD2, # significant difference compared to GOLD3; one symbol flagging P < 0.05, two symbols flagging P < 0.01, and three symbols flagging P < 0.001. Data are presented as individual dots together with median with interquartile range.]
Established FEV 1 %p Did Not Correlate with Extended Lung Volume and Diffusing Capacity Measurements.
Due to an increasing effect in all lung function parameters with increasing GOLD stage, there was also an evident overall correlation between all lung function parameters within all subjects (data not shown). When correlating the conventionally used parameter FEV 1 %p within each GOLD stage, no correlation was seen with any parameters measured by body plethysmography, single breath helium dilution with CO-diffusion, or IOS. Correlations to a subset of the parameters (those that differ most pronouncedly between the different GOLD stages) are shown in the accompanying table.
Correlations between Parameters of Resistance Measured by Body Plethysmography and IOS, but Not to Lung Volume or Diffusing Capacity Parameters.
An interesting finding was that resistance parameters measured by body plethysmography (R in and R ex ) correlated significantly with several resistance and reactance parameters measured by IOS. R in and R ex correlated with R5, R20, R5-R20, and Fres (Table 4) in most GOLD stages (and most pronouncedly in early GOLD stages) and with AX and X5 in all GOLD stages. However, neither resistance parameters measured by body plethysmography nor IOS (except for R5-R20 in GOLD4) correlated with lung volume or diffusion parameters in any GOLD stage.
[Figure 2: single breath helium dilution with CO-diffusion parameters, including DLCO SB /VA%p (c), in controls (healthy smokers) and COPD patients with GOLD stage 1-4. Significance symbols as in Figure 1. Data are presented as individual dots together with median with interquartile range.]
Dyspnea Did Not Correlate to Lung Function Parameters in Different GOLD Stages.
The CCQ score increased with increasing GOLD stage (Table 1), and hence there was an apparent overall correlation with all lung function parameters. However, within the different GOLD stages there was no correlation between the CCQ score and any lung function parameter measured with spirometry, body plethysmography, single breath helium dilution with CO-diffusion, or IOS.
Discussion
The main finding of this study was that established flow-volume parameters, such as FEV 1 , did not correlate with advanced measurements of lung volume, diffusing capacity, and resistance. This illustrates that FEV 1 alone is not a good parameter when used for diagnosis and monitoring of COPD since it does not represent the whole picture of the disease. An interesting parameter was, however, the difference in RV%p measured with body plethysmography and single breath helium dilution with CO-diffusion. RV%p BP measured with body plethysmography increased in parallel with a decrease in RV%p SB measured with single breath helium dilution with CO-diffusion with increasing COPD severity. When using the difference between the two RV (RV%p BP−SB ), a clearer and more pronounced pattern appeared, and the effect on lung volume becomes apparent at an earlier disease stage. This provides a good opportunity to measure air trapping and degree of hyperinflation. RV%p BP−SB also correlated with several lung volume parameters, such as IC%p, FRC%p, TLC%p, and DLCO/VA%p, showing this to be an important factor in COPD characterization. A similar parameter, with similar characteristics, was the difference between TLC%p measured with body plethysmography and single breath helium dilution with CO-diffusion. However, it was not as pronounced as the difference in RV%p, and hence of less importance. When comparing RV and TLC from the different measurement methods, a significant difference was already seen in healthy smokers, most probably due to methodological dissimilarities (single breath helium dilution with CO-diffusion measures only the volume communicating with the ventilated air space, while body plethysmography also measures trapped air space). An important aim was to find a lung function parameter that may show early signs of COPD, since COPD is an irreversible progressive disease. When diagnosed with COPD today, the disease has already progressed to a partly irreversible limitation in airflow. It is therefore important to identify patients at an earlier stage, so that novel therapies for earlier disease progression can be developed. It is thus also important to study the initial changes in COPD leading to severe stages. Interesting findings in the present study were increases in RV BP %p, RV SB %p, TLC BP %p, TLC SB %p, FRC%p, and VA%p already in GOLD1, with the increase in VA%p subsequently resulting in a parallel decrease in DLCO/VA%p. These could be the first signs of inadequate elasticity in GOLD1, resulting in increased lung volumes but sustained flow-volume parameters.
[Figure 3: RV%p (a) and TLC%p (b) measured by body plethysmography and single breath helium dilution with CO-diffusion. Difference in RV%p (RV%p BP−SB ) (c) and TLC%p (TLC%p BP−SB ) (d) measured by body plethysmography and single breath helium dilution with CO-diffusion in controls (healthy smokers) and COPD patients with GOLD stage 1-4. * Significant difference compared to healthy smokers, † significant difference compared to GOLD1, ‡ significant difference compared to GOLD2, # significant difference compared to GOLD3, § significant difference between measurement from body plethysmography compared to single breath helium dilution with CO-diffusion; one symbol flagging P < 0.05, two symbols flagging P < 0.01, and three symbols flagging P < 0.001. Data are presented as median (IQR) in (a)-(b) and as individual dots together with median with interquartile range in (c)-(d).]
All lung function parameters were affected with an increasing pattern through GOLD1-4, but overall there are only minor differences between healthy smokers and GOLD1. In contrast, there are marked effects in GOLD3-4, while the patients in GOLD2 show a more variable pattern, presenting a heterogeneous group of patients with overlapping lung function results similar to both GOLD1 and GOLD3. This was most clearly seen for Fres, RV BP %p-RV SB %p, and TLC BP %p-TLC SB %p (Figures 3-4). The explanation for this is not known, and we can only speculate that the COPD in patients with GOLD1 is possibly due only to chronic bronchitis, while patients with GOLD3-4 have additional emphysema formation. The patients in GOLD2 could be a heterogeneous group with either chronic bronchitis alone or in combination with additional emphysema. We aim to investigate this hypothesis further because of the importance of categorizing the disease not only by severity but also by disease pattern and phenotype in order to develop more specific therapies. Other interesting findings were the correlations between several resistance parameters measured by body plethysmography and IOS. These resistance parameters did not relate to lung volume and diffusing capacity parameters, suggesting different pathological entities and thereby different COPD phenotypes. Although IOS is an easy method to use, it may not replace spirometry but could be used as a complement or in cases when spirometry cannot be performed. These findings are in accordance with previous speculations on lung diseases overall [20].
[Figure 4: R5-R20 (a) and Fres (b) measured by impulse oscillometry in controls (healthy smokers) and COPD patients with GOLD stage 1-4. * Significant difference compared to healthy smokers, † significant difference compared to GOLD1, ‡ significant difference compared to GOLD2, # significant difference compared to GOLD3; one symbol flagging P < 0.05, two symbols flagging P < 0.01, and three symbols flagging P < 0.001. Data are presented as individual dots together with median with interquartile range.]
The use of a self-completed quality-of-life questionnaire is a subjective measure, and its value as a tool in diagnosing COPD is questionable [21]. In the present study there was an increase in CCQ with increasing GOLD stage, and subsequently an overall correlation with all lung function parameters. However, when subgrouped within each GOLD stage, there was no correlation between CCQ and any lung function parameter, even though some of the groups were very heterogeneous.
The diagnostic use is hence of minor interest, but the questionnaire could be valuable in following up the progress of the disease. It would, however, be interesting to compare the lung function parameters to other markers of disease severity, such as the 6-minute walk test, mMRC score, exacerbation frequency, or oxygen saturation, to investigate whether any lung function parameters correlate better with these than FEV 1 does. These could possibly then be used to classify disease severity, phenotype the disease, and work as a tool in regulating medication use.
In conclusion, the present study shows that the use of only FEV 1 in COPD diagnosis and monitoring gives an incomplete characterization of the patients. Extended lung function measurements using body plethysmography, single breath helium dilution with CO-diffusion, and IOS show that there was no correlation between FEV 1 and more advanced lung volume, diffusing capacity, and resistance parameters within different COPD stages. However, other flow-volume parameters (FVC, FEV 1 /FVC, and IC) are related to several more advanced lung function parameters. These parameters should preferably be taken into consideration when access to more advanced equipment is limited. An interesting parameter is the difference in RV measured by body plethysmography and single breath helium dilution with CO-diffusion, which gives a more pronounced measure of air trapping and hyperinflation. Different lung function parameters are of importance in different COPD stages, and a more thorough patient characterization is important for understanding the condition and giving better options for treatment in the future. | 2018-04-03T05:03:55.407Z | 2013-06-13T00:00:00.000 | {
"year": 2013,
"sha1": "60df3c1f1262fbc98aa1b00875a9f69f55f938f4",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/pm/2013/782052.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b15cfd79a4a203e8d9f783a2e03f42d1dffc0631",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
237492581 | pes2o/s2orc | v3-fos-license | The impact of tumor detection method on genomic and clinical risk and chemotherapy recommendation in early hormone receptor positive breast cancer
Background Symptomatic breast cancers share aggressive clinico-pathological characteristics compared to screen-detected breast cancers. We assessed the association between the method of cancer detection and genomic and clinical risk, and its effect on adjuvant chemotherapy recommendations. Patients and methods Patients with early hormone receptor positive (HR+) HER2neu-negative (HER2-) breast cancer, and known OncotypeDX Breast Recurrence Score test were included. A natural language processing (NLP) algorithm was used to identify the method of cancer detection. The clinical and genomic risks of symptomatic and screen-detected tumors were compared. Results The NLP algorithm identified the method of detection of 401 patients, with 216 (54%) diagnosed by routine screening, and the remainder secondary to symptoms. The distribution of OncotypeDX recurrence score (RS) varied between the groups. In the symptomatic group there were lower proportions of low RS (13% vs 23%) and higher proportions of high RS (24% vs. 13%) compared to the screen-detected group. Symptomatic tumors were significantly more likely to have a high clinical risk (59% vs 40%). Based on genomic and clinical risk and current guidelines, we found that women aged 50 and under, with a symptomatic cancer, had an increased probability of receiving adjuvant chemotherapy recommendation compared to women with screen-detected cancers (60% vs. 37%). Conclusions We demonstrated an association between the method of cancer detection and both genomic and clinical risk. Symptomatic breast cancer, especially in young women, remains a poor prognostic factor that should be taken into account when evaluating patient prognosis and determining adjuvant treatment plans.
Introduction
Breast cancer is detected by screening mammography, self-detection and, rarely, by clinical breast exam. At the time of diagnosis, 90% of cases are found to be early breast cancer, in which the disease is confined to the breast and regional lymph nodes; these patients are treated with curative intent [1].
Following the implementation of screening mammography, most breast cancers in the United States are diagnosed by screening mammography, while approximately one-third are diagnosed as a symptomatic tumor [2-4]. Detection methods vary with age. Young women (<50 years) are more likely to present with palpable tumors, whereas identification of cancer by screening mammography increases with age [2,3].
Several studies have demonstrated that symptomatic breast cancers share more aggressive clinico-pathological characteristics, including larger size, higher proportion of node positive disease, higher grade and hormone receptor negative subtypes, compared to screen-detected breast cancers [5-12]. Accordingly, patients with screen-detected breast cancers have improved survival rates compared to those with symptomatic cancers, specifically among those diagnosed with Luminal A subtype [5,6,9,11].
The decision to administer adjuvant chemotherapy in addition to the standard preventive endocrine therapy for hormone receptor positive (HR+) HER2neu-negative (HER2-) breast cancer patients has evolved and currently is often guided by the combination of clinical parameters and genomic tests [13,14]. The OncotypeDX Breast Recurrence Score test was found to be both prognostic for disease recurrence [15,16] and predictive for adjuvant chemotherapy benefit in node negative patients [17-20] and in postmenopausal women with 1-3 positive nodes [21-24]. OncotypeDX did not demonstrate predictivity for chemotherapy benefit in premenopausal node positive patients [24].
In the current study, using data on recurrence scores of consecutive women with early HR+ HER2- breast cancer, we assessed the association between the method of cancer detection and the genomic and clinical risk. We implemented a natural language processing (NLP) algorithm to extract the method of tumor detection from the electronic medical record (EMR) and evaluated the contribution of the cancer detection method to adjuvant chemotherapy recommendations, based on the most contemporary treatment guidelines.
Patients and data retrieval
All patients with known OncotypeDX recurrence score (RS), diagnosed with HR positive HER2 negative early breast cancer between 2004 and 2020 at the Tel Aviv Sourasky Medical Center (TASMC), were included. Patients with HER2 positive disease (HER2 +3 or HER2 +2 with positive HER2 FISH) were excluded. The OncotypeDX test was completed to guide chemotherapy recommendation for women with tumors larger than 1 cm. Patients with node negative disease comprised the majority of the cases. Node positive patients for whom the institutional tumor board believed chemotherapy could potentially be omitted based on the test results were included as well. The test was conducted on surgical specimens in the majority of cases (90%) and on biopsy specimens in only a minority. Women with clear indications for neoadjuvant therapy were not referred for OncotypeDX testing. We retrospectively retrieved pathological characteristics including age, tumor size, grade, Ki67, progesterone receptor (PR) status, HER2 level, and nodal status. Luminal subtype was defined with pathology-based surrogate definitions (ESMO criteria): Luminal A-like subtype was defined if PR > 20% and Ki67 ≤ 14%; Luminal B-like subtype was defined if PR < 20% and/or Ki67 > 14% [25]. The study was approved by the local Institutional Ethics Committee, Number TLV18-0426.
We analyzed all free-text patient visit summaries from the breast-oncology unit and the breast-cancer surgery unit. We developed a rule-based NLP information-extraction algorithm to identify the initial method of tumor detection, analyzing free-text medical reports, written in Hebrew. Our algorithm was designed to search for terms indicating the method of tumor detection. A symptomatic tumor was defined if prior to the diagnostic process there was a palpable tumor, breast pain, or changes in the morphological appearance of the breast noted by the patient or her physician; a screen-detected tumor was defined if the tumor was detected during a routine screening mammography or ultrasound exam. A full description of the algorithm and the validation process are provided in Appendix A.
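The following Python sketch illustrates the kind of rule-based matching and negation handling described here; the actual algorithm operated on Hebrew text with 35 symptomatic and 19 screening expressions, so the English terms below are invented stand-ins:

```python
import re

# Invented English stand-ins for the Hebrew expression lists described above.
SYMPTOMATIC_TERMS = ["palpable tumor", "felt pain in the breast", "palpated a mass"]
SCREENING_TERMS = ["routine mammography", "screening mammography", "screening ultrasound"]
NEGATIONS = ["did not", "no evidence of", "without"]

def detect_method(visit_summaries):
    """Return the detection method from visit notes ordered chronologically."""
    for text in visit_summaries:
        low = text.lower()
        for term in SYMPTOMATIC_TERMS:
            for m in re.finditer(re.escape(term), low):
                # Simple negation rule: look back a short window before the match.
                window = low[max(0, m.start() - 30):m.start()]
                if not any(neg in window for neg in NEGATIONS):
                    return "symptomatic"
        if any(term in low for term in SCREENING_TERMS):
            return "screen-detected"
    return "unknown"  # no expression found in any summary

print(detect_method(["Routine mammography revealed a suspicious 1.2 cm lesion."]))
```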
Assessment of the genomic and clinical risk
Genomic risk: Genomic risk was defined according to the TAILORx study [19]: OncotypeDX RS ≤ 10 was considered low risk, 10 < RS ≤ 25 was considered intermediate risk, and RS ≥ 26 was considered high risk.
Clinical risk: The clinical risk assessment was based on the Adjuvant! Online algorithm (version 8), integrating tumor size, grade, and nodal status [26,27]. Since Adjuvant! is no longer available online, we used a binary clinical-risk categorization (low vs. high) model based on the algorithm, as applied in the MINDACT trial (see appendix table S13 in Ref. [28]). A low clinical risk was defined as greater than 92% probability of breast cancer-specific survival at 10 years in women with HR+ HER2- tumors who received endocrine therapy alone [28]. For N0/N1mic patients, clinical risk was defined as low if one of the following conditions was present: grade I and tumor size ≤ 3 cm, or grade II and tumor size ≤ 2 cm, or grade III and tumor size ≤ 1 cm. For N1 patients, clinical risk was defined as low only if the tumor was grade I and ≤ 2 cm. Otherwise, the clinical risk was defined as high.
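The genomic and clinical risk definitions above translate directly into a short decision routine; the sketch below is a plain transcription of the stated cutoffs (not code from the study):

```python
def genomic_risk(rs: int) -> str:
    # TAILORx-based cutoffs as stated above.
    if rs <= 10:
        return "low"
    return "intermediate" if rs <= 25 else "high"

def clinical_risk(grade: int, size_cm: float, nodal: str) -> str:
    # Binary MINDACT-style categorization as stated above.
    # nodal: "N0/N1mic" or "N1" (1-3 positive nodes).
    if nodal == "N0/N1mic":
        low = ((grade == 1 and size_cm <= 3.0) or
               (grade == 2 and size_cm <= 2.0) or
               (grade == 3 and size_cm <= 1.0))
    else:
        low = (grade == 1 and size_cm <= 2.0)
    return "low" if low else "high"
```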
Assessment of the probability for adjuvant chemotherapy recommendation
The probability for adjuvant chemotherapy recommendation was calculated based on the genomic and clinical risk using the model suggested by the phase III TAILORx study [19], the subsequent analysis by Sparano et al. [29] and the recently published RxPonder results [24].
In the TAILORx trial, node-negative patients with RS ≤ 25 did not benefit, while node-negative patients with high genomic risk (RS ≥ 26) did benefit from adjuvant chemotherapy [18-20]. However, an exploratory analysis revealed that younger women under the age of 50 may benefit from adjuvant chemotherapy even with lower genomic risk (RS 16-25) [19]. A secondary analysis of the TAILORx trial demonstrated that in node-negative younger women the benefit from adjuvant chemotherapy is defined by both the genomic risk and the clinical risk [29]. According to this model, adjuvant chemotherapy should be considered in node-negative women of all ages with a high RS (≥ 26). Additionally, chemotherapy should be considered in younger women (age ≤ 50) with a RS of 16 or higher, based on their clinical risk: in high clinical risk tumors, chemotherapy should be considered with a RS ≥ 16, and in low clinical risk tumors with a RS ≥ 21.
The RxPonder trial demonstrated that postmenopausal node-positive (N1) patients with RS ≤ 25 did not benefit from adjuvant chemotherapy. However, premenopausal node-positive (N1) patients with RS ≤ 25 did benefit from adjuvant chemotherapy regardless of OncotypeDX RS [24]. According to those results, node-positive postmenopausal patients should be recommended for adjuvant chemotherapy only with a high RS (≥ 26). Chemotherapy should be advised to all node-positive premenopausal patients regardless of RS. Patients with more than 3 positive lymph nodes (N2-N3) were excluded from this analysis as there is no evidence that chemotherapy can be omitted in this population.
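Combining the TAILORx- and RxPonder-derived rules, the recommendation logic can be sketched as follows (a transcription of the rules stated above; age ≤ 50 is used as the premenopausal surrogate, and N2-N3 patients are out of scope):

```python
def recommend_chemo(age: int, node_positive: bool, rs: int, clin_risk: str) -> bool:
    """Adjuvant chemotherapy recommendation per the rules stated above."""
    if node_positive:                # N1 (1-3 positive nodes)
        if age <= 50:
            return True              # RxPonder: benefit regardless of RS
        return rs >= 26              # postmenopausal: high RS only
    # Node-negative (TAILORx and its secondary analysis):
    if rs >= 26:
        return True                  # all ages
    if age <= 50:
        if clin_risk == "high":
            return rs >= 16
        return rs >= 21
    return False
```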
Statistical analysis
Demographic and clinico-pathological data of patients in the two groups (symptomatic and screen-detected) were compared, using the chi-square test for categorical variables and the t-test for continuous variables. The genomic risk and the clinical risk as well as the probability of adjuvant chemotherapy was compared between the screen-detected and the symptomatic groups using the chi-square test. All p values were two-sided and p < .05 was considered significant. Statistical analysis was performed with IBM SPSS statistics for Windows, version 25 (IBM Corp., Armonk, N.Y., USA).
Patient characteristics
The cohort included 962 consecutive patients with known OncotypeDX scores and available EMRs, who were diagnosed between 2004 and 2020. The NLP algorithm successfully extracted the method of cancer detection in 401 patients. For the remainder of the cohort, access to initial diagnostic data was not available as it was stored in a different software system. Most of the women (N = 216; 53.9%) were detected by routine screening, and 185 (46.1%) patients by self-examination or symptoms (Fig. 1). Patient characteristics are summarized in Table 1. Symptomatic women were younger (mean age 53 vs. 61, p < .001), had larger tumors (46% were ≥ 2 cm compared to 18% of screen-detected tumors; p < .0001) and high Ki67 (24% had Ki67 > 14% compared to 15% of screen-detected tumors; p < .004). A quarter (N = 104) of the cohort presented with node positive disease, with no significant differences in the proportions of node positive patients between the two groups. A higher proportion of patients for whom the biopsy specimen was used for genomic testing had symptomatic tumors (16% compared to 6% of screen-detected tumors; p < .001). Approximately half of the cohort had known Ki67 and PR status, which enabled us to define the luminal subtype based on ESMO criteria. No significant differences were found in the proportions of Luminal A-like and Luminal B-like tumors (50% were Luminal B-like in the screen-detected group compared to 59% in the symptomatic group; p = .2).
Impact of initial method of breast cancer detection on the genomic risk of recurrence
The distribution of OncotypeDX RS was significantly different between the two groups (Fig. 2A, p = .003). The proportion of patients with low RS (0-10) was higher in the screen-detected group compared to the symptomatic group (23% vs. 13%). Conversely, the proportion of patients with high RS (≥ 26) was higher in the symptomatic group (24% vs. 13% in the screen-detected group).
The association between the method of cancer detection and the RS was even more pronounced in women 50 years or younger (Fig. 2B; p = .02). A smaller proportion of women with low or intermediate RS and a higher proportion of women with high RS were identified in the symptomatic group (11% vs. 21%, 56% vs. 72%, and 33% vs. 7%, respectively). In women older than 50 there was a higher proportion of low RS in the screen-detected group, but the proportions of high RS were similar in both groups (Fig. 2C; p = .28).
Impact of initial method of breast cancer detection on the clinical risk of recurrence
The clinical risk, as assessed using tumor size, grade, and nodal status, was higher in the symptomatic group: fifty-nine percent of patients who presented with symptomatic cancer had high clinical risk of recurrence compared to only 40% with screen-detected tumors (Fig. 3A, p = .0001). Similar trends were seen when assessing the different age groups separately (Fig. 3B).
Probability of adjuvant chemotherapy recommendation based on the initial method of tumor detection
When applying the model based on the TAILORx analysis [29] and the RxPonder results [24] to our data (Table 2), in women over 50 the probability of adjuvant chemotherapy recommendation was comparable in the symptomatic and screen-detected groups (15% vs. 13%, respectively; p = .7). In women who were 50 years or younger, the probability of adjuvant chemotherapy recommendation was significantly higher in women presenting with symptomatic tumors compared to women with screen-detected tumors (60% vs. 37%, respectively; p = .03).
Discussion
We examined the association between the method of cancer detection and the genomic and clinical risk of disease recurrence in women with early HR+ HER2- breast cancer. Women with symptomatic tumors had both higher clinical risk and higher genomic risk for disease recurrence compared to patients whose tumors were detected by routine screening. These findings are consistent with previous reports which have demonstrated that symptomatic tumors portend more aggressive clinico-pathological characteristics than screen-detected tumors [5-12]. Accordingly, the method of detection was found to be prognostic for disease survival [5,6] and therefore was incorporated in the PREDICT online prognostication tool together with clinical and tumor characteristics [30,31].
In accordance with previous literature [2,3,8,10,11], our study demonstrates that patients with symptomatic breast cancer tend to be significantly younger (49% of women with symptomatic cancer were ≤ 50 years compared to 13% of screen-detected patients). This observation is attributed, at least in part, to the widespread adoption of screening mammography in women over 50. The Israeli breast screening program invites average-risk women aged 50-74 to undergo screening mammography every two years. The screening compliance in this group is 75% [32]. Younger age at diagnosis is associated with more aggressive tumor behavior [33] and may explain the greater prevalence of high genomic and clinical risk tumors in symptomatic patients. However, in a subset analysis of women under the age of 50, the association between the method of detection and the genomic and clinical risk remained significant. Therefore, the higher prevalence of high genomic and clinical risk tumors in symptomatic patients cannot be explained by age alone.
There are few reports examining the association between method of detection and genomic risk. Esserman et al. [34] compared the 70-gene signature MammaPrint in two groups of women in the Netherlands. The first group included women diagnosed between 1984 and 1992, before the era of widespread use of screening mammography. The second group included women diagnosed between 2004 and 2006, when image-based screening reached 75-80% of the population. Similarly, Drukker et al. [35] analyzed 1165 patients in the MINDACT trial and compared the 70-gene signature of cancers detected by image-based screening to interval cancers. In accordance with our findings, both reports suggest that screen-detected cancers are more likely to be of low genomic risk. Additionally, in the predominantly-screened group almost a third had an ultra-low genomic risk [34], leading the authors to suggest that these tumors may account for clinical overdiagnosis. Conversely, in our cohort, in the group of women presenting with symptomatic breast cancer, 13% were found to have a low RS (≤ 10), suggesting that even ultra-low risk tumors can become symptomatic.
Adjuvant treatment recommendations have evolved tremendously over the past two decades. Presently, the decision to add adjuvant chemotherapy for HR+ early breast cancer patients is determined by clinical factors in addition to genomic features derived from molecular tests. We examined the contribution of the method of cancer detection to adjuvant chemotherapy recommendation based on the model suggested by the TAILORx results [19], the subsequent analysis by Sparano et al. [29], and the related results of RxPonder [24]. When applying this model to our results, we observed that in women aged 50 and under, symptomatic cancer significantly increases the likelihood of adjuvant chemotherapy recommendation compared to screen-detected tumors (60% vs. 37%, respectively). These results demonstrate the relevance of the tumor detection method, underlining the fact that a symptomatic tumor at presentation is prognostic, especially in young women.
The TAILORx trial demonstrated the benefit of adjuvant chemotherapy in young women with a RS > 16. However only a minority (13%) of the premenopausal women who participated in the study received ovarian suppression in addition to endocrine therapy [19]. In light of the clear benefit observed with ovarian suppression in addition to endocrine therapy in the SOFT and TEXT studies [36], it is unclear if the benefit of chemotherapy in preventing disease recurrence among premenopausal women, can be attributed at least in part to chemotherapy induced ovarian failure. Our work does not address this question and it is plausible that some patients who were recommended for chemotherapy, would have similarly benefited from ovarian suppression alone.
Approximately 25% of the women in the study had involved lymph nodes, the majority with 1-3 positive nodes (N1). Women with lymph node involvement were evenly distributed between the two study groups. A number of studies have suggested that OncotypeDX is prognostic also in women with positive lymph nodes [21-23]. The recently published results of the prospective RxPonder trial demonstrated that chemotherapy can be spared in postmenopausal women with 1-3 involved nodes and a RS ≤ 25 [24]. Most contemporary clinical guidelines have integrated OncotypeDX in the treatment algorithm of node positive (N1) HR+ HER2- early breast cancer patients [37]. Accordingly, we included node-positive (N1) patients in our analysis and assessed the probability of chemotherapy recommendation among these patients based on RS and menopausal status, in line with the RxPonder results.
In this work we used NLP algorithms to extract the method of tumor detection from free-text visit summaries. There is a growing body of literature in which computational approaches are applied for processing unstructured records to retrieve information and improve diagnosis performance [38,39], support treatment decisions [40], and improve cancer research by providing a better interface to existing knowledge platforms [41]. We believe that in our work, we have demonstrated the potential of integrating a computational approach for extracting information from oncological EMRs that outline the disease course of a patient, formatted in a completely unstructured way.
Our work has several limitations. The main limitation is the selection of women for genomic studies. Over time the recommendations for genomic tests have changed. Moreover, one can assume that in the screen-detected group many low-risk women were not selected for a genomic test, whereas in the symptomatic group many high risk women were recommended for neoadjuvant or adjuvant chemotherapy without undergoing a genomic test. However, such a differential selection bias would be expected to weaken the association we found between method of detection and RS. Additionally, the method of breast cancer detection was defined partially by self-reported data which can be influenced by recall bias and lead to misclassification. The accuracy of the NLP algorithm was 91%, allowing for misclassification of several cases as well. The relatively small final sample size limited our ability to perform multivariable analysis and examine the independent contribution of the method of detection to the recurrence score. Finally, our study does not include long term follow-up which limits our ability to examine the independent contribution of the method of diagnosis to long-term survival.
Conclusion
We demonstrated a strong association between the method of cancer detection and the genomic and clinical risk of recurrence in HR+ early breast cancer patients. Based on our data and current guidelines, most young women presenting with symptomatic breast cancer will be recommended for adjuvant chemotherapy.
Declaration of competing interest
A. Sonnenblick reports personal fees from Eli Lilly, Pfizer, Teva, Novartis, Medison and Roche, and grants from Novartis and Roche, all outside the submitted work. I. Wolf reports research grants from MSD, BMS, Roche and Novartis, all outside the submitted work. The rest of the authors declare that they have no conflict of interest.
Appendix A. Automatic extraction of the method of tumor detection using a rule-based NLP algorithm
The computational process was executed in three steps:
1. Visit summary chronology: The algorithm organized visit summaries in chronological order for each patient.
2. Expression detection: We compiled a list of terms and phrases capturing events of tumor-detection methods. The list included variations of each expression in an attempt to reflect and include different writing styles, synonyms, paraphrases, common misspellings and inflections. Overall, we applied 35 expressions indicating the symptomatic method and 19 expressions designating the screen-detection method. Hebrew equivalents of "palpable tumor" and "felt pain in the (right/left) breast" are examples of the phrases identifying the symptomatic method, while Hebrew terms stating "routine mammography" and "screening mammography" are examples of phrases identifying the screen-detection method. The search for these expressions was applied to the earliest visit summaries for each patient, in chronological order. Once the appropriate expression was identified and validated (see the Validation step), the algorithm returned the relevant detection method. If none of the expressions were identified in any of the patient's visit summaries, the algorithm halted and returned 'unknown', reflecting its inability to identify the method of tumor detection for that patient. The summaries were written in Hebrew, a highly inflected language; Hebrew words are derived from a root and a pattern, combined with prefixes and suffixes, which may interfere with the traditional way of searching text. Therefore, for each expression we considered all the relevant inflections possible in the text.
3. Validation: To eliminate false positives, some of the expressions required extra validation steps. For example, some symptomatic expressions needed validation to confirm that they were not mentioned in negation (e.g., "did not palpate a tumor"). Therefore, we created a few simple negation detection rules. Additional confirmation involved validating that a mention of a routine screening mammography did in fact result in detection of a tumor. In addition, our algorithm was validated by a breast medical oncologist. Collectively, our human evaluation set contains 101 cases (10.4% of total cases): 29 cases of the screen-detection method, 21 cases of the symptomatic method, and 51 cases of unknown detection method. The overall accuracy was 91%. The sensitivity and specificity for screen-detection were 96.5% and 95.8%, respectively, and the sensitivity and specificity for symptomatic cancers were 85.7% and 95%, respectively.
Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Data sharing
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
Ethical approval
The study was approved by the local Institutional Ethics Committee and was performed in line with the principles of the Declaration of Helsinki.
Consent to participate
As data were aggregative and anonymous, no informed consent was required by the institutional committee. | 2021-09-14T06:16:38.570Z | 2021-09-04T00:00:00.000 | {
"year": 2021,
"sha1": "863e3614e676695f8f215e38ff08f587e29d517b",
"oa_license": "CCBYNCND",
"oa_url": "http://www.thebreastonline.com/article/S0960977621004562/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e101ed2b437e0a255c9bb62eb908b4685547d6ca",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17841974 | pes2o/s2orc | v3-fos-license | Confirmation of Hot Jupiter Kepler-41b via Phase Curve Analysis
We present high precision photometry of Kepler-41, a giant planet in a 1.86 day orbit around a G6V star that was recently confirmed through radial velocity measurements. We have developed a new method to confirm giant planets solely from the photometric light curve, and we apply this method herein to Kepler-41 to establish the validity of this technique. We generate a full phase photometric model by including the primary and secondary transits, ellipsoidal variations, Doppler beaming and reflected/emitted light from the planet. Third light contamination scenarios that can mimic a planetary transit signal are simulated by injecting a full range of dilution values into the model, and we re-fit each diluted light curve model to the light curve. The resulting constraints on the maximum occultation depth and stellar density combined with stellar evolution models rules out stellar blends and provides a measurement of the planet's mass, size, and temperature. We expect about two dozen Kepler giant planets can be confirmed via this method.
Introduction
NASA's Kepler satellite has been photometrically monitoring more than 150,000 mainsequence stars since its launch in 2009. The primary goal of the mission is to determine the frequency of Earth-size planets in the habitable zones of Sun-like stars, and in this quest 2326 planetary candidates have been identified with the first 16 months of flight data (Batalha et al. 2012). Of these, 203 are giant planets with radii (R p ) between 6 -15 times the radius of Earth (R ⊕ ). With Kepler's high photometric precision, both the primary transits and secondary transits (occultations) can be measured for many of these giant planets.
Occultation measurements allow us to better characterize planets by providing constraints on the size and orbital parameters of a companion that can produce the shape of the light curve. In addition, depending on the wavelength at which they are gathered, they can provide information to estimate the effective temperature and reflectivity of the planet. The use of phase curves (the variations in the light curves of a star+planet system as the planet orbits the star) as a means to detect exoplanets was presented by Jenkins & Doyle (2003). Their analysis predicted hundreds of close-in giant planets with periods up to 7 days could be detected by Kepler via their reflected light signatures. More recent studies have shown that both transiting and non-transiting planets can be detected by measuring the variations in light induced by the companion (Mazeh & Faigler 2010;Faigler & Mazeh 2011;Shporer et al. 2011;Mazeh et al. 2012). Herein, we present a new method to confirm giant planets based solely on the analysis of light curves by modeling both transits and occultations and eliminating other potential non-planetary sources that could produce the shape of the light curve.
Nearby stars that are captured within the target star aperture can dilute the transit signal, resulting in an underestimate of the transit depth and therefore the size of the target star's companion. These third-light or 'blend' scenarios can include a background or foreground eclipsing binary star system or a physically bound stellar companion in a hierarchical triple star system (Seager & Mallén-Ornelas 2003), each of which has the potential to produce a transit-like signature.
Confirmation of Kepler candidate planets typically requires additional ground-based follow-up observations due to the prevalence of astrophysical false positives. These techniques, which include spectroscopy, speckle and adaptive optics imaging, precise Doppler measurements and combinations thereof, are used to help eliminate blend scenarios in order to confirm that a Kepler planetary candidate is indeed a planet. The number of planets that can be confirmed in this manner, however, is limited by the availability of telescope time. In this article, we present a new method of confirming giant planets without the need for follow-up observations. We generate a full phase photometric model light curve that includes the primary transits, occultations, ellipsoidal variations, Doppler beaming and reflected/emitted light from the planet. We then inject a full range of dilution values into the model to simulate third-light contamination, then re-fit the diluted light curve models to the photometry. Comparison of the fitted parameters with stellar evolution models can eliminate systems that are inconsistent with a stellar blend.
To demonstrate this confirmation method, we analyzed the photometry of a star in the Kepler Field of View which shows a signature of a transiting Jupiter-sized planet in a 1.86 day orbit. Kepler-41b was recently confirmed through radial velocity measurements (Santerne et al. 2011a), thereby providing a good test case for our method. In Section 2 we present the photometry of Kepler-41 and discuss our method to correct for systematics and stellar variability. Our full phase photometric model and best fit parameters are presented in Section 3, and the confirmation technique and results are discussed in Section 4. Albedo estimates for Kepler-41b and other hot Jupiters are discussed in Section 5, and Section 6 provides a summary.
Kepler Photometry
Kepler-41 (RA = 19h 38m 03s.18 and Dec = +45° 58' 53.9"), also identified by Kepler identification number (KID) 9410930 in the Kepler Input Catalog (KIC) (Brown et al. 2011) and by Kepler Object of Interest number KOI-196, is a G6V star with an apparent magnitude in the Kepler bandpass of K_p = 14.465. Kepler-41b was identified as a Jupiter-sized candidate companion to this star in Kepler's second data release (Borucki et al. 2011). The SOPHIE spectrograph obtained radial velocity measurements of this object (Santerne et al. 2011a), which led to an estimated mass of 0.55 ± 0.09 M_J.
The Kepler observations of Kepler-41 described in this article were acquired between 2009 May 13 and 2011 March 5 and include quarters Q1 through Q8. Data were sampled nearly continuously during each quarter at 29.42 minute long cadence (LC) intervals (where each LC includes 270 summed 6.5 second exposures). Although short cadence data (which are sampled more frequently at 58.85 s intervals) are more sensitive to the ingress/egress of a transit and can provide better constraints on the mean stellar density, they are not necessary to measure the phase curve and also are not always available. The raw pixels collected for these stars were calibrated (Quintana et al. 2010), and aperture photometry, background removal and cosmic ray corrections were performed with the Photometric Analysis (PA) software maintained by the Kepler Science Operations Center to produce the light curves (Twicken et al. 2010). Note the light curve data for this target are publicly available at the Mikulski Archive for Space Telescopes (MAST).
The effects of instrumental signals in the flux time series were mitigated by fitting and subtracting cotrending basis vectors (their use is documented in Barclay et al. (2012a)) from the light curve using the PyKE software. We used the first four cotrending basis vectors and fit them to the data using a linear least-squares approach. The light curve was stitched together by normalizing the flux by the median value per quarter. Outliers and additional remaining signatures due to instrumental artifacts (such as those due to 'sudden pixel sensitivity dropouts', as described in Smith et al. (2012)) were then identified and removed. In total, 215 measurements were discarded, yielding a total of 29,680 LC measurements.
We next applied a Fourier decomposition algorithm to separate out star spot-induced variability from the light curve. Assuming a coherent signal due to the planet and a constant orbital period, sinusoidal components to the light curve were iteratively fit and removed to filter out all frequencies that were not affiliated with the planet orbital period and its associated harmonics. Specifically, if the amplitude of the peaks in the Fourier Transform were >3.6 times the standard deviation (corresponding to 3σ), we removed those frequencies from the light curve. The PA-corrected, cotrended, and Fourier-filtered light curves for Kepler-41 are shown in Figure 1. The next section describes our light curve modeling to compute the best-fit planet parameters.
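A generic sketch of this kind of iterative sinusoid filtering is shown below (illustrative only, not the pipeline actually used; the 3.6σ amplitude threshold is the one quoted above):

```python
import numpy as np

def fourier_filter(t, flux, p_orb, n_harm=10, thresh_sigma=3.6, freq_tol=1e-3):
    """Iteratively fit and remove significant sinusoids whose frequencies
    are not the orbital frequency or one of its harmonics (generic sketch)."""
    flux = flux - np.median(flux)
    freqs = np.fft.rfftfreq(len(t), d=np.median(np.diff(t)))
    protected = np.zeros(len(freqs), dtype=bool)
    for k in range(1, n_harm + 1):                 # orbital harmonics
        protected |= np.abs(freqs - k / p_orb) < freq_tol
    protected[0] = True                            # DC term
    while True:
        amp = np.abs(np.fft.rfft(flux)) * 2.0 / len(flux)
        amp[protected] = 0.0                       # never remove these peaks
        k = int(np.argmax(amp))
        if amp[k] <= thresh_sigma * np.std(flux):  # no significant peak left
            break
        f = freqs[k]                               # offending frequency
        A = np.column_stack([np.sin(2 * np.pi * f * t),
                             np.cos(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(A, flux, rcond=None)
        flux = flux - A @ coef                     # subtract fitted sinusoid
    return flux
```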
Model Fitting
Our full phase photometric model uses a circular orbit (eccentricity = 0) and we adopt the formalism of Mandel & Agol (2002) to compute the light curve model. Figure 2 shows the phase-folded and binned photometry for the primary transit (lower curve) centered at orbital phase φ = 0 with the best-fit model shown in red. Here, φ ≡ 2π(t − T_φ)/P, where T_φ is the epoch (the time of first mid-transit) and P is the orbital period of the planet. The occultation (top curve) near orbital phase 0.5 has been magnified (see top and right axes) and the best-fit model is shown in green. Our model of the transit for Kepler-41 includes nonlinear limb darkening with four coefficients that we compute by performing a trilinear interpolation over T_eff, log g, and Z ≡ [Fe/H] using tables provided by Claret & Bloemen (2011). The stellar properties (T_eff, log g, and Z) were adopted from Santerne et al. (2011a). The occultation is modeled in the same manner, but we assume the companion is a uniform disk (we neglect limb darkening) due to the relatively short ingress and egress times.
Our model of the phase-dependent light curve takes into account the photometric variability that is induced by the companion, which includes ellipsoidal variations (F_ell), Doppler beaming (F_dop), and light contributed by the planet (F_ref, which includes both reflected star light and thermal emission). The relative flux contributions from each of these time-dependent effects are distinct and can be decomposed as
F(φ) = F_* + F_ell(φ) + F_dop(φ) + F_ref(φ),
where F_* is the illumination measured at mid-occultation (orbital phase 0.5) when the star is blocking the light from the companion, and F_tot = F_* + F_ref(φ = 0).
Ellipsoidal variations in the light curve are caused by changes in the observable surface area of the star due to tidal distortions induced by the companion (Pfahl et al. 2008). They have previously been detected in eclipsing binary stars (Wilson & Sofia 1976) and more recently in exoplanet systems (Mazeh & Faigler 2010; Faigler & Mazeh 2011; Shporer et al. 2011; Welsh et al. 2010a; Mazeh et al. 2012). The amplitude of the ellipsoidal variations is roughly equal to the ratio of the tidal acceleration to the stellar surface gravity (Pfahl et al. 2008) assuming tidal equilibrium, and can be approximated by
A_ell = α_ell (M_p/M_*) (R_*/a)^3 sin^2(i),
where M_p and M_* are the masses of the planet and star, respectively, R_* is the stellar radius, a is the semimajor axis, and i is the inclination of the orbit relative to the line of sight. The parameter α_ell is defined as
α_ell = 0.15 (15 + u)(1 + g) / (3 − u),
where u and g are the limb darkening and gravity darkening coefficients, respectively (Morris 1985). We compute u = 0.6288 and g = 0.4021 by linearly interpolating over T_eff, log g, and Z using tables provided by Claret & Bloemen (2011). For a circular orbit, the contribution of flux from ellipsoidal variations, which oscillate on timescales of half the orbital period (see Figure 3), is
F_ell(φ)/F_* = −A_ell cos(2φ).
Doppler beaming is an apparent increase/decrease in stellar flux due to Doppler shifts in the stellar spectrum that are caused by the reflex motion of the star around the center of mass due to the companion. These signals have only recently been measured in transit light curves (Mazeh & Faigler 2010; Shporer et al. 2011) and oscillate with the orbital period. Note that we did not detect variations from Doppler beaming in the Kepler-41 light curve, but we include a description here because it may be applicable to other planet candidates. The amplitude of this signal is
A_dop = α_dop (4K/c),
where c is the speed of light and α_dop is a Doppler boosting factor that depends on the wavelength of observation and on the stellar spectrum. We compute α_dop = 1.09 using the methodology described by Loeb & Gaudi (2003). For Keplerian circular orbits, the (non-relativistic) radial velocity semi-amplitude is defined as
K = (2πG/P)^(1/3) M_p sin(i) / (M_p + M_*)^(2/3),
where G is the gravitational constant. The contribution of flux from Doppler beaming is
F_dop(φ)/F_* = A_dop sin(φ).
The flux variations due to reflected/emitted light from the companion can be approximated by
F_ref(φ)/F_* = A_G (R_p/a)^2 Ψ(φ),
where A_G is the wavelength-dependent geometric albedo, and Ψ is the phase function for a diffusely scattering Lambertian sphere,
Ψ(φ) = [sin|φ| + (π − |φ|) cos|φ|] / π.
Values of the model parameters are derived using a Levenberg-Marquardt least-squares χ² approach (Press et al. 1992). We fit for the orbital period (P), epoch (T_φ), impact parameter (b), scaled planet radius (R_p/R_*), geometric albedo (A_G), secondary eclipse depth (ED), radial velocity semi-amplitude (K), ellipsoidal variations (A_ell), and mean stellar density (ρ_*). We then use a Markov Chain Monte Carlo (MCMC) method (e.g., Ford 2005) to estimate values and uncertainties using this initial model solution to seed the runs. Four chains of 10^6 samples each are run and the first 25% are discarded to account for burn-in, allowing the Markov chains to stabilize. The median values of the best-fit parameters for Kepler-41b are given in Table 1, along with the 1σ (68.3%) confidence intervals. We find M_p = 0.598 (+0.384/−0.598) M_J and R_p = 0.996 (+0.039/−0.040) R_J, yielding a mean planet density of ρ_p = 0.74 (+0.48/−0.74) gm cm^-3.
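The out-of-eclipse components of the model can be sketched numerically as follows (a schematic with illustrative, roughly Kepler-41-like amplitudes; the sign and phase conventions here are the common ones, with reflected light peaking at occultation, and may differ in detail from the paper's definitions):

```python
import numpy as np

def phase_curve(phi, a_ell, a_dop, a_g, rp_over_a):
    """Out-of-eclipse flux variations relative to the stellar flux F_*.
    phi is the orbital phase in radians with mid-transit at phi = 0."""
    f_ell = -a_ell * np.cos(2.0 * phi)      # ellipsoidal: two cycles per orbit
    f_dop = a_dop * np.sin(phi)             # beaming: brighter when star approaches
    # Lambert-sphere phase function, with full phase (psi = 1) at occultation:
    dist = np.abs(((phi + np.pi) % (2.0 * np.pi)) - np.pi)  # angle from transit
    alpha = np.pi - dist                    # phase angle: 0 at occultation
    psi = (np.sin(alpha) + (np.pi - alpha) * np.cos(alpha)) / np.pi
    f_ref = a_g * rp_over_a**2 * psi        # reflected/emitted light
    return f_ell + f_dop + f_ref

# Example with roughly Kepler-41b-like scales (illustrative numbers only):
phi = np.linspace(0.0, 2.0 * np.pi, 2000)
df = phase_curve(phi, a_ell=4.5e-6, a_dop=2.0e-6, a_g=0.2, rp_over_a=0.016)
```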
Ellipsoidal variations in the light curve are caused by changes in the observable surface area of the star due to tidal distortions induced by the companion (Pfahl et al. 2008). They have previously been detected in eclipsing binary stars (Wilson & Sofia 1976) and more recently in exoplanet systems (Mazeh & Faigler 2010;Faigler & Mazeh 2011;Shporer et al. 2011;Welsh et al. 2010a;Mazeh et al. 2012;. The amplitude of the ellipsoidal variations is roughly equal to the ratio of the tidal acceleration to the stellar surface gravity (Pfahl et al. 2008) assuming tidal equilibrium, and can be approximated by where M p and M * are the masses of the planet and star, respectively, R * is the stellar radius, a is the semimajor axis, i is the inclination of the orbit relative to the line-of-sight. The parameter α ell is defined as where u and g are the limb darkening and gravity darkening coefficients, respectively (Morris 1985). We compute u = 0.6288 and g = 0.4021 by linearly interpolating over T eff * , log g, and Z using tables provided by Claret & Bloemen (2011). For a circular orbit, the contribution of flux from ellipsoidal variations, which oscillate on timescales of half the orbital period (see Figure 3), is Doppler beaming is an apparent increase/decrease in stellar flux due to Doppler shifts in the stellar spectrum that are caused by the reflex star motion around the center of mass due to the companion. These signals have only recently been measured in transit light curves (Mazeh & Faigler 2010;Shporer et al. 2011) and oscillate with the orbital period. Note that we did not detect variations from Doppler beaming in the Kepler-41 light curve, but we include a description here because it may be applicable to other planet candidates. The amplitude of this signal is where c is the speed of light and α dop is a Doppler boosting factor that depends on the wavelength of observation and on the stellar spectrum. We compute α dop = 1.09 using the methodology as described by Loeb & Gaudi (2003). For Keplerian circular orbits, the (non-relativistic) radial velocity semi-amplitude is defined as where G is the gravitational constant. The contribution of flux from Doppler beaming is The flux variations due to reflected/emitted light from the companion can be approximated by where A G is the wavelength-dependent geometric albedo, and Ψ is the phase function for a diffusely scattering Lambertian Sphere Values of the model parameters are derived using a Levenberg-Marquardt least-squares χ 2 approach (Press et al. 1992). We fit for the orbital period (P ), epoch (T φ ), impact parameter (b), scaled planet radius (R p /R * ), geometric albedo (A G ), secondary eclipse depth (ED), radial velocity semi-amplitude (K), ellipsoidal variations (A ell ), and mean stellar density (ρ * ). We then use a Markov Chain Monte Carlo (MCMC) method (eg. Ford 2005) to estimate values and uncertainties using this initial model solution to seed the runs. Four chains of 10 6 samples each are run and the first 25% are discarded to account for burn in, allowing the Markov chains to stabilize. The median values of the best-fit parameters for Kepler-41b are given in Table 1, along with the 1σ (68.3%) confidence intervals. We find M p = 0.598 +0.384 −0.598 M J , R p = 0.996 +0.039 −0.040 R J , yielding a mean planet density ofρ p = 0.74 +0.48 −0.74 gm cm −3 . 
The best-fit amplitudes of the occultation depth, ellipsoidal variations, and variations in reflected/emitted light were found to be 60±9, 4.5 +2.8 −3.8 , and 37.4 +6.1 −6.6 ppm, respectively.
Confirmation Method
Our confirmation method involves two main steps: (1) We first simulate third light contamination scenarios by injecting a wide range of dilution factors into the full phase photometric model and re-fit each diluted light curve model to the photometry. The results from these model fits set limits on the stellar parameters of possible blends; (2) We next compare these stellar parameters with stellar evolution models to eliminate star/companion systems that are unphysical or inconsistent with a stellar blend. The goal is to determine the probability that the planet-like signature could be caused by a contaminating star in the aperture.
To model a full range of stellar blends, we iteratively dilute the best-fit model light curve with a dilution factor D, which ranges from 1% to 100% of the transit depth (in 1% intervals). For each value of D, we fit the diluted model to the light curve data and recompute χ² (the goodness-of-fit estimator), ρ*, P, b, R_p/R_*, ED, A_ell, and A_G. Figure 4 shows results from these dilution fits for Kepler-41. The χ² value and the change in χ² (∆χ²) as a function of injected dilution are shown in the top two panels. The lower six panels show the best-fit results for six of the above parameters as a function of dilution.
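A compact sketch of this dilution-injection loop follows. Here `model` and `fit` are hypothetical stand-ins for the full phase model and the Levenberg-Marquardt fit described above, not routines from the paper.

```python
# Sketch of the dilution-injection scan described in the text.
import numpy as np

def dilution_scan(time, flux, model, fit, d_grid=np.arange(0.01, 1.001, 0.01)):
    """Re-fit the phase model after injecting third-light dilution D.

    `model(t, theta)` evaluates the full phase model (normalized flux) and
    `fit(model_fn, t, y)` returns (theta_best, chi2); both are assumed
    stand-ins for the Levenberg-Marquardt machinery described in the text.
    """
    results = []
    for D in d_grid:                           # 1% ... 100% in 1% steps
        def diluted(t, theta, D=D):
            # Constant third light shrinks the apparent depth by (1 - D).
            return 1.0 - (1.0 - model(t, theta)) * (1.0 - D)
        theta_best, chi2 = fit(diluted, time, flux)
        results.append((D, chi2))
    return results
```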
To determine the maximum third light from a potential blend, we solve for the dilution value at which ∆χ² increases by 1, 4, or 9, equivalent to 1σ, 2σ, or 3σ (68.3%, 95.4%, or 99.7%) confidence intervals, respectively. These are shown by red, blue, and green horizontal lines in the top right panel of Figure 4. This resulted in maximum dilution values of 0.5 (1σ), 0.6 (2σ), and 0.67 (3σ) (shown by the vertical red, blue, and green lines in the lower six panels of Figure 4). These constraints on the dilution values in turn place limits on the valid parameter values of the companion, which we can then compare to stellar evolution models to begin ruling out stellar blends.
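Given the (D, χ²) pairs from such a scan, the threshold crossings can be located by simple interpolation. The sketch below assumes ∆χ² rises monotonically with D, as it does in Figure 4.

```python
# Locate the maximum allowed dilution at each delta-chi2 threshold.
import numpy as np

def max_dilution(results, thresholds=(1.0, 4.0, 9.0)):
    """Largest dilution whose fit degrades by less than each delta-chi2
    threshold (1, 4, 9 <-> 1, 2, 3 sigma). Assumes delta-chi2 increases
    monotonically with D, as in Figure 4."""
    D = np.array([r[0] for r in results])
    chi2 = np.array([r[1] for r in results])
    dchi2 = chi2 - chi2.min()
    return [float(np.interp(t, dchi2, D)) for t in thresholds]
```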
We use Yonsei-Yale (YY) stellar evolution models (Yi et al. 2003; Demarque et al. 2004), which provide a stellar age, T_eff*, R_*, ρ*, and log g for a given M_* and Z. We start with a grid of stellar masses (M_* = 0.4-5 M_⊙ with 0.1 M_⊙ increments) and metallicities (Z = 0.00001, 0.0001, 0.0004, 0.001, 0.004, 0.007, 0.01, 0.02, 0.04, 0.06, 0.08) that covers the full range of input values to the YY models. For each (M_*, Z) pair, we extract all YY models that have ρ* values within the constraints set by the dilution fits. For Kepler-41, the valid values of ρ* range from 1.17 gm cm⁻³ to a maximum value of 1.7795, 1.7800, or 1.7863 gm cm⁻³ for the 1σ, 2σ, and 3σ constraints, respectively. Note that the confidence intervals that provide constraints on the fit parameters are measured with respect to ∆χ² rather than the distribution of each fit parameter. We require that the age from each model is less than the age of the Universe (taken to be 14 Gyr), and consider only those with α-enhanced mixture equal to the Solar mixture. The total number of models extracted for Kepler-41 was 1931, 1947, and 1950 for the 1σ, 2σ, and 3σ cases, respectively.
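The density-window screen can be expressed as a simple filter. The `yy_models` container and its field names below are hypothetical, standing in for a parsed YY model grid.

```python
# Sketch of the stellar-evolution screen: keep YY models whose mean
# stellar density falls in the dilution-allowed window and whose age
# is physical. Field names are invented for the example.
def viable_yy_models(yy_models, rho_min=1.17, rho_max=1.7863, max_age_gyr=14.0):
    """`yy_models` is an assumed iterable of dicts with keys 'rho'
    (gm cm^-3) and 'age_gyr'; defaults use the 3-sigma window quoted above."""
    return [m for m in yy_models
            if rho_min <= m["rho"] <= rho_max and m["age_gyr"] <= max_age_gyr]
```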
For each model, we derive the planet mass M_p, radius R_p, equilibrium temperature T_eq,p, effective temperature T_eff,p, and mean density ρ_p as follows. We first take the ρ* value for the given model and interpolate over the valid range of ρ* (as shown in Figure 4) to find the corresponding dilution factor. This dilution value is then used to determine the values of the additional fit parameters (R_p/R_*, A_ell, P, b, and ED) for that model. From these estimates, we can derive the remaining parameters that we use to characterize the star-companion system.
The planet radius is found by R_p = (R_p/R_*) R_*. The planet mass is roughly proportional to the amplitude of the ellipsoidal variations (Pfahl et al. 2008) and can be estimated using Equation 2. We solve for a/R_* by combining the mean stellar density ρ* = M_*/((4/3)πR_*³) with Kepler's third law (for M_p ≪ M_*), M_* = 4π²a³/(P²G), yielding

$$\frac{a}{R_*} = \left(\frac{G P^{2} \bar\rho_*}{3\pi}\right)^{1/3}.$$

The inclination is computed from the impact parameter, which (for a circular orbit) is b = (a/R_*) cos i. The planet mass is then

$$M_p = \frac{A_{\rm ell}\,M_*}{\alpha_{\rm ell}\sin^{2} i}\left(\frac{a}{R_*}\right)^{3},$$

which, along with R_p, provides a measurement of the planet density ρ_p. We note that the amplitude of the Doppler beaming, if detected (which is not the case for Kepler-41), can also be used to estimate the planet's mass (Shporer et al. 2011; Barclay et al. 2012b).
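The chain of derived quantities above can be written out as follows (SI units, angles in radians); `a_ell` is the α_ell prefactor from the ellipsoidal relation earlier.

```python
# Sketch: derived parameters from the fitted quantities (SI units).
import numpy as np
from scipy.constants import G

def a_over_rstar(P, rho_star):
    """a/R* from the mean stellar density and Kepler's third law."""
    return (G * P**2 * rho_star / (3.0 * np.pi)) ** (1.0 / 3.0)

def inclination(b, a_rs):
    """Orbital inclination from the impact parameter (circular orbit)."""
    return np.arccos(b / a_rs)

def planet_mass(A_ell_amp, Mstar, a_rs, inc, a_ell):
    """Invert the ellipsoidal amplitude for the companion mass."""
    return A_ell_amp * Mstar * a_rs**3 / (a_ell * np.sin(inc) ** 2)
```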
The planet equilibrium temperature can be estimated by

$$T_{\rm eq,p} = T_{\rm eff*}\left(\frac{R_*}{a}\right)^{1/2}\left[f\,(1 - A_B)\right]^{1/4},$$

where A_B is the wavelength-integrated Bond albedo (we use A_B = 0.02, which is the approximate value for hot Jupiters in the Kepler bandpass), and f is a circularization factor which equals 1 for isotropic emission (Rowe et al. 2008a).
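For example, under the stated assumptions (A_B = 0.02, isotropic re-radiation with f = 1), this relation reduces to a one-line helper:

```python
# Equilibrium temperature under the assumptions stated in the text.
def t_eq(teff_star, a_rs, A_B=0.02, f=1.0):
    """T_eq,p given the stellar Teff and scaled semimajor axis a/R*."""
    return teff_star * a_rs ** -0.5 * (f * (1.0 - A_B)) ** 0.25
```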
The secondary eclipse depth is approximately equal to the ratio of the planet and star luminosities, ED ≈ L_p/L_*, and can be used to estimate the planet effective temperature T_eff,p. Assuming blackbody radiation, the bolometric luminosity (the total amount of energy emitted across all wavelengths) of an object can be computed from the Stefan-Boltzmann relation, L ∝ R²T_eff⁴, where R and T_eff are the radius and effective temperature of the body, respectively. The planet effective temperature can then be solved from

$$T_{\rm eff,p} = T_{\rm eff*}\left(\frac{R_*}{R_p}\right)^{1/2} ED^{1/4}.$$

Alternatively, we can compute L_p/L_* by integrating the planet and star Planck functions over the Kepler bandpass (λ ∼ 400-900 nm). The wavelength-dependent luminosity ratio can be solved from

$$\frac{L_p}{L_*} = \left(\frac{R_p}{R_*}\right)^{2}\,\frac{\int B_\lambda(T_{\rm eff,p})\,d\lambda}{\int B_\lambda(T_{\rm eff*})\,d\lambda},$$

where B_λ is the Planck function:

$$B_\lambda(T) = \frac{2hc^{2}}{\lambda^{5}}\,\frac{1}{e^{hc/\lambda k T} - 1}.$$

Here, h is Planck's constant, k is Boltzmann's constant, and c is the speed of light. This latter method provides a more accurate estimate of T_eff,p, and results using both methods to compute T_eff,p will be discussed in the next section.
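A sketch of the bandpass-integrated solution is given below. A flat 400-900 nm window is assumed in place of the true Kepler response curve, and the root bracket is illustrative.

```python
# Sketch: solve for T_eff,p by matching the bandpass-integrated
# luminosity ratio to the measured eclipse depth ED.
import numpy as np
from scipy.constants import h, c, k
from scipy.integrate import quad
from scipy.optimize import brentq

def planck(lam, T):
    """Planck spectral radiance B_lambda(T) [SI]."""
    return 2.0 * h * c**2 / lam**5 / np.expm1(h * c / (lam * k * T))

def band_flux(T, lam1=400e-9, lam2=900e-9):
    """Planck function integrated over a flat 400-900 nm 'bandpass'."""
    return quad(planck, lam1, lam2, args=(T,))[0]

def t_eff_planet(ED, rp_rs, teff_star):
    """Root of (Rp/R*)^2 * flux ratio - ED; bracket assumed to hold the root."""
    f = lambda T: rp_rs**2 * band_flux(T) / band_flux(teff_star) - ED
    return brentq(f, 500.0, 10000.0)
```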
Results
Our dilution models at the 1σ, 2σ, and 3σ confidence levels combined with stellar evolution models yield a total of 5828 star/companion model configurations, each providing an estimate of stellar age, T_eff*, R_*, ρ*, log g, M_p, R_p, T_eff,p, T_eq,p, and ρ_p. The next step is to examine these models and eliminate those that furnish unphysical stellar properties. Figure 5 shows ρ* as a function of T_eff* for all available YY evolution models (shown by the red curves in each panel). Also shown are the values of mean planet density as a function of planet effective temperature (ρ_p versus T_eff,p) for each 3σ dilution model (shown by the multi-colored tracks in each panel of Figure 5, each color representing a value of metallicity). In order to comprise a viable stellar blend, a dilution model needs to reside in a region that overlaps with a stellar evolution track. In the left panel of Figure 5, T_eff,p was calculated by integrating the Planck function over the Kepler bandpass (as described in the previous section). None of the resulting dilution models lie in the vicinity of any stellar evolution track, thus eliminating all potential stellar blends in the Kepler-41 photometry. Using this method to calculate T_eff,p, we can conclude that the companion Kepler-41b is a planet.
The right panel of Figure 5 shows the dilution models computed using values of T_eff,p that were estimated from the bolometric luminosities of the star and companion (which are not as precise as those computed by integrating the star and companion Planck functions, but are simpler to derive). In this case, a subset of the models do overlap with stellar evolution tracks. Although we have shown that we can use the alternative method of computing T_eff,p to confirm the planetary nature of Kepler-41b, we show here how we can further examine these overlapping models to exclude them as potential third light contaminants (which may be necessary for confirming other planet candidates). Figure 6 shows T_eq,p as a function of T_eff,p for the subset of dilution models that are consistent with stellar evolution tracks (those shown in the right panel of Figure 5). For the companion to be of planetary nature, we expect a near balance in these temperatures, T_eff,p ≈ T_eq,p, meaning that any incident energy upon the planet is re-radiated. If T_eff,p ≫ T_eq,p, however, the companion must be burning Hydrogen, i.e., it is of stellar nature (although it is feasible that the object could be a young planet). For the case of Kepler-41b, the values of T_eff,p for these remaining dilution models are not substantially greater than their corresponding values of T_eq,p. We therefore cannot definitively rule out stellar blends using this comparison, but it may be useful for other planet candidate systems.
We next compared the companion mass to the stellar mass (Figure 7) for the same dilution models as shown in Figure 6. The values of M_* all lie between 0.8 and 1.1 M_⊙, whereas the dilution models all have companion masses below 0.004 M_⊙, well below the ∼0.08 M_⊙ mass limit required for Hydrogen burning (Kumar 1963); i.e., the companion cannot be a star. With this comparison, we can eliminate the remaining dilution models, since we have shown that we cannot produce a proper stellar blend of any kind.
Our future plans include using this new method of combining phase curve modeling with stellar evolution models to both confirm and characterize additional Kepler planets. The potential to detect occultations in planet candidate light curves can be determined by combining signal-to-noise measurements with an assumption of an albedo of approximately 30% (Rowe et al. 2013). Based on the planet and star characteristics tables from Batalha et al. (2012), we expect about two dozen planet candidates in the Kepler Field of View will have the potential to be confirmed with this method.
The Spitzer Space Telescope has gathered thermal planetary emission measurements at infrared wavelengths for several dozen hot Jupiters. The aforementioned Kepler detections allow us to probe the irradiated atmospheres of giant planets at optical depths that were not explored before, thereby further constraining the energy budget of hot Jupiters (e.g., Madhusudhan & Seager 2009). In our solar system, gas giant geometric albedos range from 0.32 for Uranus to 0.50 for Jupiter in a bandpass similar to Kepler's (Karkoschka 1994). This is mainly due to their low equilibrium temperatures, as compared to hot Jupiters, which allow the formation of cloud decks made of ammonia and water ice in their atmospheres that are highly reflective at visible wavelengths (Demory et al. 2011b).
Hot Jupiters emit very little in visible wavelengths. The albedos of hot Jupiters were expected to be low due to efficient reprocessing of stellar incident radiation into thermal emission (Marley et al. 1999;Seager & Sasselov 2000;Sudarsky et al. 2003). In addition, the presence of alkali metals in hot Jupiter atmospheres (Na and K) as well as TiO and VO (at the hotter range) is expected to cause significant absorption at visible wavelengths, rendering most hot Jupiters dark.
The first constraint on the visible flux of a hot Jupiter was obtained with the MOST satellite (Walker et al. 2003) observing HD 209458b (Rowe et al. 2008b). The corresponding geometric albedo 3σ upper limit of A_G < 0.08 confirmed these earlier theoretical predictions. The majority of hot Jupiter occultations measured by Kepler photometry today corroborate the hypothesis that hot Jupiters emit very little at visible wavelengths, their measured geometric albedo being attributed to thermal emission leaking into shorter wavelengths rather than to contributions from Rayleigh scattering, clouds, or hazes.
Remarkably, a few irradiated giant planets exhibit visible flux in the Kepler bandpass that exceeds the expected contribution from thermal emission alone. A recent detailed analysis of Kepler-7b occultation measurements showed a significant departure of the measured brightness temperature from the equilibrium temperature, suggesting that the planetary flux is dominated by Rayleigh scattering and/or hazes (Demory et al. 2011a). In addition, combining visible and Spitzer infrared occultation measurements showed that Kepler-12b also exhibits an excess of flux in the visible, possibly indicating a reflective component in this low-density hot Jupiter's atmosphere (Fortney et al. 2011). Ideas that have been invoked to explain the wide variation in observed hot Jupiter albedos include variations in planetary densities (Sudarsky et al. 2003) and condensate phase transitions over narrow temperature ranges (Demory et al. 2011a; Kane & Gelino 2012).
Kepler-41b as another outlier?
Our global analysis yields an occultation depth of 60 ± 9 ppm, which translates to a geometric albedo of A_G = 0.23 ± 0.05. Using a blackbody spectrum for the host star, the corresponding brightness temperature is 2420 K, which is ∼400 K larger than the maximum planetary equilibrium temperature obtained assuming zero Bond albedo and no recirculation of incident stellar energy from the day hemisphere to the night hemisphere. Kepler-41b shows a brightness temperature excess similar to that of Kepler-7b, possibly suggesting a contribution from Rayleigh scattering and/or hazes.
The planetary phase modulation, caused by the combination of reflected light and thermal emission, has an amplitude that is ∼1σ smaller than the occultation depth and is slightly offset from the mid-occultation timing. At the high atmospheric pressures probed by Kepler (P ∼ 1 bar), we would expect an even temperature across hemispheres to yield a phase curve exhibiting only nominal modulation. This result suggests either that atmospheric dynamics at depth are significantly more complex than this description or that the phase curve modulation is dominated by reflected light instead of thermal emission. Detailed modeling and Spitzer infrared observations would be especially useful toward a precise constraint on the planetary energy budget and could unambiguously disentangle the thermal emission and reflected light components.
Summary
We have presented a new method to confirm giant planets purely by analysis of the photometric light curve combined with stellar evolution models. We have developed a full phase photometric model that includes both primary and secondary transits along with flux contributions from ellipsoidal variations, Doppler beaming, and reflected/emitted light. We inject a full range of dilution values into the model light curve to simulate third light contamination from stellar blends, and iteratively fit each diluted model light curve to the photometry. We then compare these fit results to stellar evolution models to determine if any set of diluted model parameters are valid (meaning the star and companion have masses, sizes and orbits that are consistent with a stellar evolution model) and match the shape of the photometric light curve.
We applied this method to Kepler-41, a G6V star with a recently confirmed giant planet (Santerne et al. 2011a), using Kepler photometry taken during quarters Q1 -Q8. The phased light curve shows a clear secondary occultation with a depth of 60±9 ppm. The phase of this occultation is near φ = 0.5, indicating the orbit of Kepler-41b is likely nearly circular. We detected flux variations due to reflected/emitted light from the planet (with an amplitude of 37.4 +6.1 −6.6 ppm) and ellipsoidal variations (4.5 +2.8 −3.8 ppm), the latter of which enables us to estimate the mass and density of the planet. We did not detect variations due to Doppler beaming, but these measurements -if detected in the light curves of other planetary candidates -can also be used to measure the planet mass.
To determine whether any dilution models have properties consistent with a star, we first compared the densities and effective temperatures derived from the diluted models (ρ_p versus T_eff,p) to all Yonsei-Yale evolution tracks (which provide ρ* as a function of T_eff* for all valid stellar evolution models). To estimate T_eff,p for the dilution models, we computed the ratio of the planet luminosity to that of the star (L_p/L_*), which is approximately equal to the measured secondary eclipse depth, and solved for T_eff,p. We first computed L_p/L_* using the bolometric luminosities (the total amount of energy emitted across all wavelengths) of the planet and star. For comparison, we also computed these luminosities by integrating the Planck functions over the Kepler bandpass. This latter method to compute T_eff,p resulted in unphysical star/companion parameters for all dilution models, thereby eliminating the possibility that the companion Kepler-41b could be a stellar blend. Using values of T_eff,p that were computed from bolometric luminosities yielded a small subset of dilution models that were consistent with stellar evolution tracks. For these models, we further examined the temperatures and masses of each system to filter out additional inconsistencies. We found that all companion masses from these remaining diluted models were well below the ∼0.08 M_⊙ limit for Hydrogen burning, indicating that the companion cannot be a star. Although both methods to compute T_eff,p provided enough information to rule out stellar blends in the Kepler-41 photometry, we recommend computing T_eff,p with the more accurate method of integrating the star and planet luminosities over the Kepler bandpass.
Our best-fit model of Kepler-41b yields M p = 0.598 +0.384 −0.598 M J and R p = 0.996 +0.039 −0.040 R J . From our analysis of the phase curve combined with stellar evolution models we can therefore independently confirm that Kepler-41b is indeed a planet. This confirmation method can be applied to additional Kepler planet candidates that show a clear occultation in their light curve.
This paper includes data collected by the Kepler mission. Funding for the Kepler mission is provided by the NASA Science Mission Directorate. Some/all of the data presented in this paper were obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NNX09AF08G and by other grants and contracts.

Figure 3 (caption): The best-fit model for Kepler-41b phased to the orbital period and magnified to show the occultation. The decomposed flux variations induced by the companion are shown: Doppler beaming (blue dotted curve), ellipsoidal variations (green dashed curve), and reflected/emitted light (orange dot-dashed curve); the sum of these three effects is shown in red. Doppler beaming was not detected in the light curve of Kepler-41 but is described in this article because it may be applicable to other planet candidates.

Figure 4 (caption): The maximum allowed dilution (i.e., the maximum amount of third light from a potential blend) is found by measuring where ∆χ² changes by 1, 4, or 9 (corresponding to 1σ, 2σ, or 3σ), as shown in the top right panel by the red, blue, and green horizontal lines, respectively. The lower six panels show six of the fit parameters as a function of dilution; the red, blue, and green vertical lines mark their range of valid values as constrained by the dilution fits. Comparison of each valid dilution model to stellar evolution models rules out massive, stellar objects, confirming the planetary nature of Kepler-41b.

Figure 5 (caption): The mean stellar density ρ* is shown as a function of T_eff* for all available Yonsei-Yale stellar evolution tracks (red curves in each panel). The companion ρ_p and T_eff,p from the dilution model fits are overplotted for a range of metallicities Z (colored points in each panel). The dilution models in the left panel used estimates of T_eff,p computed by integrating the planet and star Planck functions over the Kepler bandpass and comparing the ratio of the resulting luminosities to the secondary eclipse depth ED. All dilution models in this case are inconsistent with any stellar blend (there is no overlap with the stellar evolution tracks), and we can conclude that the companion to Kepler-41 is a planet. In the right panel, the dilution models used T_eff,p values calculated from the ratio of the planet and star bolometric luminosities (over all wavelengths), to determine whether this simpler (albeit less precise) method of computing T_eff,p is sufficient to rule out potential blends. In this case, a subset of dilution models overlap with stellar evolution tracks and therefore need to be examined further (see Figures 6 and 7) in order to rule out stellar blends.

Figure 6 (caption, fragment): For the dilution models that overlap with stellar evolution tracks (right panel of Figure 5), the equilibrium temperatures can be compared to the effective temperatures. To be of stellar nature, the values of T_eff,p for each model would need to be much greater than the corresponding values of T_eq,p (indicating that the companion is burning Hydrogen). In this case, the temperatures are comparable and cannot be used to definitively rule out stellar blends, but this comparison may be useful for confirming other planet candidates.

Figure 7 (caption, fragment): For the same dilution models (as shown in Figure 6), the relation between each companion mass M_p and the corresponding stellar mass M_* is shown.
All dilution models have a companion mass less than that needed for Hydrogen burning (∼0.08 M_⊙), indicating that the companion cannot be a star. With this comparison, we can eliminate these remaining dilution models and conclude that the companion Kepler-41b is a planet. | 2013-03-04T21:15:59.000Z | 2013-03-04T00:00:00.000 | {
"year": 2013,
"sha1": "ea5910898a8bae92507ac471cac4d25a39fe7fd2",
"oa_license": "CCBYNC",
"oa_url": "https://dspace.mit.edu/bitstream/1721.1/93201/2/Quintana-2013-CONFIRMATION%20OF%20HOT.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "862d64a385da7c9ebbd24527b84925375a0b6a04",
"s2fieldsofstudy": [
"Physics",
"Geology"
],
"extfieldsofstudy": [
"Physics"
]
} |
13988259 | pes2o/s2orc | v3-fos-license | Arrhythmogenic right ventricular cardiomyopathy/dysplasia (ARVC/D) in clinical practice
Abstract Arrhythmogenic right ventricular cardiomyopathy/dysplasia (ARVC/D) is an inherited myocardial disease characterized by fibro‐fatty replacement of the right ventricular myocardium and associated with paroxysmal ventricular arrhythmias and sudden cardiac death (SCD). It is currently the second most common cause of SCD after hypertrophic cardiomyopathy in young people <35 years of age, causing up to 20% of deaths in this patient population. The condition has a male preponderance and is more commonly found in individuals of Italian and Greek descent. To date, there is no single diagnostic test for ARVC/D, and the diagnosis is made on the basis of clinical, electrocardiographic, and radiological findings according to the Revised 2010 Task Force Criteria. In this review, we also discuss the mainstays of treatment, which include pharmacotherapy, implantable cardioverter‐defibrillator insertion for the abortion of sudden cardiac death, and, in the advanced stages of the disease, cardiac transplantation.
| INTRODUCTION
Arrhythmogenic right ventricular cardiomyopathy/dysplasia (ARVC/D) is an uncommon inherited cardiac disease characterized by progressive right ventricular (RV) dysfunction due to fibro-fatty replacement of the myocardium and associated with a high risk of ventricular arrhythmias and sudden cardiac death (SCD). 1,2 ARVC/D has a predominantly autosomal dominant inheritance, although recessive forms associated with a cutaneous phenotype, such as Naxos disease and Carvajal syndrome, are also observed. 3,4 Although RV abnormalities are the predominant finding, it has recently been appreciated that patients with ARVC/D may also have some degree of left ventricular (LV) involvement, 5 and indeed severe LV impairment can sometimes be the initial manifestation of the disorder. 6 An LV-predominant form of ARVC/D has recently been described. 7 Independently of which ventricle is initially or predominantly affected, advanced disease can result in biventricular heart failure during the later stages, which may closely resemble dilated cardiomyopathy (DCM). LV dysfunction is observed more frequently with greater RV dysfunction and worse functional class, and leads to an increased tendency to adverse cardiovascular events related to heart failure. 8 The same study showed no clear relationship between LV involvement and an increased rate of arrhythmic events. 8 Clinical manifestations vary with the age of the patient and the stage of disease. 9 In this article, we review the pathophysiology of ARVC/D, the main diagnostic modalities used clinically to aid diagnosis, and patient management.
| PATHOPHYSIOLOGY OF ARVC/D
The pathophysiological mechanisms in ARVC/D involve desmosomal abnormalities that can arise from mutations in cell adhesion proteins or intracellular signaling components. 10 A number of genes have been implicated in the pathogenesis of ARVC/D, 11 as illustrated in Table 1. In particular, reduced cardiac desmoglein-2 and desmocollin-2 levels appear to be specifically associated with ARVC/D, independent of the gene mutations found. 12 The desmosome normally maintains cell-to-cell adhesion and confers mechanical strength to tissues (Figures 1 and 2). In the extracellular space, desmosomal cadherins (desmocollin and desmoglein) bind strongly to each other. Cadherins span the plasma membrane and attach to linker proteins (plakoglobin, desmoplakin, and plakophilin-2) in the intracellular space. 1 Plakoglobin and desmoplakin are intracellular proteins anchoring desmosomes to desmin intermediate filaments. Moreover, plakoglobin contributes to interlinking adherens junctions with the actin cytoskeleton and participates in cellular signaling to the nucleus and desmosome organization. 13,14 Defects in the linking sites of these proteins can interrupt cell adhesion, especially under conditions of increased mechanical stress or stretch, leading to cell death, progressive loss of myocardium, and fibro-fatty replacement. 15 As such, surviving myocardial fibers within the fibro-fatty tissue form zones of slow conduction that provide a substrate for re-entrant ventricular arrhythmias. 16-19 The degeneration-inflammation model posits that the resulting cellular damage is found in tissues under high mechanical stress. 1 Indeed, this notion is in keeping with the observations that exercise increases age-related penetrance and the risk of arrhythmias in carriers of ARVC/D-associated mutations. 20 A potential role for calcium-sensitive pathways in the pro-arrhythmic mechanism of ARVC/D has been proposed. 21 In a recent meta-analysis, however, the presence of desmosomal gene mutations was not associated with global or regional structural and functional alterations, epsilon waves, or VT of left bundle branch block morphology. 22
| CLINICAL PRESENTATION
Classically, ARVC/D presents between the second and fourth decades of life with syncope, symptomatic arrhythmias, or SCD. An example of monomorphic VT in a patient with ARVC/D is shown in Figure 3 (reproduced from 23 with permission). Chest pain can be the presenting finding of the disorder. 24 One-third of patients become symptomatic before the 30th year of life. ARVC/D can lead to deleterious consequences, such as ventricular arrhythmias, pump failure, and death. Competitive sports have been associated with a twofold increased risk of ventricular arrhythmias and mortality, and with earlier presentation of symptoms, compared with inactive patients and patients who participated in recreational sport. 25 Another interesting finding is the relation between meteorological factors and outcomes in patients with ARVC/D: higher temperature and larger variation in humidity within 3 days of events were independently associated with the development of ventricular arrhythmic events and sudden death. 26 Intracardiac thrombosis may occur in certain patients with ARVC/D. 27 Atrial arrhythmias are also common in ARVC/D and present at a younger age than in the general population. 28 Atrial arrhythmias are associated with male gender, increasing age, and left atrial dilation; they are clinically important, being associated with inappropriate implantable cardioverter-defibrillator shocks 29 and an increased risk of both heart failure and death. 28 In addition to tachy-arrhythmias, brady-arrhythmias are also observed in this condition. Modified diagnostic recommendations have been proposed for first-degree relatives, who often have an incomplete disease phenotype. 37 According to these recommendations, familial ARVC/D is said to occur when the following conditions are met: (i) T-wave inversion in the right precordial leads in individuals older than 14 years of age; (ii) late potentials by signal-averaged ECG (SAECG); and (iii) ventricular tachycardia with left bundle branch block morphology on the ECG or exercise testing, or >200 premature ventricular contractions in 24 hours.
| Electrocardiography
On the ECG, epsilon waves, which are late potentials occurring between the end of the QRS complex and the onset of the T-wave, and T-wave inversion in the right precordial leads V1 to V3 may be observed. Epsilon waves are specific for ARVC/D, although they are observed in only 30% of patients and are best seen in the right precordial leads V1-V3 (Figure 4). Notably, epsilon waves in lead aVR in patients with arrhythmogenic right ventricular cardiomyopathy are rare electrocardiographic findings with a specificity of 100%. 38 The detection of epsilon waves on the 12-lead ECG has been associated with more episodes of sustained VT but, reassuringly, this did not lead to an increased incidence of SCD. 39 A case of a child with extensive involvement of both right and left ventricular walls and epsilon waves in all precordial leads has been reported. 40 However, interobserver variability in the assessment of epsilon waves is high. 41 As a result, the assessment of epsilon waves must be performed cautiously, particularly in patients who would not otherwise meet diagnostic criteria. The sensitivity of epsilon waves on the ECG is low, between 25% and 38%, and therefore a normal ECG does not exclude the diagnosis of ARVC/D. 42,43 The use of Fontaine bipolar precordial lead electrocardiography (F-ECG) increased the sensitivity to 50%. 43

The imaging component of the Revised 2010 Task Force Criteria can be summarized as follows:
• Cardiac MRI, major: regional RV akinesia or dyskinesia and one of the following: ratio of RV end-diastolic volume to BSA ≥110 mL/m² (male) or ≥100 mL/m² (female), or RV ejection fraction ≤40%.
• Cardiac MRI, minor: regional RV akinesia or dyskinesia and one of the following: ratio of RV end-diastolic volume to BSA ≥100 mL/m² and <110 mL/m² (male) or ≥90 mL/m² and <100 mL/m² (female), or RV ejection fraction >40% and ≤45%.
• RV angiography, major: regional RV akinesia, dyskinesia, or aneurysm.
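As an illustration of how these quantitative thresholds combine, the sketch below encodes the CMR imaging category as a simple classifier. It is a hypothetical encoding of this imaging fragment only, not the full Task Force scheme (which also requires ECG, histologic, arrhythmic, and family-history categories), and all field names are invented for the example.

```python
# Hypothetical encoding of the CMR imaging thresholds listed above.
def cmr_imaging_criterion(male, rvedv_bsa, rvef, regional_wall_motion_abnormality):
    """Return 'major', 'minor', or None for the CMR imaging category.

    rvedv_bsa: RV end-diastolic volume / body surface area [mL/m^2]
    rvef: RV ejection fraction [%]
    """
    if not regional_wall_motion_abnormality:
        return None                      # akinesia/dyskinesia is required
    major_vol, minor_vol = (110, 100) if male else (100, 90)
    if rvedv_bsa >= major_vol or rvef <= 40:
        return "major"
    if rvedv_bsa >= minor_vol or 40 < rvef <= 45:
        return "minor"
    return None
```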
| Differential diagnosis
It is important to differentiate ARVC/D from other right ventricular disorders, such as Brugada syndrome, as overlapping features may be found. 55 Moreover, cardiovascular conditions such as peripartum cardiomyopathy 46 or athlete's heart can present with similar clinical and imaging findings. 56 A correct diagnosis is important because, unlike ARVC/D, athlete's heart would not justify disqualification from competitive sports. 56 Distinguishing between ARVC/D and athlete's heart remains a diagnostic challenge. High-level endurance training is associated with RV elongation, dilation, and hence enlargement compared with isometric physical activities. 57 As such, an enlarged RV dimension alone is not a reliable diagnostic criterion for ARVC/D in elite athletes. A large proportion of athletes also show echocardiographic morphological findings often evident in documented ARVC/D, including a rounded RV apex and prominent RV trabeculations and moderator band. 58 By contrast, impaired RV systolic function is associated only with ARVC/D and not with athlete's heart. The combination of RV dilation and systolic dysfunction might therefore serve as a useful diagnostic tool to distinguish between the two.
Despite all the imaging parameters, and new normal specific ranges for athletes, 58 reaching a diagnosis of ARVC/D in an athlete can sometimes remain challenging and a short period of detraining with subsequent assessment usually with cardiac MRI can be helpful in resolving this ambiguity. 59
| MANAGEMENT
In ARVC/D, the main goals are to avoid the high-risk events of malignant arrhythmias and SCD and to slow the progression of heart failure. 2,60 Competitive sports are discouraged. 2,61 While patients with ARVC are allowed to perform exercise, including sports, as part of a healthy lifestyle, they should not exercise to maximal capacity and should be vigilant for any symptoms of palpitations. 20 Frequent endurance exercise increases the risk of VT/VF and heart failure. 20 The management of patients with ARVC/D in specific situations such as pregnancy is beyond the scope of this review; the reader is directed to the following reference. 62 Anti-arrhythmic medications, such as beta-blockers and class-III agents, are advised. 2 Sotalol and amiodarone, with or without conventional beta-blockers, are potent options. 16,63 Calcium channel blockers may be effective in selected patients. 64 The addition of flecainide in combination with sotalol/metoprolol may be an adequate strategy for the control of ventricular arrhythmias in patients with ARVC/D refractory to single-agent therapy and/or catheter ablation. 65 In general, the most effective combinations appear to be sotalol, or flecainide combined with amiodarone/beta-blockers. 64 The American College of Cardiology, the American Heart Association, and the European Society of Cardiology recommend ICD implantation for the prevention of SCD events. 66 Risk stratification and indications for ICD implantation in ARVC/D have been proposed in an international task force consensus statement. 67 One study reported that the annual cardiac mortality in patients with ARVC/D who were implanted with an ICD was 0.9%. 68 Finally, VT ablation targeting late potential abolition seems to be effective in preventing VT recurrence in patients with or without RV structural abnormalities. 69 In a multicenter registry, clinical response (freedom from SCD, VT requiring hospitalization, or heart transplantation) after the last ablation (predominantly endocardial) was 86% at 1 year, 69% at 5 years, and 60% at 10 years. 70 On the other hand, a combined endocardial and epicardial approach resulted in better procedural success and long-term VT-free survival compared with the endocardial approach in ARVC/D patients with recurrent VTs. 71,72 In fact, combined endocardial and epicardial VT ablation eliminated all clinical and induced VTs, and the addition of scar dechanneling resulted in noninducibility in all cases. 72 Identification of conducting channels (CCs) inside or between the scars can be achieved via endocardial high-density substrate mapping. 73 However, another single-center study showed that the vast majority of critical VT circuits were epicardial, while epicardial ablation of VT appeared to be both safe and effective in achieving arrhythmia control in ARVC/D. 74 A recent meta-analysis showed better outcomes with the combined endocardial and epicardial ablation approach compared with the endocardial-only approach. 75 However, a stepwise approach, with endocardial ablation first and additional epicardial ablation only if VT episodes are still inducible, has been proposed. 76,77 Furthermore, an inducibility-guided catheter ablation strategy for VT in patients with ARVC/D has been proposed to prevent unnecessary epicardial ablation procedures. 78 Electrical regression of the SAECG after catheter ablation in ARVC/D has been found to be associated with fewer ventricular arrhythmia recurrences. 79
Other treatment options, such as bilateral cardiac sympathectomy, need to be studied further in order to establish their optimal timing and use in ARVC/D management. 80 | 2018-05-09T00:43:46.005Z | 2017-12-21T00:00:00.000 | {
"year": 2017,
"sha1": "035976a0c2122b5605f4ef29600bc65b1d9d474a",
"oa_license": "CCBYNC",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/joa3.12021",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "035976a0c2122b5605f4ef29600bc65b1d9d474a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14744267 | pes2o/s2orc | v3-fos-license | Penicillium excelsum sp. nov. from the Brazil Nut Tree Ecosystem in the Amazon Basin
A new Penicillium species, P. excelsum, is described here using morphological characters, extrolite and partial sequence data from the ITS, β-tubulin and calmodulin genes. It was isolated repeatedly using samples of nut shells and flowers from the brazil nut tree, Bertolletia excelsa, as well as bees and ants from the tree ecosystem in the Amazon rainforest. The species produces andrastin A, curvulic acid, penicillic acid and xanthoepocin, and has unique partial β-tubulin and calmodulin gene sequences. The holotype of P. excelsum is CCT 7772, while ITAL 7572 and IBT 31516 are cultures derived from the holotype.
Introduction
Penicillium species are very important agents in the natural processes of recycling biological matter. Some species cause deterioration of all sorts of man-made goods; some rot fruit or spoil foods; some species secrete secondary metabolites (extrolites) such as mycotoxins (e.g. ochratoxins, patulin, citrinin), while other extrolites are used as pharmaceuticals, including antibiotics such as penicillin and the cholesterol-lowering agent lovastatin [1,2,3,4]. Some species are known for their production of organic acids and diverse enzymes that degrade a wide variety of complex biomolecules [1,2,3]. A variety of species are capable of producing or modifying biological chemicals, and this field is set for great expansion. A few species are directly involved in food production: this field is not likely to expand, because many species produce mycotoxins. Penicillium is an ascomycete genus and belongs to the family Aspergillaceae [4]. More than 350 species are currently accepted in this genus [5].
The Amazon rainforest has multiple ecosystems with a huge fungal biodiversity. It has an important role in the global weather balance and is the location of many native people. The equatorial climate is hot and humid, with an average temperature of 26°C and relative humidity 80-95%.
Brazil nuts are one of the most important products taken from the Amazon rainforest region. Brazil nut trees, Bertholletia excelsa Humb. & Bonp., grow wild, take 12 years to bear fruit, may live up to 500 years, and reach up to 60 m high. Pollination of the unusual flowers requires wild, large-bodied bees, especially from the family Euglossinae [6]. The fungal species most commonly isolated from brazil nuts are Aspergillus flavus, A. nomius, A. pseudonomius, A. niger, A. tamarii, Penicillium glabrum, P. citrinum, Rhizopus spp., Fusarium oxysporum [7,8,9,10,11,12] and A. bertholletius, a recently described species [13].
During a study of the mycobiota of the brazil nut tree ecosystem, including flowers, brazil nuts, soil, bees and ants, an undescribed Penicillium species was found. This species is described here as Penicillium excelsum sp. nov.
Sample collection, isolation and morphological examination
Samples were collected from the ecosystem of the brazil nut tree, Bertholletia excelsa, in the Amazon rainforest in Pará and Amazonas States, Brazil. Sample collection and methodology have been described previously [13]. Briefly, the samples comprised brazil nut kernels and shells, flowers and leaves, soil from beneath the trees, plus bees and ants. Collecting was carried out in collaboration with the Brazilian Ministry of Agriculture.
For fungal isolation, nuts and shell samples were disinfected in sodium hypochlorite solution, then plated onto dichloran 18% glycerol agar (DG18), according to the methodology of Pitt and Hocking [2]. Soil samples were mixed with sterile water containing peptone (0.1%), then serially diluted and spread plated onto DG18. Flower and leaf samples were surface disinfected as above and plated onto DG18 while bee and ant samples were plated on DG18 without surface disinfection. All plates were incubated at 25°C for 7 days, then all colonies of Penicillium species were transferred onto Czapek yeast extract agar [2] and incubated at 25°C for 7 days for further identification.
The Penicillium isolates were examined on standard identification media for Penicillium species according to Pitt [14], namely: Czapek yeast extract agar (CYA), malt extract agar (MEA, Oxoid), and 25% glycerol nitrate agar (G25N) at 25°C, and also on CYA at 37°C and 42°C, plus oatmeal agar (OAT), creatine sucrose agar (CREA) and yeast extract sucrose (YES) agar [3]. The incubation time for all media was 7 days, and plates were incubated in the dark.
The standard conditions used for the description of Penicillium excelsum are taken from Pitt [14] and Frisvad and Samson [15]. Capitalized colours are from the Methuen Handbook of Colour [16].
DNA extraction, amplification, sequencing and phylogenetic analysis

A standard phenol:chloroform extraction protocol [17] was used for genomic DNA isolation from an ex-type culture (ITAL 7572). The primer pairs ITS1-ITS4 [18], Bt2a-Bt2b [19] and cmd5-cmd6 [20] were used to amplify the ITS1-5.8S-ITS2 region (ITS), partial β-tubulin gene (BenA) and partial calmodulin gene (CaM), respectively, adopting a standard amplification protocol of 35 cycles with an annealing temperature of 55°C [5]. Excess primers and dNTPs were removed from the PCR product using the Wizard SV Gel and PCR Clean-Up System (Promega, Wisconsin, USA). Purified PCR products were sequenced in both directions using a BigDye Terminator v3.1 Cycle Sequencing kit (Applied Biosystems, California, USA) according to the manufacturer's instructions. A volume of Hi-Di formamide (10 μl) was added to the sequencing products, which were processed in an ABI 3500XL Genetic Analyzer (Applied Biosystems). Contigs were assembled from the forward and reverse sequences with the program SeqMan from the Lasergene package (DNAStar Inc., Wisconsin, USA). All sequences were subjected to a Basic Local Alignment Search Tool (BLAST) search against the NCBI database to identify Penicillium species with similar DNA sequences. The ITS and BenA sequences were aligned with the ClustalW algorithm using Mega 5.1 software [21], together with sequences from Penicillium subgenus Aspergilloides section Lanata-Divaricata type or neotype strains, as recently suggested by Visagie et al. [5]. Phylogenetic trees were constructed with Mega 5.1 software [21], using the Neighbor-Joining (NJ) and Maximum Likelihood (ML) methods based on the Tamura-Nei model [22]. To determine the support for each clade, a nonparametric bootstrap analysis was performed with 1,000 resamplings.
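As an illustration of the distance-based step of such an analysis, the sketch below builds a neighbor-joining tree with Biopython rather than MEGA 5.1. The input file name is hypothetical, and the simple identity distance stands in for the Tamura-Nei model used in the paper.

```python
# Sketch: neighbor-joining tree from an aligned FASTA with Biopython
# (illustrative substitute for the MEGA 5.1 workflow described above).
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

aln = AlignIO.read("benA_lanata_divaricata_aligned.fasta", "fasta")  # hypothetical file
calc = DistanceCalculator("identity")      # simple distance; the paper used Tamura-Nei
dm = calc.get_distance(aln)                # pairwise distance matrix
tree = DistanceTreeConstructor().nj(dm)    # neighbor-joining topology
Phylo.draw_ascii(tree)                     # quick text rendering of the tree
```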
Extrolite analysis
Cultures were analysed by High Performance Liquid Chromatography (HPLC) with a diode array detector (HPLC-DAD) as described by Frisvad and Thrane [23] and modified by Houbraken et al. [24], as previously described [13]. Three agar plugs each from CYA and YES medium were pooled and extracted with 0.75 mL of a mixture of ethyl acetate/dichloromethane/methanol (3:2:1).

Results and Discussion
Sources of the isolates
In total, 116 isolates of the new species described here as Penicillium excelsum were found in brazil nut shells and kernels, in soil close to Bertholletia excelsa trees, and on flowers, bees and ants associated with Bertholletia trees. The origins of representative P. excelsum isolates are shown in Table 1. Soil may be the primary habitat of this species, as many species of Penicillium are soil fungi [4,25]. However, this study shows that P. excelsum also occurs on bees and ants, which may carry spores to the flowers and other locations by contact or excreta, all of which will play a role in the dispersal of this species.
Extrolites
HPLC-DAD analysis of extracts showed that several strains of P. excelsum produce andrastin A and penicillic acid, while some also produce xanthoepocin. Strain ITAL 3000 also produced curvulic acid. Related species also produce penicillic acid, for example P. brasilianum, P. cremeogriseum, P. ochrochloron, P. pulvillorum and P. vanderhammenii [24,26,27]. P. pulvillorum and P. simplicissimum have also been reported to produce andrastin A, and P. brasilianum, P. ochrochloron, P. pulvillorum, P. rolfsii, P. simplicissimum and P. svalbardense have been reported to produce xanthoepocin [24,28]. Even though andrastin A, penicillic acid, and xanthoepocin have been found in species outside section Lanata-Divaricata [15], the particular combination of these extrolites is mostly found in this section. P. excelsum produces a profile of extrolites close to that of P. brasilianum, P. ochrochloron, P. pulvillorum and P. rolfsii, and this close relationship is confirmed by sequence and morphological data, as shown in Figs 1, 2 and 3.
Phylogenetic analyses
P. excelsum ITS, BenA and CaM sequences were found to be different from all other sequences in NCBI (accessed 30 May, 2015). When the BLAST searches were performed using the option "sequences from type material" [29], the results agreed in showing that P. excelsum is most similar to the P. ochrochloron neotype strain CBS 357.48 and the P. pulvillorum neotype CBS 280.39. Both P. pulvillorum and P. ochrochloron belong to Penicillium subgenus Aspergilloides section Lanata-Divaricata in the recent phylogenetic reclassification of Penicillium [4]. A more recent study [5] provided GenBank accession numbers for reference sequences for all accepted Penicillium species. Using these reference sequences, ITS-based phylograms (data not shown) generated using Neighbor-Joining and Maximum Likelihood techniques confirmed the placement of P. excelsum in section Lanata-Divaricata. Although P. excelsum clustered and was differentiated from other species of section Lanata-Divaricata in the ITS phylograms, the majority of branch bootstrap values were low, meaning that the ITS tree was poorly resolved. The ITS region is accepted as the primary fungal barcode [30]; however, it is well known that the ITS region provides only poor resolution of many Penicillium species [5,31]. In consequence, it has been proposed [5] that β-tubulin (BenA) is an optimal secondary identification marker for Penicillium species. BenA-based phylograms, generated using both Neighbor-Joining and Maximum Likelihood methods, placed P. excelsum on a branch separated from all other species of Penicillium section Lanata-Divaricata (Figs 1 and 2). The Neighbor-Joining and Maximum Likelihood phylograms were consistent and revealed that P. excelsum represents a separate lineage within a clade composed of P. pulvillorum, P. svalbardense, P. piscarium, P. ochrochloron, P. rolfsii, and P. subrubescens.
On G25N at 7 days, 25°C, colonies 10-14 mm in diameter, low and dense, coloured buff with light sporulation; reverse brown to deep brown.
On YES agar at 7 days, 25°C, colonies 34-42 mm in diameter, moderate sporulation and a brown reverse.
At 37°C on CYA, colonies 8-22 mm in diameter, coloured grey to brown; soluble pigment brown, reverse deep brown.
At 42°C on CYA, no growth.
Distinguishing features
This species is classified in Penicillium subgenus Furcatum section Furcatum in the classification of Pitt [14] and Penicillium subgenus Aspergilloides section Lanata-Divaricata according to Houbraken and Samson [4].
Morphologically, P. excelsum differs from the closely related P. subrubescens, P. pulvillorum, P. piscarium, P. rolfsii, P. ochrochloron and P. svalbardense by having a combination of smooth stipes, the frequent formation of rami, and the production of large, ellipsoidal, smooth walled conidia. P. ochrochloron and P. rolfsii are similar, but have finely roughened conidia. P. subrubescens, P. pulvillorum, P. piscarium and P. svalbardense produce globose to subglobose conidia, and in addition the conidia of P. piscarium are distinctly rough-walled. P. excelsum grows well at 37°C, though not as well as P. rolfsii. Most isolates of P. subrubescens and P. pulvillorum produce a red reverse colour on malt extract agar, whereas the reverse of P. excelsum is pale brown.
This species is also distinguished by a unique profile of extrolites and by unique DNA sequences in the ITS, BenA and CaM genes. This species is also notable in that cultures on CYA, MEA and YES agar cause the polystyrene plastic of Petri dishes to become opaque over time (Fig 4). The opaqueness cannot be removed using a scalpel, as the chemical reaction with the plastic lids is irreversible. A volatile compound produced by the fungus as it grows must be responsible. A preliminary examination of the volatiles from P. excelsum showed that it produced large amounts of acetic acid. An HPLC-DAD analysis of the opaque layer on the Petri dish lid revealed no detectable extrolites, indicating that the compound responsible for the opaqueness lacks a chromophore. This effect has not been reported for any Penicillium or Aspergillus species. Further studies will be carried out in order to identify the compounds responsible.
Conclusion
P. excelsum represents an important new phylogenetic species, established by applying a polyphasic approach using morphological characters, extrolite data, and ITS, BenA and CaM partial sequences. P. excelsum is distinguished by a combination of a unique profile of extrolites, unique DNA sequences, micro-morphological features, and the unique capacity to render Petri dish lids irreversibly opaque. | 2018-04-03T05:14:43.453Z | 2015-12-30T00:00:00.000 | {
"year": 2015,
"sha1": "d0f4b856ce0fcf8305c33170c98077911626fd6b",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0143189&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d0f4b856ce0fcf8305c33170c98077911626fd6b",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
207489230 | pes2o/s2orc | v3-fos-license | Imaging of Tuberculosis of the Abdominal Viscera: Beyond the Intestines
There is an increasing incidence of both intra- and extra-thoracic manifestations of tuberculosis, in part due to the AIDS epidemic. Isolated tubercular involvement of the solid abdominal viscera is relatively unusual. Cross-sectional imaging with ultrasound, multidetector computed tomography (CT), and magnetic resonance imaging (MRI) plays an important role in the diagnosis and post treatment follow-up of tuberculosis. Specific imaging features of tuberculosis are frequently related to caseous necrosis, which is the hallmark of this disease. However, depending on the type of solid organ involvement, tubercular lesions can mimic a variety of neoplastic and nonneoplastic conditions. Often, cross-sectional imaging alone is insufficient in reaching a conclusive diagnosis, and image-guided tissue sampling is needed. In this article, we review the pathology and cross-sectional imaging features of tubercular involvement of solid abdominopelvic organs with a special emphasis on appropriate differential diagnoses.
INTRODUCTION
There has been a resurgence of tuberculosis with the advent of the HIV/AIDS epidemic. Extra-pulmonary tuberculosis is also increasing in incidence due to multidrug resistant tubercle bacilli and certain genetic variations in mycobacteria such as phospholipase c gene D (plcD) mutations. [1] Genitourinary tuberculosis has been reported to be the most common site for extrapulmonary tuberculosis. [2] Abdominal solid organ involvement other than the genitourinary tract is less common, constituting 15-20% of abdominal tuberculosis in various studies. [3,4] Imaging diagnosis of solid visceral tuberculosis is often elusive as only 15% of patients with abdominal tuberculosis have evidence of pulmonary involvement. [5] Further, radiological features are nonspecific, mimicking a wide gamut of pathologies such as lymphoma, leukemia, metastasis, sarcoidosis, histoplasmosis, and pyogenic infections. The diagnosis of tuberculosis is therefore not made prospectively. Accordingly, the aim of this article is to provide a comprehensive cross-sectional imaging review of tuberculosis of abdominal solid organs, including genitourinary tuberculosis.
PATHOGENESIS OF TUBERCULOSIS IN SOLID ABDOMINAL VISCERA
The most common route of spread of tuberculosis to the solid abdominal viscera is the hematogenous route and, less commonly, the lymphogenous route. The disseminated bacilli lodge in the visceral parenchyma and multiply within macrophages, producing granulomatous inflammation consisting of epithelioid macrophages and Langhans giant cells, eventually resulting in caseous necrosis. In the kidney, the bacilli lodge in the cortex in the glomerular and peritubular capillaries and form caseous granulomas that can cavitate and communicate with the collecting tubules, with subsequent dissemination into the renal pelvis, ureter, and urinary bladder. In immune-competent patients, the caseous necrosis is gradually replaced by fibrosis and calcification, resulting in calcified granulomas. [6] In immunocompromised patients, the granulomas are less well-formed and caseous necrosis is not a feature. [7]

Renal tuberculosis

Genitourinary tuberculosis is the most common site of extra-pulmonary tuberculosis and accounts for 15-20% of extra-pulmonary infections. [8] Although the mode of spread to the kidneys is primarily hematogenous, the disease is often more severe in one kidney. [8] Ultrasound findings of renal tuberculosis are nonspecific, including hypoechoic parenchymal masses, dilated irregular calices, and hydronephrosis with debris. [9] Computed tomography (CT) findings include hypodense parenchymal lesions, miliary nodules [Figure 1], and renal abscess [Figure 2]. Chronic cases demonstrate cortical thinning and parenchymal scarring. CT urography is helpful in demonstrating collecting system involvement, including findings such as abnormal urothelial thickening and enhancement, uneven caliectasis, infundibular stricturing [Figure 3], and hydronephrosis giving the appearance of a multiloculated cyst. [10] CT also demonstrates calcification in over 50% of cases. Magnetic resonance imaging (MRI) has limited additional diagnostic value in renal tuberculosis. [11] Tubercular lesions are iso- to hypointense on T1-weighted (T1W) images and iso-, hypo-, or hyperintense on T2-weighted (T2W) images, depending on the presence or absence of caseous necrosis. MR urography may show changes similar to CT urography [Figure 4]. The radiological differential diagnosis of early renal parenchymal tuberculosis includes pyogenic renal infections and fungal infections. Correlation with the clinical presentation usually helps in differentiating these from tuberculosis. Chronic tuberculosis may be indistinguishable from xanthogranulomatous inflammation. [12] Renal sarcoidosis, which occurs in 7-22% of cases, can be confused with tuberculosis, but the presence of noncaseating granulomas, hypercalcemia, and nephrocalcinosis can help in differentiating it from tuberculosis. [13] A solitary renal tuberculoma can mimic renal neoplasms like renal cell carcinoma (RCC), especially the papillary type; lymphoma; or metastases. [7]

Hepatic tuberculosis

The liver is the second most common site of involvement in tuberculosis of the abdominal solid organs. A wide variety of terms have been used by different authors to describe focal hepatic tuberculosis, including: atypical tuberculosis, tubercular hepatitis, tubercular cholangitis, and serohepatic tuberculosis. [14,15] One proposed comprehensive classification of hepatic tuberculosis includes the following types: micronodular, macronodular, mixed, and isolated tubercular abscess. [5] Micronodular tuberculosis refers to miliary tuberculosis where the lesions range in size from 0.5-1.0 cm.
Macronodular or pseudotumor type is characterized by nodules of 1-3 cm size. [5,15] Mixed type of hepatic tuberculosis demonstrates both micro and macronodules. Isolated tubercular abscess is the rarest type of hepatic parenchymal tuberculosis, seen in immunocompromised patients, and characterized by a large caseous granuloma that mimics an abscess. However, the prognostic significance of this classification remains unclear.
The only imaging finding in the micronodular type may be hepatomegaly if the lesions are below the resolution of ultrasonography (US) or CT. [15] US and CT may demonstrate tiny hypoechoic or hypodense lesions of varying size, with a "bright liver pattern" on US. [16] The differential diagnosis of micronodular hepatic tuberculosis includes metastases, lymphoma, leukemia, sarcoidosis, and fungal infection.
The macronodular form is seen as large lesions with peripheral rim enhancement and central low attenuation on CT. This appearance is due to caseous necrosis [Figure 1]. [5,17] The macronodular form may appear identical to pyogenic abscess, metastases, and primary liver tumors like hepatocellular carcinoma and cholangiocarcinoma. [15,18,19] An isolated tubercular abscess mimics pyogenic liver abscess on imaging [Figure 5]. Calcified granulomas may be the only evidence of tuberculosis in the healed phase. MRI demonstrates tubercular lesions as hypointense with a peripheral hypointense rim on T1W images, as hypo-, iso- or hyperintense lesions with a less intense peripheral rim on T2W images, and with rim or heterogeneous enhancement on post-gadolinium images [Figure 6]. [5,20] Image-guided biopsy may be required to obtain a definitive histological diagnosis. [14,21]
Splenic tuberculosis
The spleen is rendered vulnerable to tuberculosis due to its extensive network of reticuloendothelial cells. Indeed, 80-100% of disseminated tuberculosis demonstrates splenic involvement in various autopsy series. [5] Splenic tuberculosis may manifest as isolated splenomegaly, micronodular, or macronodular form. [22,23] Micronodular lesions that occur as a part of miliary disease are usually beyond the resolution of US but may be seen on CT as hypodense foci [ Figure 1]. Macronodular disease is seen as solitary or multifocal lesions of 1-3 cm size that are hypodense on CT, but with a peripheral rim of enhancement [ Figure 1]. [15,24] MRI findings of splenic tuberculosis are variable depending on the stage of evolution of tuberculomas. Lesions may demonstrate gradual peripheral enhancement with complete fill-in of the lesions, with no central caseous necrosis. [23,25,26] Differential diagnosis of splenic tuberculosis includes pyogenic infections and disseminated fungal infections like candidiasis. Leukemic and lymphomatous infiltration is indistinguishable from tuberculosis. Fine needle aspiration of splenic lesions for confirmation of diagnosis is indicated in patients with no other evidence of tuberculosis or those that are unresponsive to treatment. [21,27]
Adrenal tuberculosis
Tuberculosis of adrenal glands is the most common cause of chronic adrenal insufficiency in developing countries. [28,29] Involvement is bilateral in up to 90% of cases as the tubercular bacilli reach the adrenal gland by the hematogenous or lymphatic route and both glands are equally susceptible. [30] Adrenal insufficiency usually occurs when 90% of the gland is destroyed.
CT and MRI reflect the typical pathologic changes. [25,26] In the caseous granulomatous stage, there is mass-like enlargement of the adrenal glands with or without contour preservation. Caseous necrosis is seen as a low-attenuation center on CT scans. [31] The glands are usually hypo- to isointense on T1W images and hyperintense on T2W images. Adrenal glands may also enhance homogeneously or demonstrate peripheral rim enhancement in the presence of central necrosis [Figure 7]. [32] In this stage, adrenal tuberculosis may appear identical to metastases, lymphoma, hemorrhage, histoplasmosis, or primary tumors. In a study of 108 patients, Yang et al. [31] showed that bilaterality, preserved contour, and peripheral rim enhancement were more often seen in adrenal tuberculosis than in primary tumors. In chronic adrenal tuberculosis, the glands are atrophic and demonstrate calcification on CT and low signal on all MRI sequences.
Pancreatic tuberculosis
The pancreas is uncommonly affected by tuberculosis, with an incidence in various autopsy series of 4.7%. [33] Spread of the bacillus to the pancreas occurs either by the hematogenous route or by contiguous extension of tubercular lymphadenopathy. The usual imaging appearance is that of a mass in the pancreas mimicking pancreatic cancer. [21,34] On US, pancreatic tuberculosis is seen as one or more solid hypoechoic masses in the pancreatic parenchyma that may sometimes show central liquefaction. [5] Hypodense hypovascular masses with adjacent necrotic or non-necrotic lymphadenopathy are the features encountered on CT [Figure 8]. [21] Peripheral rim enhancement may be seen in some cases. [34] Masses in the pancreatic head region may result in obstructive jaundice [Figure 8]. The differential diagnosis for pancreatic tuberculosis includes pancreatic cancer, chronic pancreatitis, metastases, fungal infections, sarcoidosis, and Castleman disease. [35,36] The major challenge on imaging is to differentiate pancreatic tuberculosis from pancreatic adenocarcinoma. Most cases reported in the literature required percutaneous image-guided biopsy for a definitive diagnosis. [36] Soft pointers to a tubercular etiology include the absence of vascular invasion or pancreatic duct dilatation, known primary tuberculosis, and necrotic lymphadenopathy.
Male genital tuberculosis
Tubercular involvement of the male genital tract may occur either by extension of upper urinary tract tuberculosis, or by hematogenous or lymphatic spread. [10] Caseous necrosis in tubercular prostatitis or prostatic abscess is seen on US as hypoechoic areas with surrounding hyperemia and on CT as peripherally rim-enhancing hypodense regions mimicking pyogenic abscess. [10] On transrectal US (TRUS), tubercular lesions appear as hypoechoic lesions in the peripheral gland in the posterior and lateral lobes, and are difficult to distinguish from adenocarcinoma. [37] CT helps in demonstrating extension of the prostatic abscess into adjacent organs, forming sinuses and fistulas with the perineum. T2W MR images demonstrate a prostatic abscess as a heterogeneous high-signal lesion with radiating streaky regions of low signal intensity giving the appearance of 'watermelon skin'. [38] TRUS, owing to its ability to depict the prostatic anatomy in real time, can help in transrectal biopsies and drainages.
Tubercular epididymitis is seen on US as a homogeneous or heterogeneous hypoechoic, enlarged, nodular epididymis, whereas tubercular orchitis is seen as multiple hypoechoic intratesticular nodules [Figure 9]. [9] Scrotal wall thickening, abscess, sinus tracts, hydroceles, and calcifications are other findings seen in severe forms. MRI may show an enlarged epididymis with low signal on T2W images in chronic cases due to chronic inflammation and fibrosis. [10]

Female genital tuberculosis

The most common site of female genital tuberculosis is the fallopian tube, which is involved in 94% of patients. [38] The second most common manifestation is tubercular endometritis, seen in 50% of cases with fallopian tube tuberculosis. [10] Salpingitis is often bilateral and results in infertility. Conventional hysterosalpingography (HSG) remains the mainstay for investigating fallopian tube patency. HSG findings include tubal occlusion, stricturing, rigid pipe-stem tubes, endometritis with adhesions (causing Asherman syndrome), and T-shaped distortion of the endometrial cavity. [39] US and CT allow for evaluation of the adnexa, which may show tubo-ovarian abscesses and chronic calcifications [Figure 10]. MRI better demonstrates changes such as uterine adhesions, hydrosalpinx, and tubo-ovarian abscess [Figure 11].
IMAGING OF SOLID VISCERAL TUBERCULOSIS IN IMMUNOCOMPROMISED PATIENTS
Tuberculosis tends to be more often extra-pulmonary and widely disseminated in immunosuppressed patients. On imaging, tubercular lesions have an appearance similar to the lesions in immunocompetent patients. However, in severely immunosuppressed patients, the lesions tend to be larger with ill-defined margins. Miliary tuberculosis is more common in immunocompromised patients; however, the miliary nodules are poorly formed and may manifest only as organomegaly. [7]

CONCLUSION

Although tuberculosis of solid abdominal organs is uncommon, awareness of the imaging findings is important both because of its increasing incidence and because it may mimic a variety of intra-abdominal pathologies. There is a wide-ranging differential diagnosis for solid visceral organ tuberculosis on imaging. However, this diagnosis should be considered in individuals who are immunosuppressed, have other risk factors, or have a history of pulmonary tuberculosis. To be sure, cross-sectional imaging features alone may be insufficient to reach a definitive diagnosis, and image-guided tissue sampling is often necessary.

[Figure caption, pancreatic tuberculosis: (a, b) Axial CECT images of the upper abdomen demonstrate a hypodense mass with nonenhancing necrotic areas (white arrows) in the pancreatic head and uncinate process region causing portal vein thrombosis with cavernous transformation (black arrow). The radiological picture is indistinguishable from pancreatic cancer. Note the absence of arterial encasement. Ultrasound-guided fine needle aspiration biopsy revealed granulomatous inflammation. The patient was treated with standard antitubercular therapy with marked improvement.] | 2018-04-03T01:29:34.794Z | 2013-04-30T00:00:00.000 | {
"year": 2013,
"sha1": "0ad9f1cef73fe4d757f3cd68b071f872f2ffa762",
"oa_license": "CCBYNCSA",
"oa_url": "https://europepmc.org/articles/pmc3690674",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "4299e1b7976f4abc67233b1ae496fac9fbd83128",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
245422891 | pes2o/s2orc | v3-fos-license | Research on Resource Allocation and Optimization of Community Intelligent Sports Service for the Elderly Based on Group Intelligence
Objective. Intelligent sports are the mainstream development trend of the era. At present, with the rapid development of science and technology, combining group intelligence with community intelligent sports services for the elderly is clearly a wise choice. Group intelligence has opened a new era of intelligent sports service and has become an important factor in the development and growth of community intelligent sports services for the elderly, making it a current hot topic. However, this intelligent approach has encountered difficulties in its development. At present, population aging is getting worse, and the elderly have ever higher requirements for fitness and leisure services, so sports services need to be continuously strengthened. The distribution of resources is uneven, the data are not clear enough, and existing swarm intelligence algorithms are imperfect. As the elderly adapt to intelligent services, more intelligent, concise, and personalized services need to be developed. The most important method is to continuously optimize the swarm intelligence algorithm. In this paper, the PSO algorithm is optimized and the HCSSPSO algorithm is proposed. The HCSSPSO algorithm combines the PSO algorithm with a clonal selection strategy, and simulation experiments compare the PSO, CLPSO, and HCSSPSO algorithms. The experimental results show that the HCSSPSO algorithm has better convergence speed and stability, both in the data and in the comparison graphs. The data optimized by the HCSSPSO algorithm are higher than the original data and those of the other two algorithms in terms of satisfaction and resource allocation.
Introduction
At present, people pay attention to the development of science and technology and, at the same time, to the development of sports services for the elderly. Since 2013, group intelligence has opened a new door for the development of sports. Up to now, group intelligence has created great value for sports services for the elderly. With the addition of swarm intelligence, experts have deepened research on sports services for the elderly, and the entry of computer intelligent algorithms and analysis methods has completely changed the original, traditional model of sports services for the elderly. With the arrival of the new model, data sources are broader, the data themselves are more accurate, and a qualitative leap has been made fundamentally. In this way, the service structure is no longer single but diversified. The elderly are no longer constrained by time, weather, or geographical location, which enables them to participate in community sports and enjoy the services brought by science and technology at will, greatly improving their sports level and enthusiasm, and enabling smart sports services to continuously absorb suggestions and to be optimized and improved.
Biology is the origin of swarm intelligence [1]. After continuous development, groups of uncomplicated agents have become swarm intelligence systems. The emergence of "intelligent" global behavior is not known to the individuals, because agent interaction follows the principles of locality and randomness. This means that biologists and computer scientists are fascinated by complex, self-coordinating groups [2]. Out of individual behavior and interaction, flocks of birds, schools of fish, and social insects show problem-solving skills. In the field of swarm intelligence, the development of new algorithms is closely related to biological behavior. Since the 1990s, optimization problems in various fields have been solved by animal-based swarm optimization methods [3]. However, the optimization and deepening of swarm intelligence algorithms require continuous research on biological intelligent behavior. Up to now, swarm intelligence can be summarized in three parts. From the biological basis, scientists have obtained the operating principles of biological systems [4]. From the artificial literature, two basic analysis methods and group modeling techniques are provided and summarized. From the point of view of swarm engineering, Kazadi laid the application foundation, and swarm intelligence is dominant in a series of applications such as robotic systems. Similarly, because of its extensive development, swarm intelligence is also excellent at solving theoretical problems [5]. There are many swarm intelligence algorithms, and particle swarm optimization is one of them, realized as the particle swarm optimizer. The random velocity of particles deeply affects the arrangement of the particles according to their unique values, and the addition of a mutation factor can maintain the balance of the pbest (personal best) locality values. Therefore, the optimized particle swarm optimizer has great advantages in solving constraint satisfaction problems such as the n-queens problem; it is an excellent solution not only to theoretical problems but also to practical ones [6]. Based on the ant colony optimization technique in swarm intelligence, a network framework suitable for the performance factors of small satellites can be constructed.
This network framework is different from the traditional network communication architecture and can better realize various functions among small satellites. The results show that the factors of the proposed framework's motion changes are very consistent. At the same time, they show that the network topology is not fixed and unadaptable; it can be transformed to be changeable and adaptable under certain conditions. The necessity of effectively transforming knowledge of swarm intelligence algorithms needs to be confirmed by considering the brain [7]. In developmental science, the construction of a framework is the basis for developing swarm intelligence algorithms, so as to realize the development and evolution of algorithms. However, there are serious problems in current society, and the aging problem continues to worsen [8]. To avoid the weaknesses of the traditional old-age care model, a new smart-community old-age care model has emerged. This smart community model is based on BCG and provides solutions for changing the original rigid, single, and crude social old-age care services. With the aggravation of population aging, people have higher requirements regarding provision for the aged [9]. With the rapid development of science and technology in today's society, people are willing to use technology to solve problems and improve schemes. The combination of old-age care with science and technology has become a matter of course and has received wide attention. However, at present, market development considers the abilities and needs of the elderly only unilaterally, and interactive design is the most prominent aspect. To obtain a suitable interactive design for intelligent aged-care services, we must fully consider the needs of the elderly in all respects and correct the problems of existing intelligent services. According to surveys [10], the service level of basic community sports for the elderly is still not high, and there is a gap between urban and rural areas. As the main body of community sports, the satisfaction of the elderly determines the future development of community sports [11]. In terms of facilities, satisfaction is deeply affected by equipment. In terms of sports management, men and women, as well as the elderly in different regions, are clearly satisfied with the first two aspects. From the perspective of education, the fluctuation of satisfaction is gentle under the influence of education. In terms of the natural environment, satisfaction in each dimension of the environment is related to age. The community elderly service system faces a series of problems, and the solution can be found in the reform of the community sports service management system and in professional guidance [12]; laws, venues, facilities, resources, and other aspects should also be strengthened. Because the elderly are old and their legs and feet are inconvenient, their sports level is generally not high; similarly, their satisfaction scores for community sports are the same [13]. The higher the satisfaction of the elderly with community sports, including the environment and equipment, the greater their willingness to participate in community sports. The more community sports activities and professional guidance there are, the more the elderly participate in community sports. Therefore, only by paying more attention to and improving the above factors can we strengthen the sports behavior of the elderly.
From the perspective of the elderly themselves [14], after receiving physical education, the way the elderly use portable smart devices and the perceived ease or difficulty of this behavior are important factors that positively affect their behavioral intentions. The social pressure the elderly feel about whether to use such equipment is the main factor that negatively affects their behavioral intention, and the way the elderly use such equipment balances the perceived social pressure and behavioral intention. All in all [15], the traditional way of fitness will be broken by the Internet of Things, and this new technology will change the world after the Internet. Smart sports are studied based on the Internet, and a series of effective measures to improve the overall physical fitness of the elderly are discussed. At the same time, the development of intelligent sports for the elderly in the community is deeply analyzed and studied. As a product of smart sports, the information collected by smart devices is uploaded to a database, analyzed and processed centrally, and managed properly. In this paper, the PSO algorithm is optimized and the HCSSPSO algorithm is proposed. The HCSSPSO algorithm combines the PSO algorithm with a clonal selection strategy. The PSO, CLPSO, and HCSSPSO algorithms are compared experimentally. The HCSSPSO algorithm shows good convergence speed and stability in both the data and the comparison graphs. The data optimized by the HCSSPSO algorithm are higher than the original data and those of the other two algorithms in terms of satisfaction and resource allocation.
Basic Concepts.
In order to use space, some different kinds of animals live together, and they are often more complex than other animals. At the same time, this lifestyle means high efficiency and strong creativity. Scientists seized on this property, thus developing swarm intelligence [16]. It is precisely because of this property that the control of swarm intelligence is decentralized. Individual behavior in a group affects other individuals, resulting in new behavior patterns. There are five basic principles of swarm intelligence. First, all time and space can be computed without complexity, following the proximity principle. Second, the change of the quality factor is closely related to the group, following the quality principle. Third, groups should not move in too narrow an environment, following the principle of diversity. Fourth, groups should avoid changing their behavior with every change of environment, following the principle of stability. Finally, the behavior of a group can be appropriately changed according to conditions at low cost. The development history of swarm intelligence optimization algorithms is shown in Figure 1.
In recent years, the main research focus has been biological swarm intelligence algorithms, and great research results and optimization strategies have been obtained. The classification diagram of swarm intelligence algorithms is shown in Figure 2.
Basic Concepts.
This algorithm, called PSO for short [17,18], was developed by researchers through exploring and summarizing the behavior of bird predation. Like birds preying in groups, the optimal solution of this algorithm requires cooperation among individual particles. In the process of predation, birds learn from each other's predation experience, thus exchanging experiences and making the predation process orderly. Similarly, in this algorithm, each particle also needs to exchange information, so that the process of finding the optimal solution can be orderly.
(1) Basic PSO Algorithm. In a space represented by D coordinate components, there are countless particles with searching ability. They are not disordered; each individual particle has its own position, and this position is the most suitable position for that individual. The PSO algorithm updates the performance of the particles according to the velocity update formula. At this point, the intervention of the objective function brings a comparison of the particle's position before and after the update, which clearly shows whether the updated position is better than the previous one. The algorithm is completed once in this way. The PSO algorithm flowchart is shown in Figure 3.
The particle iterative update formulas are as follows:

$$v_i^{t+1} = v_i^t + c_1 r_1 \left( p_i^t - x_i^t \right) + c_2 r_2 \left( p_g^t - x_i^t \right),$$
$$x_i^{t+1} = x_i^t + v_i^{t+1},$$

where $x_i^t$ is the position of the i-th particle in the t-th iteration, $v_i^t$ is the velocity of the i-th particle in the t-th iteration, $x_i^{t+1}$ is the position of the i-th particle in the (t+1)-th iteration, $v_i^{t+1}$ is the velocity of the i-th particle in the (t+1)-th iteration, $p_i^t$ indicates the historical optimal position of the i-th particle by the t-th generation, $p_g^t$ is the historical optimal position of the whole population by the t-th generation, that is, the global optimal position, $c_1, c_2$ are the learning factors, usually 2, and $r_1, r_2$ are the disturbance factors, usually taken randomly within [0, 1]. The values of $c_1, c_2$ and $r_1, r_2$ affect whether the particles rush past the target region or wander outside it, and a better solution can be obtained when they are constant values. The performance of the particles in the PSO algorithm influences one another, which leads to instability of the PSO algorithm, so it is necessary to add a linearly decreasing inertia weight. The velocity update equation then becomes

$$v_i^{t+1} = \omega v_i^t + c_1 r_1 \left( p_i^t - x_i^t \right) + c_2 r_2 \left( p_g^t - x_i^t \right).$$

The formula of the linearly decreasing weight strategy is as follows:

$$\omega = \omega_{\max} - \left( \omega_{\max} - \omega_{\min} \right) \frac{t}{G_{\max}},$$

where t is the current number of iterations, $G_{\max}$ is the maximum number of iterations, $\omega_{\max}$ is the initial inertia weight, and $\omega_{\min}$ is the inertia weight at the maximum number of iterations $G_{\max}$, usually 0.4.
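To make the update rules above concrete, the following is a minimal Python sketch of one PSO iteration with the linearly decreasing inertia weight; the function name, the NumPy array layout, and the boundary-free update are illustrative assumptions rather than the paper's actual implementation.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, t, g_max,
             c1=2.0, c2=2.0, w_max=0.9, w_min=0.4):
    """One PSO iteration. x, v: (N, D) positions/velocities;
    pbest: (N, D) personal bests; gbest: (D,) global best."""
    n, d = x.shape
    # Linearly decreasing inertia weight: from w_max at t = 0 to w_min at g_max
    w = w_max - (w_max - w_min) * t / g_max
    r1 = np.random.rand(n, d)  # per-dimension disturbance factors in [0, 1]
    r2 = np.random.rand(n, d)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    return x, v
```

After each step, the objective function is evaluated at the new positions, and pbest and gbest are updated wherever the new positions are better, as the flowchart in Figure 3 describes.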
(2) Comprehensive Learning Particle Swarm Optimization Algorithm. In order to improve the PSO algorithm and overcome its shortcomings, a comprehensive learning strategy is added, and the combination of the two is developed into the CLPSO algorithm. The update rule of the CLPSO algorithm differs from that of the PSO algorithm [19], and the best positions of individuals are an important basis for updating the velocity in the CLPSO algorithm. The comprehensive learning particle swarm optimization algorithm first calculates the learning probability $p_c$ of each particle. The formula is as follows:

$$p_{c,i} = a + b \cdot \frac{\exp\big(10(i-1)/(PS-1)\big) - 1}{\exp(10) - 1},$$

where PS is the population size. According to experimental experience, a and b are generally constants, usually a = 0.05 and b = 0.45. The velocity update formula of the comprehensive learning particle swarm optimization algorithm is as follows:

$$v_i^d = \omega v_i^d + c \cdot \mathrm{rand}_i^d \cdot \big( pbest_{f_i(d)}^d - x_i^d \big),$$

where pbest is the optimal position of the individual and $f_i(d)$ is the dimension value of the d-th dimension in the best position of the i-th particle individual.
$f_i = [f_i(1), f_i(2), \ldots, f_i(D)]$ is the learning sample vector set by particle i, and $pbest_{f_i(d)}^d$ indicates the dimension value corresponding to the best position produced by previous iterations of a particle. Each dimension of the particle produces a random number, and this random number is compared with the learning probability parameter $p_c$. If the former is greater than the latter, that dimension of the particle learns from the best position found in each iteration; otherwise, it learns from the corresponding dimension of the individual's own best position.
The specific subgroup types are as follows: the extreme learning subgroup, the compound learning subgroup, the domain learning subgroup, and the random learning subgroup. The advantage of comprehensive learning particle swarm optimization is that it avoids the phenomenon of evolved particles losing their search ability after many iterations, which gives the algorithm stronger global search ability and saves computation. At the same time, the convergence speed of the comprehensive learning particle algorithm does not lead to a decrease in population diversity, so the algorithm does not fall into premature convergence, especially for single-peak and multi-peak objective functions. Through the construction of the learning application program, group learning behavior is richer and the diversity of population information is increased.
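As an illustration of the comprehensive learning mechanism described above, the sketch below shows a possible implementation of the learning probability and the per-dimension exemplar construction. The tournament between two randomly chosen particles follows the standard CLPSO formulation and, together with the minimization convention, is an assumption rather than something stated in this paper.

```python
import numpy as np

def learning_probability(i, ps, a=0.05, b=0.45):
    """Pc for the i-th particle (0-based) in a swarm of size ps;
    rises smoothly from a to a + b with the particle index."""
    return a + b * (np.exp(10.0 * i / (ps - 1)) - 1.0) / (np.exp(10.0) - 1.0)

def clpso_exemplar(pbest, pbest_fitness, i, pc_i):
    """Build the learning sample vector f_i for particle i.
    pbest: (N, D) personal bests; pbest_fitness: (N,), lower is better."""
    n, d = pbest.shape
    exemplar = pbest[i].copy()  # default: learn from own pbest
    for dim in range(d):
        if np.random.rand() < pc_i:
            # Tournament of two random particles: the fitter pbest teaches
            a_idx, b_idx = np.random.randint(n), np.random.randint(n)
            winner = a_idx if pbest_fitness[a_idx] < pbest_fitness[b_idx] else b_idx
            exemplar[dim] = pbest[winner, dim]
    return exemplar
```

The exemplar then plays the role of $pbest_{f_i(d)}$ in the velocity update, so different dimensions of one particle can learn from different individuals, which is what preserves population diversity.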
Design of Hybrid Clonal Selection Particle Swarm Optimization
3.1.1. Clone Selection Strategy. In order to improve PSO, improve its convergence performance, increase population diversity, and avoid premature convergence [20,21], the clonal selection strategy can be combined with PSO [22]. The new population Sub is formed by the expansion and growth of a temporary clone group formed from the individual extrema; the ranking of individuals in the new population is related to the size of their affinity, and the clone size of an individual extremum increases with increasing affinity. In the formula for the clone multiple $N_c$, N is the scale of Sub, i is the individual's affinity-value ranking in Sub, β is the cloning coefficient with a value of 0.8, cm is the clone cardinality value with a value of 5, and Round is a function that rounds to an integer. In the Sub population, Cauchy mutation is used to obtain new mutated individuals, which increases population diversity and improves the global search ability of the algorithm. In the mutation operator, r is a parameter with a value of 10; in the Cauchy variogram, Random represents a computer-generated random number from 0 to 1 and t is the number of iterations. The extremum of the individual with the highest affinity in the mutated population is compared with the individual extremum in the original population; if the former is better than the latter, it is updated and replaced, and otherwise it remains unchanged. At the same time, the optimal value among the individual extrema of the population is compared with gbest, and if the former is better than the latter, gbest is updated and replaced.
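The exact clone-count and mutation formulas are not fully legible in the source, so the sketch below is only a hedged illustration: it assumes a CLONALG-style allocation in which the clone count grows with affinity rank on top of the base value cm, and a Cauchy perturbation whose scale decays with the iteration count t. All of these choices are assumptions made for illustration.

```python
import numpy as np

def clone_and_mutate(sub, affinity, t, beta=0.8, cm=5, r=10.0):
    """Clonal expansion and Cauchy mutation of population Sub.
    sub: (N, D) individual extrema; affinity: (N,), higher is better.
    Clone counts and the Cauchy scale are assumptions (see text)."""
    n, d = sub.shape
    order = np.argsort(-affinity)  # rank 1 = highest affinity
    mutants = []
    for rank, idx in enumerate(order, start=1):
        # Assumed allocation: base cm plus a share that shrinks with rank
        n_c = cm + int(round(beta * n / rank))
        clones = np.repeat(sub[idx][None, :], n_c, axis=0)
        # Cauchy mutation: heavy tails help clones escape local optima;
        # the assumed scale r / (1 + t) shrinks as the search matures
        clones += (r / (1.0 + t)) * np.random.standard_cauchy(size=(n_c, d))
        mutants.append(clones)
    return np.vstack(mutants)
```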
HCSSPSO Algorithm Flow.
The flowchart of the HCSSPSO algorithm is shown in Figure 4.
Time Complexity Analysis.
The time complexities of the particle swarm optimization and the clonal selection strategy together make up the time complexity of the HCSSPSO algorithm. The parameter C represents the number of parameters, T(C) represents the time complexity of the test function, and O(C) denotes its order.
MaxIter is the maximum number of iterations of the algorithm, PS is the population size, $P_{PSO}$ is the individual mutation probability in the algorithm, with a value range of (0, 1), and $N_c$ is the clone multiple of the particles.
Test Function and Parameter Settings.
Eight test functions were selected, of which functions 1 to 5 are unimodal functions and functions 6 to 8 are multimodal functions. The HCSSPSO algorithm, the basic PSO algorithm, and the CLPSO algorithm are compared on them, and their optimization ability and convergence speed are analyzed to verify the effectiveness of the HCSSPSO algorithm.
Setting parameters: Gm = 1000, pm = 0.8, Popsize = 40, cm = 5, $N_c$ = 30. The eight standard test functions include, for example, the Rastrigin function:

$$f(x) = \sum_{i=1}^{D} \left[ x_i^2 - 10\cos(2\pi x_i) + 10 \right], \quad -5.12 \le x_i \le 5.12.$$

The optimal solution of all the functions is set to 0. In the comparison, we only compare the overall performance of each algorithm, such as convergence speed; the overall test of this algorithm is not compared with other algorithms. However, since the whole algorithm has been compared with absolute advantages, and this advantage is consistent with the performance of the simulation experiments, we consider that the comparison between the whole test and local advantages can be omitted.
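As a worked example of the benchmarks, the Rastrigin function quoted above can be implemented directly; the vectorized form below is an illustrative sketch.

```python
import numpy as np

def rastrigin(x):
    """f(x) = sum(x_i^2 - 10*cos(2*pi*x_i) + 10), -5.12 <= x_i <= 5.12.
    Global minimum f(0) = 0, surrounded by many regular local minima,
    which is why it is a standard multimodal stress test."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x * x - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

# Sanity check at the optimum: rastrigin(np.zeros(30)) == 0.0
```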
Experimental Comparison and Results.
The performance of the algorithms is represented by the mean value and standard value shown in Table 1. The functions include unimodal and multimodal functions with a large number of local minima, which can reveal the ability of each algorithm to deal with multimodal problems.
It can be seen from the table that the average value and standard value data of the HCSSPSO algorithm under the 8 evaluation functions are better than those of the other two algorithms. Thus, the clonal selection strategy can improve the performance of the PSO algorithm beyond that of the other optimization algorithms. The convergence curves of the three algorithms for functions 1 to 8 are shown in Figures 5 to 12, respectively.
Comparing the convergence curves obtained by the PSO, CLPSO, and HCSSPSO algorithms on the eight classical evaluation functions shows that the convergence speed and stability of the HCSSPSO algorithm are improved relative to the other two algorithms and that it has better optimization ability.
Establishment of Objective Function.
Community smart sports services for the elderly follow the principle of maximizing benefits, and the elderly judge their satisfaction with the smart sports service. Therefore, the swarm intelligence algorithm should minimize the allocation cost of community intelligent sports services for the elderly, optimize the service facilities, maximize the satisfaction of the elderly, and maximize the population served. In this experiment, the region is set to N rows and M columns, and K smart sports service types are set at the same time. i and j denote cell (i, j), N represents the total space, Suit represents suitability, and ω represents satisfaction. The objective function comprises four terms: the service configuration cost, the suitability of the service facilities, the satisfaction of the elderly, and the number of people served. Here, $N_{ijk}$ determines whether the service facility type on cell (i, j) equals k, taking the value 1 if equal and 0 otherwise; $dis_x(c)$ represents the Euclidean distance from cell c to community p; dense(c) is the population density on c; $D_{area}$ is the size of the area occupied by the service facility type; $\exp[-r \times dis_x(c)]$ is the population attraction of the community location p to cell c; and $C_{dense}$ is the objective function coefficient.
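Because the four objective terms are only partially legible, the sketch below reconstructs just the population-served term on an N × M cell grid, using the $\exp[-r \times dis(c)]$ attraction weight defined above. The grid encoding, the distance metric, and all names are assumptions made for illustration.

```python
import numpy as np

def served_population(density, communities, r=0.1):
    """Hedged sketch of the 'number of serviced population' term.
    density: (N, M) elderly population density per cell;
    communities: list of (row, col) facility locations."""
    n, m = density.shape
    rows, cols = np.mgrid[0:n, 0:m]
    total = 0.0
    for (pr, pc) in communities:
        dist = np.sqrt((rows - pr) ** 2 + (cols - pc) ** 2)  # Euclidean distance
        total += float(np.sum(density * np.exp(-r * dist)))  # decaying attraction
    return total
```

In a full model this term would be combined with the configuration cost, suitability, and satisfaction terms into a single fitness value for the swarm to optimize.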
Simulation Experiment and Result Analysis
In order to analyze the reasonable feasibility and optimization performance of the HCSSPSO algorithm, an optimization model of community intelligent sports service was designed. First, according to data published on the network, the population density of the elderly in the community is calculated, and all relevant data are converted into parameter data. Assuming that the community service area is 50 × 60 units, according to the survey, the initialization data of the community sports services are as follows: service A: 324, service B: 361, service C: 655, service D: 904, service E: 518, and service F: 238. After optimization by the algorithm, the data are as follows: service A: 250, service B: 473, service C: 705, service D: 964, service E: 386, and service F: 222. Note that both configurations sum to 3000 service units, so the algorithm reallocates a fixed resource total rather than adding capacity. The data are published through the network, and the evaluation indices are the objective function values in Table 2.
The following compares the predicted configuration cost, service suitability, satisfaction of the elderly, and service population with the actual values after optimization by the PSO, CLPSO, and HCSSPSO algorithms, taking 20 days in 2020 as sampling points. The data comparison is shown in Table 3. The actual value of the facility cost is compared with the predicted values of the three algorithms in Figure 13; the actual suitability values are compared with the predicted values of the three algorithms in Figure 14; the actual value of the satisfaction of the elderly is compared with the predicted values of the three algorithms in Figure 15; and the actual value of the service population is compared with the predicted values of the three algorithms in Figure 16. Therefore, the HCSSPSO algorithm achieves a more reasonable degree of resource allocation for community intelligent sports services for the elderly and higher cost performance, suitability, satisfaction, and population served. Compared with the original data, the HCSSPSO algorithm greatly optimizes the configuration of community services and brings higher and more advanced community service. Compared with the other algorithms, the HCSSPSO algorithm is more excellent: the data obtained by the HCSSPSO algorithm are clearly better than those of the other algorithms. The HCSSPSO algorithm proposed in this paper has more advantages.
Conclusion
Because the PSO algorithm has some shortcomings, such as premature convergence and an inability to guarantee population diversity, it is not suitable as an algorithm for optimizing intelligent sports. Therefore, this paper proposes the HCSSPSO algorithm, which combines the PSO algorithm with a clonal selection strategy. Compared with the PSO and CLPSO algorithms, it has better convergence speed and stability and is more suitable for the resource allocation and optimization of community intelligent sports services for the elderly.
Data Availability
The experimental data used to support the findings of this study are available from the corresponding author upon request. | 2021-12-23T16:07:31.331Z | 2021-12-21T00:00:00.000 | {
"year": 2021,
"sha1": "939e4c29c62a937c5f81e16714b43c4e5f19dcc3",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/jhe/2021/1185533.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "25dbc3ba34b20e0913504e1f3ec13e49d01ea8c4",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
247810372 | pes2o/s2orc | v3-fos-license | Mechanical and microstructural properties of yttria-stabilized zirconia reinforced Cr3C2-25NiCr thermal spray coatings on steel alloy
Abstract. In this research work, nano yttria-stabilized zirconia (YSZ) reinforced Cr3C2-25NiCr composite coatings were prepared and successfully deposited on ASME-SA213-T-22 (T22) boiler tube steel substrates using the high-velocity oxy-fuel (HVOF) thermal spraying method. Different nanocomposite coatings were developed by reinforcing Cr3C2-25NiCr with 5 and 10 wt.% YSZ nanoparticles. The nanocomposite coatings were analysed by scanning electron microscopy (SEM)/energy-dispersive X-ray spectroscopy (EDS) and the X-ray diffraction (XRD) technique. The porosity of the YSZ-Cr3C2-25NiCr nanocomposite coatings was found to decrease with the increase in YSZ content, and the hardness was found to increase with an increase in the percentage of YSZ in the composite coatings. The coating of 10 wt.% YSZ-Cr3C2-25NiCr showed the lowest porosity, lowest surface roughness, and highest microhardness among all types of coatings. This may be due to the flow of YSZ nanoparticles into the pores and gaps that exist in the base coatings, thus providing a better shield to the substrate.
Introduction
Traditional steels used in thermal power plants are susceptible to corrosion [1,2]. In the recent past, researchers have applied several types of coatings to improve the erosion and corrosion resistance of these steels [3][4][5]. Thermal spray coating techniques are a key tool for developing coatings that improve component performance and longevity [6,7]. In recent years, these coatings have become more important. Coatings with high anti-corrosion qualities have been developed as a result of advancements in powder manufacture and innovations in thermal spraying techniques [8][9][10]. In terms of substrate material chemical composition, these procedures have no special material limitations [11]. On boiler tube steel components, flame spraying, plasma spraying, arc spraying, and high-velocity oxygen fuel (HVOF) methods can generate coatings of a few millimeters thickness with a high microhardness value [12][13][14][15]. Because of its cost-effectiveness and versatility, the HVOF method has been classified as an adaptable technique [16][17][18]. The qualities of the substrate material are unaffected by the HVOF coating procedure [19].
In the recent past, various researchers have used thermal spraying techniques to develop various types of coatings on boiler steels to increase their properties. The coatings produced by the thermal spraying method are porous in nature and have many local micro-cracks or through pores [20][21][22]. Corrosive fluids and chemicals attack the substrate steels through these pores and micro-cracks. Therefore, there is still scope for improvement in the mechanical and microstructural properties of these coatings [23][24][25]. Many researchers have compared conventional coatings to nanostructured coatings, and many improvements in mechanical and microstructural properties of as-sprayed materials were observed, including an increase in microhardness, a decrease in porosity, a decrease in surface roughness, and a decrease in erosion rate, among other things [26][27][28][29]. Many authors have reported the development of Cr3C2-25NiCr coatings on steel alloys, but literature related to nano yttria-stabilized zirconia (Y2O3/ZrO2) (YSZ) reinforced composite coatings is not available. Therefore, there is scope to develop new nano yttria-stabilized zirconia (YSZ) mixed Cr3C2-25NiCr nanocomposite coatings and subsequently deposit and investigate the microstructure, porosity, and microhardness of these newly developed composite coatings on boiler tube steel.
In this research work, HVOF sprayed 5 and 10 weight percent YSZ-Cr3C2-25NiCr nano-coatings were developed and deposited on T22 boiler tube steel. The microstructure, porosity and microhardness of these newly developed composite coatings have been investigated. HVOF thermal spraying technique was used in this research work because the coatings produced with the HVOF method have high adhesive strength with the base material and also individual splats have high cohesive strength [30][31].
Ksiazek et al. [20] observed that this spraying process provides homogeneous coatings having a low value of porosity along with high hardness.
Substrate material
The measured and nominal compositions of T22 steel are shown in Table 1. Samples with dimensions of 22 × 15 × 5 mm were manufactured from the boiler tube. Silicon carbide paper was used to polish the cut samples. Before applying the different coatings, the samples were shot blasted with alumina powder of grit 45.
Coating powders
Commercially available Cr3C2-25NiCr blend powder was mixed with 5 and 10 wt.% YSZ using low-energy ball milling to prepare the different coating powders; the composition of the different powders is shown in Table 2. To prepare the Cr3C2-25NiCr mixture with 5 wt.% YSZ (Y2O3/ZrO2), 950 g of Cr3C2-25NiCr was mixed with 50 g of YSZ. Other compositions were prepared in a similar manner. The mixed powders were rolled continuously for four hours at a speed of 200 rpm [32,33].
Formulation of coating
The conventional Cr3C2-25NiCr, 5 and 10 wt.% YSZ reinforced Cr3C2-25NiCr nanocoatings were deposited on T22 boiler steel substrates with the HVOF process at Metallizing Equipment Company Limited, Jodhpur, India. The spraying process was carried out using a commercial HIPOJET-2100 device. Before deposition of the coatings, the samples were grit blasted with alumina powder. Coatings with a thickness of around 250 µm were deposited. The process parameters of the HVOF spraying method are shown in Table 3. During the spraying procedure, these process parameters were kept constant.
Characterization of nanocomposite coatings
The uncoated samples were cut into sections and mounted in epoxy. Before the metallurgical inspection, the mounted samples were polished. The coatings were examined using XRD, SEM/EDS, and cross-sectional elemental analysis. The microhardness of the cross-section of all coated samples was assessed using a Mitsubishi microhardness tester. On the coating-substrate interface, the microhardness was measured at specified intervals along the cross-section.
LEICA Image analyser software was used to evaluate the porosity of conventional Cr3C2-25NiCr, 5 and 10 wt.% YSZ reinforced Cr3C2-25NiCr nanocomposite coatings. Before evaluating porosity, the specimens were polished. The pore area size is calculated using a computer-based porosity analysis technique that converts grey-level areas (pore areas) into a background that is different from the rest of the microstructure. The porosity value is then calculated by counting the number of pixels of background colour. For each type of coated specimen, the average of five porosity measurements was calculated.
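As a rough illustration of the grey-level segmentation step described above (the actual LEICA workflow is proprietary and not reproduced here), a minimal NumPy sketch that counts dark pore pixels might look like the following; the threshold value and the image source are assumptions.

```python
import numpy as np

def porosity_percent(gray_image, threshold=60):
    """Estimate apparent porosity from a grayscale micrograph.
    gray_image: 2D uint8 array; pixels darker than `threshold` are
    counted as pore area, mimicking the grey-level conversion step."""
    pores = gray_image < threshold
    return 100.0 * pores.sum() / gray_image.size

# Average of five measurements per specimen, as in the text:
# porosity = np.mean([porosity_percent(field) for field in fields])
```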
Coating thickness measurement
The thickness of conventional Cr3C2-25NiCr, 5 and 10 wt.% YSZ reinforced Cr3C2-25NiCr nanocomposite coatings was measured with the help of Minitest-2000 thin film thickness gauge (Make: Elektro-Physik Koln Company, Germany, precision ±1 µm) during the spraying process and are shown in Table 4. The thickness of the coatings was evaluated along the cross-section of the specimens and the thickness has been found in the desired range [34][35][36].
Porosity analysis
Thermal spray coatings are porous, and porosity has a significant impact on coating qualities. Less porous coatings provide superior corrosion protection, according to the literature. The apparent porosity measurements of the Cr3C2-25NiCr, 5 and 10 wt.% YSZ reinforced Cr3C2-25NiCr nanocomposite coated T22 steel specimens are shown in Table 4. For all samples, the porosity values of the HVOF-sprayed Cr3C2-25NiCr coating were less than 2%. The values in Table 4 show that as the YSZ concentration in the nanocomposite coating increased, the porosity value decreased. It is evident that the 10 wt.% YSZ-Cr3C2-25NiCr coating has the lowest porosity value. The surface roughness values for the Cr3C2-25NiCr, 5 and 10 wt.% YSZ reinforced Cr3C2-25NiCr nanocomposite coatings were found to be 3.75, 3.14, and 2.43 µm, respectively.
Microhardness measurement
The microhardness profiles across the cross-section of the Cr3C2-25NiCr, 5 and 10 wt.% YSZ reinforced Cr3C2-25NiCr nanocomposite coated specimens are shown in Figure 1. The microhardness values of T22 steel were in the range of 242-318 HV. The microhardness measurements for the Cr3C2-25NiCr, 5 and 10 wt.% YSZ reinforced Cr3C2-25NiCr nanocomposite coated specimens were in the ranges of 918-978, 1018-1093, and 1088-1168 HV, respectively. It is clearly seen in Figure 1 that the microhardness value increased with the increase of YSZ in the Cr3C2-25NiCr matrix. The nano YSZ particles were able to increase the microhardness of the HVOF sprayed Cr3C2-25NiCr coatings. The microhardness profiles clearly show that the hardness through the coating cross-section is nearly uniform for all coated specimens.
X-ray diffraction analysis
The X-ray diffraction analysis for the surfaces of the Cr3C2-25NiCr, 5 and 10 wt.% YSZ reinforced Cr3C2-25NiCr nanocomposite coated specimens was done, and the XRD patterns are shown in Figure 2(a-c). The XRD profile of the HVOF sprayed Cr3C2-25NiCr coated T22 boiler tube steel sample showed chromium as the main phase, along with traces of Ni. The XRD profiles of the Cr3C2-25NiCr coatings reinforced with 5 and 10 wt.% YSZ nanoparticles revealed that chromium and carbon are present as major phases, and nickel, yttrium, and zirconium as minor phases. The increase in the formation of non-crystalline amorphous phases occurs because of very fast cooling during the spraying process. The presence of different phases and their proportions mostly depend on the process conditions at the time of depositing the coating powder on the base material.
FE-SEM/Energy dispersive X-ray spectroscopy
FE-SEM micrographs with energy-dispersive X-ray spectroscopy analysis for HVOF sprayed Cr3C2-25NiCr, 5 and 10 wt.% YSZ reinforced Cr3C2-25NiCr nanocomposite coatings on T22 boiler steel are shown in Figure 3. The microstructure of Cr3C2-25NiCr coating is dense, consisting of interlocked particles with regular shape, as shown in Figure 3(a). In the microstructure of the coating, several oxide stringers can also be visible. The YSZ nanoparticles have been uniformly diffused in the Cr3C2-25NiCr matrix, as shown in Figures 3(b) and 3(c). The dense and uniform layer of the coating was obtained by the reinforcement of nanoparticles of YSZ. The microstructures reveal that uniform coalescence of nano YSZ has occurred with base Cr3C2-25NiCr matrix in composite coating. As demonstrated in Figures 3(b) and 3(c), energy dispersive spectroscopy examination revealed the elemental composition of the various coatings that were found to be comparable to that of the feedstock powder. EDS analysis of Cr3C2-25NiCr coating revealed the presence of Fe and Si in the composition, which may be due to the diffusion of Fe and Si from the substrate to the coating matrix due to porosity in conventional coating.
The microstructure of the cross-section of the Cr3C2-25NiCr and the 5 and 10 wt.% YSZ reinforced Cr3C2-25NiCr nanocomposite coatings on T22 boiler steel is shown in Figure 4. The cross-sectional images indicate a splat-like morphology of the coatings, which might be due to the re-solidification of molten droplets.
Discussion
The coating thickness of all coatings was measured along the cross-section of the coated specimens. The coating thickness was in the range of 251-258 μm, which is in the desired range reported in previous work on HVOF coatings [37]. The porosity of the conventional Cr3C2-25NiCr coating on T22 boiler tube steel was found to be 1.81%, which decreases further with the addition of nano YSZ to the Cr3C2-25NiCr coating matrix. The porosity values of the 5 and 10 wt.% YSZ reinforced Cr3C2-25NiCr nanocomposites were observed to be 1.57 and 1.25%, respectively. The porosity of the nanocomposite coatings decreased as the nano YSZ particles filled the pores and interlocked the grains of the Cr3C2-25NiCr coating matrix. An improvement in surface roughness was observed with the addition of YSZ nanoparticles to the Cr3C2-25NiCr coating powder. The surface roughness values for the Cr3C2-25NiCr, 5 and 10 wt.% YSZ reinforced Cr3C2-25NiCr nanocomposite coated samples were found to be 3.75, 3.14, and 2.43 µm, respectively. Better surface characteristics were observed for the 10 wt.% YSZ reinforced Cr3C2-25NiCr nanocomposite coating compared to the conventional Cr3C2-25NiCr coating, as the surface roughness decreased because of the addition of YSZ nanoparticles. A decrease in porosity with the addition of carbon nanotubes has also been reported in the literature by Khesri et al. [38] and Guo et al. [39]. Goyal and Goyal [40] also reported that carbon nanotubes interlocked the particles of Cr3C2-20NiCr and improved the mechanical and microstructural properties of the conventional Cr3C2-20NiCr coating.
In comparison to the base material, the microhardness values of all coated specimens were found to be extremely high, as shown in Figure 1. Microhardness values for the conventional Cr3C2-25NiCr coating ranged from 918 to 978 HV. With an increase in the nano YSZ weight percent in the Cr3C2-25NiCr matrix, microhardness values improved further. The inclusion of nanoparticles in the coating matrix improved indentation resistance. The high heat conductivity of YSZ may result in increased melting and, as a result, an increased microhardness of the YSZ reinforced coatings. The nano YSZ particles were uniformly scattered in the Cr3C2-25NiCr matrix, filling the pores in the coating matrix, which resulted in a decrease in the porosity of the matrix. According to Tian et al. [41], the increase in hardness can be attributed to a decrease in the porosity of the coating matrix and may also be due to dispersion hardening.
Cr was identified as the main phase and Ni as the minor phase by X-ray diffraction analysis of the Cr3C2-25NiCr coated T22 boiler tube steel specimen. The identification of small peaks of Fe and Si in the XRD spectra might be due to the diffusion of these elements from the substrate alloy to the coating matrix through porosity in the matrix. The XRD spectra of the 5 wt.% and 10 wt.% YSZ reinforced Cr3C2-25NiCr nanocomposite coated samples revealed major phases of Cr and Ni, along with YSZ in the coating matrix. The increase in the formation of non-crystalline amorphous phases occurs because of very fast cooling during the spraying process. Stewart et al. [42] reported that the presence of different phases and their proportions mostly depend on the process conditions at the time of deposition of the coating powder on the base material.
FE-SEM with EDS analysis of the conventional Cr3C2-25NiCr, 5 and 10 wt.% YSZ reinforced Cr3C2-25NiCr nanocomposite coatings on T22 boiler steel showed that the coatings obtained are dense and uniform, that proper coalescence of YSZ with Cr3C2-25NiCr has taken place, and that the YSZ particles are distributed uniformly within the coating matrix. Energy dispersive spectroscopy examination of all coatings revealed that the elemental composition of the various coatings was comparable to that of the feedstock powder. EDS analysis of the Cr3C2-25NiCr coating revealed the presence of Fe and Si in the composition, which may be due to the diffusion of Fe and Si from the substrate to the coating matrix through porosity in the conventional coating. Diffusion of base elements through pores/voids in the coating matrix has also been reported by various authors [34][35][36][37]. The cross-sectional morphology of all coatings indicated a dense and uniform microstructure. All the coatings had a uniform and intact microstructure. The nano YSZ coatings showed that the YSZ particles had mixed uniformly throughout the base matrix. Reinforcement of nano YSZ particles in the Cr3C2-25NiCr coating filled the gaps, reducing porosity, which prevented the diffusion of base elements into the coating matrix, thereby increasing the microhardness of the coatings.
The present study revealed that adding YSZ nanoparticles to conventional coatings improved bonding at the substrate-coating interface, filled voids/pores in the coating matrix, enhanced microhardness, and resulted in dense and uniform coatings on boiler steel samples.
Conclusions
The following conclusions are made from this experimental work: • The thickness of HVOF sprayed Cr3C2-25NiCr, 5 wt.% and 10 wt.% YSZ reinforced Cr3C2-25NiCr nanocomposite coatings was found to be in the range of 250-260 μm. • With the increase in YSZ concentration in nanocomposite coating, the porosity value decreases.
The 10 wt.% YSZ-(Cr3C2-25NiCr) coating was found to have the lowest porosity value of 1.25%. A decrease in porosity resulted in an improvement in surface roughness values. • The highest microhardness was found for the 10 wt.% YSZ reinforced nanocomposite coating and was in the range of 1088-1168 HV. This might be due to the filling of pores/voids in the coating matrix by nano YSZ particles. • XRD spectra of all nanocomposite coatings indicated the formation of non-crystalline amorphous phases due to very fast cooling during the spraying process. • SEM/EDS analysis of the Cr3C2-25NiCr coating indicated the presence of Fe and Si, which might be due to the diffusion of these elements through pores in the coating matrix. SEM/EDS analysis of the YSZ reinforced nanocomposite coatings indicated a uniform and dense coating surface, and there was no diffusion of base elements because of the filling of voids/pores by nanoparticles. | 2022-03-31T15:30:25.910Z | 2022-03-29T00:00:00.000 | {
"year": 2022,
"sha1": "269251e87f1a5864ef5e6c9c2994340c0cb170da",
"oa_license": "CCBY",
"oa_url": "https://pub.iapchem.org/ojs/index.php/JESE/article/download/1278/1444",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c81f22b5a8adb1b2279755f1e43efb9994378b52",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": []
} |
238475528 | pes2o/s2orc | v3-fos-license | Stent implantation of an unusual morphology patent ductus arteriosus via the femoral artery
Received: 15.03.2021 Accepted: 31.05.2021 Correspondence: Merve Maze Aydemir, M.D. Department of Pediatric Cardiology, University of Health Sciences İstanbul Mehmet Akif Ersoy Thoracic and Cardiovascular Surgery Training and Research Hospital, İstanbul, Turkey Tel: +90 212 692 20 00 e-mail: maze_zabun@hotmail.com © 2021 Turkish Society of Cardiology Summary– Stenting of the patent ductus arteriosus (PDA) is a palliative procedure applied as an alternative to surgery in newborns with ductus-dependent pulmonary circulation. However, it is still a very challenging method in patients with aortic arch anomalies. We describe our experience with a newborn with right atrial isomerism and dextrocardia, complete atrioventricular septal defect, aortic outlet right ventricle with pulmonary atresia, right aortic arch, and a PDA from the left innominate artery. Because the PDA was long and tortuous, we preferred placing three short stents instead of a single long stent. The procedure was performed via the femoral artery approach with a Glidesheath Slender to decrease arterial injuries. PDA stenting in challenging morphologies can be performed successfully using multiple short stents and Glidesheath Slenders. Özet (Summary, translated from Turkish)– Stent implantation in the patent ductus arteriosus (PDA) is a palliative procedure performed as an alternative to surgery in newborns with duct-dependent pulmonary circulation. However, it is still a very challenging method in patients with aortic arch anomalies. In this article, we share our experience with a newborn with pulmonary atresia who had a right aortic arch and a PDA originating from the left innominate artery. Because the PDA was long and tortuous, we preferred to place three short stents rather than a single long stent. The procedure was performed via the femoral artery approach with a Glidesheath Slender to reduce arterial injuries. Stent implantation in PDAs with atypical morphologies can be performed successfully using multiple short stents and a Glidesheath Slender.
Maintaining pulmonary blood flow in infants with ductal-dependent pulmonary circulation is vital. A surgical systemic-pulmonary shunt can be a palliative solution; however, it has a significant risk of mortality and morbidity in the neonatal period. [1,2] Ductal stenting gained popularity in the early 1990s as an alternative to shunt procedures because of several advantages, such as the absence of risk of phrenic or vagal paralysis, chylothorax, and surgical adhesions, and a reduced hospital stay and number of reoperations. [1,3,4] Because of significant technological advances such as smaller delivery sheaths and pre-mounted stents designed for coronary arteries, the technique has become more preferred. However, ducts of different morphologies can complicate the procedure, and the selection of patients, techniques, and outcomes may vary. [4,5] This brief report describes our approach to ductal stent implantation in a newborn with an infrequent PDA morphology with a right aortic arch and a PDA from the left innominate artery. To reduce vascular complications, we applied the procedure using a Glidesheath Slender, which gives the option of choosing an outer diameter one French size smaller.
CASE REPORT
A female neonate was referred to our hospital with cyanosis on the first day of life. On physical examination, the nondysmorphic infant was cyanotic with an oxygen saturation of 74% on room air, was 50 cm in height, weighed 2.9 kg, and had a heart rate of 140 beats/min. On cardiovascular examination, the apex beat was palpated on the right side of the thorax with a regular S1 and S2 rhythm; a harsh grade 3/6 systolic ejection murmur was also detected. A transthoracic echocardiogram revealed right atrial isomerism and dextrocardia, a complete atrioventricular septal defect (right dominant, unbalanced), an aortic outlet right ventricle with pulmonary atresia, atrioventricular valve insufficiency (moderate-severe), and a right aortic arch with mirror-image branching. Moreover, confluent pulmonary arteries were filled retrogradely through a PDA originating from the left innominate artery. Prostaglandin E1 infusion was commenced, and a computed tomography angiography (CTA) scan was performed to evaluate the PDA before the invasive procedure. The CT result showed that the ductus was suitable for the femoral artery approach in terms of its location and shape (Figure 1A and B).
Catheterization was performed on day 5 after birth. PGE1 infusion was stopped 24 h before the stent implantation. General anesthesia was achieved with endotracheal intubation, and a 5F Glidesheath Slender (Terumo, Tokyo, Japan), which has an inner lumen of 5F and an outer diameter of a standard 4F sheath (which is generally 6F), was placed in the right femoral artery. A 5F sheath was placed in the right femoral vein by the percutaneous method. A dose of 200 IU heparin was administered intravenously to the patient. Antibiotic prophylaxis was performed with cefazolin for 24 h. A 5F right Judkins catheter was advanced retrogradely up to the descending aorta and PDA. An angiogram confirmed the right aortic arch mirror-image branching, and an inverted C-shaped long PDA orig-inated from the left innominate artery and supplying confluent pulmonary arteries ( Figure 1C).
The PDA diameter was 3 mm distally and 4.5 mm proximally; the length was estimated at 27 mm. A 5F Guiding right Judkins catheter was fed through the duct over a 0.035 in hydrophilic guidewire, and an extra support 0.014 in coronary guidewire (Iron-Man or Extra Support Abbott, Santa Clara, CA) was placed in a distal right pulmonary artery branch. The suggested stent size for patients weighing 3 kg is 4 mm. [6] Our patient was nearly 3 kg, and the ductus diameter was 3 mm at the pulmonary artery side. A 4 mm × 16 mm long coronary stent (Boston Scientific REBEL) was deployed distally in the PDA with an inflation pressure of 14 atm (Figure 2A). However, the proximal part of the ductus was larger; so, to avoid stent embolization, we preferred 4.5 mm stents for this part. A second 4.5 mm × 16 mm ( Figure 2B) and a third 4.5 mm × 12 mm stents were delivered into the proximal PDA ( Figure 2C). Adrenaline and milrinone infusions continued during the procedure. Immediately after stent implantation, prostaglandin infusion was stopped. In the postprocedure control injection, it was observed that the stents covered the PDA, and there was no stenosis in the distal and proximal parts ( Figure 2D).
Aortic saturation increased from the previous high of 75 to 92%. Heparin infusion was commenced and A B C administered for 24 h; aspirin and clopidogrel were started the next day and then continued with both. Echocardiography ( Figure 3A) and a chest radiograph ( Figure 3B) performed one day later showed the stent's unobstructed aortic and pulmonary sides.
On the first day after stent implantation, we had to deal with pulmonary overflow. Saturated oxygen (SpO 2 ) was nearly 100%. However, decreasing the fraction of inspired oxygen levels helped us in handling this problem. The patient was extubated after 4 days of the procedure. Unfortunately, during follow-up in the intensive care unit, first necrotizing enterocolitis and then sepsis was observed. Despite effective antibiotherapy, we could not save the baby, and she died.
DISCUSSION
Newborns with duct-dependent pulmonary circulation or inadequate pulmonary blood flow have traditionally been treated with a surgical shunt or have undergone early primary repair. Early primary repair is less preferred owing to the high risk of morbidity and mortality. However, systemic-pulmonary artery shunt operation may progress with significant complications, especially in premature babies. Shunt thrombosis, shunt stenosis, pulmonary or systemic arterial distortion, diaphragmatic paralysis, pleural effusion, and excessive or asymmetric pulmonary blood flow are among the most common complications. [3,7,8] Prevention of occlusion of the ductus arteriosus was considered a reasonable alternative to surgical aortopulmonary anastomosis at the end of the 1970s. In 1992 Gibbs et al. [9] described PDA stenting technique as an alternative to the systemic-pulmonary shunt. Since then, stenting of PDAs has gained increased popularity. [7] Usually, the PDA arises from the proximal descending aorta or the underside of the aortic arch. In patients with a right aortic arch, the PDA may originate from the innominate artery. In these patients, the PDA is generally long and tortuous; so using the closest and straightest vascular entry to the duct is critical in settling the stents. For this unusual morphology, some operators have reported ductal stenting from femoral venous approaches [4,7,10] or an arterial approach. [4,8,11] When the procedure is performed with the venous approach, the ascending aorta must be reached via the right atrium through the right ventricle through the ventricular septal defects pathway. However, especially in babies with low birth weight, hypotension and bradycardia may occur, and this situation can complicate the procedure. In this paper, we described a newborn with right atrial isomerism and dextrocardia, a complete atrioventricular septal defect (right dominant unbalanced), an aortic outlet right ventricle with pulmonary atresia, a right aortic arch, and a PDA from the left innominate artery. We implanted the ductal stent using the femoral artery approach. The arterial pathway to the PDA in this group of patients is preferable because catheter navigation is relatively easy. Nevertheless, vascular complications are the main possible concerns about the mid-term results of the procedure. [3] Hence, to prevent permanent arterial damage, we recommend the use of the smallest diameter introducer sheaths. The Glidesheath Slender is an innovative sheath with a thinner wall and hydrophilic coating. The inner diameter is compatible with 5F catheters, whereas the outer diameter is similar to that of a regular 4F sheath, which is used today in newborns for arterial access during diagnostic or interventional catheterizations. Although there are limited data presenting results of using Glidesheath Slender in small children, the initial experience with this sheath in obtaining radial access in adults is promising and shows that Glidesheath Slender does not kink more easily compared with the regular sheaths. [12] Gendera et al. [13] reported that performing percutaneous interventions in small children using the Glidesheath Slender is safe and effective. It allows for the reduction of outer sheath diameter by one French, which creates a difference in this patient group and reduces the risk of vessel complications (stenosis, occlusion). A 4F long sheath (Cook Inc, Bloomington, IN, USA) may also be preferred in patients who require PDA stenting through the femo-ral artery. 
[8] However, especially in PDAs originating from the innominate artery, as it will be difficult to advance the long sheath to that region, it can be performed through the 5F guiding catheter. Another advantage of using a guiding catheter is that it is easier to manipulate than a long sheath when more than one stent is placed.
It is essential to cover the entire length of the ductus in the aortic stump to prevent ductal stenosis and avoid potential injury from the stent edges. So, multiple stents may be used to establish a curve concordant to the ductus course. [1,2,7] In short and straight PDA cases, stent length is relatively easy to determine by angiography; however, it can be challenging in a tortuous ductus where the length is not precisely predictable. In these cases, choosing a moderately longer stent than the ductal size and placing multiple short stents instead of long stents is recommended. [1] Choosing a single long stent can cause technical difficulties. Roggen et al. [4] reported three unsuccessful ductal stent cases with ductus arteriosus from the innominate/subclavian artery. Kinking of the long stent was the reason for failure in the first case. The second failure was because of the long stent, which could not be advanced into the PDA. In the last one, PDA straightening occurred during balloon inflation and caused the stent to shift in the ductus because of its long length. Three stents were placed in our patient, starting at the pulmonary side by the telescoping technique; thus, the ductus line without stenosis could be established.
In long and tortuous PDAs, such as the one in our brief report, it will be safer to place multiple short stents instead of a single stent. This process can be an alternative to aortopulmonary shunt surgery. Besides, interventions through the Glidesheath Slender in small patients are safe and feasible even when using an arterial approach.
Informed consent: Informed consent was obtained from the patient for the publication of the case report and the accompanying images. | 2021-10-09T06:17:20.268Z | 2021-10-01T00:00:00.000 | {
"year": 2021,
"sha1": "20fa40e6fc3b041d85aad322fea9b07bc475ce4f",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.5543/tkda.2021.21038",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "7323dd714c6a7ad0b61b24f4fa2710173fa3d6a8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119163581 | pes2o/s2orc | v3-fos-license | Sums of fractions modulo $p$
Let $\F_p$ be the field of residue classes modulo a large prime $p$. The present paper is devoted to the problem of representability of elements of $\F_p$ as sums of fractions of the form $x/y$ with $x,y$ from short intervals of $\F_p$.
Introduction
Throughout the paper ε is a small fixed positive constant, p is a prime number sufficiently large in terms of ε. As usual, F p denotes the field of residue classes modulo p. The elements of F p we will frequently associate with the set {0, 1, . . . , p − 1}. Given an integer x coprime to p (or an element x from F * p = F p \ {0}) we use x * or x −1 to denote its multiplicative inverse modulo p. Let λ ∈ F p be fixed and let I and J be two intervals in F p . We assume that I and J are nonzero, that is I = {0}, J = {0}. Motivated by the recent work of Shparlinski [7], we consider the equation where x i , y j are variables that run through the intervals I and J respectively. Using exponential sum estimates Shparlinski obtained an asymptotic formula for the number of solutions of general linear congruences. In the case of (1) his results imply nontrivial estimates under some conditions imposed on the cardinalities of I and J (see Lemma 4 below). In particular, if n ≥ 3 and |I| = |J | > p n/(3n−2)+ε , then the asymptotic formula obtained by Shparlinski becomes nontrivial for any fixed constant ε > 0 (here and below, for a given set X we use |X | to denote its cardinality).
In the present paper we consider the problem of solvability of (1). Our results are based on combinatorial and analytical tools. Although we do not get an asymptotic formula for the number of solutions, our results give the solvability of (1) under weaker conditions on the sizes of |I| and |J |.
From Theorem 2 it follows, in particular, that for any ε > 0 there is δ = δ(ε) > 0 such that if I and J are intervals of F p with |I| > p 9/40+ε , |J | > p 1/2−δ , then any element λ ∈ F p can be represented in the form (3) for some (x 1 , . . . , x 12 ) ∈ I 12 and (y 1 , . . . , y 12 ) ∈ J 12 . Theorem 3. Let k be a fixed positive integer constant, I and J be intervals of F p such that Then for any λ ∈ F p the equation In particular, for any ε > 0 there is δ = δ(ε, k) > 0 such that if I and J be intervals of F p with then any element λ ∈ F p can be representable in the form for some (x 1 , . . . , x 4k ) ∈ I 4k and (y 1 , . . . , y 4k ) ∈ J 4k . It is to be mentioned that if the interval J starts from the origin and |J | > p ε , then there is a positive integer n = n(ε) such for any element λ ∈ F p the equation has a solution with y i ∈ J , see Shparlinski [8]. However, the problem is still open for intervals J of arbitrary positions.
Lemmas
Given sets X ⊂ F p and Y ⊂ F p , the product set X Y is defined by For a positive integer k, the k-fold sum of X , is defined by We also use the notation From the results of Glibichuk [5] it is known if |X ||Y| > 2p then 8X Y = F p . Here we need its version given by Garaev and Garcia [3] (see also Garcia [4] for even a more general statement).
We remark that the constant 2 + √ 2 that appears in the condition of the lemma can be substituted by a smaller one, but we do not need it here.
Next, we need the following result from Cilleruelo and Garaev [2] which is based on the idea of Heath-Brown [6].
Lemma 2. Let J be an interval in F p and λ ∈ F * p . Then the number W λ of solutions of the congruence Observe that for λ ∈ F * p the equation Hence, we have the following consequence of Lemma 2.
Corollary 1. Let J be an interval in F p and λ ∈ F * p . Then the number W λ of the solutions of the congruence We recall that (4) is equivalent to the claim that for any ε > 0 there exists c = c(ε) > 0 such that We also need the following result of Bourgain and Garaev [1].
Lemma 3. Let J be an arbitrary nonzero interval in F p . For any fixed positive integer constant k the number T k of solutions of the congruence satisfies Finally, we state the result of Shparlinski [7] which will be used to deal with Theorem 2 for relatively small intervals J . Lemma 4. Let I and J be two nonzero intervals in F p . Then the number R = R(λ, I, J ) of solutions of (1) with x i ∈ I and y i ∈ J satisfies R − |I| n |J | n p < |I||J | |I| n−2 + (p|J |) (n−2)/2 p o(1) .
Such an interval obviously exists. Let W λ be the number of solutions of the congruence x −1 + y −1 = λ, x ∈ J , y ∈ J .
It follows that
We have Thus, the condition of Lemma 1 is satisfied. Therefore, we get Since 2I 0 ⊂ I, the result follows.
Proof of Theorem 2
Let R be the number of solutions of the congruence (3) with x i ∈ I, y j ∈ J . There are three cases to consider. Case 1. p 5/119 < |J | < p 15/37 . In view of Lemma 4 applied with n = 12, the number R satisfies From the condition of the theorem it follows that |I| 11 |J |p 0.1ε < 0.1|I| 12 |J | 12 p , |I||J | 6 p 5+0.1ε < 0.1|I| 12 |J | 12 p .
Therefore, R > 0 and the result follows in this case. Case 2. |J | > p 5/8 . We fix a nonzero element x 0 ∈ I and denote by R 1 the number of solutions of the equation It suffices to show that R 1 > 0. Let J 1 = J \ {0}. Expressing R 1 via exponential sums and following the standard procedure, we get Here and below, we use the abbreviation e p (z) = e 2πiz/p . By the well-known estimate for incomplete Kloosterman sums we have max gcd(a,p)=1 y∈J 1 e p (ay * ) ≪ 2p 1/2 log p.
We also have 1 p Therefore, Since |J 1 | ≥ |J | − 1 > 0.5p 5/8 , we get that R 1 > 0 and the result follows in this case. Case 3. p 15/37 < |J | < p 5/8 . Following the notation of Lemma 3, we denote by T k the number of solutions of the congruence (5). From the well-known application of the Cauchy-Schwarz inequality it follows that From Corollary 1 we easily obtain that Since |J | > p 15/37 > p 1/3 , we get that Furthermore, by Lemma 3 and the condition |J | < p 5/8 , we have (1) .
Combining this estimate with (7) and (8), we obtain that From the relationship between the number of solutions of a symmetric equation and the cardinality of the corresponding set, we have implying that |3J −1 | ≥ |J | 21/20 p 1/4−0.1ε .
As in the proof of Theorem 1, let I 0 ⊂ F p be an interval such that |I 0 | > 0.3|I|, 2I 0 ⊂ I.
We have Thus, the condition of Lemma 1 is satisfied. Therefore, we get Since 2I 0 ⊂ I, this concludes the proof of Theorem 2.
Hence, denoting by I 0 ⊂ F p an interval such that |I 0 | > 0.3|I|, 2I 0 ⊂ I we verify that the the condition of Lemma 1 is satisfied with A = C = I 0 and B = D = kJ −1 . Thus, we get that I(2kJ −1 ) + I(2kJ −1 ) = F p which finishes the proof of Theorem 3. | 2015-10-27T16:45:12.000Z | 2015-10-27T00:00:00.000 | {
"year": 2016,
"sha1": "2c57dbefa1d8e926ded4fbacb784ef712d925143",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "2c57dbefa1d8e926ded4fbacb784ef712d925143",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
79471007 | pes2o/s2orc | v3-fos-license | Comprehensive evaluation of manikin-based airway training with second generation supraglottic airway devices
Background Supraglottic airway devices (SADs) are an essential second line tool during difficult airway management after failed tracheal intubation. Particularly for such challenging situations the handling of an SAD requires sufficient training. We hypothesized that the feasibility of manikin-based airway management with second generation SADs depends on the type of manikin. Methods Two airway manikins (TruCorp AirSim® and Laerdal Resusci Anne® Airway Trainer™) were evaluated by 80 experienced anesthesia providers using 5 different second generation SADs (LMA® Supreme™ [LMA], Ambu® AuraGain™, i-gel®, KOO™-SGA and LTS-D™). The primary outcome of the study was feasibility of ventilation measured by assessment of the manikins’ lung distention. As secondary outcome measures, oropharyngeal leakage pressure (OLP), ease of gastric tube insertion the insertion time, position and subjective assessments were evaluated. Results Ventilation was feasible with all combinations of SAD and manikin. By contrast, an OLP exceeding 10 cm H2O could be reached with most of the SADs in the TruCorp but with the LTS-D only in the Laerdal manikin. Gastric tube insertion was successful in above 90% in the Laerdal vs 87% in the TruCorp manikin (P<0.009). Insertion times differed significantly between manikins. The SAD positions were better in the Laerdal manikin for LMA, Ambu, i-gel and LTS-D. Participant’s assessments were superior in the Laerdal manikin for LMA, Ambu, i-gel and KOO-SGA. Conclusions Ventilation is possible with all combinations. However, manikins are variable in their ability to adequately represent additional functions of second generation SADs. In order to achieve the best performance during training, the airway manikin should be chosen depending on the SAD in question.
Introduction
Supraglottic airway devices (SADs) have become of increasing importance in recent years. Apart from their frequent use during elective surgery, SADs have become an essential part of difficult airway algorithms. When tracheal intubation and face mask ventilation fail, a temporary insertion of an SAD can enable ventilation, thus protecting the patient from hypoxemia. 1 Moreover, the current European Resuscitation Council Guidelines recommend SADs as the device of choice for airway management by health care providers without expertise in tracheal intubation. 2,3 In the recent years second generation SADs have been introduced with claims that ventilation and airway protection are improved. Second generation SADs include a lumen for insertion of a gastric tube in order to prevent gastric insufflation and pulmonary aspiration. 4 Furthermore, several second generation SADs can be used as a conduit to facilitate tracheal intubation. 1,5,6 Regarding the safe handling of an SAD, frequent airway trainings on SADs are essential, particularly for staff with limited experience. 7 Effective airway training requires suitable airway manikins; however, due to limited experience, it is not well known which airway manikin is best suited for training of a specific skill related to ventilation and advanced techniques on second generation SADs. 8,9 The most important characteristic of an airway manikin is to enable an authentic response to the trainee's ventilation efforts after the insertion of the device. In a straightforward approach the device's fit to the manikin's airway can be estimated by the possibility of positive pressure ventilation and visual inspection of the expansion of the artificial lungs. A more sophisticated method for determining the seal of a respiratory system and thus to prove the possibility for clinically sufficient ventilation, is the oropharyngeal leakage pressure (OLP) test. Secondary requirements on airway manikins include the ability of accurate positioning of SADs as a prerequisite for tracheal intubation and the ability to insert a gastric tube. Airway manikins intended for airway training with second generation SADs should provide as many of these capabilities as possible. We hypothesized that the feasibility of training of the specific skills the second generation SADs allow for depends on the type of manikin.
In order to evaluate the qualities of two airway manikins intended for use with SADs we performed a series of skill tests using different second generation SADs with each manikin. Eighty anesthesia providers volunteered for the study. They were asked to insert five different types of second generation SADs in both the TruCorp AirSim ® and the Laerdal Resusci Anne ® Airway Trainer™. The primary outcome measure of the study is the feasibility of ventilation depending on the manikin type and type of SAD. As secondary objectives, the manikins' qualities were evaluated by means of the seal to the SAD using the OLP test, the feasibility to insert a gastric tube and the accuracy of the SADs position relatively to the manikins' airways. Additionally the insertion times and participants' subjective ratings and preferences were evaluated.
Material and methods study institution and ethics
The study was conducted at the Department of Anesthesiology and Critical Care Medicine, University Medical
airway manikins and supraglottic airway devices
The two evaluated manikins were TruCorp AirSim ® Advance (TruCorp, Belfast, UK) (TruCorp) and Laerdal Resusci Anne ® Airway Trainer™ (Laerdal, Stavanger, Norway) (Laerdal) (Figure 1), both of comparable pricing. The Tru-Corp features an anatomically correct airway created from Digital Imaging and Communications in Medicine (DICOM) data. The Laerdal shows a complete anatomy of the vocal cords with elastic skin and tissue properties. According to the manufacturers' recommendations, both airway manikins were dedicated for training with SADs and insertion of laryngeal masks. The chosen second generation SADs were LMA ® Supreme™ (Teleflex, Athlone, Ireland), Aur-aGain™ (Ambu, Kopenhagen, Denmark), i-gel ® (Intersurgical, Wokingham, UK), KOO™-SGA, a second generation prototype (KOO Medical Equipment, Tsuen Wan, China), and LTS-D™ (VBM GmbH, Sulz a.N. Germany) ( Figure 2). Each device used was the model available in June 2016. In preliminary tests, we evaluated each SAD's size to find the sizes that fitted best in both the included airway manikins. To that end, three anesthesiology consultants (blinded to the purpose of the study) were asked to use each SAD in each of the included airway manikins. The quality of fit between the SAD and airway manikins was evaluated by two independent experienced anesthesiologists with regard to the efficiency of ventilation, position, and ease of insertion of a gastric tube into the esophagus. Based on the experts' ratings the LMA size 3, the AuraGain size 3, the i-gel size 5, the KOO-SGA size 4 and the LTS-D size four were included in the study. Oral cavities of the airway manikins were lubricated according to the manufacturers' instruction manuals ahead of each individual test procedure.
Participants
Eighty anesthesia residents, all staff of the Department of Anesthesia and Intensive Care of the University of Freiburg, gave written informed consent for voluntary participation. The inclusion criterion was a clinical experience of more than 100 SAD insertions. The participants' individual demographic data, professional experience, and experience with airway management, in both numbers of SAD insertions and tracheal intubations, were recorded.
study protocol
Every participant inserted each of the five SADs in both airway manikins. The sequence of airway manikins and SADs was randomised by drawing lots ahead of the study.
The study's protocol is shown in Figure 3. First, participants were asked to insert the respective SAD. After insertion, the participants were asked to apply a standardized cuff pressure of 60 mbar with a cuff pressure gage (Covidien, Plymouth, MN, USA). Ventilation was achieved by connecting a selfinflating bag (Ambu SPUR II, Ambu, Copenhagen, Denmark) and confirmed by visual assessment of the manikins' lungs' distention. Following successful ventilation, the SAD's position was determined by a bronchoscopic view of the hypopharynx. Thereafter, oropharyngeal leakage pressure was determined. Finally, participants were asked to insert a 14 Charrière gastric tube (Dahlhausen, Cologne, Germany) into the manikins' esophagus through the additional lumen of their SAD. In all cases, the success of ventilation, the OLP, the evaluation of the position, and success of gastric tube insertion were assessed by the same individual, an experienced anesthesiologist.
Oropharyngeal leakage pressure (OlP) measurement
After insertion of the SAD the OLP was tested by occlusion of the breathing system and subsequent inflation of the closed system with a gas flow of 3 L min -1 . 10 Under continuous gas inflation the pressure in the breathing system was expected to exceed a threshold of 10 cm H 2 O, according to existing literature. 11,12 If this objective has been reached we rated the OLP test passed and the ventilation to be clinically sufficient. If the threshold of 10 cm H 2 O could not be reached, ie, the leakage of the breathing system exceeded 3 L min -1 , the OLP was rated failed. The pressure was measured using a differential pressure sensor (PasCal PC 100, Hoffrichter, Schwerin, Germany). Pressure signals were visualized and
insertion time measurement
The participants' performances were video-recorded using a camera (LifeCam Studio, Microsoft™, Redmond, WA, USA) giving a complete top-down view of the test setup. Using these recordings, insertion time was assessed as time from hands on SAD until onset of manual ventilation.
accuracy of position
The positions of the SADs were evaluated for accuracy using a flexible bronchoscope (Ambu ® aScope™ 3, Ambu, Kopenhagen, Denmark) which was introduced into the SAD through the mask's oropharyngeal aperture. The bronchoscopic view was recorded and scored according to Brimacombe and Berry 13 : only vocal cords seen (4), vocal cords plus posterior epiglottis seen (3), vocal cords plus anterior epiglottis seen (2), vocal cords not seen but adequate function (1) and vocal cords not seen and failure to function (0).
success of gastric tube insertion
The positioning of the gastric tube was evaluated by direct visual inspection in the Laerdal manikin and by palpation in the TruCorp manikin.
Participants' subjective ratings and preferences
After completion of the test protocol, participants were asked to rate the handling quality of each SAD depending on the airway manikin using a five-point Likert scale, ranging from excellent handling (5) to poor handling (1). Beyond that, the participants were asked to indicate their preferred SAD in use with the respective airway manikin.
Data processing and statistical analysis
Data were collected in an Excel™ (Microsoft, Redmond, WA, USA) sheet and transferred to SPSS™ (version 25; IBM Corp., Armonk, NY, USA) for statistical processing. Statistical testing included basic descriptive statistics, Student's t-test, Mann-Whitney U-test and Wilcoxon signed-rank test to compare non-parametric data (participants' subjective rating). Normal distribution of continuous variables was tested using the Kolmogorov-Smirnov Test.
Results
All of the 80 anesthesia providers gave written informed consent to the study's purpose. The participants' characteristics are shown in Table 1. Initial ventilation was feasible in all combinations of SADs and airway manikins. The results of the secondary outcome measures are given in Table 2.
Oropharyngeal leakage pressure
The highest rate of OLP exceeding 10 cm H 2 O could be observed in the combination of LTS-D and the TruCorp manikin (79%). With the i-gel the OLP could not be reached in any of the two manikins. In the Laerdal manikin OLP did not exceed 10 cm H 2 O in use with LMA, Ambu, KOO and i-gel; only with the LTS-D did the OLP exceed the threshold of 10 cm H 2 O, but this was seen only in a few cases (6%).
insertion time
Insertion time was significantly shorter in the Laerdal manikin for all (all P,0.02) but the LMA SAD (P=0.315). Notes: Data are based on self-assessment by the participants and given as median.
accuracy of positioning (Brimacombe score) In all SADs, except for the LTS-D, the recorded Brimacombe score was significantly higher in the Laerdal manikin. For the LTS-D the Brimacombe score was significantly higher in the TruCorp manikin.
success of gastric tube insertion
The rate of successful gastric tube positioning was heterogeneous in the TruCorp manikin ( Figure 4). The lowest rate of success was found with the i-gel (45%) and the highest rate of success with the LTS-D (86%). By contrast, in the Laerdal manikin the participants showed a success rate exceeding 90%, regardless of the SAD used.
Participants' subjective ratings and preferences
Participants rated handling of all SADs better in the Laerdal manikin than in the TruCorp manikin ( Figure 5). The ratings given as median (25%-75% quartiles) are stated in Table 2.
In both manikins Ambu was the preferred device ( Figure 6).
Discussion
The most important finding of our study is that ventilation is possible with all SADs, regardless of the manikin in use. However, we performed an oropharyngeal leakage pressure test to further evaluate the possibility of alveolar ventilation using an SAD with regard to clinical conditions. The oropharyngeal leakage pressure, a technical approach to evaluating the breathing system's ability to be ventilated, gives the level of airway pressure which is considered necessary for clinically sufficient ventilation. 14 the LTS-D device only. We assume the overall better fit of the SADs in the TruCorp to be attributable to the more flexible material and apparently tighter hypopharynx in this type of manikin. The superior performance of the TruCorp manikin, however, could not be attributed to a better laryngeal position of the SADs. The Brimacombe scores were better in the Laerdal compared to the TruCorp manikin for all SADs except the LTS-D. Moreover, the overall easier to perform ventilation with the TruCorp manikin was not associated with a better subjective rating of the participants. They rated the SADs' handling more convenient in the Laerdal manikin.
With the i-gel clinically sufficient ventilation, measured by means of the OLP, was possible in neither manikin. This was most likely due to the non-inflatable thermo-sensitive submit your manuscript | www.dovepress.com Dovepress Dovepress 374 schmutz et al tightening system, unique in this type of SAD. However, our findings demonstrate that the insufficient ventilation with the i-gel does not negate the other aspects of training with this device. In accordance with previous studies evaluating SADs on airway manikins, 15,16 we found the i-gel to be superior regarding ease of handling and time of insertion. Furthermore, the excellent performance of the i-gel in clinical and emergency settings is well documented. [17][18][19][20] The evaluation of insertion of a gastric tube may gain importance during the training of additional capabilities of second generation SADs. Our results demonstrate that all combinations of SADs with the Laerdal manikin enabled an easy insertion of a gastric tube. Similar success rates could be found in studies evaluating the extended capabilities of second generation SADs in humans. 21,22 In these studies, gastric tube insertion was found to be successful in more than 91% of all cases. Therefore, we think that the high success rates found for gastric tube insertion are not due to the artificial situation of training with an airway manikin. However, gastric tube insertion seems more difficult in the TruCorp manikin.
Regarding the times required from hands on SAD to onset of manual ventilation, the differences between the TruCorp and the Laerdal manikin were statistically significant in four of the five SADs. The time required for insertion may support the provider's decision on a preferable type of SAD. However, with respect to the small differences ranging from 3 to 12 seconds in our study and the fact that times required for SAD insertion in manikins are poorly correlated to those found in anesthetized patients, 23 we consider our findings regarding the insertion time to be of limited clinical relevance.
The main findings of our study demonstrate that the airway models' suitability for training with second generation SADs depends on the task in question. The TruCorp AirSim Advance manikin's strength lies in its ability to be reliably ventilated with most of the SADs, particularly with the LTS-D. The strength of the Laerdal Resusci Anne Airway Trainer manikin is the overall good acceptance by the providers. Neither of the manikins is suitable for training with the i-gel concerning the performance of positive pressure ventilation. Insertion of a gastric tube is possible in both types. The potential clinical implications of these results rest on the benefits an ideal airway model can have on the feasibility of training. The most important quality of a manikin is the ability to simulate the real-world conditions and thus to give the trainee an authentic feedback. Moreover a working connection of a manikin and an airway device may motivate the trainee toward an achievable goal. Likewise, a manikin's limitations should be known in order to avoid frustration due to multiple unsuccessful attempts. Although there is only weak evidence in support of simulation-based technical skill training on patient safety, 24 demonstrable benefits in select clinical outcomes have been shown. 25,26 We therefore assume that the selection of a suitable airway manikin based on the knowledge of the respective pros and cons can improve the quality of training and thus the clinical performance of the provider and potentially the patient's safety.
limitations Some of our measurements may be subjective and open to bias in terms of familiarity and preference. However, the regular airway training of our department's staff uses other manikins than those included in our study. Moreover, our findings are in consensus with those of previous studies, including those by health care providers with different experience. [27][28][29] Therefore, we do not feel that the experience of our participants biased the findings of our study to a relevant extent.
We used a TruCorp AirSim Advance Manikin ordered in 2016. We are not aware of modifications applied to the inlay of the manikin by the manufacturer since the evaluation by Silsby et al. 8 Furthermore, other newly developed airway manikins are unknown to us. We have compared how both airway training manikins perform with each second generation SAD. This study does not claim equivalent findings in clinical patient care.
Conclusion
Our study proves the existence of favorable combinations of manikins and second generation SADs, with regard to airway training. We therefore suggest selecting an appropriate manikin depending on the SAD and the training task in question. If, however, the training comprises multiple SADs or tasks, the trainer should be aware of the limitations of the respective pairings. In the light of the increasing spectrum of available SAD types and associated functions, it appears desirable that the development of airway manikins keeps pace with this technical progress.
Data sharing statement
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Publish your work in this journal
Submit your manuscript here: http://www.dovepress.com/therapeutics-and-clinical-risk-management-journal Therapeutics and Clinical Risk Management is an international, peerreviewed journal of clinical therapeutics and risk management, focusing on concise rapid reporting of clinical studies in all therapeutic areas, outcomes, safety, and programs for the effective, safe, and sustained use of medicines. This journal is indexed on PubMed Central, CAS, EMBase, Scopus and the Elsevier Bibliographic databases. The manuscript management system is completely online and includes a very quick and fair peer-review system, which is all easy to use. Visit http://www.dovepress.com/testimonials.php to read real quotes from published authors. | 2019-03-19T13:02:28.874Z | 2019-03-01T00:00:00.000 | {
"year": 2019,
"sha1": "347edf67c9eed452a0e447ed7c5aa2cae9cf5e3d",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=48373",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7fb5b13364d533f5267dd9d7f38cd56a657b81b7",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
214681716 | pes2o/s2orc | v3-fos-license | A critical review of graphics for subgroup analyses in clinical trials
SUMMARY Subgroup analyses are a routine part of clinical trials to investigate whether treatment effects are homogeneous across the study population. Graphical approaches play a key role in subgroup analyses to visualise effect sizes of subgroups, to aid the identification of groups that respond differentially, and to communicate the results to a wider audience. Many existing approaches do not capture the core information and are prone to lead to a misinterpretation of the subgroup effects. In this work, we critically appraise existing visualisation techniques, propose useful extensions to increase their utility and attempt to develop an effective visualisation approach. We focus on forest plots, UpSet plots, Galbraith plots, subpopulation treatment effect pattern plot, and contour plots, and comment on other approaches whose utility is more limited. We illustrate the methods using data from a prostate cancer study.
| INTRODUCTION
Investigating target populations that potentially benefit from an innovative intervention is essential in clinical trials. Even if efficacy is established in the overall population, a complete benefit/risk assessment of subgroups should be undertaken before deciding whether the treatment is administered to the whole population or targeted to specific subgroups. 1 Such investigations pose numerous challenges such as recruiting patients with diverse baseline characteristics, which may create a large number of subgroups. The presence of promising results in subgroup analyses can be attributed to small sample sizes or to the fact that many potential subgroups are explored, which affects the credibility of the findings.
Subgroup analyses might be prospective or post-hoc in different settings of clinical trials. Their primary purpose could be to establish efficacy claims, subgroup discovery and/or consistency assessments across subgroups. Many researchers have proposed novel analysis approaches and trial designs for different types of subgroup analysis. [2][3][4] Subgroups have further received extensive attention in recent clinical research for the development of stratified medicine.
Visualisation techniques, when properly used, are powerful tools. It is argued that graphics allow a more direct interpretation of results than tables. 5 There is extensive literature on principles for good graphics in general [6][7][8][9][10][11][12] particularly in visualisation of healthcare data. [13][14][15] It is also true that good graphics require careful crafting 16 and there is scope to improve when it comes to figures found in clinical trial reports. 17,18 Graphical approaches are routinely employed in subgroup analysis, typically for describing treatment effect sizes of subgroups. Such visualisations encapsulate subgroup information and boost the clinical decision-making process. However, current literature does not adequately provide solutions to producing effective graphics in subgroup analyses. Existing approaches still have inherent drawbacks and their use may lead to misinterpretations of subgroup effect sizes. 2 In this article, we critically evaluate and refine effective visualisation approaches for subgroup analysis. Our considerations apply mainly to exploratory settings. Some of these visualisations have previously been proposed for subgroup analysis and were refined in this work. There are existing alternative techniques primarily developed for other applications which we have applied and/or extended to provide visual insight of subgroup information.
The remainder of the article is structured as follows. In Section 2 we describe: the framework for assessment, the dataset we use for illustration, and the graphical approaches for displaying subgroup information. We focus on graphics that allow a direct comparison of subgroup treatment effects. We summarise the findings in the case study and the assessments and features of all graphical approaches in Section 3. Section 4 provides a conclusion with final remarks.
| Framework to assess the properties of the graphical displays
It is fundamental that graphics in subgroup analysis display treatment effects for the subgroups under considerations. There are several other desirable characteristics for graphical approaches as initial subgroup analysis tools. Displaying sample sizes and uncertainty measures underpins the credibility of promising and adverse findings within subgroups. While many subgroup analysis techniques consider subgroups that are defined based on each baseline factor separately (univariate subgroups), it is also important to reveal information on those defined based on multiple factors (multivariate subgroups). For example, instead of looking at the subgroups defined by gender (male/female) and bone metastasis (yes/no) separately, it may be of interest to look at the intersection of the marginal subgroups: male with bone metastasis, male without bone metastasis, female with bone metastasis, and female without bone metastasis. These characteristics can certainly constitute sensible criteria for assessment. Our framework to assess the properties of the graphical displays consists of the criteria outlined in Table 1.
Each graphical approach is judged according to whether it meets the criteria set out. Even if a criterion is met, the information may be represented or encoded differently. For example, some graphics show the treatment effects in the subgroups using a colour scale while others represent them with the position of a point along a common scale. We discuss different levels of information in each of the graphics.
| Case study: The prostate cancer dataset
To illustrate the different graphical approaches, we use data from a prostate carcinoma clinical trial 19 which is available on the web 20 and has previously been used to demonstrate subgroup selection methods. 21 The trial included 506 subjects that were randomised to either a placebo group or one of three dose levels of diethylstilbestrol. In line with previous work, we combine the placebo and the lowest dose level of diethylstilbestrol to give the control arm, and the higher doses to give the experimental arm. Only 475 subjects with complete data are used in our illustration. We aim to describe the estimates for treatment effect across the different subgroups of patients. To illustrate the graphics, we consider six pre-treatment covariates, four of which are binary and two continuous: existence of bone metastasis (bm: 0, no; 1, yes), disease stage (3 or 4), performance rating (pf: 0, normal; 1, limitation of activity), history of cardiovascular events (hx 0, no; 1, yes), age, and weight index (wt: weight in kg − height in cm + 200). The considered endpoint in this analysis is death from all causes combined, and the log-hazard ratio for treatment vs control is used as the treatment effect measure.
| Visualisation methods
In this subsection, we present the graphical approaches that are best suited for subgroup analysis based on our review. The first three approaches, Galbraith, forest, and UpSet plots, apply to both binary and categorical subgroup-defining covariates. We also include two methods, subpopulation treatment effect pattern plot (STEPP) and contour plots, that allow exploring changes in the treatment effect over one or two continuous variables, respectively, as it is suggested in the current EMA guideline. 1 These five approaches represent or provide a measure of the treatment effect and therefore allow direct comparison across subgroups. Additional graphics that we found less practical are deferred to the Appendix while other approaches that may be used to describe subgroup composition but do not fulfil the criterion C1 (effect size) are presented in the Supporting Information.
In most of the graphics, for simplicity, the treatment effect is estimated by merely partitioning the dataset and using only subjects from the considered subgroups. We acknowledge that there are more advanced approaches that make more efficient use of the data, 1 but these are not required to fulfil the purposes of this article (see also Reference 4,22, and 23). The graphics evaluated in this article can be used to display the treatment effect estimates resulting from such techniques.
All graphics are created using the R statistical software 24 and the code is publicly available as an R package for reproducibility. 25 In most of the cases, we draw the plots using functions from the grid and graphics packages which are part of the base R language. For some of the plots, we use additional packages that are cited in each section accordingly.
Although we acknowledge that the choice of colours is an essential and challenging task when producing graphics, we do not discuss this topic in our work as it is discussed elsewhere. 26,27 Several of the considered plots make use of colour coding to represent the magnitude of the treatment effect across subgroups, for which we use a divergent colour palette generated by the colorspace R package. 28 We follow Tufte's principles 29 to enhance graphical integrity. This is particularly relevant when we depict sample sizes with two-dimensional areas/shapes which are proportional to one-dimensional sample sizes. Numerical quantities are then properly represented and comparisons can be made accurately. Additionally, we take into account research on graphical perception 30 to judge the graphics.
| Forest plot
Although forest plots are a common graphic used in meta-analysis, 31 they are also extensively used for subgroup analysis. 32,33 Figure 1 shows its application for the prostate cancer dataset considering the four binary covariates. The middle panel displays the subgroup treatment effect estimates with their confidence intervals. The squares in the centre of each error bar are proportional to the subgroup sample sizes. A vertical line at the overall treatment effect level is added to facilitate seeing if a subgroup confidence interval differs significantly from the overall effect. 32 Additional information in a table format are usually included to provide the magnitude of the estimates. The text in the left panel shows the estimates of the treatment effects, lower/upper bounds of the 95% confidence intervals and subgroup sample sizes (further divided into treatment and control arms). When using continuous endpoints, it is appropriate to display the mean response for each treatment arm in an additional panel. In our implementation for a survival endpoint, we include the Kaplan-Meier estimate for each subgroup. The summary statistics in the left panel and the survival curves on the right may be dropped if additional space is required. The Kaplan-Meier estimates are drawn with the ggplot2 package. 34 Forest plots are popular because they are simple and effective. In the main panel, they allow a direct comparison of the treatment effect estimates with low cognitive effort. According to our assessment, forest plots meet C1 and C2 displaying treatment effects and confidence intervals. Criteria C3 and C5 are also met as the subgroup sample sizes are depicted through the area of the treatment effect and many subgroup-defining covariates can be easily displayed. A downside of forest plots is that as subgroup intersections (C4) are not shown.
In Figure 1, it is quite clear that the subgroup defined by a positive outcome for bone metastasis is the subgroup with the largest benefit from the treatment since the log-hazard ratio is negative. Interestingly, its upper confidence interval does not cover the average treatment effect, therefore suggesting treatment effect heterogeneity. The Kaplan-Meier curves also allow to rapidly recognise the differential survival pattern for the subgroup with bone metastasis: patients with bone metastasis in the control group have shorter survival when given the control treatment, while those in the treatment group have a survival pattern that is similar to patients without bone metastasis. For the rest of the subgroups, their treatment effect estimates are closer to the estimate in the overall population. While we observe a positive loghazard ratio for some subgroups suggesting the experimental treatment is worse than control, all their confidence intervals cover the average treatment effect, which implies that no treatment effect heterogeneity is present.
| UpSet plot
UpSet plots are a novel visualisation technique for the quantitative analysis of sets and their intersections. 35 It was proposed to overcome the restriction to a small number of sets of Venn diagrams. In Figure 2, we use the UpSetR R package 36 to create the plot with four binary subgroup-defining covariates. The sizes of the univariate subgroups for these We extend the UpSetR R package to display effect sizes in an extra panel ( Figure 3). While the log-hazard ratio and its confidence interval for each subgroup are shown as in a forest plot, the UpSet plot provides the advantage of displaying intersections of sets. If one were to use a statistical model with treatment-by-covariate interactions to derive the treatment effect estimates, then each row would correspond to a linear combination of the coefficients in the model.
Our extension of the UpSet plot also allows displaying lower level intersections as compared to the original UpSet proposal. We implement a new icon for the matrix panel: a "+" symbol if a variable is equal to 1 or "yes," a "−" if a variable is equal to 0 or "no," and empty if this variable is not considered for the subgroup definition. For example, the first bar of the plot corresponds to the overall population (no subgroup division), which has a size of 475. The second bar with a size of 428 corresponds to the subgroup of normal performance rating (pf = 0), irrespective of the values of the other two variables. Since the number of subgroups to consider increases dramatically in this modification (3 p subgroups when considering p binary covariates), only three covariates are used. One could include more covariates and filter the number of subgroups according to different criteria, such as total subgroup sample size or sample size per treatment. Finally, the bar plot on top of the matrix panel indicates the marginal subgroup sizes with the black region corresponding to the 1 or "yes" category and the white region corresponding to the 0 or "no" category.
The UpSet plot loses the simplicity observed in forest plots and requires the beholder to be familiar with the graphical approach before drawing conclusions. Nevertheless, the UpSet plot has some advantages. Effect sizes (C1) and confidence intervals (C2) are displayed as in a forest plot and many covariates (C5) can also be used. Compared to a forest plot, subgroup sample sizes (C3) are displayed in a panel as a bar plot. This is a more effective way to display the information in contrast to the proportional areas in the forest plot. Another advantage is that the UpSet plot shows subgroup intersections (C4) and allows inferring relations among the subgroups. In our example, we order the subgroups in terms of their sizes, but it is also possible to arrange the subgroups according to their effect sizes or the number of subgroupdefining covariates involved in their composition. As the overall treatment effect and its confidence interval are also included, it allows to compare treatment effects and check for treatment effect heterogeneity. However, unlike a forest plot, it does not show the mean response for treatment and control arms in each subgroup.
| Galbraith plot
A Galbraith plot 37,38 is an alternative to a forest plot for examining heterogeneity among studies or subgroups in a meta-analysis. The variant that is shown in Figure 4 exhibits the estimation of treatment effect sizes for K = 8 subgroups defined by the four binary covariates. The xy-coordinates correspond to the points: whereδ F is the treatment effect estimate in the full population andδ i is the treatment effect estimate in subgroup i, i = 1, …, K. The grey band can be used to detect effect heterogeneity. Points outside the band show larger than expected heterogeneity. The slope of the line from the origin through each subgroup point corresponds to the effect size estimatê δ i of the corresponding subgroup. An additional radial axis is drawn to depict the subgroup effect sizes which are F I G U R E 3 Improved UpSet plot for subgroups defined by performance (pf ), bone metastasis (bm), and history of cardiovascular events (hx). The panel on the left (matrix) displays how the subgroups are formed by assigning a "+" if the variable is equal to 1 and a "−" if the variable is equal to 0. The bar plot on top of the matrix panel indicates the marginal set sizes in relation to the total sample size, with the black region corresponding to the 1 or "yes" category and the white region corresponding to the 0 or "no" category. Treatment effect sizes and their confidence intervals are displayed in the panel in the middle and the subgroup sizes in the horizontal bar plot on the right represented with the red tick marks. The central line at y = 0 points to the average treatment effect for the full population. This plot was drawn using the ggplot2 R package together with ggrepel to avoid overlapping labels. We note that, asδ F is itself a random variable, it might better to consider its variance. This can be achieved by considering the xy-coordinates: The resulting plot is given in the Supporting Information. The drawback of this modification is that the x-axis does no longer represent the standard error of the treatment effect estimates.
The result of the graphical assessment of Galbraith plots is satisfactory, since it displays effect sizes (C1), standard deviations (C2), and a large number of subgroup-defining covariates can undoubtedly be used (C5). On the other hand, this plot does not display sample sizes (C3) nor intersections (C4). Although Galbraith plots might require more effort to be explained and understood, these plots can certainly handle a large number of subgroup covariates, perhaps better than any of the other considered graphics. In this case, special care needs to be paid to the labels of subgroups and the location of red tick marks as they may not be distinguishable.
In terms of our example, we conclude, just as in the forest plot, that treatment effect heterogeneity may be present in the subgroup of patients with bone metastasis since its point is immediately visible outside the grey band.
| Subpopulation treatment effect pattern plot
The STEPP 39,40 gained popularity in breast cancer recently. It is a non-parametric method mainly for examining whether treatment-covariate interactions exist. In Figure 5, we adopted the slide-window fashion of STEPP to represent the estimation of treatment effect size (log-hazard ratio) in overlapping subgroups defined by age. To do so, we form subgroups with sample sizes of around N 11 = 40 with an overlap of N 12 = 30 subjects with immediately neighbouring subgroups. The band bounded by the blue dashed lines is constructed for 95% simultaneous confidence interval. The other band bounded by the orange dashed lines is built based on individual 95% CI (without multiplicity adjustment). The red line is formed by connecting the point estimates of treatment effect (log-hazard ratio) for all formed subgroups. The green line represents the log-hazard ratio estimate for the full patient population. It is worth noting that the point estimates are positioned at the mean value of age for each subgroup for the x-axis. If the green line does not lie in the region formed by simultaneous confidence intervals, it reveals that interaction may exist.
In the original publication, 40 the points were placed equidistantly along the x-axis annotating the median values of the variable for each subgroup as reference. An illustration of this alternative plot is given in the Supporting Information. We believe it is better to use the proper scale to reflect the mean (or median) values of the variable used to define subgroups. This helps indicate whether the values cover a small or large range of the variable of interest. F I G U R E 4 Galbraith plot for subgroups defined by existence of bone metastasis (bm), history of cardiovascular events (hx), stage, and performance rating (pf ) It is a quite common problem in subgroup analysis to define subgroups based on continuous biomarkers. Since it is advised against using arbitrary cutoff points in initial subgroup investigations, STEPP plots are a good way to characterise changes of the estimated treatment effect over the range of the considered continuous covariate. This is the suggestion from the current EMA Guideline on the investigation of subgroups in confirmatory clinical trials. 1 The STEPP approach satisfies C1 displaying effect sizes and C2 for displaying confidence intervals. Here, the subgroup sample sizes (C3) are adopted by design and only annotated in the figure title but are not represented graphically. This plot only considers one continuous covariate and therefore, C4 (intersections) cannot be met. The plot does show intersections of contiguous subgroups, where the total number of subgroups depends on the sample size of subgroups and the overlap proportions.
In some situations, it might not be clear how to determine the subgroup sizes or the overlap, and sensitivity analyses might need to be conducted for different configurations. The analysis results may further be compared with results obtained using fractional polynomials41,42 or non-parametric methods such as Gaussian processes.43 In Figure 5, we observe that the treatment effect for subgroups defined by age fluctuates closely around the overall treatment effect. Towards the ends of the covariate range, the estimate of the log-hazard ratio departs from the full-population estimate, although the confidence intervals for the subgroup treatment effects still cover the overall effect. This graph may be particularly useful for deriving subgroups from a continuous variable.
| Contour plot
While STEPP considers only one continuous biomarker, a contour plot can be regarded as an extension that explores the treatment effect over two continuous biomarkers. We propose two different implementations of contour plots for the treatment effects across age and weight.
In Figure 6A, subgroups of sample size N11 are formed by using a horizontal sliding window across the values of age, with an overlap of N12 subjects. Subsequently, each subgroup is further divided into smaller subgroups of sample size N21 using a vertical sliding window across the values of weight, with an overlap of N22. The sample sizes and overlaps used to form subgroups are chosen by design based on sensible judgement; for example, subgroups should have a considerable sample size to ensure that patients in both the treatment and control arms are represented. For each formed subgroup, we then calculate the log-hazard ratio for treatment vs control. The contour areas are obtained through bivariate interpolation and smooth surface fitting (LOESS) of the irregularly distributed data points over the range of values observed in the subjects under study. A divergent colour scale is used for the effect sizes. A limitation of this approach is that there may be regions of the covariate space in which the treatment effect estimates are unreliable due to small sample sizes or the absence of data points.
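The nested windowing can be sketched as follows; this is a hedged illustration assuming a data frame d with columns time, status, treat, age, and wt (weight), and the window sizes and overlaps used here are illustrative values, not those used for the figure.

```r
# Nested sliding windows over age (outer) and weight (inner), followed by
# a LOESS-type surface fit over the resulting (age, weight, loghr) points.
library(survival)

n11 <- 60; n12 <- 40   # horizontal (age) window size and overlap
n21 <- 30; n22 <- 20   # vertical (weight) window size and overlap

windows <- function(n, size, step) {
  lapply(seq(1, n - size + 1, by = step), function(s) s:(s + size - 1))
}

d <- d[order(d$age), ]
pts <- do.call(rbind, lapply(windows(nrow(d), n11, n11 - n12), function(i) {
  outer_sub <- d[i, ]
  outer_sub <- outer_sub[order(outer_sub$wt), ]
  do.call(rbind, lapply(windows(nrow(outer_sub), n21, n21 - n22), function(j) {
    sub <- outer_sub[j, ]
    if (length(unique(sub$treat)) < 2) return(NULL)   # need both arms
    fit <- coxph(Surv(time, status) ~ treat, data = sub)
    data.frame(age = mean(sub$age), wt = mean(sub$wt),
               loghr = unname(coef(fit)))
  }))
}))

# Smooth surface over the observed covariate range.
surf <- loess(loghr ~ age + wt, data = pts)
grid <- expand.grid(age = seq(min(pts$age), max(pts$age), length.out = 50),
                    wt  = seq(min(pts$wt),  max(pts$wt),  length.out = 50))
grid$loghr <- as.vector(predict(surf, newdata = grid))
```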
We also propose using local regression techniques to calculate the treatment effect at each coordinate. In Figure 6B, a weighted Cox proportional-hazards model is fitted at each combination of weight and age (using a step of 1 unit). A normal kernel centred at the coordinate values under consideration is used to assign weights to each subject. If there are fewer than 20 subjects within two standard deviations, the effect size is not calculated and the area is left blank. This helps to avoid extrapolating the results to areas in which there is not enough information.
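A minimal sketch of this kernel-weighted fit at a single coordinate is given below; the bandwidths h_age and h_wt and the grid limits are illustrative choices not reported in the text.

```r
# Kernel-weighted Cox fit centred at one (age, weight) coordinate.
library(survival)

local_loghr <- function(d, a0, w0, h_age = 5, h_wt = 5, min_n = 20) {
  z_age <- (d$age - a0) / h_age
  z_wt  <- (d$wt  - w0) / h_wt
  w <- dnorm(z_age) * dnorm(z_wt)           # product normal kernel
  near <- abs(z_age) < 2 & abs(z_wt) < 2    # subjects within two SDs
  if (sum(near) < min_n) return(NA_real_)   # leave the area blank
  fit <- coxph(Surv(time, status) ~ treat, data = d, weights = w)
  unname(coef(fit))
}

# Evaluate on a 1-unit grid, as in the figure (illustrative ranges).
grid <- expand.grid(age = 50:80, wt = 70:120)
grid$loghr <- mapply(local_loghr, a0 = grid$age, w0 = grid$wt,
                     MoreArgs = list(d = d))
```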
F I G U R E 5 STEPP plot of overlapping subgroups defined by age. Each subgroup has a sample size of around N11 = 40 and is controlled to have about N12 = 30 subjects overlapping with the neighbouring subgroups. STEPP, subpopulation treatment effect pattern plot

Contour plots match criterion C1, since effect sizes are represented through a colour scale, which is one of the least accurate ways to encode information.30 For a particular coordinate in the plot, it might be hard to decipher the precise value of the treatment effect. Nevertheless, this plot helps to uncover patterns in specific regions of continuous covariates that might not be visible otherwise. Contour plots also meet C4, as the intersection of two subgroup-defining covariates is used. The uncertainty of the treatment effect estimates (C2) and the sample sizes (C3) are not represented in the graphic, which is a significant drawback. Contour plots can only consider two covariates.
Contour plots are particularly useful when there are enough subjects distributed well over the entire range of the covariates of interest. The interpolated treatment effect sizes may be unreliable in regions without data points, and when the values of the two covariates are sparsely distributed over the region, it can be unclear how smooth the interpolated surface should be. Note that it is also possible to use other local regression algorithms to calculate the treatment effect at each coordinate, or even other modelling strategies such as a generalised additive model with interactions.41 Recent proposals that investigate the predicted individual treatment effect can also be applied to estimate the effect of treatment across the covariate space.[44][45][46] We observe that older patients seem not to benefit from the new treatment. However, this interpretation should be made with caution, as the precision of the estimates is not displayed.
| Additional graphical approaches
We also consider further graphical approaches that may be applied to the subgroup analysis framework: the level plot, mosaic plot, Venn diagram, bar chart, tree plot, L'Abbé plot, chord diagram, and coxcomb plot. Compared with the aforementioned methods, their assessment is less favourable, and hence they are presented only in the Appendix. In most cases, they convey the information on the treatment effect through colour coding, which is more challenging to decode. Additionally, most of them do not display a measure of uncertainty for the treatment effect estimates, which is essential for assessing treatment effect heterogeneity.
The use of auxiliary plots might help to display additional information, such as overlap between subgroups, that might be relevant. The Supporting Information provides an overview of some options. Some of the graphics allow visualising subgroup composition or overlap between subgroups by displaying the relative overlap or dissimilarity measures. Other graphics, such as a mosaic plot with a binary response, an alluvial plot or a coxcomb plot, may complement the analysis by displaying absolute response rates in treatment and control arms across subgroups.

Throughout the manuscript, we have analysed the prostate cancer dataset to explore subgroup effects. Here, we present an overall summary of the main findings related to subgroups.
In the forest plot (Figure 1), we explored the marginal treatment effects for subgroups defined by binary covariates. The treatment effect was similar across all subgroups except for the group of patients with bone metastasis. The graph suggests that patients with bone metastasis might derive a larger benefit from the experimental treatment, because the confidence interval for this subgroup does not cover the line that represents the treatment effect in the overall population. The same pattern is observed in the Galbraith plot (Figure 4), as the only point lying outside the (−2, 2) band is the one corresponding to this subgroup. Figure 3 additionally allows observing the subgroups formed by subgroup intersections; it can be seen that patients without bone metastasis but with a history of cardiovascular events might have been harmed by the experimental treatment.
The variable age was explored alone in Figure 5 and together with weight in Figure 6. In the latter, we find that the treatment appears more beneficial for younger patients with a weight index above 90, while for older patients the treatment may have led to worse outcomes than the control.
We emphasise that these analyses are exploratory and must be interpreted with care. Nevertheless, they may provide useful insights for planning additional studies and collecting more information on subgroups of interest in the future.
| Summary of graphical methods
In this section, we provide a summary regarding the criteria C1 to C5 presented in Table 1. We discuss only the graphics presented in the previous section. The assessment and characteristics of the graphical approaches are summarised in Table 2, where we also include the graphics from the Appendix and indicate which plots were improved or modified to adapt them to the subgroup analysis framework.
C1 (effect size): This information is encoded in different ways in the studied graphics. Forest plots, UpSet plots, Galbraith plots, and STEPP allow a straightforward comparison across subgroups, as the treatment effect estimates are displayed along a common axis. This way of encoding information is the most accurate according to theoretical arguments and experimental results on graphical perception.30 Contour plots use a less accurate encoding that is effective only for giving a general overview of the estimated treatment effect over the range of the covariates. Therefore, even if all of the graphical techniques satisfy the primary criterion of displaying subgroup treatment effect sizes, some may be more effective than others in communicating the results of the analysis. The judgement of heterogeneity also depends on the treatment effect estimate in the full population, which is displayed in all the considered graphics. Additionally, forest plots can provide absolute subgroup responses for the treatment and control arms.
C2 (uncertainty): Forest plots, STEPP, and UpSet plots display the confidence intervals of the treatment effects, while Galbraith plots show their standard errors. This is important, since visualisations that do not adequately convey the uncertainty in the estimates may be misleading and can lead to an over-interpretation of the heterogeneity among subgroup effects.
C3 (sample size): Only the UpSet plot and the forest plot provide a visual display of subgroup sample sizes. The UpSet plot displays the subgroup sample sizes in an additional panel using a bar plot, which allows a more efficient and accurate comparison of subgroup sizes than the forest plot. While one could add a bar plot of sample sizes to any of the other graphics, the particular assembly of the UpSet plot enables the information to be decoded quickly and efficiently.
C4 (intersections): This criterion is met only by UpSet plots and contour plots. UpSet plots can display intersections of two or more subgroups remarkably well, allowing great flexibility in how the information is presented.
C5 (many covariates): Forest plots, Galbraith plots, and UpSet plots can display a large number of subgroup-defining covariates. However, Galbraith plots deserve particular mention under this criterion, as their design makes them especially appropriate when considering a large number of covariates.
| DISCUSSION AND CONCLUSION
We made use of several graphical approaches and assessed their characteristics for subgroup analysis problems. We also attempted to improve some methods, correcting flaws or adapting the graphics to the subgroup analysis setting.
It is important to note that the considered graphical approaches are descriptive only and do not adjust for potential selection bias of point estimates, inflated type I errors due to multiple testing, or reduced simultaneous coverage probabilities of confidence intervals. These consequences of multiple testing and selective estimation may become substantial as the number of considered subgroups increases. In exploratory settings, where the definition and selection of subgroups are post hoc and may be data-driven, frequentist error rates or coverage probabilities cannot be controlled in any case. In contrast, if the subgroups to be considered are pre-defined (or selected independently of outcome data), a broad range of statistical approaches is available to account for the associated multiplicity.3,47 Most of the considered graphical approaches can be used to show multiplicity-adjusted treatment effects and uncertainty measures. One can, for example, use simultaneous confidence intervals based on the Bonferroni correction, post-selection confidence intervals,46 treatment effect estimates after model averaging,48 bias-adjusted estimates,21 and so on. Comparative plots showing both the adjusted and unadjusted estimates may also provide valuable insights.
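As one concrete option among those listed, Bonferroni-based simultaneous intervals are straightforward to compute; the sketch below assumes hypothetical vectors est and se holding the subgroup point estimates and standard errors.

```r
# Bonferroni-adjusted simultaneous 95% confidence intervals for k
# subgroup log-hazard ratios.
k <- length(est)
z <- qnorm(1 - 0.05 / (2 * k))     # adjusted critical value
sim_ci <- data.frame(lower = est - z * se,
                     upper = est + z * se)
```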
In this article, we provide tools to visualise essential information on subgroups, such as effect size estimates and subgroup sample sizes. The considered approaches are descriptive only and serve as exploratory tools for generating hypotheses for future investigations.
The choice of visualisation method depends on the type of biomarkers that define the subgroups, the type of outcome variable, the sample sizes, and the objective of the subgroup analysis. For example, we have seen that contour plots and STEPP are only suitable for continuous covariates, while the other plots allow the use of binary or categorical covariates. On the other hand, Galbraith plots might be particularly suited to the case of a very large number of subgroups, and forest plots can show not only the treatment effect estimates but also the average response in each treatment arm. As some graphics do not display all the information, combining several plots can be advantageous.
In this work, we focused on non-interactive graphical displays. We recognise the usefulness of adding interactivity, which can improve the flexibility of the studied graphics. For example, there is existing work on interactive mosaic plots49 which allows the easy inclusion of many subgroup-defining covariates while avoiding the problem of overlapping labels. Interactive UpSet plots allow the inclusion/exclusion of covariates, ordering them according to different characteristics, and displaying additional variables, which makes this graphic a powerful analysis tool (https://caleydo.org/tools/upset/). Galbraith plots might benefit from interactivity when a large number of covariates is used, for example by displaying the corresponding labels and subgroup effect sizes on mouse hover over the points. The recently published subscreen package50,51 enables the analysis of thousands of subgroups by using a scatter plot and allowing the user to display additional information through interactive tools such as the Shiny R package.52 Existing interactive approaches can be adapted to subgroup analysis, or interactivity can be added to the graphics introduced in this article.
Finally, the dataset we used for illustration contained information on causes of death. However, the considered endpoint in the analysis in this article was death from all causes combined. Additionally, while four treatment options were used to treat the patients, we combined them into two categories. These adaptations allowed us to frame the analysis in the typical situation where an experimental treatment is compared against a control. Modifications to the considered graphics could be explored to enable the comparison of multiple treatments or multiple endpoints. Again, interactivity may help in these situations to explore and understand the data.
SUPPORTING INFORMATION
Additional supporting information may be found online in the Supporting Information section at the end of this article.
A1 | Tree plot
The tree plot for subgroup analysis starts with the full population, which branches into two or more items corresponding to the levels of the first subgroup-defining covariate. Each of the items in the new level branches again into two or more levels for the second covariate, and so on. If more variables are included, this division procedure is conducted consecutively to form subgroups until all category combinations of the covariates are considered. Figure A1 shows a tree plot of treatment effect differences for subgroups defined by bone metastasis, performance rating, and history of cardiovascular events. At each level or layer, the treatment effect differences and their 95% confidence intervals for the associated subgroups are displayed, and an additional horizontal dotted line marks the overall treatment effect size. In Figure A1A, the y-axis for each level of the plot is drawn independently of the other levels. In Figure A1B, the y-axes are consistent across levels, which helps to visualise the difference in variability of the estimates.
Tree plots display effect sizes and their confidence intervals, satisfying C1 and C2. This information is encoded through position on identical but non-aligned scales, which provides a less accurate perception compared with the forest and UpSet plots. Tree plots allow displaying the intersection of not only two but also more subgroup-defining covariates (C4). However, they do not show the size of the subgroups (C3), and it is not possible to arrange many subgroup-defining covariates (C5).
A few features of tree plots are worth pointing out. Although we used binary covariates, it is possible to consider covariates with more than two levels. Ideally, the number of covariates and categories should be moderate; otherwise we may obtain subgroups with small sample sizes. In this implementation, the ordering of the covariates needs to be pre-specified. Recent proposals that allow the data to define the ordering and/or the cutoff values for continuous variables54,55 can be used to draw tree plots.

Figure A1 allows us to draw additional conclusions regarding the treatment effect sizes. We observe that the treatment effect is more pronounced for subjects with bone metastasis. Additionally, we notice that the subgroup of subjects without bone metastasis but with a history of a cardiovascular event and limited activity (pf is 1) has a positive log-hazard ratio, suggesting that the control is better than the experimental treatment for this subgroup.
A2 | Level plot
Level plots are typically used to show geographic surfaces in a plane. In the subgroup analysis setting, two categorical variables are arranged on the axes and the main plot area consists of cells that represent disjoint subgroups. Each subgroup is defined by the corresponding combination of levels of both covariates, and a divergent colour scale is used to display the treatment effect in that subgroup. In Figure A2A, we show the implementation of a level plot for treatment effects, in terms of log-hazard ratios, in subgroups defined by categorised age and weight for the prostate cancer dataset. For each subgroup, a Cox proportional-hazards model with treatment as the independent variable is fitted to obtain the hazard ratio estimate. Alternatively, a single multivariable model with treatment-by-subgroup interactions may be fitted to obtain the effect estimates. We also add the point estimate and confidence interval for the overall population in the legend as a reference and include the subgroups' sample sizes inside the cells. The cells on the bottom and left margins represent the marginal subgroups corresponding to each of the three levels of age and weight, respectively.
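The per-cell estimates behind such a plot can be sketched as follows, assuming a data frame d with columns time, status, treat, age, and wt; the tertile-based cutoffs are an illustrative choice.

```r
# Per-cell Cox estimates for a level plot over categorised age and weight.
library(survival)

d$age_cat <- cut(d$age, quantile(d$age, c(0, 1/3, 2/3, 1)), include.lowest = TRUE)
d$wt_cat  <- cut(d$wt,  quantile(d$wt,  c(0, 1/3, 2/3, 1)), include.lowest = TRUE)

cells <- expand.grid(age = levels(d$age_cat), wt = levels(d$wt_cat),
                     stringsAsFactors = FALSE)
cells$n <- NA_integer_
cells$loghr <- NA_real_
for (r in seq_len(nrow(cells))) {
  sub <- d[d$age_cat == cells$age[r] & d$wt_cat == cells$wt[r], ]
  cells$n[r] <- nrow(sub)
  if (nrow(sub) >= 10 && length(unique(sub$treat)) == 2) {
    fit <- coxph(Surv(time, status) ~ treat, data = sub)
    cells$loghr[r] <- unname(coef(fit))
  }
}
```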
This graphical approach satisfies criterion C1 by displaying effect sizes. A quick look at the colours allows conclusions such as for which subgroups the treatment is beneficial and for which it is harmful. However, this way of encoding quantitative information provides the least accurate visual perception, and it is hard to compare subgroups with similar treatment effects. Additionally, the variability of the subgroup estimates is not represented in this plot (C2), making it impractical for detecting treatment effect heterogeneity. Although the addition of the sample sizes in the cells allows a comparison of the subgroup sizes, the sample sizes are not represented graphically; therefore, this display meets criterion C3 only partially. Level plots display the intersections of the subgroups formed by the levels of the subgroup-defining covariates (C4), although only two covariates can be considered. Finally, we note that because the cutoff points for continuous covariates may be arbitrary, level plots are best suited to categorical covariates. Examining Figure A2, we may conclude that the treatment is worse for older patients and for young patients with low weight, as the direction of the treatment effect is reversed there. Moreover, the treatment seems to be even more beneficial for heavier young patients. These interpretations need to be taken with care, as the precision of the estimates is not given and the small sample sizes in some subgroups may lead to highly variable effect estimates.
As a possible improvement, the coloured squares inside each cell can be drawn with areas proportional to the subgroup sample sizes (Figure A2B). This allows comparing subgroup sample sizes more easily. At the same time, it may be difficult to see the colour in each square, particularly in the case of small sample sizes. Perhaps a better way to present the information of the level plot is to use a mosaic plot, as described in the following section.
A3 | Mosaic plot
Mosaic plots are useful for representing contingency tables by arranging proportional-to-size cells in a grid. There are several variations in which this type of plot may be used in subgroup analysis. First, we devise an improvement of the level plot, as in Figure A3A. Although the sample size annotation in each mosaic could easily be added, we omit it here, as the sample sizes are depicted through the area of the mosaics. The interpretation of this plot is similar to that of the level plot presented in Figure A2B.
Mosaic plots offer the advantage that more covariates can be included. In Figure A3B, we use history of cardiovascular events, performance, and bone metastasis to illustrate a mosaic plot with three subgroup-defining covariates. As a drawback, when adding covariates it is no longer possible to show the information on marginal subgroups. Figure A3B allows us to observe that there may be heterogeneity in the treatment effect, as some subgroups have effect estimates in the positive direction while others are in the negative direction. However, the absence of uncertainty measures for the treatment effect estimates prohibits a conclusive interpretation.
A4 | Venn diagram
Venn diagrams are undoubtedly the most widely used tool to visualise sets and their relations. In the subgroup analysis setting, Venn diagrams may be used to display the composition of a dataset. A Venn diagram for subgroups defined by bone metastasis, history of cardiovascular events, and performance is shown in Figure A4A. Each circle defines the subgroup of patients for which the level of the corresponding variable is "yes" or 1. The diagram indicates the sample sizes for all the subsets that are formed by set operations (intersection and complement) on the three subgroup-defining covariates. Figure A4B,C considers Venn diagrams with four and three subgroup-defining covariates, respectively. Both encode the treatment effect in terms of the log-hazard ratio by colouring the corresponding regions. This feature enables the Venn diagram to satisfy criterion C1. The variability of the estimates is not given, and therefore C2 is not met.
As seen in Figure A4B, using four ellipses to represent all possible subgroups (formed through intersection and complement) is visually appropriate. Other shapes (such as polygons56,57) can be used, but the resulting visualisations may not be easy to understand. In our example, we obtain subgroups with small sample sizes when considering the intersections of the four covariates. The white regions indicate that it is not possible to calculate the treatment effect in the corresponding subgroup. An additional rule may be added to this plot to colour only the areas that attain a pre-specified sample size. Figure A4C considers proportional-area methods, where the area of each covariate's representative region is proportional to the respective sample size proportion. The region areas only approximately correspond to the sample size proportions because of the limited degrees of freedom for circles. We employ the simple algorithm mentioned in Reference 58.

F I G U R E A 4 B, Venn diagram of four sets defined by presence of bone metastasis, disease stage, performance rating = 1, and history of cardiovascular events, with treatment effect sizes in terms of the log-hazard ratios. C, Approximate area-proportional Venn diagram of three subgroups defined by presence of bone metastasis, history of cardiovascular events, and performance rating = 1, with treatment effect sizes in terms of the log-hazard ratios
Other algorithms that make each region area proportional to the sample sizes are available. Recently, an algorithm that can produce accurate area-proportional Venn diagrams using ellipses was developed.58 However, the algorithm is somewhat sophisticated and only works on three sets. Venn diagrams are implemented using the VennDiagram R package59 together with the polyclip package.60 For proportional-area Venn diagrams, we further use the sp package61 and the rgeos package.62 Venn diagrams satisfy C3 (sample size) and C4 (intersections) in our assessment. However, as in level and mosaic plots, the encoding is not optimal, and the UpSet plot provides a better alternative. Useful extensions to Venn diagrams, such as Edwards' construction,63,64 are available so that they can accommodate a larger number of covariates. The total number of subgroups, including mutually disjoint groups, can be 2^p, where p is the number of binary covariates considered. Despite this merit, there is a limit on the number of sets that can be considered in practice, and it may become complicated to interpret a Venn diagram with more than five subgroup-defining covariates. Figure A4 shows that the treatment effect is reversed for subjects without bone metastasis when they have previous cardiovascular events or a limitation of activity (performance rating is 1).
A5 | Bar chart
Another graphical technique to depict treatment effect sizes is the bar chart. Bar charts are easy to interpret and allow a direct comparison among subgroups. For the subgroup analysis problem, we use subgroups defined by the level categorisation of the age and weight variables used in the previous examples and consider the difference in restricted mean survival time (RMST) as the treatment effect instead of the hazard ratio. In Figure A5, each covariate is categorised into three levels and the bars represent mutually disjoint subgroups. The levels of age and weight are listed at the top and bottom of the figure, respectively. The height of each bar is proportional to the treatment effect, that is, the difference in RMST between the treatment and control arms, while the width of each bar is proportional to the subgroup sample size. This arrangement has another useful property: the area of each bar is proportional to the restricted mean survival gain or loss in that subgroup when using the experimental treatment in comparison to the control. Different shades of grey are used to show which subgroups share the same category level of age.
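The per-subgroup RMST difference can be computed as the area under each arm's Kaplan-Meier curve up to a horizon tau; the sketch below is a hedged illustration in which the horizon and column names are assumptions.

```r
# RMST difference (treatment minus control) within a subgroup.
library(survival)

km_rmst <- function(time, status, tau) {
  f <- survfit(Surv(time, status) ~ 1)
  tt <- c(0, f$time[f$time <= tau], tau)
  ss <- c(1, f$surv[f$time <= tau])
  sum(diff(tt) * ss)                 # area under the KM step function
}

rmst_diff <- function(sub, tau = 36) {   # tau is an illustrative horizon
  trt <- sub[sub$treat == 1, ]
  ctl <- sub[sub$treat == 0, ]
  km_rmst(trt$time, trt$status, tau) - km_rmst(ctl$time, ctl$status, tau)
}
```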
Based on our assessment, this graphical approach satisfies C3 (sample size) and C4 (intersections), but neither C2 (uncertainty) nor C5 (many covariates). Each bar represents the intersection of two subgroups defined by the respective levels of age and weight. Such a graphical approach does not allow examining heterogeneity in treatment effects across subgroups, as the overall effect size and the variability of the subgroup effect estimates are not shown.
A few noteworthy characteristics should also be mentioned. First, if more covariates are considered, one could label all the level combinations of the covariates at the bottom of the figure or simply provide a legend elsewhere. However, a high number of covariates or levels may be problematic, making it difficult to compare the widths of the bars. Second, as in level plots, the cutoff points for categorising continuous variables may be arbitrary, and categorical covariates are therefore preferred for bar plots.
Although we use a different measure for the treatment effect, the direction of the estimates is consistent with the level plot in Figure A2 and the interpretation remains unchanged.
A6 | L'Abbé plot

L'Abbé plots65 are a variant of scatter plots which are useful for examining heterogeneity in a meta-analysis. The graphic was originally intended for binary outcome data to represent risk ratios, risk differences, or odds ratios between treatment and control. For our implementation, we extend this graphical technique to the case of continuous and survival outcomes and also modify the points to rectangles (Figure A6). The x- and y-coordinates for each subgroup correspond to the estimates of the RMST in the control and treatment arms, respectively. The width and height of a rectangle (corresponding to a subgroup) indicate the sample sizes of the control and treatment arms in the subgroup, respectively. We draw a diagonal dashed line at y = x, which represents no treatment effect (equal RMST in both arms), and a solid diagonal line with its y-intercept at the overall treatment effect size. Each rectangle has a vertical segment from its centre to the diagonal dashed line representing the magnitude of the effect size, that is, the gain (in blue) or loss (in red) in terms of RMST when comparing treatment vs control.
L'Abbé plots satisfy C1 (effect size), C3 (sample sizes), and C5 (many covariates), but they show neither the uncertainty of the treatment effect estimates (C2) nor subgroup intersections (C4). While they can handle many subgroups, it may be difficult to untangle the corresponding rectangles if subgroups have similar effect estimates in the treatment and control arms.
This graphical tool allows us to draw an additional conclusion in our example. The subjects with bone metastasis in the control group have a lower RMST compared with the other subgroups. When receiving the experimental treatment, their RMST is closer to that in the other subgroups.
A7 | Chord diagram
Chord diagrams are widely used to visualise genomic data.66 There are several approaches to these diagrams, although the main aspect is that they allow representing the relationships between pairs of sets. For our example, we use the categorised variables age and weight (Figure A7). The categories of each variable are arranged along the circle, where each corresponding cell has a size proportional to the subgroup sample size and a colour representing the treatment effect estimate in terms of the log-hazard ratio. The ribbons in the centre of the diagram represent the relative overlap between the categories of the variables. Their width corresponds to the proportion of subjects from a subgroup that is also in the subgroup to which the band connects. We implement this graphic using the circlize R package.67

F I G U R E A 6 L'Abbé plot for subgroups defined by performance (pf), stage, history of cardiovascular events (hx), and existence of bone metastasis (bm). Effect sizes are given in terms of the difference in restricted mean survival time (RMST)

The flexibility of this plot is an advantage, since many other implementations may be devised, especially when the number of covariates is extremely large, as when dealing with genomic data (C5). However, while chord diagrams display the effect sizes (C1) and sample sizes (C3), other alternatives might be more effective for the analysis of subgroup treatment effects. The treatment effects for the intersections of subgroups are not displayed (C4), but chord diagrams show the overlap between subgroups, which helps clarify that the subgroups considered are not disjoint. Their main disadvantage is that no uncertainty measures of the treatment effect estimates are displayed (C2). Figure A7 allows us to observe the marginal treatment effects across the subgroups defined by age and weight. Since the direction of the treatment effect changes across the levels of the age covariate, treatment effect heterogeneity may be present. Again, the use of a colour scale and the absence of variability estimates hinder a definite conclusion.
A8 | Coxcomb plot (Nightingale rose)
A Nightingale coxcomb plot68 is a type of radial plot that was introduced in 1858 and is usually recommended as an alternative to pie charts.9 In Figure A8, we arrange the subgroups defined by the categorised age and weight variables along the circle using a combination of a bar plot and polar coordinates with the ggplot2 R package. In this plot, the angles that define each sector are kept fixed, but the radii vary proportionally to the square root of the sample size in each subgroup so that areas are perceived adequately. We colour the areas to encode the information on the treatment effect for each formed subgroup.
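A hedged ggplot2 sketch of this construction is given below, assuming a summary data frame cells with hypothetical columns subgroup, n, and loghr; using sqrt(n) as the bar height makes sector areas roughly proportional to subgroup sample sizes under polar coordinates.

```r
# Coxcomb plot: fixed sector angles, radii proportional to sqrt(n),
# colour encoding the subgroup log-hazard ratio.
library(ggplot2)

ggplot(cells, aes(x = subgroup, y = sqrt(n), fill = loghr)) +
  geom_col(width = 1, colour = "white") +
  coord_polar() +
  scale_fill_gradient2(low = "blue", mid = "white", high = "red",
                       midpoint = 0, name = "log-HR") +
  theme_minimal()
```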
In terms of the assessment, the coxcomb plot displays the same information as level plots, therefore satisfying only C1 (effect size), C3 (sample size), and C4 (intersections).
F I G U R E A 7 Chord diagram for the subgroups formed by age and weight. The colours along the circle represent the treatment effect in terms of the log-hazard ratio. The ribbons that link the subgroups represent their overlap | 2020-03-29T07:15:49.591Z | 2020-03-25T00:00:00.000 | {
"year": 2020,
"sha1": "38d6bb6fc0ff203539a14c6cbc23e28208eb8c46",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1002/pst.2012",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "4a41eb4cbe9b059fc17f90bd0be842fef7da7973",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
220614296 | pes2o/s2orc | v3-fos-license | Ranking uncertainties in atmospheric dispersion modelling following the accidental release of radioactive material
– During the pre-release and early phase of an accidental release of radionuclides into the atmosphere there are few or no measurements, and dispersion models are used to assess the consequences and assist in determining appropriate countermeasures. However, uncertainties are high during this early phase and it is important to characterise these uncertainties and, if possible, include them in any dispersion modelling. In this paper we examine three sources of uncertainty in dispersion modelling: uncertainty in the source term, uncertainty in the meteorological information used to drive the dispersion model, and intrinsic uncertainty within the dispersion model. We also explore the possibility of ranking these uncertainties depending on their impact on the dispersion model outputs.
Introduction
In the event of an accidental release of radionuclides into the atmosphere, dispersion models would be used (in conjunction with dose models) to evaluate the consequences and to assist in determining appropriate countermeasures. In order to model the consequences, the dispersion model requires information about the type, quantity, timing and physical characteristics of the release (referred to as the source term) as well as details of the meteorological conditions during the period of release and transport of pollutants. The accuracy of the dispersion predictions (i.e., the difference between the model-calculated and the observed values of variables like concentration in air or deposition on ground of radioactive material) clearly depends (among other factors) on the accuracy of these inputs (i.e., how close these inputs are to their true values). Increasingly there is also pressure (e.g. from decision makers, the scientific community) to provide information on the uncertainty of the dispersion and deposition forecasts.
There are several sources of uncertainty in the dispersion model prediction, including those related to the source term information and the driving meteorology as well as physical parametrisations and numerical approximations made in the dispersion model. A useful discussion on the types of uncertainties in dispersion models is given by Rao (2005). In this paper we consider uncertainties that are most prevalent in the early phase of a nuclear accident. The sources of uncertainty are not completely independent. For example, the uncertainty in the timing of the release is linked to the variability and the uncertainty in the meteorological information over the same period. However, as the estimation of uncertainties in the three categories is approached differently, it is convenient to examine them separately, and a discussion of each can be found in the next three sections. Some comments on combining uncertainties and their relative importance are given at the end of the paper.
Meteorological uncertainty
Meteorological information for dispersion models is usually obtained from Numerical Weather Prediction (NWP) models as 3D or 4D fields of variables such as wind speed and direction, boundary layer height and precipitation. Most dispersion models used in emergency response take information from a single NWP model. However, the atmosphere is a chaotic system, meaning that small deviations from the initial conditions can grow quickly. Meteorological modellers typically overcome this by running multiple model integrations, where each model integration starts from a perturbed initial state and uses perturbed model physics to represent uncertainty in the atmospheric state and its evolution (Fig. 1) (see Leadbetter et al., 2018 for more details). These are known as "ensemble" models and were first used for weather forecasting in the 1990s. They have been used as inputs to dispersion models in research and post-event analysis since the late 1990s (Straume et al., 1998).
Ensemble meteorological systems are computationally expensive, so an alternative method for generating a meteorological ensemble is through the use of successive forecasts from a single meteorological model. These ensembles are called time-lagged ensembles and can be used as input to a dispersion model ensemble (e.g. Geertsema et al., 2019).
Meteorological ensembles have been developed and improved for many years and now demonstrate a good ability to represent the uncertainty in large-scale atmospheric variables such as the height of the 500 hPa pressure level (Haiden et al., 2016). However, for near-surface variables, such as those of interest to dispersion modellers, many ensembles show a tendency to be under-dispersive. Studies by Haywood (2008) and Girard et al. (2016) show that wind speed, wind direction and precipitation uncertainties are important for surface releases, and a study by Hamburger and Gering (2017) showed that in a stable atmosphere, dispersion model predictions can be very sensitive to small perturbations in atmospheric stability. Furthermore, Descamps et al. (2015) show that Météo-France's and the European Centre for Medium-range Weather Forecasting's global ensemble models are under-dispersive in their prediction of 10 m wind speed and 24-hour precipitation over Europe. Whilst wind speed and precipitation are of interest to weather forecasters, representing the uncertainty in the meteorological parameters important to dispersion models is not prioritised by the developers of meteorological ensembles.
To improve the representation of uncertainty in near-surface variables and over short time periods, many national weather centres have developed limited-area ensembles focussing on uncertainty within the first 48 hours of the forecast (e.g. Tennant, 2015). A study by Flowerdew (2012) demonstrated that these higher-resolution ensembles are more reliable at predicting precipitation than lower-resolution ensembles.
To use the meteorological information provided by NWP models, most dispersion models include a meteorological pre-processor. This pre-processor may be a separate model, or it may be integrated into the dispersion model, and it is typically used to interpolate meteorological information (in space and time) and compute meteorological variables not available in the NWP data set. Uncertainties in the meteorological pre-processor arise from the choice of interpolation scheme and the method of calculation of missing meteorological variables (Andronopoulos et al., 2018).
To mitigate against these additional uncertainties, and to include data from meteorological stations in the period between the last meteorological model run and the dispersion model run, some dispersion models assimilate surface meteorological observations. For example, Davakis et al. (2007) demonstrated improvements in the simulation of the European Tracer release EXperiment (ETEX) when meteorological observations were assimilated.
Source term uncertainties
To correctly model the dispersion of a release of radionuclides, information about source parameters such as the timing and duration of the release, isotopic composition, emitted amount (per radionuclide), physiochemical form and height of the release is required. Source terms are usually estimated using one of two methods, both of which can also be used to estimate the uncertainty:

1. Modelling of the reactor physics and potential failure mechanisms. The source term is estimated using tools that consider reactor physics and knowledge of the initial state of the facility, such as the severe accident code ASTEC (Accident Source Term Evaluation Code) (Chatelard et al., 2014), or more simplified approaches such as tools used in case of emergency. Uncertainties in the reactor physics method can be accounted for by assuming uncertainties on different parts of the process, for example the size and location of a break and the behaviour of iodine in its liquid and gaseous forms. An evaluation of this type was carried out as part of the FASTNET (FAST Nuclear Emergency Tools) project (Chevalier-Jabet, 2019a, 2019b). Results from this project were used in the REM (Radiological Ensemble Modelling) case study (Korsakissok et al., 2020). However, there are uncertainties that cannot be accounted for using this method due to a lack of information or human errors;

2. Coupling dispersion modelling and measurements in the environment. In the early phase under investigation in this project this is not feasible. However, experience from historic events can provide valuable insights into uncertainties by using a coupling approach in which the source term is estimated by combining environmental measurements with dispersion model predictions. This may be done through semi-manual reverse techniques (e.g. Katata et al., 2015) or automatic inverse modelling methods (e.g. Saunier et al., 2013) that use mathematical methods to minimise the discrepancy between dispersion results and radiological measurements (a toy sketch of this step is given below). Uncertainties in this method can be estimated by considering different measurement data and/or by using ensemble dispersion model output. In Korsakissok et al. (2020), nine source terms were selected from the literature to represent uncertainty in the Fukushima accident source term. The source terms were constructed using different meteorological data, different dispersion models and different measurement data.
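The toy sketch referenced above is shown here, assuming a precomputed source-receptor sensitivity matrix M (rows: measurements, columns: release periods) obtained from dispersion model runs and a vector y of radiological measurements; operational methods add regularisation, log-transforms and observation-error weighting on top of this.

```r
# Inverse source-term estimation as a constrained least-squares problem.
library(nnls)           # non-negative least squares

fit <- nnls(M, y)       # release rates are constrained to be non-negative
q_hat <- fit$x          # estimated release per period
```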
Estimation of source term uncertainties and their use in modelling the dispersion of radioactive material is a relatively new area of research, so there are limited examples of its use.
Atmospheric dispersion model uncertainties
The limitations of dispersion models and their input data mean that some processes cannot be resolved explicitly and need to be parameterised. Examples of such processes are turbulent diffusion schemes and wet and dry deposition. A limited number of studies have examined the impact of perturbing these parametrisations on dispersion model predictions (e.g. Leadbetter et al., 2015; Girard et al., 2016). Parameter ranges are determined from literature reviews and/or expert elicitation, and the results are evaluated by comparing model outputs to measurements or other model outputs.
Turbulent diffusion is typically represented by applying deviations to the movement of the plume relative to the mean wind. Different parametrisations are used according to the type of dispersion model: Gaussian, Eulerian or Lagrangian. In a Gaussian model, turbulence is usually represented by the standard deviations of the cross-wind, along-wind and vertical motion, whereas in a Lagrangian or Eulerian model the turbulence is usually represented by a diffusivity parameter, K. The magnitude of these parameters varies according to the meteorological state, for example as a function of the atmospheric stability, and different values may be applied above and below the top of the boundary layer. A full description of the recommended ranges and the methods used to determine them can be found in Bedwell et al. (2018).
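As a simple illustration of how these turbulence parameters enter a Gaussian model, the sketch below implements the standard steady-state Gaussian plume formula with ground reflection; all inputs are hypothetical, and sigma_y and sigma_z would in practice come from a stability-dependent parametrisation.

```r
# Ground-reflected Gaussian plume concentration at a receptor.
# Q: emission rate, u: mean wind speed, H: effective release height,
# y/z: cross-wind and vertical receptor coordinates; units must be
# consistent (e.g., SI throughout).
gaussian_plume <- function(Q, u, y, z, H, sigma_y, sigma_z) {
  (Q / (2 * pi * u * sigma_y * sigma_z)) *
    exp(-y^2 / (2 * sigma_y^2)) *
    (exp(-(z - H)^2 / (2 * sigma_z^2)) + exp(-(z + H)^2 / (2 * sigma_z^2)))
}
```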
Wet and dry deposition are often described by depletion equations, because few off-line dispersion models resolve in-cloud micro-physical properties or the interaction between particles and different surface types (such as buildings and plants). Parametrisations for wet deposition typically consider different types of precipitation (rain or snow or a mixture) and different rain rates. Parametrisations for dry deposition generally consider different particle properties, and some also consider different surface types. Several studies have investigated the impact of wet scavenging coefficients on the predictions of deposits following the Fukushima Dai-ichi nuclear power plant accident. No consensus was reached on the best scavenging coefficients, suggesting that a range of coefficients should be considered as part of an ensemble dispersion model (see, for example, Leadbetter et al., 2015; Quérel et al., 2015).
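A minimal sketch of such a depletion equation is given below, using a scavenging coefficient of the common power-law form lambda = a * I^b (I: rain rate); the constants a and b are illustrative placeholders, since, as noted above, no consensus exists on their values.

```r
# Depletion of air concentration C0 by wet scavenging over a time step dt.
wet_depletion <- function(C0, rain_rate, dt, a = 1e-4, b = 0.8) {
  lambda <- a * rain_rate^b    # scavenging coefficient (per unit time)
  C0 * exp(-lambda * dt)       # exponential depletion
}
```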
Ranking uncertainties
The uncertainties described above could simply be combined into a huge ensemble. However, this is computationally expensive and may not produce useful results on the short timescales needed in the early phase of an incident. It is therefore useful to consider whether some uncertainties are likely to have a greater or lesser impact, and whether some uncertainties will only apply in some situations. Wellings et al. (2018) conducted a literature review to investigate the sensitivity of the dispersion model outputs of air concentration and deposition to uncertainty in the input data and internal parameterisations. They concluded that it was not possible to determine a quantitative ranking of the uncertainties due to the wide range of models, parameters and scenarios presented in the literature. Instead, they grouped uncertainties into seven categories according to the influence their uncertainties might have on modelling results. Parameters in categories 6 and 7 are those that were determined not to be relevant to modelling in the CONFIDENCE project or not to be influential in the modelling output in the studies in the literature review. A description of the other categories and the parameters in them is given in Table 1.
Summary
In this paper the sources of uncertainty when modelling the dispersion of an accidental release of radioactive material are discussed. The uncertainties have been separated into three sources (meteorological data, source term and dispersion model) due to the different methods of assessing each uncertainty. Uncertainties in the meteorological data are usually represented using data from an ensemble of meteorological models. Uncertainties in the source term may be represented by generating an ensemble of source terms, either by considering processes in the accident sequence or by using different meteorological models and radiological observations to calculate the source term using reverse or inverse methods. Uncertainties in dispersion modelling are usually estimated from literature reviews and expert elicitation. All these uncertainties can be propagated into an ensemble dispersion modelling system in order to infer their effect on outputs such as dose, air concentration and deposition. It is also useful to consider the most influential sources of uncertainty so that these can be prioritised in the construction of a dispersion ensemble.
Fig. 1. Schematic of an ensemble meteorological forecast. The red circles and line represent the true state and the bold black member represents the control or "deterministic" forecast.
Table 1. Description of the categories of uncertainty and the parameters within them (after Wellings et al., 2018).

The amount or rate of material released and the wind direction are among the most influential parameters. The amount or rate of material released often relies on measurements that are not available in the early phase of an incident, and work by Astrup and Mikkelsen (2010) has shown that observed wind directions can differ from the NWP wind directions by up to 25° in flat terrain and more in more complex terrain. | 2020-07-02T10:28:31.250Z | 2020-05-01T00:00:00.000 | {
"year": 2020,
"sha1": "cd262acf2d1d140bc1d1a02cceed447081540f2c",
"oa_license": "CCBY",
"oa_url": "https://www.radioprotection.org/articles/radiopro/pdf/2020/02/radiopro200012s.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "027be05498b84026f6f74b6283f66b47ce4ba7e0",
"s2fieldsofstudy": [
"Environmental Science",
"Physics"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
269189718 | pes2o/s2orc | v3-fos-license | The feeding siblings questionnaire (FSQ): Development of a self-report tool for parents with children aged 2 – 5 years
Over the last decade, there have been repeated calls to expand the operationalisation of food parenting practices. The conceptualisation and measurement of these practices has been based primarily on research with parent-child dyads. One unexplored dimension of food parenting pertains to the evaluation of practices specific to feeding siblings. This study describes the development and validation of the Feeding Siblings Questionnaire (FSQ) – a tool designed to measure practices in which siblings are positioned as mediators in parents' attempts to prompt or persuade a child to eat. Item development was guided by a conceptual model derived from mixed-methods research and refined through expert reviews and cognitive interviews. These interviews were conducted in two phases, where parents responded to the questionnaire primarily to test i) the readability and relevance of each item
Background
Food parenting practices can have a profound impact on the development of food preferences and appetitive traits in children (Daniels, 2019). In the literature, food parenting practices have been conceptualised under the three higher-order domains of coercive control, structure, and autonomy support or promotion (Vaughn et al., 2015). Coercive control encompasses practices such as pressure to eat and overt restriction that may inadvertently teach a child to eat in response to external factors, rather than their own hunger and fullness cues (Vaughn et al., 2015). These differ from practices within the other two domains, which largely encompass responsive feeding, whereby parents encourage eating through role modelling and structured mealtime routines in a manner that fosters the child's capacity for appetite self-regulation (Vaughn et al., 2015). These practices therefore serve as modifiable behaviours that can shape what and how children eat from a young age, which can have implications for health outcomes including diet quality and weight status (Hernandez et al., 2024; Paul et al., 2018).
The capacity to understand the full implications of food parenting practices relies on the valid and reliable measurement of these constructs in research. To date, the conceptualisation and measurement of food parenting practices in general has been based on research with parent-child dyads. While this research has served the field well, current measurement tools are yet to comprehensively capture the broad scope of food parenting practices used with children from birth to 5 years of age (Heller & Mobley, 2019). One dimension of food parenting yet to be assessed relates to practices specific to feeding siblings. This gap in the literature persists despite population-based data in many countries, including Australia, indicating that most parents have two or more children (Australian Bureau of Statistics, 2022). Therefore, new methods are needed to expand the scope of existing measurement tools beyond the parent-child dyad.
Despite recognition of their role in child development and adjustment across a range of disciplinary perspectives (Whiteman, McHale, & Soli, 2011), siblings are often overlooked in child feeding research. However, in early childhood, siblings are typically present at mealtimes in the home (Moding & Fries, 2020). To better understand how children are socialised around food from a young age, future research must therefore position siblings as fundamental constituents of the family unit. This idea aligns with principles of family systems theory, which posits that the family is a complex, interrelated system that must be understood as a whole, rather than as individual components (e.g., parent-child dyads) alone (Broderick, 1993).
Emerging studies in Australia, Europe, and North America have explored similarities and differences in food parenting practices used with siblings by comparing their scores on questionnaire subscales originally developed for use in parent-child dyads (Ayre, Harris, White, & Byrne, 2023; Ayre, Harris, White, & Byrne, 2022; Kininmonth et al., 2023; Ruggiero et al., 2022a; Ruggiero et al., 2023a; Ruggiero et al., 2022; Vollmer, 2022). In general, these studies have found that parents may adapt their use of pressure to eat and overt restriction in response to differences in sibling characteristics, such as their weight status and eating behaviours (Ayre et al., 2022). However, the questionnaires used in these studies are unable to capture nuanced interactions between a parent and child in which a sibling is also involved. For example, recent qualitative research revealed that parents may motivate a child to eat by leveraging the competitive nature of their sibling relationship, or overtly reward a child for eating with the intention of vicariously conditioning their sibling's behaviour (Ayre, Harris, White, & Byrne, 2023). However, without operationalising these practices, little can be understood about their implications for child dietary, weight, and health outcomes.
The current study aimed to develop and test a theoretically and empirically informed measure to assess food parenting practices involving siblings. Such methods can contribute toward valid and reliable evidence to inform the prioritisation of intervention targets and the assessment of intervention effects for promoting responsive feeding in families. Practical research outcomes, such as contributing to healthier diets in children, are in turn dependent on the rigorous conceptualisation and measurement of these practices. While the current study developed and tested this measure in the Australian context, psychometric testing is ideally conducted iteratively over multiple timepoints and samples (Boateng, Neilands, Frongillo, Melgar-Quiñonez, & Young, 2018), providing the opportunity for researchers to modify and expand its application to diverse socioeconomic and cultural contexts in the future.
Participants and procedure
Recruitment and data collection were conducted between August and December 2022, with methods detailed elsewhere (Ayre, Harris, White, & Byrne, 2023). Briefly, digital advertisements were shared via social media (including paid advertising), childcare centres, and emailing lists. Participants included parents with two or more children aged 2-5 years living in Australia. Parents self-enrolled into the online study via REDCap (Research Electronic Data Capture) (P. A. Harris et al., 2019; P. A. Harris et al., 2009), hosted by Queensland University of Technology, and were screened for eligibility criteria at recruitment. Eligible parents were aged 18 years or older and able to read English. Their two children were also born as healthy, term infants (>35 weeks) and living with them full-time. If parents had more than two children within this age range, they were asked to respond to items with reference to their two eldest children. The study comprised two phases: cognitive interviews (n = 5) and a survey (n = 330). While parents who completed an interview were not directly invited to participate in the survey, it is possible that some parents completed both phases. Upon completion of the survey, parents were also invited to partake in a repeated survey two weeks later, which was accessible via a link emailed to them through REDCap.
All procedures were approved by the Queensland University of Technology Human Research Ethics Committee (reference number: 5900). Informed consent was obtained from all parents. To acknowledge their contributions, parents who participated in the cognitive interviews were offered an AU$20 gift voucher, and parents who completed the survey were offered entry into a prize draw to receive one of three AU$200 gift vouchers. An additional prize draw entry was offered to parents who completed the repeated survey. The Checklist for Measure Development and Validation Manuscripts (Holmbeck & Devine, 2009) was used to guide the conduct of the study (see Supplementary Table S1), while the STROBE-nut checklist (Lachat et al., 2016) was used to guide the reporting (see Supplementary Table S2).
Item sources
The Feeding Siblings Questionnaire (FSQ) was designed using systematic methods that correspond with five of the six components of instrument development outlined by Vaughn, Tabak, Bryant, and Ward (2013). These components include: i) conceptualisation of the instrument aims, ii) systematic development of the item pool, iii) refinement of the item pool, iv) validity testing (i.e., factorial and construct validity), and v) reliability testing (i.e., internal consistency and test-retest reliability) (see Fig. 1). The questionnaire is targeted toward parents with children aged 2-5 years. It aims to measure practices in which siblings are positioned as mediators in parents' attempts to prompt or persuade their child to eat. An initial 38-item pool was developed by the first author (SA) using guidelines outlined by DeVellis and Thorpe (2021) to ensure that items were short, simple, and unambiguous. The items align with five hypothesised constructs that were extrapolated from formative mealtime observation and semi-structured interview data (Ayre, White, et al., 2023). The constructs describe mealtime interactions such as leveraging sibling competitiveness, engaging siblings as active intermediaries, threatening to share food with siblings, modelling sibling behaviour, and vicarious operant conditioning. Definitions for these constructs are outlined in Table 1.
Expert review
The items and associated constructs were independently reviewed by four academics with expertise in food parenting practices and/or scale development, in addition to three authors (RB, MW, and HH). Reviewers provided feedback on the relevance of each item to its associated construct, the readability of each item, and the overall content and structure of the questionnaire. The review resulted in the removal of 11 items, revision of 20 items, and addition of 1 item. Items were removed primarily due to ambiguity (n = 4), repetition (n = 3), complexity (n = 2), use of emotive language (n = 1), and measurement of a different construct (n = 1). Furthermore, items were revised to increase specificity (n = 9), include examples (n = 8), and simplify wording (n = 3). The additional item was an example of parents engaging a sibling as an active intermediary when praising their other child to eat: 'I get [Child A] to agree with me when I praise [Child B] for eating (e.g., "[Child B] is eating well tonight, isn't he/she, [Child A]?").'
Cognitive interviews
Online cognitive interviews were conducted by the first author (SA) using Zoom videoconferencing software (Zoom Video Communication, 2020). These interviews were undertaken with five parents, including four mothers and one father, of whom three were born in Australia and two were born overseas. Although the sample size was small due to feasibility constraints, a sample comprising as few as five participants is considered appropriate for cognitive interviews, since these methods form a preliminary phase in questionnaire development and testing (Peterson, Peterson, & Powell, 2017). The interview aims and protocols were modified across two stages of interviewing (see Supplementary Tables S3 and S4). Cognitive interviewing is underpinned by a model introduced by Tourangeau (1984). According to this model, participants engage in four cognitive stages while completing a questionnaire: comprehension, retrieval, judgment, and response selection, with each stage constituting a potential source of error (Tourangeau, 1984). Therefore, verbal probes were designed to target each stage of cognition based on recommendations in the literature (Peterson et al., 2017; Willis & Artino, 2013).
In the first stage of interviews, parents (n = 3) provided concurrent feedback on the readability and relevance of items as they completed the questionnaire. During these interviews, the 'think aloud' method was used, whereby parents were asked to verbalise their thoughts as they read and responded to each item (Peterson et al., 2017). In the second stage of interviews, parents (n = 2) were asked to complete the questionnaire uninterrupted at their own pace, whilst noting items that they perceived as difficult to comprehend or respond to with the designated options. Upon completion, parents provided retrospective feedback, with a primary focus on the overall feasibility of the questionnaire. Verbal responses from all interviews were consolidated into a document and reviewed by the authors (SA, RB, MW, and HH), resulting in the revision of 11 items. Items were revised to increase the relevance and applicability of the examples provided (n = 6), reduce repetition (n = 3), remove emotive language (n = 1), and capture additional dimensions of a construct (n = 1).
Survey
The revised 28-item questionnaire was administered in the current survey.

Table 1
Examples of parent-sibling interactions identified from mealtime observation and semi-structured interview data (Ayre, White, et al., 2023).

Feeding Practices and Structure Questionnaire (FPSQ)
Subscales of the Feeding Practices and Structure Questionnaire (FPSQ) (Jansen, Williams, Mallan, Nicholson, & Daniels, 2016) were used to measure food parenting practices for each sibling. This questionnaire has been validated in samples of Australian mothers (Jansen et al., 2016; Jansen, Harris, Mallan, Daniels, & Thorpe, 2018) and fathers (Jansen et al., 2018). The subscales included four non-responsive (i.e., coercive control) practices: persuasive feeding (6 items, e.g., 'Do you say something to show your disapproval of [Child] for not eating?'; α = 0.84 and α = 0.86 for earlier and later-born children, respectively), reward for eating (4 items, e.g., 'When [Child] refuses food he/she usually eats, do you encourage him/her to eat by offering a food reward (e.g., dessert)?'; α = 0.86 and α = 0.91 for earlier and later-born children, respectively), reward for behaviour (4 items, e.g., 'I reward [Child] with something to eat when he/she is well behaved'; α = 0.80 and α = 0.85 for earlier and later-born children, respectively), and overt restriction (4 items, e.g., 'I have to be sure that [Child] does not eat too many sweet foods (e.g., lollies, ice-cream, cake, pastries)'; α = 0.72 and α = 0.78 for earlier and later-born children, respectively). Two structure-related (i.e., responsive) practices were also measured, including structured meal timing (3 items, e.g., 'I decide when it is time for [Child] to have a snack'; α = 0.65 and α = 0.69 for earlier and later-born children, respectively) and structured meal setting (3 items, e.g., 'How often are you firm about where [Child] should eat?'; α = 0.78 and α = 0.81 for earlier and later-born children, respectively). In addition, a single-item indicator measured family meal settings ('[Child] eats the same meals as the rest of the family'). All items were scored on a 5-point Likert scale (e.g., ranging from 'never' to 'always' or 'disagree' to 'agree'), with higher scores indicating greater endorsement of the practice (Jansen et al., 2016). The items were averaged to determine mean subscale scores for each child. Difference scores were then calculated for each sibling pair based on the absolute differences between these scores.
Children's Eating Behaviour Questionnaire (CEBQ)
Four subscales of the Children's Eating Behaviour Questionnaire (CEBQ) (Wardle, Guthrie, Sanderson, & Rapoport, 2001) were used to measure eating behaviours in siblings. This questionnaire has been validated in a comparable sample of Australian mothers (Mallan et al., 2013). The subscales included food fussiness (6 items, e.g., '[Child] decides that he/she doesn't like a food, even without tasting it'; α = 0.94 and α = 0.93 for earlier and later-born children, respectively), slowness in eating (4 items, e.g., '[Child] eats slowly'; α = 0.74 and α = 0.82 for earlier and later-born children, respectively), satiety responsiveness (5 items, e.g., '[Child] cannot eat a meal if he/she has had a snack just before'; α = 0.75 and α = 0.81 for earlier and later-born children, respectively), and food responsiveness (5 items, e.g., 'If allowed to, [Child] would eat too much'; α = 0.76 and α = 0.84 for earlier and later-born children, respectively). Items were scored on a 5-point Likert scale ranging from 'never' to 'always', with higher scores indicating more frequent observation of the behaviour. Consistent with the methods outlined above, mean subscale scores were then calculated for each child, along with difference scores for each sibling pair.
Sociodemographic and anthropometric measures
Sociodemographic data included parental age, gender, ethnicity, education, employment status, marital status, and number of children in the household. Postcodes were also collected to determine Index of Relative Socioeconomic Advantage and Disadvantage (IRSAD) scores (Australian Bureau of Statistics, 2018). For children, data were collected on age and gender, in addition to weight and height. Child body mass index z-scores (BMIzs) were calculated from parent-reported data using SAS Version 9.4 software (SAS Institute Inc., 2020). An accompanying program, developed according to the Centers for Disease Control and Prevention (CDC) age- and sex-adjusted growth charts (Centers for Disease Control and Prevention, 2022; Kuczmarski et al., 2002), was employed for this purpose. Due to the reliance on parental reports and the absence of additional anthropometric measurements to validate the data, it was necessary to identify and exclude biologically implausible values (BIVs). Consistent with recommendations in the literature (Freedman et al., 2015, 2016), BIVs were identified based on cut-off points of < −4 and > 8 for the modified BMIzs integrated into the program.
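As an illustration of this screening step, the sketch below implements a plain LMS z-score and the BIV cut-offs in Python. The L, M, and S values shown are hypothetical placeholders (real values come from the CDC sex- and age-specific reference tables), and the CDC program applies the cut-offs to a modified BMIz that behaves differently in the extreme upper tail, so this is a simplified sketch rather than a reproduction of the SAS program.

```python
import math

import pandas as pd

def lms_zscore(value: float, L: float, M: float, S: float) -> float:
    """Box-Cox (LMS) z-score of the kind used by the CDC growth charts."""
    if L == 0:
        return math.log(value / M) / S  # limiting case as L -> 0
    return ((value / M) ** L - 1) / (L * S)

def flag_biv(bmiz: pd.Series, lower: float = -4.0, upper: float = 8.0) -> pd.Series:
    """Flag biologically implausible values using the < -4 / > 8 cut-offs."""
    return (bmiz < lower) | (bmiz > upper)

# Placeholder LMS parameters; real ones depend on each child's sex and age.
# Note: with a plain LMS z and negative L, very high BMIs saturate below the
# upper cut-off, which is exactly why the CDC's modified BMIz is used instead.
children = pd.DataFrame({
    "bmi": [16.2, 5.0],
    "L": [-1.8, -1.8],
    "M": [15.9, 15.9],
    "S": [0.08, 0.08],
})
children["bmiz"] = [
    lms_zscore(r.bmi, r.L, r.M, r.S) for r in children.itertuples()
]
children["biv"] = flag_biv(children["bmiz"])
print(children)  # the implausibly low second row is flagged for exclusion
```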
Data analysis
Data analysis was conducted in SPSS (Statistical Package for the Social Sciences) Version 27.0.1.0 software (IBM Corp, 2020). Available baseline data from 359 participants were screened; overall, 2 (0.6%) participants were excluded due to incomplete responses (each with 50% of FSQ items missing) and 27 (7.5%) participants were excluded due to invalid responses, resulting in a total sample of 330 participants at baseline. Descriptive statistics were performed on this sample and compared with the subsample of participants who completed the repeated survey at two weeks (n = 133, 40.3%). The validity of the item used to determine the framing of children's names within the FSQ was assessed by comparing the mean subscale scores on the CEBQ for siblings who were rated as the "better" eater versus those who were not. In line with the profile of a 'non-fussy' eater identified in a latent profile analysis (Tharner et al., 2014), it was expected that children perceived as the "better" eater would score lower on the food avoidant subscales (food fussiness, slowness in eating, and satiety responsiveness) and higher on the food approach subscale (food responsiveness).
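A minimal pandas sketch of the screening step described above follows. The column names (`fsq_1` to `fsq_28`, `valid_response`) are hypothetical, and the paper does not specify how invalid responses were flagged, so that rule is shown only as a placeholder.

```python
import pandas as pd

def screen_baseline(df: pd.DataFrame, fsq_cols: list[str]) -> pd.DataFrame:
    """Drop participants with incomplete or invalid FSQ responses."""
    # Exclude participants missing 50% or more of the FSQ items
    missing_frac = df[fsq_cols].isna().mean(axis=1)
    complete = df[missing_frac < 0.5]

    # Exclude participants flagged as invalid (placeholder rule; the paper
    # does not describe the exact criterion that was applied)
    return complete[complete["valid_response"]]

# fsq_cols = [f"fsq_{i}" for i in range(1, 29)]   # hypothetical field names
# baseline = screen_baseline(raw_data, fsq_cols)  # expect n = 330
```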
Factorial validity
Preliminary assessment of the FSQ items revealed no deviations from the assumption of linearity. However, univariate and multivariate outliers were detected via assessment of box plots and Mahalanobis distances, respectively. While the univariate skewness and kurtosis coefficients were within an acceptable range (Curran, West, & Finch, 1996), Mardia's multivariate skewness and kurtosis coefficients were statistically significant (p < 0.001), indicating non-compliance with the assumption of normality (Mardia, 1970). Therefore, exploratory factor analysis (EFA) was performed on the items using principal axis factoring with oblique rotation (direct oblimin). Using the POLYMAT-C program (Lorenzo-Seva & Ferrando, 2015), a polychoric correlation matrix formed the basis of the EFA due to its increased robustness with ordinal and non-normally distributed variables (Watkins, 2018). Communalities were estimated using squared multiple correlations. The factorability of the data was confirmed via assessment of the correlation matrix, Bartlett's test of sphericity, and the Kaiser-Meyer-Olkin (KMO) statistic (Tabachnick & Fidell, 2018).
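The authors ran these checks in SPSS with the POLYMAT-C program; as a rough open-source analogue, the factor_analyzer package in Python exposes the same factorability statistics. Note that factor_analyzer computes them from a Pearson rather than a polychoric correlation matrix, so the numbers would differ from those reported; this is a sketch of the workflow, not a reproduction.

```python
import pandas as pd
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

def factorability_checks(items: pd.DataFrame) -> None:
    """Print Bartlett's test and KMO statistics for an item-level data frame."""
    chi_square, p_value = calculate_bartlett_sphericity(items)
    kmo_per_item, kmo_total = calculate_kmo(items)
    print(f"Bartlett: chi2 = {chi_square:.1f}, p = {p_value:.4f}")
    print(f"KMO (overall) = {kmo_total:.3f}")
    print(f"KMO (minimum per item) = {kmo_per_item.min():.3f}")

# factorability_checks(baseline[fsq_cols])  # the paper reports KMO = 0.938
```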
In line with methods outlined by Watkins (2018), the number of factors retained in subsequent analyses was determined via consultation of the eigenvalues, scree plot, parallel analysis estimates, and minimum average partials (MAPs). Pattern coefficients ≥0.32 were considered salient (Tabachnick & Fidell, 2018). Criteria for assessing factor adequacy included the loading of ≥3 items with salient pattern coefficients, an acceptable internal consistency estimate, and convergence with the hypothesised factor structure (Watkins, 2018). In favour of a simple solution, items that complicated the factor structure (i.e., loaded inadequately onto all factors or cross-loaded onto multiple factors with a loading difference <0.20) were deleted one at a time, until each item retained in the model loaded saliently onto one factor only (Howard, 2016; Watkins, 2018).
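Parallel analysis, one of the retention criteria listed above, can be sketched with numpy alone: eigenvalues of the observed correlation matrix are compared against the mean eigenvalues of correlation matrices computed from random data of the same shape, and factors are retained while the observed eigenvalue exceeds the random one. This is a generic Horn-style implementation, not the specific routine used by the authors.

```python
import numpy as np

def parallel_analysis(data: np.ndarray, n_sims: int = 100, seed: int = 0) -> int:
    """Number of factors suggested by Horn's parallel analysis."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]

    sims = np.empty((n_sims, p))
    for i in range(n_sims):
        noise = rng.standard_normal((n, p))
        sims[i] = np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False))[::-1]

    # Retain leading factors whose observed eigenvalue beats the random mean
    keep = obs > sims.mean(axis=0)
    return int(p if keep.all() else keep.argmin())

# parallel_analysis(baseline[fsq_cols].to_numpy())  # suggested 3 in the paper
```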
Internal consistency and test re-test reliability
For each factor, internal consistency was estimated using Cronbach's alpha coefficients, with a value of ≥0.70 considered acceptable (Johnson, 2018). Subscale scores were calculated by averaging the scores for items that loaded onto the same factor. As estimates of two-week test re-test reliability, intraclass correlation coefficients (ICCs) were calculated for the subscale scores based on a single-measure, absolute-agreement, two-way mixed-effects model. ICCs ≥0.50, ≥0.75, and ≥0.90 were indicative of moderate, good, and excellent test re-test reliability, respectively (Koo & Li, 2016).
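Both reliability statistics have compact implementations, sketched below: Cronbach's alpha from its variance-ratio definition, and the ICC via the pingouin package, whose intraclass_corr function reports single-measure absolute-agreement estimates among its output rows (the ICC2 row corresponds to the two-way model used here; for absolute agreement, the mixed- and random-effects computations coincide). The long-format column names are hypothetical.

```python
import pandas as pd
import pingouin as pg

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a (respondents x items) data frame."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def test_retest_icc(scores: pd.DataFrame) -> pd.DataFrame:
    """ICC table for subscale scores measured at baseline and two weeks.

    `scores` is long format with hypothetical columns: participant id
    ('pid'), timepoint ('time'), and the subscale score ('score').
    """
    return pg.intraclass_corr(
        data=scores, targets="pid", raters="time", ratings="score"
    )

# alpha = cronbach_alpha(baseline[subscale_item_cols])  # expect 0.84-0.92
```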
Construct validity
As the FSQ describes instances in which parents feed siblings differently (i.e., one child is used as a mediator in parents' attempts to prompt or persuade the other child to eat), comparing this scale to other relevant measures of sibling discordance enables assessment of its convergent construct validity. In the absence of other measures that target parent-sibling triads, sibling difference scores on the FPSQ and CEBQ subscales were used as a proxy, with larger scores indicating that one child scored comparatively higher on that subscale than their sibling.
Spearman's correlation coefficients were first used to examine how FSQ subscale scores were correlated with FPSQ subscale difference scores. It was predicted that higher scores on each FSQ subscale would be correlated with larger difference scores for coercive control (i.e., persuasive feeding, reward for eating, reward for behaviour, overt restriction) and structure-related food provision (i.e., family meal settings, structured meal timing, and structured meal setting). Second, independent samples t-tests were used to compare FSQ subscale scores between sibling pairs who were discordant and non-discordant on each CEBQ subscale. In line with methods reported elsewhere (Ayre, Harris, White, & Byrne, 2023; Kininmonth et al., 2023), sibling pairs were defined as discordant if they had a difference score ≥1 standard deviation of the mean difference score for that subscale. It was predicted that parents of sibling pairs discordant on the food fussiness, slowness in eating, satiety responsiveness, and food responsiveness subscales would have higher scores on each FSQ subscale compared to parents of non-discordant sibling pairs.
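A sketch of these two construct-validity analyses follows: Spearman correlations between FSQ subscale scores and FPSQ sibling difference scores, and independent-samples t-tests comparing FSQ scores between discordant and non-discordant pairs. Column names are hypothetical, and the discordance threshold below reads the definition above as "at least one standard deviation above the mean difference score", which is one plausible interpretation of the wording.

```python
import pandas as pd
from scipy.stats import spearmanr, ttest_ind

def discordance_flag(diff: pd.Series) -> pd.Series:
    """Discordant = difference score >= mean + 1 SD (per the definition above)."""
    return diff >= diff.mean() + diff.std(ddof=1)

def validity_tests(df: pd.DataFrame, fsq_col: str, diff_col: str) -> None:
    # Convergent validity: Spearman correlation with subscale difference scores
    rho, p_rho = spearmanr(df[fsq_col], df[diff_col])
    print(f"rho = {rho:.3f}, p = {p_rho:.3f}")

    # Group comparison: discordant vs. non-discordant sibling pairs
    disc = discordance_flag(df[diff_col])
    t, p_t = ttest_ind(df.loc[disc, fsq_col], df.loc[~disc, fsq_col])
    print(f"t = {t:.2f}, p = {p_t:.3f}")

# validity_tests(data, "fsq_competitiveness", "fpsq_persuasive_diff")  # hypothetical
```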
Results
The characteristics of the parent-sibling triads who completed the survey are reported in Tables 2 and 3. The survey was repeated by 40.3% of parents after a median of 2 weeks (range: 2-9 weeks). The proportion of parents who repeated the survey differed between education groups (χ²(1) = 5.78, p = 0.016). The highest proportion was evident among parents who had completed a university degree (44.1%), and the lowest proportion was evident among parents who had completed Year 12 or below (17.6%). No other differences were observed in sociodemographic characteristics between the samples at baseline and two weeks.
In response to the single item, most parents (89.4%) were capable of differentiating their children based on their eating behaviours, with more than half (57.3%) of these parents rating their later-born child as the "better" eater. Relative to their sibling, the child who was perceived as the "better" eater scored, on average, lower for food fussiness (t(294) = −16.91, p < 0.001), slowness in eating (t(294) = −6.61, p < 0.001), and satiety responsiveness (t(294) = −10.67, p < 0.001), and higher for food responsiveness (t(294) = 3.37, p < 0.001) (see Table 4). However, based on their effect sizes, only the differences in food fussiness and satiety responsiveness were considered practically significant (Cohen's d = 0.98 and 0.62, respectively; see Table 4) (Ferguson, 2009). Differences between mean subscale scores for the eating behaviours were not significant for sibling pairs for whom parents could not differentiate on this basis (ps ≥ 0.114; results not reported).
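The degrees of freedom (294, for 295 differentiated pairs) suggest paired comparisons between each "better" eater and their sibling. A sketch of one such comparison is given below; the effect-size variant shown (mean difference divided by the standard deviation of the differences) is a common choice for paired designs, though the paper does not state which variant of Cohen's d it used.

```python
import numpy as np
from scipy.stats import ttest_rel

def paired_comparison(better: np.ndarray, sibling: np.ndarray) -> None:
    """Paired t-test and effect size for 'better' eater vs. sibling scores."""
    t, p = ttest_rel(better, sibling)
    diff = better - sibling
    d = diff.mean() / diff.std(ddof=1)  # one common d for paired designs
    print(f"t({len(diff) - 1}) = {t:.2f}, p = {p:.4f}, d = {d:.2f}")

# paired_comparison(fussiness_better, fussiness_sibling)  # hypothetical arrays
```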
Factorial validity
Preliminary examination of the correlation matrix for the FSQ revealed that Items 17 and 24 were highly correlated (r = 0.830) and, as such, the determinant of the correlation matrix indicated potential multicollinearity (<0.00001) (Tabachnick & Fidell, 2018). To minimise these effects, Item 17 was excluded from the analysis, as this item had stronger correlations with other variables. It was hypothesised that a 5-factor solution would fit the data; however, the eigenvalues and MAPs both indicated that four factors should be retained, while parallel analysis indicated that only three factors were required. In addition, the scree plot demonstrated two points of inflexion that justified the retention of either four or six factors. Therefore, 6-, 5-, 4-, and 3-factor solutions were sequentially examined.
The 6-factor solution was inadequate, with only two items loading onto the sixth factor. All other solutions were adequate; however, the 5- and 3-factor solutions had fewer items that cross-loaded onto multiple factors. Due to its higher convergence with the theoretically driven and hypothesised factor structure, the 5-factor solution was accepted as the final model. To simplify the model, Item 15 was first deleted as it failed to load onto any factor with a salient pattern coefficient. Item 2 was then deleted as it cross-loaded onto Factors 2 (−0.333) and 4 (0.362) with the highest loading ratio. After the deletion of this variable, Item 1 failed to load onto any factor with a salient pattern coefficient; therefore, this item was also deleted. Finally, Items 12 and 19, which cross-loaded onto Factors 2 (−0.379) and 4 (0.327), and Factors 1 (0.365) and 5 (−0.439), respectively, were deleted in this order (highest to lowest loading ratio).
After five iterations, the final 22-item, 5-factor model explained 72% of the total variance (after rotation) and was deemed to reflect the following constructs: sibling competitiveness, active sibling influence, threatening unequal division of food, sibling role modelling, and vicarious operant conditioning. Table 5 includes the descriptive statistics, factor loadings, and communalities for all items. Participant scores for each item ranged from 1 to 5. All items had salient factor loadings (pattern coefficients ≥0.388) and a reasonable proportion of variance within each item was explained by the factor on which it loaded (h² ≥ 0.608). The inter-factor correlations ranged from rs = 0.351 to 0.698 (see Table 6). Bartlett's test was significant, χ²(231) = 6450, p < 0.001, confirming sufficient intercorrelation between the items for EFA (Bartlett, 1954). Sampling adequacy for factor analysis was also evidenced by a KMO statistic of 0.938 (with values for each item ≥0.871) (Kaiser, 1974).
Internal consistency and test re-test reliability
The internal consistency and test re-test reliability estimates for the five subscales are reported in Table 7. Cronbach's alpha coefficients were acceptable, ranging from 0.84 to 0.92. ICCs ranged from 0.76 to 0.88 (ps < 0.001), indicating good to excellent test-retest reliability.
Construct validity
Table 8 presents correlations between the FSQ subscale scores and FPSQ subscale difference scores for siblings. Construct validity testing revealed that scores for four of the five FSQ subscales were significantly correlated with mean difference scores on at least two FPSQ subscales. However, the strength of these correlations was often small (see Table 8). For example, positive correlations were observed for sibling competitiveness with differences in persuasive feeding (r = 0.114, p = 0.039), reward for eating (r = 0.160, p = 0.004), and reward for behaviour (r = 0.201, p < 0.001). Similarly, positive correlations were evident for sibling role modelling and differences in persuasive feeding (r = 0.134, p = 0.015), reward for eating (r = 0.229, p < 0.001), and reward for behaviour (r = 0.217, p < 0.001), in addition to differences in overt restriction (r = 0.113, p = 0.040) and family meal settings (r = 0.120, p = 0.030). Parents who differed more in the extent to which they used reward for eating and reward for behaviour also scored higher for threatening unequal division of food (r = 0.133, p = 0.015 and r = 0.138, p = 0.012, respectively) and vicarious operant conditioning (r = 0.197, p ≤ 0.0001 and r = 0.264, p < 0.001, respectively). No significant correlations were found for differences in structured meal timing and structured meal settings. Although not reported in the main text, differences in FSQ subscale scores between discordance groups are provided in Supplementary Table S5 for completeness.

Table 9 presents differences in FSQ subscale scores between sibling pairs discordant and non-discordant on each CEBQ subscale. Construct validity testing revealed at least one significant difference between the groups across all five FSQ subscales. On average, parents scored higher for threatening unequal division of food if their children were discordant on any of the four eating behaviours, including food fussiness (t(328) = 2.17, p = 0.030), slowness in eating (t(184) = 4.06, p < 0.001), satiety responsiveness (t(137) = 2.15, p = 0.033), and food responsiveness (t(328) = 2.98, p = 0.003). In addition, parents scored higher for sibling role modelling if their children were discordant on food fussiness (t(328) = 3.65, p < 0.001) and slowness in eating (t(328) = 2.66, p = 0.008). Differences in sibling competitiveness (t(328) = 3.02, p = 0.003), active sibling influence (t(328) = 1.99, p = 0.048), and vicarious operant conditioning (t(194) = 2.88, p = 0.004) were only evident between sibling pairs discordant and non-discordant on slowness in eating, with higher scores observed in the discordant group. However, the effect sizes were generally small, meaning that these results may not represent practically significant differences between the groups (see Table 9) (Ferguson, 2009). Correlations between the FSQ subscale scores and CEBQ subscale difference scores for siblings were not reported in the main text; these results are provided in Supplementary Table S6 for completeness. The final questionnaire is available as supplementary material (see Supplementary Table S7).
Discussion
This study aimed to develop an instrument to measure food parenting practices with siblings in early childhood, and to provide an initial assessment of its validity and reliability in an Australian sample of siblings aged 2-5 years. A 22-item, 5-factor structure was determined through a systematic process of questionnaire development and refinement, using a compilation of methods. Results from the psychometric analyses indicate that the Feeding Siblings Questionnaire (FSQ) may be considered a robust and parsimonious measure for examining mealtime interactions beyond those confined to the parent-child dyad. To the authors' knowledge, this instrument was also the first to measure food parenting practices with two children comparatively within single items, thus expanding the scope of existing parent-report tools (Vaughn et al., 2013).

Table 5 notes: FSQ, Feeding Siblings Questionnaire; SD, standard deviation. Salient pattern coefficients (≥0.32) are bolded to indicate the primary factor on which the item loads. h² refers to the communality value. a Child A refers to the name of the child nominated as the "better" eater and Child B refers to their sibling (alternatively, if parents could not differentiate their children based on their eating behaviours, Child A refers to the name of the earlier-born child and Child B refers to their sibling). b Item excluded to minimise multicollinearity. c Item excluded due to low loading (pattern coefficients <0.32) on all factors. d Item excluded due to cross-loading (pattern coefficients ≥0.32) on multiple factors.
The framing of the children's names in the FSQ was determined based on responses to a single item, which should be used in conjunction with the 22 items. In previous research, the use of single-item indicators to assess parents' perceptions of their child's eating behaviours (e.g., 'Do you think your child is a picky or fussy eater?') has demonstrated, to some extent, validity in predicting differences in child behavioural, dietary, and anthropometric measures (Byrne, Jansen, & Daniels, 2017; Carruth, Ziegler, Gordon, & Barr, 2004; Jacobi, Agras, Bryson, & Hammer, 2003). However, the current study was novel in that parents were asked to differentiate one sibling as the "better" eater, thereby forming a broad assessment of how parents evaluate these behaviours. Although parents were prompted to consider behaviours such as food refusal when responding to this item, the child who was regarded as the "better" eater typically scored lower than their sibling not only for food fussiness, but also for slowness in eating and satiety responsiveness, and higher for food responsiveness. This finding is consistent with a latent profile analysis of eating behaviours in 4-year-old children, where the profile of a 'non-fussy' eater was characterised by lower scores for food fussiness, slowness in eating, and satiety responsiveness, and higher scores for food responsiveness and enjoyment of food (Tharner et al., 2014). Therefore, the current study provides validation for the use of this item.
Responses to the single item also highlighted discrepancies between community and public health concerns, whereby children perceived as the "better" eater tended to have lower responsiveness to internal appetite cues and heightened sensitivity to external food cues. Although these are adaptive traits from an evolutionary point of view, they may increase the risk of overweight and obesity in the modern environment, which is dominated no longer by food scarcity but by abundance (Kininmonth et al., 2021). In addition, the "better" eater tended to exhibit lower food fussiness. Despite this perception among parents, food fussiness is regarded as a developmentally normal and transient trait in toddlers (Cardona Cano et al., 2015). As concerns about fussiness can motivate parental use of persuasive feeding strategies (Burnett, Russell, Lacy, Worsley, & Spence, 2023; H. A. Harris, Jansen, Mallan, Daniels, & Thorpe, 2018), there is a need to normalise food avoidant behaviours, not as acts of deviance, but as developmentally appropriate expressions of appetite and food preferences, in order to minimise undue stress and the use of counterproductive food parenting practices in families (Walton, Kuczynski, Haycraft, Breen, & Haines, 2017). Moreover, the perception of a "better" eater was not related to other child characteristics such as age. Behaviours like food fussiness tend to be overt in nature; parents may therefore be more attuned to differences in these types of characteristics (Ayre, Harris, White, & Byrne, 2023).
Preliminary validity and reliability of the FSQ
The FSQ comprised five factors that measured practices reflecting sibling competitiveness, active sibling influence, threatening unequal division of food, sibling role modelling, and vicarious operant conditioning. While other factor solutions also demonstrated adequate fit to the data, the final model was consistent with constructs extrapolated from formative mealtime observations and interviews in a comparable sample (Ayre, White, et al., 2023). Hence, the factors directly correspond with the constructs defined in Table 1. However, one exception is Factor 2 (active sibling influence), which expands on the original definition of the construct. Items 7 and 11, originally conceptualised as sibling role modelling practices (Factor 4), loaded more saliently onto Factor 2. These items describe practices in which parents actively ask or direct their child to role model desired eating behaviours. Thus, the revised factor encompasses practices in which siblings exert influence, whether it be verbal (e.g., praise, encouragement) or non-verbal (e.g., role modelling). A second exception is Factor 5 (vicarious operant conditioning), which originally included the use of both tangible (e.g., dessert) and non-tangible (e.g., social praise) rewards. However, with Item 21 loading more saliently onto Factor 4 (sibling role modelling), the focus of Factor 5 was narrowed to include tangible rewards only. Although the factors were conceptually distinct, two items were observed to cross-load onto Factors 2 (active sibling influence) and 4 (sibling role modelling). There was also a relatively strong correlation between these subscales, suggesting that future research should test whether sibling role modelling may be conceptualised more suitably as a subtype of active sibling influence.
The resulting scale describes food parenting practices in which siblings are positioned as mediators in parents' attempts to prompt or persuade their child to eat. Similarities in parents' motivations for applying these practices are indicated in the positive correlations between the subscales. However, as shown in Fig. 2, the five subscales are presumed to fall under different domains of food parenting practices as described by Vaughn et al. (2015), including coercive control, structure, and autonomy support or promotion, with one subscale spanning across two domains. Therefore, this study not only contributes a novel tool for measuring food parenting practices in the field, but also expands on the broader conceptualisation of this construct in the literature.
Comparing responses on the FSQ with existing measures of food parenting practices and child eating behaviours provided some evidence of its construct validity. The findings partially support the hypothesis that parents would score higher on the FSQ subscales if there were greater differences in their use of coercive control (i.e., persuasive feeding, reward for eating, reward for behaviour, and overt restriction) and structure-related food practices (i.e., family meal settings, structured meal timing, structured meal settings) between siblings. For example, parents used sibling competitiveness and sibling role modelling practices more often when they differed more in their use of persuasive and instrumental (i.e., reward for eating and behaviour) feeding for each child. This finding may demonstrate an overall attempt by parents to prompt or persuade one child to eat.
In addition, sibling role modelling was used more often by parents when they differed more in their use of overt restriction and family meal settings (i.e., providing family foods). It is plausible, for example, that when a child refuses to eat, parents may provide them with alternative foods, but continue to reinforce expected or preferred behaviours using their sibling as a positive role model. This example aligns with other research indicating that many parents are reluctant to cater to individual preferences, yet often resort to this practice when faced with food refusal (Fraser, Markides, Barrett, & Laws, 2021). In the current study, greater use of vicarious operant conditioning and threatening unequal division of food was also reported when parents differed more in their use of instrumental feeding. It is possible, for example, that differences in instrumental feeding are the direct result of vicarious operant conditioning. For instance, parents may reward the behaviour of one sibling in an attempt to teach the other child that by eating, they too can obtain the same reward (Ayre, White, et al., 2023).
In contrast to other food parenting practices, no significant correlations were observed between the FSQ subscales and differences in structured meal timing and structured meal settings. This finding may be partly due to parents in the current sample reporting, on average, a relatively high degree of mealtime structure for both children (mean subscale scores ≥3.43). However, this finding also serves as evidence of discriminant validity, as greater differences in these subscale scores indicate that siblings tend to eat separately more often, with fewer opportunities for parents to directly or indirectly use sibling dynamics to influence child eating behaviours.
Differences in the FSQ subscale scores were also observed in relation to child eating behaviours, including food fussiness, slowness in eating, satiety responsiveness, and food responsiveness. In the current sample, parents of siblings discordant on any of the four eating behaviours threatened to give their child's food away more often, compared to parents of non-discordant siblings (determined by a higher score on threatening unequal division of food). By using this practice, parents were favouring their "better" eater by offering the food to them. This observation aligns with the notion that threatening to serve a child's unwanted food to their sibling can serve as motivation for that child to eat (Ayre, White, et al., 2023). Research demonstrates that young children are often willing to incur costs to avoid being at a perceived disadvantage to others (Sheskin, Bloom, & Wynn, 2014). However, the effectiveness of this practice may also rely on the willingness of their sibling to eat the food. Therefore, there may be increased opportunity and motive for parents to implement this practice if siblings are discordant in their eating behaviours (e.g., if one child is comparatively slower at eating).
In the current study, parents of siblings discordant on food fussiness and slowness in eating used sibling role modelling more frequently (i.e., by modelling the behaviour of the "better" eater), compared to parents of non-discordant siblings. Social learning, which involves observing and imitating others, is a widely recognised process through which children are socialised around food in early childhood (Bandura, 1977). Siblings, particularly if older in age, can serve as prominent role models for children during mealtimes (Ayre, White, et al., 2023; Ruggiero, Moore, & Savage, 2023). Therefore, parents may leverage this dynamic by directing attention toward the sibling who is eating, when the other is not, as a source of modelling and reinforcement (Ayre, White, et al., 2023). Parents of siblings discordant on slowness in eating also used other practices, including sibling competitiveness, active sibling influence, and vicarious operant conditioning, more often compared to parents of siblings non-discordant on this behaviour. With multiple children, discordance in eating pace may serve as a constraint on coordinating mealtimes in the context of competing schedules and routines (Brannen, O'Connell, & Mooney, 2013). Hence, in waiting for a child to finish eating, parents may implement various practices in an attempt to execute the mealtime more efficiently, whilst keeping their sibling engaged (Ayre, White, et al., 2023). Contrary to this finding, however, research shows that children may consume more fruits and vegetables when mealtimes are longer in duration (Dallacker, Knobl, Hertwig, & Mata, 2023).
Implications for research and practice
Within the last decade, there have been repeated calls to expand the current operationalisation of food parenting practices in the literature (de Lauzon-Guillain et al., 2012; Heller & Mobley, 2019; Vaughn et al., 2013). In their seminal paper, Vaughn et al. (2015) invited researchers in this field to continue refining their conceptualisation of food parenting practices, toward establishing a comprehensive and cohesive model. The FSQ, which has been mapped onto this model, has found preliminary support in the current sample as a valid and reliable measure of food parenting practices with siblings. In this sample, there was a skewed distribution of the subscale scores, with most parents implementing these practices relatively infrequently. However, due to their nature, these practices may also indicate broader dimensions of family functioning, for example, the presence of favouritism or bias towards a particular child. Therefore, it is necessary to explore these practices within the context of family system processes, such as sibling comparison and differentiation (McHale, Updegraff, & Whiteman, 2012). This area of research may be particularly relevant in families where one child lives with obesity, considering that parents are often encouraged to adopt a whole-of-family approach to treatment. In addition to further psychometric testing of the FSQ, research is needed to examine how these practices may be implicated in the trajectories of eating behaviours and growth among children over time, in addition to their overall development and adjustment. If relevant, this knowledge can then be integrated into public health guidelines and interventions on responsive feeding. For example, if the practices are associated with increased overweight and obesity risk in children, they could serve as potentially relevant intervention targets and outcome measures in responsive feeding interventions.
Limitations
There are several limitations to note. Firstly, the proposed factor structure could not be verified using confirmatory factor analysis (CFA) due to limitations in sample size. To the authors' knowledge, there are also no existing measures that capture parents' responses for two children within a single item to offer direct comparison with the FSQ. Therefore, construct validity of the proposed factor structure was assessed using sibling difference scores on the FPSQ and CEBQ. It is recognised, however, that instrument evaluation is a systematic process that requires multiple points of data collection (Boateng et al., 2018; Vaughn et al., 2013). Further psychometric testing is therefore needed before the instrument can be effectively used in practice. Testing should include, but is not limited to, verification of the factor structure using CFA and assessment of criterion validity using comparisons with observational data (Vaughn et al., 2013). Exploring how these subscales relate to food parenting practice and child eating behaviour scores for each child separately also provides a different angle through which construct validity may be assessed. As proposed by Vaughn et al. (2013), the sixth component of instrument development, responsiveness testing, should also be undertaken to ascertain the extent to which the FSQ can detect changes in food parenting practices, and to inform subsequent power and sample size calculations.
Another limitation was that the sample was relatively homogenous, with a large proportion of participants identifying as female, Australian, university educated, and married. University-educated participants were also more likely to complete the repeated survey at two weeks compared to participants with lower educational attainment; therefore, estimates of test re-test reliability were subject to attrition bias. Hence, it is necessary to test the applicability of the instrument in more diverse samples. There was also a risk of response bias due to parents self-reporting their data. For example, responses may have been affected by the ordering of the children's names within the FPSQ and CEBQ, with items referring to the earlier-born child always listed first. Additionally, parents may have been sensitive to social desirability bias, particularly when asked to disclose partiality towards one child. Finally, while the FSQ demonstrated validity and reliability in the current sample, its scope is limited in that it focuses only on two children, and it captures behaviours potentially relevant only to parents of siblings who differ in their eating behaviours.
Conclusion
This study describes the systematic development and testing of the FSQ, a potentially robust and parsimonious measure of food parenting practices with siblings. In the current sample, a 22-item, 5-factor structure demonstrated adequate fit and provided an interpretable solution that mapped onto constructs identified in mealtime observation and interview data. The instrument was reliable and provided some evidence of construct validity. While its factor structure should be verified using CFA in a different sample, the FSQ offers a novel tool for assessing, monitoring, and evaluating feeding interactions with siblings beyond those confined to the parent-child dyad.
Fig. 1.
Fig. 1. Flowchart for the development and psychometric testing of the Feeding Siblings Questionnaire (FSQ).
Fig. 2.
Fig. 2. Sibling-specific food parenting practices mapped onto the conceptual model by Vaughn et al. (2015). a The figure represents a modified and simplified version of the original conceptual model. b Factor identified within the current study describing a sibling-specific food parenting practice.
Table 2
Sociodemographic characteristics of the parents participating in the online survey at baseline (n = 330) and two weeks (n = 133).
IRSAD, Index of Relative Socioeconomic Advantage and Disadvantage; IQR, interquartile range. a In Sample 1, 12 (3.6%) parents had missing or invalid data on residential state and IRSAD score; 7 (2.1%) parents had missing data on marital status; 5 (1.5%) parents had missing data on gender and work and study status; and 4 (1.2%) parents had missing data on cultural identity, indigenous status, education, marital status, and number of children. b In Sample 2, 3 (2.3%) parents had missing data on marital status; and 1 (0.8%) parent had missing or invalid data on residential state and IRSAD score. c Calculated based on the 2016 Statistical Area Level 1 (SA1).
Table 3
Demographic and anthropometric characteristics of siblings reported by parents participating in the online survey at baseline (n = 330).
Table 3 notes: a Determined based on the age gap between siblings. b For earlier-born siblings, data were missing for 19 …

Table 4 notes: CEBQ, Children's Eating Behaviour Questionnaire; CI, confidence interval; FF, food fussiness; FR, food responsiveness; SD, standard deviation; SE, slowness in eating; SR, satiety responsiveness. a Single item: 'At this point in time, which of your children would you generally consider to be the "better" eater?' b Excludes parents who were unable to differentiate siblings based on their eating behaviours (n = 35).
Example FSQ items (Table 5):
- 'I ask [Child A] to convince [Child B] that he/she will like the food (e.g., "Tell [Child B] how yummy the sauce is").'
- '… [Child A] about ways to convince [Child B] to eat (e.g., "How can we get [Child B] to eat his/her dinner tonight?").'
- 'I use [Child A] as a positive example when encouraging [Child B] to eat (e.g., "Look, [Child A] is eating up all his/her vegetables!").'
- 'I look to [Child A] for backup when trying to encourage [Child B] to eat (e.g., "It's delicious, isn't it [Child A]?").'
- 'When [Child A] eats most of his/her meal, I reward him/her with something other than food (e.g., sticker, toy, screen time) to try and convince [Child B] to also eat more.'
Table 7
Cronbach's alpha coefficients for subscale scores at baseline (n = 330) and intraclass correlation coefficients (ICCs) between the subscale scores at baseline and two weeks (n = 133) for the Feeding Siblings Questionnaire (FSQ).
Table 8
Spearman's correlations between the Feeding Siblings Questionnaire (FSQ) subscale scores and Feeding Practices and Structure Questionnaire (FPSQ) subscale difference scores for siblings (n = 330).
Table 9
Independent samples t-tests comparing subscale scores on the Feeding Siblings Questionnaire (FSQ) between sibling pairs who were discordant and non-discordant on the Children's Eating Behaviour Questionnaire (CEBQ) subscales (n = 330). Discordant sibling pairs were defined as those with a difference score >1 standard deviation of the mean difference score for that particular subscale.
| 2024-04-18T13:20:28.680Z | 2024-04-01T00:00:00.000 | {
"year": 2024,
"sha1": "eebc0f21ec77634b6e5c47eba1a0ec8047da8563",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.appet.2024.107363",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "253d7eabf12bc3497808a0f79d2ee85a453d78a4",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55397102 | pes2o/s2orc | v3-fos-license | Experimental Investigation of Sandstone under Cyclic Loading: Damage Assessment Using Ultrasonic Wave Velocities and Changes in Elastic Modulus
This laboratory study investigated the damage evolution of sandstone specimens under two types of cyclic loading by monitoring and analyzing changes in the elastic moduli and the ultrasonic velocities during loading. During low-level cyclic loading, the stiffness degradation method was unable to describe the damage accumulations but the ultrasonic velocity measurements clearly reflected the damage development. A crack density parameter is introduced in order to interpret the changes in the tangential modulus and the ultrasonic velocities. The results show the following. (1) Low-level cyclic loading enhanced the anisotropy of the cracks. This results from the compression of intergranular clay minerals and fatigue failure. (2) Irreversible damage accumulations during cyclic loading with an increasing upper stress limit are the consequence of brittle failure in the sandstone’s microstructure.
Introduction
The accumulated damage from dynamic (cyclic) loading is highly detrimental in many engineered structures like mine openings [1-4], petroleum and natural gas boreholes [5], tunnels [6, 7], foundations [8], and underground chambers [9, 10]. The sources of cyclic or repetitive loading can be roughly divided into two types: (1) periodic operations, including drilling, blasting, and mining; and (2) sporadic vibrations, including earthquakes and traffic loads. Repetitive loading-unloading opens and closes the micropores and microcracks within the rock and induces the growth of cracks. Consequently, the accumulated damage can become a potential trigger for the rapid and violent failure of large-scale engineered structures.
Because this kind of damage is the cumulative destruction of bonds in the rock's microstructure, it cannot be directly measured by macroscopic-scale tests [11]. The most common techniques and analytical methods used to investigate rock damage are acoustic emissions (AE), the damage energy dissipation method, and the stiffness variation method [12-18]. Acoustic emissions can be used as a highly sensitive detector for the microseismic events and energy release that accompany the initiation and growth of cracks in geomaterials. However, the AE technique has an inherent limitation in that it cannot quantify the damage to the material, because the seismic events and energy do not directly reflect the ratio of the structurally damaged portion to the intact portion. For this reason, AE studies mainly focus on qualitative analyses such as where the damage is taking place.
Xie et al. [19] originally attempted to use the rock energy dissipation theory under cyclic loading conditions to quantify damage. They proposed the damage variable defined as

D = U_d / U*  (1)

where U_d is the dissipated damage strain energy during the loading-unloading cycle and U* represents an energy dissipation per unit volume. The two parameters U_d and U* are related to the mechanical properties of the rock. According to the data presented by Xie et al. [19] and Liu et al. [16], the damage variables estimated by (1) agreed with their experimental results. Nevertheless, it should be pointed out that the values of the two key parameters, U_d and U*, are highly dependent on the numbers returned by the stiffness variation calculation.
The stiffness variation method is quite useful for describing dynamic damage in rocks. It is based on the strain equivalence hypothesis proposed by Lemaitre [20]; the damage variable is expressed as

D = 1 − E′/E  (2)

where E is the elastic modulus of the undamaged material and E′ is the elastic modulus of the damaged material. This method quantifies the changes in the degree of damage by measuring variations in the properties that are the consequence of temporal changes in the rock's microstructure. Therefore, to reveal the underlying mechanisms of damage initiation and development, an appropriate monitoring technique is required. One such technique, ultrasonic velocity testing, is attractive because it can provide spatial and temporal information on the inner structure of the rock. The stiffness variation method is superior to AE and damage energy dissipation for two notable reasons: (1) the damage variable (D) can be directly quantified from the two elastic moduli and (2) the moduli are directly measurable. This study investigates the evolution of damage in sandstone under two types of cyclic loading by monitoring both the variations in the elastic modulus and the ultrasonic responses. We adopted the stiffness variation method for damage assessment and compared it with the results from dynamic ultrasonic velocity measurements. Finally, the influences of two different types of cyclic loading on the microstructure of the rock and the subsequent damage are discussed.
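As a worked illustration of equation (2), the snippet below computes the damage variable from a measured modulus and an undamaged reference modulus. The example numbers echo values reported later in the paper (the 7.41 GPa average elastic modulus and the 9.92 GPa maximum tangential modulus used as the intact reference), but the function itself is a generic sketch rather than the authors' code.

```python
def damage_variable(e_damaged: float, e_intact: float) -> float:
    """Lemaitre damage variable D = 1 - E'/E, per equation (2).

    Smaller (even negative) values can occur when loading stiffens the
    specimen through crack closure, producing the descending trends the
    paper later describes for its Figure 7.
    """
    return 1.0 - e_damaged / e_intact

# Example: average elastic modulus (7.41 GPa) against the 9.92 GPa reference
print(damage_variable(7.41, 9.92))  # ~0.25
```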
Mechanical Behavior of Rock Subjected to Cyclic Loading.
Rock is a type of natural material characterized by heterogeneity. Its intrinsic physical and mechanical properties can be determined indirectly by performing cyclic loading tests. The stress-strain behavior of rock under cyclic loading can provide useful information for theoretical analysis and for numerical calculations for rock engineering purposes such as estimating long-term stability, creep behavior, and response to fatigue [21, 22].
As an effective way to quantify rock deformation, uniaxial and triaxial cyclic loading tests on laboratory-scale rock samples have been used extensively by numerous researchers. Costin and Holcomb [23] suggested that stress corrosion is a time-dependent mechanism that is most sensitive to the mean stress level, whereas cyclic fatigue is most sensitive to the amplitude of the stress cycles. Tao and Mo [24] proposed that the total deformation from cyclic loading consists of initial deformation (induced by static loading), creep deformation, and damage deformation (produced by the cyclic loading itself). Martin and Chandler [25] used a repetitive loading-unloading test to investigate the progressive failure of Lac du Bonnet granite, and their results describe the influence of crack damage on crack damage stress and crack-initiation stress. Ray et al. [26] found that failure strength increased with an increase in strain rate and, furthermore, they observed an abrupt increase in strength at a strain rate of 2.5/s. Jafari et al. [27] evaluated the effects of cyclic shearing on the strength of rock joints and found that the increase in shear strength results from an increase in confining pressure. By combining the plasticity theory and the self-organization theory of cellular automata, Feng et al. [28] developed an elastoplastic cellular automaton model to numerically investigate the influence of cyclic loading on the complete stress-strain curve and on AE emission from a rock specimen under uniaxial compression. The results indicated that their numerical simulation reproduced some of the well-known phenomena observed by other researchers. Liu and He [9] performed a series of laboratory tests to assess the effects of confining pressure on the mechanical properties and fatigue damage evolution of sandstone samples subjected to cyclic loading. They found that, with an increase in the number of cycles, the samples gradually became plastic and irreversible deformation occurred along both the axial and lateral directions. Khaledi et al. [29] proposed an elastoviscoplastic creep model to predict the stress-strain relationships around a rock salt cavern during cyclic loading. The constitutive model Khaledi et al. developed in that paper combined three existing models, with some modifications, so as to retain the positive features of each for the specific purpose of their investigation. This allowed their new model to be applied in different simulations with different types of loading conditions as well as different time scales.
A pronounced feature of rocks subjected to cyclic loading is fatigue. It is known that fatigue in rocks is influenced by a number of factors, for example, the confining stress, the loading rate, the loading amplitude (the maximum stress), the type and frequency of the loading cycle, and the number of cycles [10, 30]. Bagde and Petroš [31] demonstrated that the quartz content, texture, and microstructure of a rock have a huge influence on its fatigue strength. Xiao et al. [32] conducted a laboratory-scale investigation of the fatigue behaviors of granite under cyclic loading. They found that determining initial fatigue damage was vital in order to establish a unified critical damage parameter. Furthermore, their results also indicated that, in most cases, the loading-induced damage was continuously amplified. By comparing the results from conventional fatigue tests with those from interval fatigue tests, Fan et al. [33] demonstrated that combined cyclic stresses can significantly influence the fatigue response of rock salt.
Ultrasonic Wave Velocities in Rock.
Extensive laboratory measurements of ultrasonic wave velocities in rock samples have demonstrated that elastic wave velocities can provide valuable information about the rock's internal structure. Ultrasonic velocity in a rock sample is largely dependent on the water content, density, composition, and boundary conditions [34]. Gupta [35] measured the P- and S-wave velocities along three mutually perpendicular directions in a limestone cube under uniaxial compression and found that, prior to failure, both P- and S-wave velocities decrease in all three directions but by different amounts. The P- to S-wave velocity ratio remained nearly constant along the loading direction, decreased slightly along the direction parallel to the shear plane, and dropped considerably along the direction perpendicular to the shear plane. By performing uniaxial cyclic loading tests on granite specimens, Rao and Ramana [12] monitored the variations in the compressional wave velocity in the direction perpendicular to the applied stress. They observed a steady rise in the compressional wave velocity with an increase in the load of up to 30% of the compressive strength. However, when the rock was loaded again to 80% of its compressive strength, the compressional wave velocity fell rapidly, indicating the development of microcracks. Stanchits et al. [36] found that P-wave velocity in basalt was about 3 km/s at atmospheric pressure but increased by more than 50% when the hydrostatic pressure was increased to 120 MPa. In their granite samples, the initial P-wave velocity was 5 km/s but increased by less than 20% under increased pressure. Stanchits et al. proposed that the pressure-induced changes of elastic wave speed indicated dominantly compliant low-aspect-ratio pores in both the basalt and the granite. Xiao et al. [32] proposed a mathematical model to quantify damage evolution in granite under cyclic loading and observed an obvious three-stage behavior reflecting the evolution of fatigue damage.
Given its high sensitivity to the inner defects in rock materials, the ultrasonic velocity technique can be used to accurately and quantitatively monitor the changes in rock fractures. This makes it possible to observe the mechanical changes in the rock. Cyclic loading tests typically produce continuous initiation and closing of cracks, and these cracks cannot be adequately monitored by conventional methods; so far, very few results from the application of the ultrasonic velocity technique to cyclic loading tests have been reported. Therefore, this work employed the ultrasonic velocity technique to investigate changes in the inner structure of rock specimens during cyclic loading.
Experimental Work
The sandstone blocks from which cores were drilled for this study were collected from an open-pit mine located in Kittanning, Pennsylvania. Dry cylindrical cores were drilled from the same large sandstone block and prepared as 50 mm diameter by 100 mm long specimens. Both ends of the specimens were ground and polished according to the ISRM standards [37] before the tests were run; prepared specimens are shown in Figure 1. In order to eliminate the influence of water, the specimens were oven-dried until their weight remained constant. Uniaxial compressive strength (UCS) and cyclic loading tests were conducted using a hydraulic servo-testing machine with a single loading rate of 50 N/s for all the experiments. The machine had a load frame of stiffness 6000 kN/mm and a compression load capacity of 1000 kN with a resolution of 0.5%. Its stroke was ±50 mm with a resolution of 1 μm. Commercially available piezoelectric transducers with a 200 kHz main frequency were used for both transmitting and receiving the ultrasonic signals. As illustrated in Figure 2, one piezoelectric transducer was placed between the top face of the specimen and the upper, fixed platen of the hydraulic testing machine; the receiver was inserted between the bottom face of the specimen and the lower (loading) platen. The acquisition rate was set to 20 MHz and honey was used as a couplant. Before each test, the transducers were placed against each other to determine the face-to-face arrival time, so that the ultrasonic velocity could be properly determined by subtracting the face-to-face arrival time from the arrival times measured during the test. During the cyclic loading tests, the ultrasonic signals were transmitted and recorded at 4 MPa intervals.
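The velocity correction described above (subtracting the transducer face-to-face arrival time) reduces to a one-line calculation; the sketch below applies it, with illustrative numbers chosen only to show the order of magnitude, not taken from the paper's data.

```python
def ultrasonic_velocity(length_m: float, arrival_s: float,
                        face_to_face_s: float) -> float:
    """P-wave velocity through a specimen of known length.

    arrival_s is the arrival time measured during the test;
    face_to_face_s is the system delay measured with the transducers
    pressed against each other before the test.
    """
    travel_time = arrival_s - face_to_face_s
    return length_m / travel_time

# Illustrative values only: a 100 mm specimen, 42 us measured arrival,
# 2 us face-to-face delay -> 2500 m/s
print(ultrasonic_velocity(0.100, 42e-6, 2e-6))
```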
In order to obtain the mechanical properties of the sandstone specimens, three preliminary UCS tests were conducted. Based on those UCS data, the elastic limit and compressive strength for these specimens were determined. Then two types of cyclic loading tests were carried out, Type 1 and Type 2. In Type 1 tests, the loading was cycled between zero and a prescribed stress. This type of test was designed to investigate the effect of low-level cyclic stress on specimen damage. Because the elastic limit is widely recognized as a threshold beyond which the mechanical damage increases rapidly [16, 38-40], the prescribed stress was 16 MPa lower than the elastic limit. With this 16 MPa margin, the possibility of rapid damage resulting from specimen-to-specimen variations in the elastic limit could be avoided. The loading rate was 50 N/s; the slow rate made each experiment fairly time-consuming, so only four cycles were conducted. For Type 2 tests, the prescribed stress was progressively increased from one cycle to the next until the specimen ruptured. This test focused on damage in a high-rate damage environment; therefore, the initial prescribed stress for cyclic loading was set 4 MPa higher than the elastic limit and increased by 4 MPa per cycle. Three replicate tests were conducted for each test type.
Experimental Results and Discussion
The stress-strain curves for three sandstone specimens tested under uniaxial loading are shown in Figure 3. As expected, the stress-strain curves can be divided into four stages: consolidation/compaction, elastic deformation, plastic deformation, and failure/post-failure. The overall mechanical behavior of the sandstone specimens follows that of classic brittle materials. The mechanical properties for the three specimens are listed in Table 1; the average peak strength, elastic modulus, and elastic limit are 55.51 MPa, 7.41 GPa, and 47.14 MPa, respectively.
Cyclic Stress-Strain Curves and Stiffness.
Based on the specimens' mechanical properties, we conducted the Type 1 and Type 2 cyclic loading experiments. Typical stress-strain curves for the two types of cyclic loading tests are shown in Figure 4. The figure shows that the hysteresis loops become narrower as the number of cycles increases. The essence of the loading-unloading cycle is energy conversion in the tested rock specimen. Specifically, the area under the loading curve represents the external work absorbed by the rock and the area under the unloading curve represents the elastic strain energy released [19]. Therefore, the area of the hysteresis loop is the energy dissipated by damage development and plastic deformation. In Figure 5, the first cycle of Figure 4(a) is shown alone to illustrate how dissipated energy and released strain energy are calculated: the grey portion represents the strain energy released during the unloading phase and the white portion represents the dissipated energy. Tables 2 and 3 list the dissipated energy for the tested specimens under Type 1 and Type 2 conditions. It is clear that the dissipated energy decreases as the cycle number increases for the Type 1 tests but remains relatively constant for the Type 2 tests. The decrease in energy for the Type 1 tests can be attributed to the plastic closure of preexisting pores and cracks under cyclic loading below the specimen's elastic limit, which results in a continuous reduction of open pores and cracks. The closure of pores and cracks is also reflected by an increase in the specimens' stiffness, as shown in Table 4, where the average increase in Young's modulus for the three samples is 37.5%. As for the Type 2 tests, the relatively stable dissipated energy is the result of the irreversible damage produced after the elastic limit is exceeded. In Table 5, the 11.4% average decrease in Young's modulus also reflects the irreversible damage. The tangential moduli calculated from Figure 4 are plotted versus axial stress in Figure 6. It is clear that the tangential moduli generally increase with increasing stress, but as the cycle number increases, the moduli for the Type 1 and Type 2 tests show opposite trends. For the Type 1 tests, as the cycle number increases, the tangential modulus for each loading cycle also increases, and an obvious reduction can be seen in the tangential modulus for each unloading cycle. In contrast, both the loading and unloading tangential moduli for the Type 2 tests decrease at higher cycle numbers. In addition, the loading-cycle tangential moduli for specimens tested under Type 2 conditions show a dramatic decline when the sample is near failure. This is caused by the rapid development of irreversible damage.
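As a worked illustration of the energy bookkeeping described above, the sketch below integrates the loading and unloading stress-strain branches numerically. The branches are synthetic placeholders, not measured data, and the trapezoid rule stands in for whatever integration scheme the authors actually used.

```python
# Sketch of the hysteresis-loop energy bookkeeping illustrated in Figure 5.
# The stress-strain branches below are synthetic placeholders; real data
# would come from the servo-testing machine log. Energies are per unit volume.
import numpy as np

def trapezoid(y: np.ndarray, x: np.ndarray) -> float:
    """Trapezoidal integral of y(x), written out to avoid numpy version drift."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

strain_load = np.linspace(0.0, 0.004, 50)                    # loading branch
stress_load = 55e6 * (strain_load / 0.004) ** 1.2            # Pa, concave shape
strain_unload = np.linspace(0.004, 0.0005, 50)               # unloading branch
stress_unload = 55e6 * ((strain_unload - 0.0005) / 0.0035) ** 1.5

work_absorbed = trapezoid(stress_load, strain_load)          # area under loading curve
energy_released = -trapezoid(stress_unload, strain_unload)   # area under unloading curve
dissipated = work_absorbed - energy_released                 # hysteresis-loop area

print(f"absorbed {work_absorbed:.3e}, released {energy_released:.3e}, "
      f"dissipated {dissipated:.3e} J/m^3")
```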
In order to further investigate the damage caused by cyclic loading, equation (2) is used to calculate the damage variable for each loading cycle of the Type 1 and Type 2 tests. For these calculations, the maximum loading tangential modulus from the cyclic loading tests is selected as the elastic modulus for intact sandstone. This is 9.92 GPa, measured at the maximum stress point of Cycle 1 of a Type 2 cyclic loading test. As illustrated in Figure 7, the preexisting defects (the pores and cracks formed during geological events and specimen preparation) in the specimens are deemed the initial damage. Increasing stress gradually closes preexisting defects, resulting in a generally descending trend for the damage variable, but at higher stresses during the Type 2 tests, the damage variable increases, implying that the high stresses have caused cracks to initiate and propagate. Figure 7 also shows that, with increasing cycle number, the progression of the damage variable for the Type 1 and Type 2 tests differs. For the Type 1 tests, the damage variable consistently decreases as the number of cycles increases, implying that compaction is taking place. On the other hand, the damage variable for the Type 2 tests clearly increases at higher cycle numbers, indicating continuous damage accumulation.
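Equation (2) itself is not reproduced in this excerpt; a common stiffness-degradation definition, D = 1 − E/E₀, is consistent with the description above (E is the loading tangential modulus of a given cycle, E₀ = 9.92 GPa the intact modulus). The sketch below assumes that form and uses placeholder per-cycle moduli.

```python
# Stiffness-degradation damage variable, assuming the common form D = 1 - E/E0.
# E0 = 9.92 GPa is taken from the text; the per-cycle moduli are placeholders.
E0_GPA = 9.92  # maximum loading tangential modulus, treated as intact stiffness

def damage_variable(tangential_modulus_gpa: float) -> float:
    """D = 0 for intact rock, D -> 1 as stiffness is lost; D < 0 signals
    compaction beyond the reference state (closing preexisting pores/cracks)."""
    return 1.0 - tangential_modulus_gpa / E0_GPA

# Hypothetical per-cycle loading moduli for a Type 2 test (GPa):
for cycle, E in enumerate([9.92, 9.4, 8.6, 7.1], start=1):
    print(f"Cycle {cycle}: D = {damage_variable(E):.3f}")
```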
Responses of Ultrasonic Wave Velocities to Cyclic Loading.
During each cyclic loading test, the P- and S-wave signals, at ultrasonic frequencies of hundreds of kilohertz, were acquired using the ultrasonic transducers shown in Figure 2(b). Figures 8 and 9 show graphs of the ultrasonic velocities for the Type 1 and Type 2 tests plotted against stress. Similar to the tangential moduli shown in Figure 6, both the P- and S-wave velocities generally rise with increasing axial stress; however, the wave signals do show some different responses to the number of test cycles.
Figure 8 demonstrates that as the cycle number increases, the P-wave velocity gradually decreases but the S-wave velocity remains relatively constant. This implies that cyclic loading below the elastic limit does cause damage, but the form of the damage leads to differences in the velocities. The dynamic elastic modulus $E_0$ and the shear modulus $G$ can be expressed as [41]

$$E_0 = \frac{\rho v_s^2 \left(3 v_p^2 - 4 v_s^2\right)}{v_p^2 - v_s^2}, \qquad G = \rho v_s^2,$$

where $\rho$ is the density, $v_p$ is the P-wave velocity, and $v_s$ is the S-wave velocity. Therefore, the decreasing P-wave velocity and the steady S-wave velocity indicate that the damage in the specimen reduced the dynamic Young's modulus but had only a slight influence on the dynamic shear modulus. The decrease in dynamic Young's modulus is contrary to the stiffening trend during loading shown in Figure 6(a) but consistent with the softening trend during unloading in Figure 6(b).
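A quick numeric check of these relations (the standard isotropic elastodynamic forms, cited as [41] in the original) can be scripted as below; the density and velocities are illustrative sandstone-like values, not measurements from the paper.

```python
# Dynamic moduli from wave velocities, using the standard isotropic relations
# quoted above. Density and velocities are illustrative values, not data.
def dynamic_moduli(rho: float, v_p: float, v_s: float) -> tuple[float, float]:
    """Return (E_dynamic, G) in Pa for density rho [kg/m^3], velocities [m/s]."""
    G = rho * v_s**2
    E = rho * v_s**2 * (3 * v_p**2 - 4 * v_s**2) / (v_p**2 - v_s**2)
    return E, G

E, G = dynamic_moduli(rho=2400.0, v_p=3000.0, v_s=1800.0)
print(f"E = {E / 1e9:.2f} GPa, G = {G / 1e9:.2f} GPa")
# A falling v_p at constant v_s lowers E while leaving G unchanged, matching
# the interpretation of Figure 8 in the text.
```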
It is well known that the P-wave velocity is more sensitive to the development of cracks oriented perpendicular to the direction of wave travel (in this case, wave travel is parallel to the vertically applied load), whereas the S-wave velocity is more sensitive to cracks oriented parallel to the direction of wave travel [42–44]. Therefore, the decrease in P-wave velocities indicates that low-level cyclic stress induces the initiation and development of cracks oriented horizontally or subhorizontally, and the more stable S-wave velocity means that few irreversible microstructural changes are oriented in the vertical direction. Considering the stiffening trend during loading and the softening trend during unloading suggested by Figures 6(a) and 6(b), it is reasonable to attribute the development of horizontal cracks to the collapse of pores. The collapses would cause the relocation of sandstone grains and result in a densification of the rock. The denser structure means the specimen would stiffen during loading, but the rebound during unloading would be limited. Furthermore, the gradual decrease in P-wave velocity in Figures 8(a) and 8(b) indicates a progressive development of horizontal cracks. This progressive crack formation is a clear indication of fatigue damage. Similar to the trends of the tangential moduli with increasing axial stress in Figures 6(c) and 6(d), Figures 9(a) and 9(c) both show obvious velocity declines at high stress values. The decline indicates that, at these stress levels, damage develops rapidly in the rock's microstructure, and this clearly indicates brittle behavior. During Cycle 3 loading, the sudden failure of the specimen at 58.3 MPa resulted in no data being collected at higher stresses; otherwise there would be a similar velocity decline at the right end of the Cycle 3 curves in Figure 9. This ultrasonic velocity behavior conforms to the Kaiser effect [45] and reveals that the rate of damage accumulation during Type 2 loading rapidly accelerates when the former maximum applied stress is exceeded. This rapid accumulation of damage can also be demonstrated by the change in P-wave velocities over the whole loading-unloading test. As shown in Figure 10 for the Type 2 tests, the P-wave velocity at any single stress value is higher during loading than during unloading, but just the opposite is true for the Type 1 tests. The decrease in velocity shows that the rock's microstructure had suffered considerable damage at the higher stress levels. Nevertheless, the increase in velocities during Type 1 testing is still the result of the densification described above.
In order to quantify the deformation of the microstructure, a crack density parameter (developed by Ayling [46]; Ayling et al. [47]) is introduced that uses three dimensionless parameters (q1, q2, q3) to describe cracking anisotropy, as expressed in equation (4). In (4), q3 (equal to q2 for uniaxial compression) denotes the crack density for cracks aligned parallel to the loading direction, q1 denotes the crack density for cracks aligned perpendicular to the loading direction, and vP0 and vS0 are, respectively, the P- and S-wave velocities of the uncracked solid. In this case, vP0 and vS0 are the maximum velocities measured under Type 1 loading (meaning that the pores and cracks in the rock are presumed to be closed). Because Type 1 loading did not exceed the elastic limit, the highest stress condition under Type 1 loading (where the maximum velocity values were recorded) was expected to close the vast majority of cracks and not initiate any new cracks. The changes in the crack density parameter during cyclic loading have been determined using (4) and are shown in Figure 11. The general trends in crack density are shown by the density parameter at three stress values representing low, medium, and high stresses. From Figures 11(a) and 11(b) it can be seen that the microstructural evolution during Type 1 cyclic loading exhibits different tendencies in the directions parallel with and perpendicular to the loading axis. The reduction in q3 as the number of loading steps increases indicates that the applied stress tends to close the cracks that are aligned parallel to the loading axis. This is counterintuitive. We attribute it to the compaction of clay minerals in the sandstone; specifically, the compression squeezes the quartz grains against the intergranular clay minerals, forcefully closing the microcracks. This compression stiffens the sandstone but lessens its capacity for rebound. This is confirmed by the increase in tangential moduli from cycle to cycle shown in Figure 6(a) and their reduction in Figure 6(b). In contrast, the step-wise escalation of q1 in Figure 11(b) suggests that the loading stage gradually increases the number of cracks oriented perpendicular to the loading axis. As described previously, a continuously applied stress would close the horizontal cracks, but cyclic loading-unloading causes the cracks to repeatedly close and open, leading to fatigue failure and elongation of the existing cracks. This is reflected by the decrease in P-wave velocities as the number of cycles increases, as shown in Figure 8(a). Crack densities during Type 2 loading, illustrated in Figures 11(c) and 11(d), show an increase in both q3 and q1, implying that, for this style of loading, crack density and damage increase continuously.
Conclusions
The influence of the two types of cyclic loading on the damage evolution of sandstone was investigated. Based on the differences in elastic moduli and ultrasonic wave velocities, attributes of microstructural deformation were identified. The main findings are as follows: (1) Low-level cyclic loading caused the specimens to stiffen during loading but soften during unloading. The stiffness degradation method does not describe the accumulated damage satisfactorily, but P-wave velocities clearly reflect the damage development. (2) The energy dissipated by damage and plastic deformation decreases as the number of loading cycles increases for Type 1 loading, but energy dissipation remains relatively constant for Type 2 loading. The decrease during Type 1 tests can be attributed to the plastic closure of preexisting pores and cracks caused by the low-level cyclic loading, which results in a continuous reduction in the number of open pores and cracks. The relatively stable dissipated energy under Type 2 loading results from the accumulation of irreversible damage.
(3) Low-level cyclic loading enhances the anisotropy of cracking. This anisotropy results from the compression of intergranular clay minerals and from cracks developed by fatigue failure. (4) The irreversible damage accumulated during cyclic loading with an increasing upper stress limit is the consequence of brittle failure in the sandstone's microstructure.
Acknowledgments: This work was supported in part by the Natural Science Foundation of Jiangsu Province under Contract no. BK20151145, and the National Natural Science Foundation of China under Contract no. 51704277.
Figure 2: Illustrations showing the ultrasonic measurement testing system. (a) Schematic diagram. (b) Photograph of the hydraulic testing machine.
Figure 7: Graphs showing damage variables versus axial stress for the loading cycles of (a) Type 1 tests; (b) Type 2 tests.
Figure 8: Graphs showing P- and S-wave velocities versus axial stress for Type 1 tests. (a) Loading stage P-wave velocities; (b) unloading stage P-wave velocities; (c) loading stage S-wave velocities; (d) unloading stage S-wave velocities.
Figure 9: P- and S-wave velocities versus axial stress for Type 2 tests. (a) Loading stage P-wave velocities; (b) unloading stage P-wave velocities; (c) loading stage S-wave velocities; (d) unloading stage S-wave velocities.
Figure 10: P-wave velocity evolution during loading and unloading. (a) Cycles 1 and 3 from Type 1 testing; (b) Cycles 1 and 2 from Type 2 testing.
Figure 11: Graphs showing changes in the crack density parameter at three selected stresses representing low, medium, and high stress values during loading and unloading. Crack density parameter q3 denotes cracks parallel to the load direction; parameter q1 denotes perpendicular cracks. (a) q3 for Type 1 tests; (b) q1 for Type 1 tests; (c) q3 for Type 2 tests; and (d) q1 for Type 2 tests. The units on the x-axis are the loading-unloading stages (e.g., C2U stands for Cycle 2 unloading).
Table 2: Dissipated energy for sandstone specimens under Type 1 cyclic loading tests, in J/m³.
Table 3: Dissipated energy for sandstone specimens under Type 2 cyclic loading tests, in J/m³.
Table 4: Young's modulus for sandstone specimens under Type 1 cyclic loading tests, in GPa.
Table 5: Young's modulus for sandstone specimens under Type 2 cyclic loading tests, in GPa. | 2018-12-12T12:40:43.849Z | 2018-04-11T00:00:00.000 | {
"year": 2018,
"sha1": "b26045055d4c4aad0178183bb0417dc680f4806a",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/sv/2018/7845143.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b26045055d4c4aad0178183bb0417dc680f4806a",
"s2fieldsofstudy": [
"Engineering",
"Geology"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
258277628 | pes2o/s2orc | v3-fos-license | Legal Regulation of Internet Platform Banning Behaviors
There are frequent banning behaviors in the field of Internet platforms in China; they harm the interests of other operators, impede market innovation, and damage the rights and interests of consumers. The market self-healing function assumed in traditional economics lacks a realistic basis and cannot produce its supposed effect. Regulating banning behavior not only causes little economic damage but also helps reduce harm to rights and interests in the competitive market, so it is urgent to regulate this behavior through the anti-monopoly law. The existing anti-monopoly law suffers from insufficient explanatory power and limited applicability when regulating banning behavior. It is feasible and necessary to introduce the ex ante supervision model from comparative law, namely the platform gatekeeper theory, which can effectively suppress platform banning behavior. The gatekeeper theory should therefore be refined in terms of the standards for defining a gatekeeper, the gatekeeper's obligations, and the gatekeeper platform's defenses. This paper provides implications for improving the regulation of banning behaviors in China.
Introduction
With the continuous advancement of the information revolution, super platforms have emerged and have banned other operators from entering their platforms by virtue of their advantages in traffic and channels. Similar improper practices occur frequently in China's law enforcement practice and have aroused widespread concern. Analyzing and reviewing such behavior from the perspective of competition law must start from the legal definitions of the platform and of banning behavior. First, regarding the definition of the platform: according to the Anti-Monopoly Guidelines for the Platform Economy (hereinafter "Platform Guidelines") promulgated in China, a platform is a business organization form that enables interdependent bilateral or multilateral entities to interact under the rules provided by a specific carrier through network information technology, so as to jointly create value. This indicates that the core business of the platform is the information intermediary service that enables transactions between bilateral or multilateral entities. [1] Second, regarding the nature of banning or blocking behavior, Chinese scholars have not reached a consensus on its scope. Yin Jiguo argues that platform banning specifically includes banning accounts, blocking content, refusing direct links, and closing API interfaces; Chen Bing, Chen Qing, and other scholars believe that the scope of platform banning behavior is broader and should also include exclusive dealing and self-preferencing behavior. Exclusive dealing means that operators in the relevant market, through technical measures or contractual arrangements, make their trading counterparties "trade with themselves, but not with other operators". [2] Exclusive dealing therefore involves three parties: the operator within the platform and two platforms that may compete with each other. At present, the banning behaviors discussed in academic circles occur between a platform and its competitors and do not involve other subjects. In addition, self-preferencing behavior overlaps with platform banning only in some respects, and there are many forms of self-preferencing beyond banning. Therefore, platform banning behavior should not be taken to include exclusive dealing and self-preferencing. In practice, the behaviors with the greatest impact on market competition, and the most controversial ones, are refusing direct links and closing API interfaces; these form the main context for the discussion of banning here. In the existing literature, research on banning behavior focuses on two questions: whether to regulate platform banning, and if so, which law should apply. First, whether platform bans should be regulated, that is, whether the behavior is illegal. Scholars who argue that banning behavior is legal and should not be regulated believe that Internet platforms have sufficient operational autonomy and the right to refuse to open their existing traffic channels to competing platforms; if a platform does not violate relevant regulations but is forced to open its resources, the platform is unjustly harmed. On the contrary, scholars who argue that banning behavior is illegal and should be regulated contend that platform banning deviates from the basic principle of Internet interconnection and damages the basic rights and interests of consumers as well as the normal competitive order in the Internet field.
At present, scholars mainly look to the E-commerce Law, the Anti-Unfair Competition Law, and the Anti-Monopoly Law for regulating platform banning. Ye Ming, Zhang Jie, and other scholars believe that, under Article 12 of the Anti-Unfair Competition Law (the "Internet Special Article"), banning behavior can be identified as maliciously implementing incompatibility with the network products or services legally provided by other operators. Duan Honglei believes that relevant regulation can start from Article 35 of the E-commerce Law, which provides that e-commerce platform operators may not impose unreasonable restrictions on the transactions of other operators through their dominant position or technical means. Scholars such as Guo Chuankai support recognizing platform blocking as a refusal to deal under the Anti-Monopoly Law. However, each of the above regulatory mechanisms is controversial to some extent. First, the application scenario of Article 12 of the Anti-Unfair Competition Law was the malicious incompatibility of the PC era; whether it can be applied to the blocking of external links within platforms still lacks the support of legal precedent and a clear judicial position. Second, the regulatory object of the E-commerce Law is limited to online sales platforms hosting trading activities, whereas the most frequent scenes of banning behavior are social networking platforms, a different context. As for the Anti-Monopoly Law, theory does not specify the competitive harm caused by banning behavior, and there is still no systematic, consistent understanding of the definition of the relevant market, the identification of market power, or the characterization of the abuse. From the standpoint of protecting the competitive process, implementing a proper regulatory mechanism for platform banning behavior matters not only for the protection of consumer interests but also for market innovation. This paper analyzes this issue in depth.
The Competitive Harm of Platform Banning
The competitive harm caused by platform banning behavior is the core of the analysis of its illegality under competition law, and it is discussed in this section.
The Status of Competitive Harm in Chinese Law
The category of monopolistic conduct most closely related to platform banning behavior is the abuse of a dominant market position.
Whether competitive harm can be regarded as an independent element of this conduct has not been settled in Chinese practice and theory. From the perspective of comparative law, and in view of the development of Internet platforms and the legislative purpose of the Anti-Monopoly Law, it is necessary to treat competitive harm as an independent constitutive element. In Chinese law enforcement practice, the "Provisions of the Supreme People's Court on Several Issues Concerning the Application of Law in the Trial of Civil Disputes Caused by Monopoly Behavior", issued by the Supreme People's Court in 2012, confirmed in Article 8 that abuse of a dominant market position is determined by three elements: dominant market position, abusive conduct, and the absence of justifiable cause; the effect of excluding or restricting competition, i.e., competitive harm, is not something that must be proved. In Chinese enforcement practice there are three positions on evaluating competitive harm in abusive conduct, and no unity has formed. [3] The first treats it as an independent constitutive element, supporting an effects-based model and affirming the significance of independently evaluating competitive harm. The second treats it as a non-independent constitutive element, folding competitive harm into the examination of the abusive conduct and tacitly assigning it a secondary role. The third treats it as a non-constitutive element, judging directly according to the three elements without examining the existence of competitive harm. From the perspective of comparative law, the constitution of exclusionary abuse in EU law can be summarized in four elements: market dominance, abuse, competitive harm, and justification; competitive harm clearly exists as an independent constitutive element. The Vertical Merger Guidelines of the United States make clear that the definition of the relevant market should be based on the determination of competitive harm. Competitive harm as an independent evaluation requirement thus has a solid comparative-law foundation. Therefore, although the requirement of competitive harm cannot be derived from existing Chinese legal norms, the historical development and legislative objectives of the anti-monopoly law show that this requirement is indispensable. [4] If competitive harm is not considered an independent element, the provision is likely to be abused, affecting the normal market order.
Damage to the Interests of Competing Platforms
The implementation of banning behavior infringes the right of competing platforms to compete fairly. A super platform exploits its user-traffic advantage to prohibit the normal operations of a competing platform under the pretext of independent management rights, so that the competing platform cannot absorb traffic and attract consumers through the super platform, and its competitive ability in the relevant market is seriously impaired. Some scholars, invoking freedom of contract, hold that blocking is a normal means of competition, that a competing platform absorbing traffic through the super platform is "free riding", and that forcing the platform to provide channels damages the platform's own interests and disturbs the normal competitive order. However, the operational autonomy of incumbent platforms has its limits. First, super platforms command strong technology, capital, and data aggregation effects and have become the main venues for market transactions and resource allocation at the present stage; they therefore form high market-access barriers, and other operators cannot, at present, establish similar platforms by themselves. Second, the public character of super platforms, formed under the free-of-charge model, means that denying competitors access to an existing platform without justifiable reasons violates the principle of connectivity.
Harm to Innovation in Competitive Markets
Innovation is an important institutional goal of competition law, and banning behavior damages innovation to some extent. [5] First, the innovation incentives of competing platforms are frustrated. A ban blocks the channel expansion of competing operators and deprives competing enterprises of opportunities to innovate. With limited resources, small and medium-sized operators can only rely on super platforms to obtain user traffic and data in the early stage of entrepreneurship. To escape the effects of a ban, a competing platform must force up its capital investment and build a similar platform itself, which wastes resources and seriously drains the innovation funds of small and medium-sized operators. Prohibitions imposed by super platforms on start-ups also make investors lose confidence, affecting the start-ups' normal financing. Second, the innovation incentives of incumbent platforms are weakened. Super platforms implement prohibitions to block the development of competing platforms and thereby entrench their dominant market position; they then have no need to undertake risky innovation to maintain good revenue and growth. Meanwhile, failing to punish platform blocking is an incentive to continue abusing platform power, which runs against the innovative spirit of the market.
Harm to the Interests of Consumers
The dominant service model of super platforms is zero-price service, which is often used to gain market power and to impose prohibitions. [6] Some scholars believe that without price there can be no market power, let alone harm to consumer welfare. This view ignores the special situation in the Internet field, where the traditional price-centric approach to damage analysis gradually fails. Besides explicit price indicators, non-price factors such as consumer attention and choice are also important components in evaluating consumer rights and interests. Banning behavior prevents users from sharing the relevant links normally: if users give up sharing links, their freedom of choice is seriously damaged; if users share links by less convenient means, their costs of use, namely information costs and attention costs, rise accordingly. Viewed in this light, consumer welfare clearly suffers.
The Economic Harm of No Regulation
Neoclassical economists believe that the perfectly competitive market equilibrium model already describes the real world with considerable accuracy. First, this model requires no outside intervention and works on its own, relying on private competitive markets to correct possible problems. [7] Second, they argue that even where market failures exist, government intervention will not achieve its intended purpose but will only undermine the self-correcting functions of the market economy and worsen the economic environment. Finally, these economists contend that jurists lack a correct understanding of economics and market behavior, that their conclusions are wrong, and that they should keep silent about business behavior with which they are unfamiliar. However, the neoclassical theoretical system built on the perfect competition model suffers from inconsistent internal logic and poor practical results. First, perfect competition lacks a solid foundation: to make the model fit reality, its builders imposed multiple restrictions, such as the free flow of information, symmetric structure, and balanced bargaining power, which greatly reduce its practical usefulness. [8] Second, judging from the results in countries that pursued neoliberal macro- and micro-economic policies, the "ideal situation" envisaged by neoclassical economists did not appear; instead there were serious recessions and even crises. At the same time, in some developing countries, government intervention in both micro- and macro-economic behavior is widespread and often produces effective results. [9] Third, in the Internet platform field, the market's capacity for self-repair under the business-ecosystem model is limited, and super platforms with complete ecosystems have enough market power to suppress each other's competitive space. Therefore, although traditional economics holds that the market heals itself, that capacity is extremely limited against these super platforms.
The Benefits of Regulation
Regulation of platform banning behavior can achieve better economic effects. First, the economic damage caused by regulating blocking behavior is small: regulation may frustrate the incumbent platform's development in similar lines of business in the short term, but it does not prevent that development, and the full competition that follows the lifting of blocking produces better economic effects. Moreover, such frustration is not caused by misconduct but is the normal business result of fair competition. Second, the economic benefits of regulation are greater, because regulation produces greater economic welfare. For competing platforms, regulation protects their right to operate fairly, which is conducive to the normal operation and development of small and medium-sized competing platforms. Regulation of banning behavior encourages both the banning platform and the competing platform to innovate actively. And it protects consumer interests, which include non-price factors such as consumer attention and choice.
The Regulatory Model of Gatekeeper Theory
Some scholars support using the essential facilities doctrine as the rule basis, identifying banning behavior as a refusal to deal under the Anti-Monopoly Law, but its application is difficult and controversial. In contrast, the platform gatekeeper institution found in comparative law may be a more reasonable regulatory model.
Contradiction between the Theory of Essential Facilities and the Platform Economy
First, there are difficulties in applying the essential facilities doctrine. The doctrine presupposes that the violation is a refusal to deal, so the relevant market, market power, and the abuse must all be identified. In the platform field, however, these three aspects are difficult to identify and controversial, the application of the law is unclear, and academia has not reached a consistent conclusion. [10] Second, some scholars believe that the data held by a platform may constitute an essential facility and that, without access to the data held by super platforms, new products cannot be developed. But judging from the development of Chinese and foreign tech giants such as Facebook, Tencent, Alibaba, and ByteDance, the differences in their businesses are determined not by the data itself but by the further processing of data to meet the development needs of their products and services. In the foreign music media service market, Apple's iTunes had accumulated a huge amount of data before Spotify entered the market; Spotify nonetheless surpassed iTunes in active users by optimizing its data processing, without an accumulation of big data, which shows that data itself does not rise to the level of an essential facility.
The Prudent Application of the Theory of Essential Facilities
U.S. antitrust law first extended the concept of essential facilities through precedents, but there are obvious differences in its application between the courts and academic circles. U.S. courts have shown extreme caution in applying the essential facilities theory to the digital economy, declining to comment on its application in both the LinkedIn and Facebook cases. Compared with the United States, the European Union applies the essential facilities theory on a broader scale and scope, but in the digital economy the EU has yet to identify an essential facilities case. In a written response of February 2020 to the question whether large Internet platform companies could become essential facilities, the Commission stressed "careful, case-by-case judgment". In short, the major antitrust regimes outside China are cautious about applying the essential facilities theory to the digital economy.
Considering the damage to market innovation that an essential-facility finding can cause, China's anti-monopoly enforcement agencies have not directly identified "essential facilities" in handling relevant cases, and the concept appears explicitly only in anti-monopoly civil disputes. China therefore also holds a cautious attitude toward the essential facilities theory. The practical value of regulating banning behavior through this theory is limited, and relying on it may amount to laissez-faire.
Feasibility and Necessity of Transplanting the Platform Gatekeeper System
With the rise of the New Brandeis school, [11] the European Union and the United States are no longer limited to the Chicago school's simple analysis of the economic effects of anticompetitive behavior but have adopted a more radical structuralism, shifting from cautious ex post review to ex ante behavioral supervision; the digital market gatekeeper system thus emerged. [12] A gatekeeper is currently defined as a key enterprise providing core digital services. On July 18, 2022, the European Parliament formally approved the "Digital Markets Act" (DMA), which sets out the basis for designating gatekeepers. A company meets the gatekeeper criteria if it: has a strong economic position, a significant impact on the internal market, and is active in multiple EU countries; has a strong intermediation position, meaning that it links a large user base to a large number of businesses; and has (or is about to have) an entrenched and durable position in the market, meaning that it has been stable over time, having met the two criteria above in each of the last three financial years. Regulating abuses of dominant market position, including banning behavior, through a platform gatekeeper system is an alternative route for China's competition regulation.
The Difference between Essential Facilities and Gatekeeper Theory
The essential facilities theory requires analyzing and identifying the relevant market, market power, and the abuse, and there is no clear quantitative standard for identifying an essential facility, so its practical significance is small. Under the gatekeeper theory, there is no need to define the relevant market and similar factors; obligations of "inaction" can be imposed directly on platform enterprises with significant influence. The gatekeeper system has clear quantitative standards written into law, and its identification costs are low, which greatly assists the application of the law. This regulation is ex ante: compared with the ex post remedial model of the essential facilities theory, the gatekeeper system can reduce anticompetitive behavior from the start and thus better protect the interests of consumers and competitors.
The Necessity of Transplanting the Platform Gatekeeper System
The gatekeeper system is a product of structuralism, and critics argue that it carries a risk of false positives and should not be widely applied. Antitrust enforcement often faces great ambiguity, so two types of errors are inevitable: the false positive of treating normal business behavior as anticompetitive, and the false negative of treating anticompetitive behavior as normal business behavior. [13] In general, the risk of false positives in antitrust enforcement is greater than the risk of false negatives, so enforcement mostly maintains a prudent, ex post attitude. But the premise that false-positive risk exceeds false-negative risk does not hold universally. In the Internet platform field, economies of scale, synergies, network effects, and two-sided market effects make market failures structural; platforms enjoy significant competitive advantages over other competitors, so the risk of abuse of a dominant market position is higher and the market is unlikely to self-correct in a short time. In this situation, ex ante supervision is necessary. This regulatory logic is in line with the trend of China's strengthening anti-monopoly supervision of Internet platforms and helps regulate the competitive order in the platform economy.
The Feasibility of Transplanting the Platform Gatekeeper System
Any borrowing from a comparative-law system must be judged reasonable and adjusted to domestic conditions. First, the "Platform Guidelines" are the earliest legal document regulating platform anti-monopoly, and China's platform anti-monopoly work is at the forefront; there is already an institutional basis for supervising Internet platforms, and a gatekeeper system can be added by revising and supplementing relevant rules within the existing regulatory framework. Second, China's anti-monopoly policy toward Internet platforms continues to advance. Before now, the anti-monopoly enforcement agencies rarely applied the relevant anti-monopoly rules to the unfair competition behavior of Internet platforms, and many laws and regulations were not implemented, so there is no institutional burden and the introduction of a gatekeeper system would meet little resistance.
Components of Gatekeeper Rules in Chinese Law
On the premise of learning from mature comparative-law systems, the gatekeeper rules in Chinese law should be adapted to existing domestic anti-monopoly laws and regulations and to the development of Internet platforms. The rule system can be roughly divided into three parts: the standards defining a gatekeeper, the gatekeeper's obligations, and the justifiable defenses.
Definition of Gatekeeper Standards
The EU gatekeeper system gives a clear designation basis, supplemented by corresponding quantitative criteria. The turnover criterion is defined as an annual turnover of at least 6.5 billion yuan, or a market capitalization of at least 65 billion yuan, in the previous year; the user-traffic criterion is defined as a minimum of 45 million monthly active end users, with the criteria met for three years. These quantitative indicators are too small for China's Internet platforms, so applying a gatekeeper definition requires the anti-monopoly enforcement agencies to make adaptive adjustments on the basis of relevant market data and to define gatekeepers according to turnover and user traffic, so as to fit the Chinese context.
Gatekeeper's Obligations: Competitive Obligations
Although the EU gatekeeper system stipulates various obligations for gatekeepers, they do not target existing banning behavior well. Drawing on the gatekeeper system, the concept of a competitive obligation can be introduced. Scholars argue that the chaotic competition among Internet platforms stems from the tension between private interest and the platforms' public character, so the key to regulating platforms is a return to publicness. Competitive obligations should therefore be set for Internet platforms and incorporated into the gatekeeper system: the platform should adopt the general principles of openness, neutrality, and connectivity, and treat refusals of openness and neutrality toward specific operators as exceptions that must be justified. [14] The scope of mandatory openness must cover public services, that is, services meeting the two standards of universality and scarcity; services not meeting these standards may be opened selectively.
Justifiable Defense Reasons
An operator's refusal of openness or neutrality toward a particular operator requires a valid defense. First, if another operator's links carry illegal content or content contrary to public order and good morals, the platform has both the right and the obligation to ban them. Second, if a banned link is suspected of inducing the collection of user information, the platform may block it to protect users' legitimate rights and interests. Third, if the platform can prove that opening to a particular operator would increase the platform's maintenance costs and seriously damage its own interests, it may ban accordingly. Finally, if the platform proves that open links let other platforms "free ride" and that the resulting damage to the incentive for market innovation would harm the public interest, the ban may also be implemented.
Conclusion
Introducing gatekeeper rules for super platforms can effectively reduce the difficulty of ex post review after monopolistic conduct occurs, directly impose special obligations on platform enterprises, prevent Internet platform enterprises from abusing platform power, and strengthen control over platform enterprises. However, defining the scope of the platform gatekeeper remains complex and challenging regulatory work, which will depend on special review and continuous evaluation and adjustment by professional institutions. This work should begin soon, and platform gatekeeper regulation can be expected to have positive effects in the future.
"year": 2023,
"sha1": "8e2748fdfb6fb204d7de60eb93ed8265145173ec",
"oa_license": "CCBY",
"oa_url": "https://www.shs-conferences.org/articles/shsconf/pdf/2023/11/shsconf_adcs2023_01036.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "321fe6d4f3602684841caeddb2485741e963736e",
"s2fieldsofstudy": [
"Law"
],
"extfieldsofstudy": []
} |
119094359 | pes2o/s2orc | v3-fos-license | Four new active galaxies with steep soft X-ray spectra
We have discovered four AGN in the ROSAT all-sky-survey data with very steep X-ray spectra. We apply several models to these X-ray spectra with emphasis on warm absorber models which give an adequate description of the data. We report on the follow-up optical and radio observations which allow the identification of three of these objects as Narrow Line Seyfert 1 galaxies, and the fourth as a BL Lac object. We have measured small-FWHM Hbeta lines, strong FeII emission and weak [OIII] emission in the three Narrow Line Seyfert 1 galaxies, in line with known correlations with respect to the steepness of the X-ray spectra. We have discovered strong optical variability in the BL Lac object and two of the Seyfert galaxies using photographic plates of the Sonneberg Observatory field patrol. We finally discuss the statistical implications of our search algorithm for the expected number density of soft X-ray selected AGN and conclude that up to 30% of X-ray selected AGN might have supersoft X-ray spectra.
Introduction
Observations with the HEAO-1, Einstein, EXOSAT and Ginga satellites have shown that the X-ray spectra of active galactic nuclei (AGN) above a few keV are well described by a power law with a photon index of about −1.5 for radio-loud and about −1.9 for radio-quiet quasars. A soft X-ray excess below 1 keV is a common feature in the X-ray spectra of AGN. This excess is often related to the optical/UV big blue bump which dominates the spectra of most radio-quiet AGN. In some cases the excess at X-rays can be modeled as a very steep and soft component which is consistent with the Wien tail of a hot thermal component. (The table containing the optical brightness estimates of three of the four objects over the past 30 years is only available in electronic form at the CDS via anonymous ftp 130.79.128.5.)
A systematic correlation of ROSAT all-sky-survey X-ray sources with known AGN has resulted in 102 sources with more than 80 counts, suited for an estimation of their spectral parameters, in particular their hardness ratios (Schartel 1994). AGN in the radio-quiet subsample of this ROSAT-quasar sample show significantly steeper spectra (photon index Γ = −2.53 ± 0.04) than those of the radio-loud subsample (Γ = −2.27 ± 0.07).
Further detailed studies of selected AGN have revealed some objects with extremely steep soft X-ray spectra, including IRAS 13324-3809 (Boller et al. 1993), Ark 564 (Brandt et al. 1994), the high-redshift object E1346+266 (Puchnarewicz et al. 1994), IC 3599 (Grupe et al. 1995a), and WPVS007 (Grupe et al. 1995b). A variety of different models has been proposed to describe the emission of these objects, but no generally accepted explanation has emerged yet. Reprocessing and free-free emission were suggested early on, and recently accretion disk models with various modifications have become popular.
The optical properties of a large sample of Einstein ultrasoft AGN have been studied by Puchnarewicz et al. (1992). They found that a major part of these soft X-ray selected AGN turn out to be Narrow Line Seyfert 1 galaxies with narrow (FWHM < 2000 km/s) Hβ lines (Osterbrock and Pogge 1985).
Here we report the discovery of four very soft X-ray AGN which were found searching ROSAT all-sky-survey data. We present the optical identifications (section 2.2), details of the optical spectroscopy (section 2.3), describe the discovery of optical variability of three out of the four objects (section 2.4), give details on the X-ray spectra and the resulting parameters of the model fitting (section 2.5), present the survey X-ray lightcurves and derive X-ray luminosities (section 2.6), report the radio observations of
three of the four objects (section 2.7), and finally discuss our observational and fitting results and the statistical implications of our search (section 3).
Observational results
2.1. Selection criteria

In a study aimed at a statistical comparison of optically variable sources at different galactic latitudes we have examined ROSAT data in a 100 square degree field centered around 26 Com (for first results on flare stars see Richter, Bräuer & Greiner (1995) and on cataclysmic variables see Richter & Greiner (1995a, b)). The Coma field was scanned during the ROSAT all-sky-survey in December 1990 for a mean total observing time of 470 sec (Tab. 6). Using a maximum likelihood method we detected 238 X-ray sources in the above 10 × 10 degree field with a likelihood larger than 10. These X-ray sources were identified using (1) the objective prism spectra taken with the Hamburg Schmidt telescope on Calar Alto, (2) the positional correlation with the X-ray positions, which are accurate to typically less than 30″, and (3) the X-ray to optical intensity ratio for known populations. In the Hamburg objective prism survey, spectra are taken in the 3400–5400 Å range with a dispersion of 1390 Å/mm down to 17–18th mag, covering the whole northern hemisphere except the galactic plane (|b| > 20°). For the present purpose of looking for soft AGN we have applied the following three selection criteria: 1. The hardness ratio HR1 (defined as the number of counts in the PSPC channels (52–201) minus those in channels (11–41), divided by those in channels (11–41) plus (52–201)) plus its error is lower (i.e. softer) than −0.5. 2. The total number of counts collected during the all-sky-survey with the PSPC is greater than 80. 3. The object classification using the spectra of the objective prism plates indicates an extragalactic object, i.e. classifications of "QSO", "EBL-WK" (weak blue object with emission lines) or "Blue Gal." were chosen (see Bade et al. 1995 for details on the object classification). This selection yielded three of the four sources listed in Tab. 1. In addition, we included the X-ray brightest non-stellar object (RX J1257.5+2412) in our field because of its soft spectrum (which does not fit the hardness ratio criterion, however). This source was previously detected in X-rays during an Einstein slew and was then designated 1ES 1255+244 (Elvis et al. 1992).
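As a minimal numerical illustration of selection criterion 1, the sketch below evaluates HR1 = (H − S)/(H + S) for the two PSPC bands named above; the count values are placeholders, and the Gaussian error propagation is a standard approximation rather than the authors' exact recipe.

```python
# Hardness ratio HR1 = (H - S) / (H + S) for ROSAT PSPC bands:
# S = counts in channels 11-41, H = counts in channels 52-201.
# Counts below are placeholders; the error propagation assumes independent
# Poisson errors, a standard approximation.
import math

def hr1_with_error(soft: float, hard: float) -> tuple[float, float]:
    total = soft + hard
    hr1 = (hard - soft) / total
    # Propagate sigma_S = sqrt(S) and sigma_H = sqrt(H) through HR1:
    sigma = 2.0 * math.sqrt(hard**2 * soft + soft**2 * hard) / total**2
    return hr1, sigma

hr1, err = hr1_with_error(soft=95.0, hard=15.0)
is_supersoft = hr1 + err < -0.5  # the paper's criterion 1
print(f"HR1 = {hr1:.2f} +/- {err:.2f}, supersoft: {is_supersoft}")
```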
All objects are new identifications. In Tab. 1 we give the ROSAT name (column 1), the total number of counts collected during the ROSAT all-sky-survey observation (2), the hardness ratio HR1 with error (3), the Sonneberg variable designation (4, see paragraph 2.4 for more details), the position of the optical counterpart (5), and the distance D between the X-ray and optical positions (6). Note (1) to Tab. 1: here, the radio position is given since the optical position is not accurate due to blending (see section 2.7).
Optical identification
Inspection of the X-ray source positions on the Palomar Observatory Sky Survey revealed only one optical counterpart candidate each inside the 2σ error circle for RX J1225.7+2055 and RX J1250.2+1923, whereas there are three and two objects near RX J1257.5+2412 and RX J1239.3+2431, respectively. Low-resolution spectra of the optical objects nearest to the X-ray positions of RX J1239.3+2431, RX J1257.5+2412 and RX J1225.7+2055 were obtained on March 28, 1995 with the double spectrograph at the Palomar 200-inch Hale telescope (Tab. 6). Gratings with 316 and 300 lines/mm, resulting in dispersions of 204 Å/mm and 140 Å/mm, were mounted in the red and the blue arm of the spectrograph. The dichroic separated the two sides at 5200 Å. The spectra covered 3500–5200 Å and 5200–7500 Å at a FWHM resolution of 3 Å and 6 Å, respectively. The spectra were corrected for bias and flat field, and were wavelength calibrated using standard IRAF procedures. HD 84937 was used as the standard for the flux calibration.
On June 29, 1995, a spectrum of the object closest to RX J1250.2+1923 was taken with COSMIC in the long-slit spectrograph mode at the Palomar 200-inch Hale telescope (Tab. 6). A grism with 300 lines/mm, yielding a dispersion of 130 Å/mm, was used. The response of the spectrograph was derived from an observation of HZ 44, but no flux calibration was possible.
The optical positions of the identified AGN (given in Tab. 1) were determined using the APM finding chart programme, which is based on the digitized Palomar Observatory Sky Survey. Except for the object RX J1250.2+1923, the spectra were also used to derive an optical brightness (Tab. 2).

Finding-chart caption: crosses denote the best-fit X-ray position while circles mark the 2σ X-ray error box (33″ radius). All charts are 5′ by 5′ with North at the top and East to the left.
RX J1239.3+2431
The spectrum of the central object at the location of RX J1239.3+2431 shows redshifted Balmer lines of hydrogen on top of a blue continuum. The strongest line is identified with Hβ (see Fig. 2 for more identifications), which is consistent with the positions of the other strong lines detected. This results in a redshift of z = 0.186. The high state of ionization, as indicated by the strength of [OIII] λ5007 relative to [OII] λ3727 (the latter is not detected in the spectrum), points to the Seyfert nature of the object, while the faint absolute visual magnitude excludes a quasar identification.

Notes to Tab. 2: (1) Intensity around 4350 Å.
(2) Maximum and minimum brightness on archival photographic plates of Sonneberg Observatory.
(3) Using H0 = 50 km/s/Mpc and q0 = 0.5. (4) This measurement is from the January 1992 observation, while all other radio observations are from August 1994.
(5) Brightness estimated on the Palomar Observatory Sky Survey blue print.
In particular, the clear presence of FeII emission and the fact that [OIII] < Hβ (both unexpected for Seyfert 2 galaxies, which do not show FeII (e.g. Osterbrock 1989) and which are characterized by an intensity ratio [OIII]/Hβ > 3 (Osterbrock and Shuder 1982)) leads to the classification as either a Seyfert 1 or a Narrow Line Seyfert 1 (NLSy1 hereafter). The NLSy1 interpretation according to the classification of Osterbrock and Pogge (1985) is favoured by the weakness or absence of further narrow emission lines and particularly by the width of Hβ being not much broader than that of [OIII] (as quantified in section 2.3).
There is an additional object of 22 mag at 27″ distance from the X-ray position (east of the Seyfert galaxy). The noisy spectrum is blue and featureless, suggesting an identification as a faint blue galaxy which is not related to the X-ray source (based on the implied high Lx/Lopt ratio).
RX J1257.5+2412
There are three objects within the RX J1257.5+2412 X-ray error circle. The northern and more distant object has a featureless spectrum pointing to a galaxy. The faintest of the three objects (south-west of the best-fit X-ray position) is found to be an F or G star.
The optical spectrum of the third, brightest object is featureless, without emission lines. Although some strong features from the underlying galaxy are visible, the relative flux depression bluewards of the Ca H/K break is less than 25%. Therefore this spectrum fulfills all spectroscopic BL Lac characteristics. Additional support for this interpretation comes from the relative fluxes in the radio, optical and X-ray bands (see below). Using the Mg I b λ5176 absorption line we derive a redshift of z = 0.140. No correction for the presence of starlight has been made for the tabulated optical magnitudes and flux ratios.

RX J1225.7+2055 shows evidence for a second, broader Hβ component as compared to a single Gaussian, which would fit only the narrow core of the line. However, the quality of the data does not allow a detailed multi-component fit, which also depends strongly on the adopted continuum level and FeII contribution. Therefore, the 2-component fit presented in Tab. 3 only serves to determine an upper limit on the Hβ luminosity. Additionally, Hβ shows a slight blue asymmetry in both RX J1239.3+2431 and RX J1225.7+2055.
Optical variability
We have examined the long-term optical behaviour of all our objects on photographic plates of the Sonneberg astrographs 400/1600 mm and 400/2000 mm. About 320 plates from the interval 1962–1995 of the field 26 Comae were used (Tab. 6). The limiting blue magnitude of the best plates is about 18.0–18.5 mag. In addition, for the object RX J1225.7+2055, 300 plates of the overlapping field 5 Comae, and for the objects RX J1257.5+2412 and RX J1250.2+1923 some hundred plates of the overlapping field 35 Comae could be used. The photometric calibration was done using several digitized photographic plates. The magnitudes were linked to the B magnitudes of a UBV sequence of several dozens of stars in the Comae region (Argue 1963) using a method of brightness determination on photographic plates developed by Kroll and Neugebauer (1993).
RXJ1250.2+1923 proved to be too faint to be visible on Sonneberg plates. { The remaining objects turned out to be variable. In the fourth column of Tab. 1 the designation of a Sonneberg variable is given. The light variations can be described as follows: RXJ1239.3+2431= S 10940 is visible only on good plates; in most cases its brightness is below the plate limit. The light curve is typical for Seyfert galaxies. It shows lively brightness changes: some spikes with a duration of several days or weeks can be seen (Fig. 4). The fastest variation with the largest amplitude in our data is a 0.55 mag jump within three days. After correction for the time stretching this timescale corresponds to a maximal size of the emission region of 6:5 10 15 cm.
RXJ1257.5+2412= S 10941 is clearly found to be variable, which besides the spectral characteristics is a classical property of BL Lac objects , the prototype of which was discovered by Ho meister (1929). Since S 10941 is mostly below the plate limit, no details on the form of the variations can be given except the rather slow fall of the mean brightness during the late 60ies and the slow rise in the early 80ies (Fig. 5). Variations on shorter timescales are not excluded, and indeed such variations are readily detected during seasons with better temporal coverage (lower panel of Fig. 5). Again, intensity drops and rises within a few days are measured with amplitudes up to nearly 1 mag.
The lightcurve of RX J1225.7+2055= S 10942 exhibits long waves (several hundred to thousand days) superimposed on shorter (several tens of days) waves of small amplitude (Fig. 6). The large dispersion of the single observations indicates that there should be variations on a still shorter time scale. Indeed, there seem to be changes of several tenths of a magnitude, mostly minima, within a day. But the object being a Seyfert galaxy, the existence of short-term eclipsing light variations is improbable. CCD photometry performed in May 1995 on three night for 2{ 3 hours each failed to detect such short-term variations. Examples of the photographically detected variations are shown for two well covered seasons near the minima of the long-term waves and during a very deep minimum in 1986 (lower panel of Fig. 6). The largest observed variability timescale is 11 yrs (or 8 yrs in the Seyfert's rest frame), has an amplitude of 0.3 mag and is seemingly periodic (indicated by the dotted line in the upper panel of Fig. 6).
X-ray spectra
For the X-ray spectral analysis the source photons collected during the all-sky-survey scans were extracted with (1) All ts were performed with the redshift xed at the optically determined value.
(4) The power law photon index was xed at {1.9, and the mass of the central object at 10 6 M . (6) Ratio of the optical ux at 2500 A in the AGN rest frame, and the X-ray ux at 2 keV: OX = { log(S2keV/S2500A) / 2.605 according to Tananbaum et al. (1979). (7) Ratio of the radio ux at 5 GHz and the optical ux: RO = log(S5GHz/S2500A) / 5.38 according to Stocke et al. (1985). a radius of 5 0 . The background was chosen at the same ecliptic longitude at 1 distance, corresponding to background photons collected typically 15 sec before or after the time of the source photons. Standard corrections were applied using the dedicated EXSAS software package (Zimmermann et al. 1994).
According to the selection criteria all sources are dominated by emission below 0.5 keV though there is always weak, but non-zero emission above 0.5 keV. Motivated by the extreme hardness ratios we have tried some spectral tting of the background-subtracted and vignettingcorrected source photons. We caution, however, that except for RX J1257.5+2412 all sources have only a relatively small number of photons (see column 2 in Tab. 1), and therefore all tting results come with large statistical errors.
As a rst step, we have tted a power law model to the data. In all cases, this gives acceptable ts, i.e. in no case a second spectral component is necessary. The resulting photon indices range from {2.4 up to {5.4, and except for RXJ1225.7+2055 all power law ts seem to require a larger absorbing column than the galactic hydrogen column. These and the remaining t parameters are given in Tab. 4 which lists for all four new objects the total galactic absorption in the direction of the source ( rst row), the results of spectral ts of a power law, blackbody as well as disk blackbody plus power law model (for the case of the latter model including the e ective temperature as well as the fractional luminosity of the power law component), and in the last rows the unabsorbed X-ray luminosity in the ROSAT band, and the optical to X-ray and radio to optical ux ratios. As a next step, tting of a blackbody model (in the systems' rest frame) in all cases gives a considerably poorer reduced 2 due to the fact that the hard energy tail has to be ignored. The same holds when tting the standard disk blackbody model (Shakura and Sunyaev 1973). It is interesting to note that RX J1257.5+2412 requires practically no absorption at all when using a soft component above a power law. This strongly argues in favour of a single power law for the X-ray spectrum of this BL Lac object since it obviously cannot avoid the galactic absorption. Therefore, we have tted for comparison purposes a disk blackbody plus a power law model (this combination is referred to as the accretion disk model in the following) to the X-ray data of the three NLSy1s. Since the slope of the power law is not constrained by the data we xed the photon index at {1.9. Changing this xed photon index to {1.5 has no e ect on the best t parameters except a < 5{10% change in the normalization.
Finally, we also applied a warm absorber model to the NLSy1 data (Tab. 5). The warm material was modeled using the photoionization code Cloudy (Ferland 1993). The ionizing spectral energy distribution (SED) incident on the clouds was assumed to originate from a pointlike central energy source. Solar abundances were adopted (Grevesse and Anders 1989). As SED we have chosen a mean Seyfert continuum after Padovani and Rafanelli (1) ROSAT Name log U log N w log Norm (2) (1) NH was xed at its galactic value.
(2) In ph/cm 2 /s/keV at 10 keV. Fig. 8. Spectral energy distribution from the optical to soft X-rays in the NLSy1 RX J1239.3+2431 (top) and RX J1225.7+2055 (bottom). The solid lines in the right part are the (absorption corrected) best t models applied to the ROSAT data. The dashed line corresponds to a power law model with the absorption xed at the galactic NH value. The range of the low-energy ends of these curves represents the error in the ux determination at 0.1 keV. The solid lines drawn through the (extinction corrected) optical spectrum up to the Lyman limit visualize the lower and upper limit in the determination of the ux at the Lyman limit. The dotted line is the low-energy extension of the standard disk black body model which is simply added to show the general turn-over. It should be noted that the largest amplitude of the observed optical variability in RX J1225.7+2055 is a factor of 3, which cannot explain the rather large OX.
(1988) from the radio to the optical region with a break at 10 and an energy index ={2.5 -longwards, an UV-EUV power law with uv x ={1.4 (Kinney et al. 1991) extending up to 0.1 keV, and an intrinsic X-ray power law with ={0.9 (Nandra and Pounds 1994) extending up to 100 keV followed by a break into the gamma-ray region. The index uv x was later modi ed for RX J1239.3+2431 (to uv x ={0.75) and RXJ1225.7+2055 (to uv x ={1.9) to account for the estimated EUV SED in these objects (see section 3.2.2 and Tab. 5 ). We note, however, that for the nal best t the ionization structure of the warm gas is dominated by the X-ray regime of the SED.
We calculated a sequence of models with varying warm hydrogen column density N w and ionization parameter U, de ned as U=Q/(4 r 2 n H c), where Q is the number rate of photons above the Lyman limit, r is the distance between nucleus and warm absorber, n H is the hydrogen density ( xed to 10 9:5 cm 3 ; this value is also used for the estimates in the discussion, but all derived quantities and in particular the X-ray absorption structure depend only weakly on n H ) and c the speed of light. Initially, the cold column density was left as an additional free parameter but turned out to always approach the galactic value and thus was xed at that value for the nal parameter estimates. The warm absorber ts were done in the rest frame of the Seyfert galaxies.
We nd that a warm-absorbed at power law provides a successful t to the observations as well. As expected, the data do not allow to overcome the degeneracy between di erent combinations of U and N w that produce a similar ionization structure, i.e. several U-N w pairs represent the observed spectrum with comparable success. Instead of statistical errors we supply the range in both parameters according to a 4 2 red = 0:2 (Tab. 5).
The multifrequency SEDs of RX J1239.3+2431 and RX J1225.7+2055 resulting from the combined optical and X-ray data are shown in Fig. 8.
X-ray intensity and luminosity
The ROSAT all-sky-survey observations (Tab. 6) consist of 31 scans with 9{30 sec duration each and spaced by the orbital period of the satellite of 96 min (or multiples). Due to these short exposure times and countrates of the sources between 0.2{1 cts/sec (Tab. 2) we have binned all photons of each scan into one time bin. The resulting lightcurves of all four objects are shown in Fig. 9. There are no drastic X-ray intensity variations over the two days of scanning observations. Whether or not the variations by a factor of two are real is hard to evaluate due to statistical uncertainties.
The object RX J1257.5+2412= 1ES 1255+244 was already \observed" in X-rays during 8 Einstein slews, resulting in the detection of 14 counts during the 34.2 sec of slew exposure (Elvis et al. 1992). Using Fig. 10.21 in the ROSAT AO1 document which gives the PSPC/IPC count rate ratio in dependence of the power law slope and the absorbing column we convert the background subtracted Einstein IPC rate of 0.36 cts/s with the best t power law model parameters of Tab. 4 into an expected ROSAT PSPC rate of 1.01 cts/sec. The comparison with our ROSAT all-sky-survey measurement of 0.93 cts/sec demonstrates no variability within the errors of the mea-ison assumed the continuation of the steep X-ray slope as measured with the PSPC into the Einstein band which might be questionable. A hardening of the intrinsic spectrum above the ROSAT band would reduce the expected PSPC rate, but since only a part of the Einstein band would be a ected, the conclusion of no X-ray variability is rather robust.
Converting the mean ROSAT all-sky-survey countrates into luminosities strongly depends on the spectral model adopted. We have used the accretion disk model for the three NLSy1s and the power law model for RX J1257.5+2412, and give as \errors" of these estimates for the NLSy1s the range of luminosities resulting from applying those of the di erent spectral models of Tab. 4 which yield acceptable ts.
Since we have no contemporaneous optical measurements to the X-ray survey observations, and all but one source show optical variability by a factor of 2{4, the optical luminosities during December 1990 are uncertain also. Thus, the derived OX ratios for our objects (Tab. 4) can only serve as order of magnitude estimates. 10. Broad band energy distribution of the new BL Lac source RX J1257.5+2412 as determined from our non-contemporaneous data. Filled circles denote VLA measurements from 1994, the open circle that from 1992 (the vertical line visualizes the amplitude of variation within these two years). The arrow gives the 90% upper limit above 100 MeV from the EGRET phase 1{3 database (Maddox 1995). The ROSAT X-ray spectrum is shown as observed with the best t power law model added (extinction corrected) while the optical data are extinction corrected.
Radio Observations
All sources except RX J1250.2+1923 were mapped in August 1994 with the Very Large Array (VLA) radio telescope in its hybrid BC con guration at 1.4 and 8.4 GHz for all three objects at the two frequencies. Due to radio interference, the noise limit at 1.4 GHz stayed well above the theoretical limit of 190 Jy.
In the eld of RX J1257.5+2412 an unresolved radio source was detected at both radio frequencies (see Table 2). The radio position is R.A. (2000.0)= 12 h 57 m 31: s 9 and Decl. (2000.0) = +24 12 0 40 00 corresponding to a 2: 00 5 o set from the optical APMchart position (12 h 57 m 31: s 8 +24 12 0 38 00 ). However, since the APM position is derived from a blended object, its position might have a larger than usual error. We therefore adopt the radio position as the more precise one (see Tab. 1). The o set from the nominal ROSAT all-sky-survey position is 8 00 .
We also analyzed an observation taken from the VLA archive (Schachter 1995) which was performed in January 1992 at 4.8 GHz, and included the radio ux from the again unresolved source in Table 2. The ux of this 1992 observation is about a factor of 8 lower than the interpolation of the ux measurements in 1994 between 1.4 GHz and 8.4 GHz. Thus, besides the optical brightness changes the BL Lac object RX J1257.5+2412 is also strongly variable at radio frequencies.
3. Discussion 3.1. The BL Lac object RX J1257.5+2412 Recent investigations of X-ray spectra of BL Lac's (Ciliegi et al. 1995) have shown that their average photon index in the soft X-ray range (0.2{2.4 keV) is {2.23 0.17 for X-ray selected and {2.52 0.73 for radio selected objects. RXJ1257.5+2412 with its photon index of {2.4 is among the softest X-ray selected BL Lac's. This is interesting because the uni ed model of AGN with X-ray selected BL Lac's viewed in average at higher angles between viewing and jet axis than radio selected BL Lac's predicts that there should be a di erence between the X-ray spectra of the unbeamed (or at least wide opening angle), X-ray selected and beamed, radio selected BL Lac's. With a decreasing angle the inverse compton emission becomes more important due to Doppler boosting, thus dominating at high energies with respect to synchrotron emission and attening the X-ray spectra. Therefore, the discovery of RXJ1257.5+2412 argues against this clear cut between radio and X-ray selected BL Lac's.
Its multifrequency colours of OX = 0:67 and RO = 0:59 clearly show the X-ray dominated SED of this object (see also Fig. 10) for which often the abbreviation XBL is used (Stocke et al. 1985). Since the identi cation of serendipitously found Einstein sources it has been realized that BL Lac objects are luminous X-ray emitters with L x = 10 43 ...10 46 erg/s in the 0.3{3.5 keV band. Among the brightest extragalactic X-ray sources BL Lac objects form a high portion. Nevertheless, bright XBLs like RX J1257.5+2412 (with a corresponding X-ray ux in the Einstein band of F X = 8 10 12 ergs=cm 2 =s) are rare. Published surface densities of X-ray selected BL Lac objects from Einstein (Maccacaro et al. 1989) and preliminary results from the ROSAT all-sky-survey (Nass et al. 1995) lead to the expectation of less than one object with such a ux in a 100 square degree area. However, it is not possible to verify these earlier results with this one object.
3.2. The Narrow Line Seyfert 1 objects 3.2.1. Implications from the optical variability The optical variability of Seyfert 1 galaxies is usually slow and irregular with small amplitudes. The typical timescales of variations are months to years while faster variations are rare. The extremes in both, amplitude and timescale, are characterized as \some Seyfert nuclei have been observed to vary up to two magnitudes within a few years, while variations of several tenths of a magnitude can occure within days or weeks" (Hamilton et al. 1978). From this point of view our two variable Seyfert galax-extraordinary objects. Studies of the optical long-term behaviour of AGN have revealed indications of periodic or quasi-periodic uctuations for some of them. Rest frame \periods" of the order of ten years have been reported for the quasars 1217+02, 1004+13, 2349{01 (Pica et al. 1980), 3C 120 and 3C 345 (Webb et al. 1988), and 0736+017 (Wallinder et al. 1992). However, in all cases no de nite conclusions were possible mainly due to the fact that the available time base was not much longer than these long periods. Our database over more than 30 years covers three cycles of RX J1225.7+2055 (Fig. 6), thus supplying one of the strongest evidences for periodic or quasi-periodic variations in AGN.
Several possibilities have been discussed to explain such long-term periodic or quasi-periodic variations (see e.g. Wallinder et al. 1992 for a review): (1) A bright spot in the accretion disk with a lifetime of several orbital periods might be eclipsed by the outer part of the disk, (2) Precessing jets in near pole-on geometry or shocks propagating along the jet, (3) Dwarf-novae type disk instabilities with matter accumulation times much longer than the consumption phase at high luminosity, (4) Thermal limit-cycle disk oscillations in the inner part of the disk. In application to our Seyfert galaxies the jet models seem to be ruled out because of the (generally believed) small contribution of the jet to the overall luminosity. If there would be a substantial part of the optical radiation escaping to us without impinging on the broad line region (BLR), this would cause additional problems (for instance for the L H production).
The rest frame time scale of 8 yrs corresponds to the Kepler frequency at a distance of 6 10 15 (6 10 16 ) cm of a 10 6 (10 9 ) M Schwarzschild black hole. At these distances the temperature is still too high for dust formation, so that absorption at this location can be excluded. On the other hand, these distances are very similar to the ones derived from the light travel time argument using the shortest observed optical variations. This suggests a common emission process being responsible for both, shortand long-term variations. All proposed scenarios for longterm variability have problems with such a combination.
In the case of RX J1225.7+2055 there seems to be an additional problem with accretion disk models because the multifrequency energy distribution is hard to reconcile with these kind of models (see below and Fig. 8). This suggest either that the optical emission does not stem from the disk and hence all the above mentioned models would not be applicable, or that the X-ray part of the spectrum does not correspond to the Wien tail of the disk.
3.2.2. Implications from the optical spectra Several comparisons of X-ray and optical (line and continuum) properties of AGN have revealed various correlations ratio with steeper X-ray spectra, and a decreasing L OIII] with increasing L FeII (Puchnarewicz et al. 1992, Boroson and Green 1992, Bade 1993, Laor et al. 1994, Boller et al. 1995. All our three objects t in these trends though they are selected by their X-ray properties (extremely soft Xray spectra). In particular, RXJ1250.2+1923 with its very steep X-ray spectrum has the smallest H FWHM, representing the most extreme object so far in the FWHM H { x distribution of NLSy1s (cf. Fig. 8 in Boller et al. 1995). Previous investigations have found an increasing scatter of the X-ray continuum slope with decreasing FWHM (Bade 1993, Boller et al. 1995. Among our objects, the spectrum of RX J1225.7+2055 exhibits particularly strong FeII complexes, about a factor 10 more than in RXJ1239.3+2431. Integrating over the FeII 4570 A complex in RXJ1225.7+2055 yields F = 2.6{3.1 10 14 erg/cm 2 /s which again ranges among the highest FeII uxes among similar distant (z) NLSy1s (Puchnarewicz et al. 1992). Dividing by the H ux derived from the 1 component t (as the lower limit of the H luminosity) results in a FeII/H ratio of 1.75{2.15. This is not unusually high despite the high FeII ux. This ratio is dropped even more if we use the H ux derived from the two component t (see Tab. 3). Thus, the FeII/H ratio does not necessarily represent a good measure of the FeII ux.
In a slightly di erent approach of the same problem, Boller et al. (1995) have argued that the unusually large ratio of FeII/H observed in many NLSy1 might be explained by weaker than usual H emission. The two objects for which we derive H luminosities do not behave according to this hypothesis. While the H luminosity of RX J1239.3+2431 just corresponds to the mean value of the Sy1 sample in Padovani and Rafanelli (1988), that of RX J1225.7+2055 (and even the lower limit derived from the one component t) is at the high end of this distribution.
L H allows to estimate the minimal number of hydrogen-ionizing photons Q isotropically emitted by the central continuum source. Assuming T=10.000 K and using Tabs. 2.1 and 4.2 of Osterbrock (1989) results in Q = 2.1 10 12 L H . Using the one-component ts to H this translates to log Q = 54.40 for RXJ1239.3+2431 and log Q = 55.23 for RX J1225.7+2055.
The three X-ray spectral models used for the tting and discussed in the next section show di erent ux distributions in the EUV spectral region. However, all these models have to ful ll the constraint to provide enough photons to account for the observed H luminosity. In the following extrapolations, we always consider only the simplest case for each of these models, i.e. the accretion disk model without a possible additional underlying UV-X power law, or similarly a single power law or the warm absorber model without a possible additional EUV bump component. In order to estimate a lower limit on Q for served optical-UV power law to the Lyman limit with the steepest slope allowed by the data. For the pure power law and warm absorber models we then constructed (and integrated over) a power law from the Lyman limit to the lowenergy end of the ROSAT spectrum (0.1 keV) using the best t X-ray parameters and applying the absorption correction. In case of the accretion disk model the predicted EUV ux distribution was directly integrated. The resulting uv x for RXJ1239.3+2431 and RX J1225.7+2055 are given in Tab. 5. The calculations were done in the objects' restframes.
We nd that each model can account for the observed H luminosity, except for RXJ1225.7+2055, for which the accretion disk model results only in Q Q LH . This relation would imply a covering factor of unity for the BLR gas, thus blocking our view to the X-ray source which is in contradiction to what is observed. This low Q value results from the fact, that the predicted EUV bump does not match the extrapolated observed ux at the Lyman limit but falls short by two orders of magnitude.
Under the assumption that Q has a value between the above deduced lower limit and an upper limit estimated with the attest possible optical-UV power law (with no additional bump in the unobserved EUV region) we discuss the resulting properties of the warm absorber in item 3 of section 3.2.3.
3.2.3. Implications from the X-ray spectral ts Three distinctly di erent spectral distributions are possible to explain the X-ray data. Either these systems have single power law spectra extending with the same slope towards higher energies or they have (blackbody-like) excess emission on top of a standard Seyfert power law spectrum. Alternatively, a steep spectrum in the soft X-ray region can result from warm absorption of an intrinsically at spectrum. While the X-ray data alone are not su cient to decide between these possibilities in our cases, it might be instructive to explore them in some more detail. 1. Single steep power law: As stated above, single power law spectra t our X-ray data well. With the resulting best-t photon indices these new sources belong to the steep (soft) end of their population known so far. As a cautious note we point out that the energy coverage together with the limited energy resolution of the ROSAT PSPC does not allow to unambiguously constrain multicomponent spectral models (except possibly for very high signal-to-noise ratio observations of a few bright objects). This often serves, quite correctly, as justi cation for using a single power law model for the spectral tting of ROSAT AGN data. However, the nding of a steep spectrum (high photon index) does not imply that the X-ray spectrum continues to be steep outside the ROSAT band, especially above 2.5 keV. Therefore, high photon indices derived from simplistic description of a phenomenon which can be any of the above mentioned three di erent spectral distributions. Thus, measuring a broader spectral range is essential to reveal the true energy distribution. While for RX J1225.7+2055 the slope of the X-ray power law can be easily imagined to be extended through the UV and smoothly matching the optical spectral slope, the situation is quite di erent for RX J1239.3+2431 (see Fig. 8). Here, the X-ray intensity at 0.1 keV is similar to (or even exceeds) the extrapolated intensity at the Lyman limit. Thus, one would have to invoke a completely at spectrum between these two energies which in turn would imply two unnatural breaks. We therefore conclude that the single steep power law is an unsatisfactory model for the combined X-ray and optical data. 2. Soft excess on at power law: Given the observational evidence of the presence of a blue/UV bump, a more natural interpretation of the X-ray spectra than steep power laws would assume an additional soft component on top of a power law of typical slope (photon index of {1.9). Since this soft component often is related with the emission of an optically thin accretion disk (e.g. Czerny and Elvis (1987), Madau (1988)), we have used a disk blackbody model as soft component.
Corresponding to the low statistics we have xed the mass of the central object thus tting only the accretion rate and the normalization. We have performed several ts with the ( xed) mass between 10 4 and 10 9 M and veri ed that (1) the ratio between accretion rate and mass (the maximum temperature of the disk) is always the same and (2) that the intensity in the power law component does not change. The maximum temperature of the disk as given in Tab. 4 turns out to be very similar to the temperatures of single blackbody ts.
The extrapolation of the disk blackbody spectrum in its standard form (Shakura and Sunyaev 1973) towards the UV and optical range is problematic due to several e ects (see extensive discussion in Greiner et al. 1994). We only note here that the observed optical ux is higher by a factor 10{500 than the extrapolation of the best t models for all systems. 3. Flat power law with warm absorption: Another possibility to produce a steep spectrum in the soft X-ray region is via warm absorbing material along the line of sight to the central continuum source. In such warm gas (T of the order of 10 5 K) highly ionized metal ions imprint ionization edges onto the soft X-ray continuum passing the gas. These edges have been seen in the spectra of several Seyfert galaxies. One expects to also see objects with even stronger warm absorption, i.e. deep edges which recover only at hard energies above 2.5 keV, leaving mainly the \down-turning" part visible in the ROSAT spectral band.
nario (as in other proposed models) it is not straightforward to explain the narrowness of the Balmer lines in NLSy1s. On the other hand, considering the scatter in the FWHM H { x correlation, NLSy1s might well be a heterogeneous group with more than one mechanism at work to make them look di erent from`normal' Sy1s. The best-t column densities, log N w 22.8 { 23.2, are rather similar for the three objects, whereas the ionization parameters, log U {0.1 { 0.8 vary by nearly an order of magnitude. However, both come with large errors. The range in U indicates a higher state of ionization than is typically found in the`usual' BLR, consistent with the warm absorbers observed so far. Taking the mean Q (as de ned in section 2.5. and estimated from the multi-wavelength SED in section 3.2.2), i.e. 4.2 10 56 for RXJ1225.7+2055 and 8.8 10 54 for RXJ1239.3+2431, the absorber-intrinsic H emission for a thin shell geometry with full covering is calculated to be about 10 41:89 for RX J1225.7+2055 and 10 41:23 for RXJ1239.3+2431. This corresponds to 1/10 and 1/7 of the observed L H , respectively. Scaling the predicted Fe14] 5303 emission of the warm material in order not to con ict with the observed upper limit (see Fig. 2) constrains the covering factor of the gas to 1/6 in RXJ1239.3+2431 and is consistent with 1 in RXJ1225.7+2055. Given the high discovery rate of supersoft AGN among X-ray selected ones (section 3.3), the covering factors indeed have to be high to account for this fact.
The two composite spectra including a at power law model (i.e. above items 2 and 3) have two important advantages: The at power law with the Seyfert-1 typical index x ={1.9, as used in our modelling, is consistent with the nonthermal pair models usually invoked to explain the X-ray spectrum in Seyferts, which naturally predict this spectral slope (e.g. Svensson 1994). Moreover, the favourite model for producing the FeII emission, i.e. via reprocessing of X-rays deep within the BLR clouds, could be reconciled much easier with a at instead of a steep soft X-ray spectrum.
The major di erence between accretion disk and warm absorber model, as far as the ux distribution is concerned, is demonstrated in Fig. 8. These di erent EUV SEDs would be expected to di erently in uence the BLR (and NLR) line emission (although both cannot easily explain the absence/weakness of the usual NLR lines, if they are assumed to illuminate an otherwise normal, i.e. Seyfert-like, NLR (Komossa and Greiner, 1995)). The second major di erence is the di erent intensity of the hard X-ray ux at a few keV. The X-ray intensity of the warm absorber model is nearly one order of magnitude larger than that of the at power law added to the disk blackbody component. This prediction should be tested with inate between these two models.
As a consequence, the higher hard X-ray continuum of the warm absorber model relative to an accretion disk continuum may in uence the strength of the FeII emission. It is interesting to note that the ratio in the integrated intrinsic luminosities (for both, the warm absorber and the accretion disk model) between RX J1239.3+2431 and RX J1225.7+2055 of a factor of 10 is similar to the ratio of the integrated FeII ( 4570 A complex) luminosities.
Source statistics
The 100 square degree eld searched systematically in the ROSAT all-sky-survey data is located at a galactic latitude of bII = 77{88 degree. Correspondingly, more than half of the detected 238 X-ray sources are of extragalactic nature. The new soft AGN reported here are the softest extragalactic sources in this sample, thus representing the soft end of the AGN spectral distribution. Though the optical identi cation of this sample is not yet complete the following estimate can be made on the expected number of new supersoft AGN to be discovered in the ROSAT allsky-survey data: The number of supersoft AGN according to the selection criteria given in section 2.1. divided by the searched area results in a number density of 0.04 per square degree. Excluding a 20 degree zone around the galactic equator one might expect on the order of 1500 supersoft AGN in the ROSAT survey which is about 7% of the number of expected AGN of all subclasses. Though the above numbers are small and admittedly uncertain, they are considered to be a lower limit, however, because this estimate of the number density of supersoft AGN is based only on sources with more than 80 photons (one of our selection criteria) while the expected number of the total ROSAT survey AGN population ( 25000) includes all survey sources with more than typically 10 photons.
Taking into account the intensity cut in our selection, the above percentage of supersoft AGN readily increases. Out of the 238 X-ray sources in our eld there are 20 sources with more than 80 counts which divide up into 8 stars and 12 extragalactic objects, mainly AGN. Among these, the herewithin reported four supersoft AGN are equally distributed according to X-ray intensity, i.e. the relative number density of supersoft AGN is independent of X-ray intensity. Any extrapolation to faint objects has to scope with two major selection e ects: First, uneven absorption over the sky would a ect the detection probability more severely at low galactic latitudes. Second, fainter (and thus more distant) objects are typically harder due to their higher redshift. Both these e ects work against the detection of supersoft AGN, thus leaving the following estimates still lower limits. Thus, ignoring for the moment these biases and extrapolating this number density downwards to the all-sky-survey detection threshold would imply that basically every third X-ray selected AGN is super-Having about 120 AGN among the 238 sources, the 12 AGN with more than 80 counts represent only 10%. With the above extrapolation we expect to have 40 instead of 4 supersoft AGN in our 100 square degree eld. (We note that this eld has a mean exposure time of about 450 sec, and thus the uneven exposure of the sky during the ROSAT all-sky-survey does not invalidate this extrapolation.) With these numbers the conclusion seems justi ed that supersoft X-ray spectra are common among X-ray selected AGN samples and do not only constitute the \soft tail" of the AGN spectral distribution.
Conclusions
We have discovered four AGN in the ROSAT all-skysurvey data with very steep X-ray spectra. We identify three of these objects as Narrow Line Seyfert 1 galaxies, and the fourth as BL Lac object. If one requires one model to explain the optical to X-ray energy distribution of all three NLSy1s then the warm absorber model is preferred. In this case, an additional EUV spectral component would have to be assumed in RXJ1225.7+2055 while in the other two sources the extrapolation of the warm absorber model is consistent with the observed optical intensities. If di erent models are allowed for di erent sources then a single steep power law model seems to be the simplest explanation for the optical to X-ray energy distribution in RX J1225.7+2055, and a warm absorber model for RXJ1239.3+2431.
The small{FWHM H lines, strong FeII emission and weak OIII] emission in the three Narrow Line Seyfert 1 galaxies is in line with known correlations with respect to the steepness of the X-ray spectra in AGN. While one object shows particularly strong FeII emission (in terms of ux and luminosity), its FeII/H ratio is usual. This indicates that the FeII/H ratio is not a well suited observational indicator of the \FeII problem".
We have discovered strong optical variability in the BL Lac object and two of the Seyfert galaxies using photographic plates of the eld patrol of Sonneberg Observatory (the third NLSY1 is too weak for a study on archival plates). All objects show strong short-term variability (few days or less). The long-term variation in RX J1225.7+2055 seems to be periodic or quasi-periodic with three cycles of 11 yrs covered by the data.
Extrapolating the number of supersoft AGN found in the 10 10 degree eld under the assumption that they are equally distributed with respect to observable X-ray uxes, we conclude that about 30% of all X-ray selected AGN could be supersoft. While we nd three of the four objects to be NLSy1s, the distribution of supersoft AGN over the di erent sub-classes remains to be determined by optical follow-up studies of larger samples of ROSAT sources. | 2019-04-14T01:49:27.018Z | 1995-12-04T00:00:00.000 | {
"year": 1995,
"sha1": "9acacebaf88747227a34c63118fbc2e09e5a895b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "53cca379bbb54d77b9a4ace392d0b0e5d96dc560",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
119128906 | pes2o/s2orc | v3-fos-license | Supersymmetric V-systems
We construct ${\mathcal N}=4 \,$ $\, D(2,1;\alpha)$ superconformal quantum mechanical system for any configuration of vectors forming a V-system. In the case of a Coxeter root system the bosonic potential of the supersymmetric Hamiltonian is the corresponding generalised Calogero-Moser potential. We also construct supersymmetric generalised trigonometric Calogero-Moser-Sutherland Hamiltonians for some root systems including $BC_N$.
Introduction
Calogero-Moser Hamiltonian is a famous example of an integrable system [9,38,43] which is related to a number of mathematical areas (see e.g. [11]). Generalised Calogero-Moser systems associated with an arbitrary root system were introduced by Olshanetsky and Perelomov [40], [41]. N = 2 supersymmetric quantum Calogero-Moser systems were constructed in [21] and considered further in [7]. They were generalised to classical root systems in [8] and to an arbitrary root system in [5].
A motivation for construction of N = 4 Calogero-Moser system goes back to the work [25] on a conjectural description of near-horizon limit of Reissner-Nordström black hole where appearance of su(1, 1|2) superconformal Calogero-Moser model was suggested. Though we also note more recent different considerations of near extremal black holes in [35]. Another motivation to study supersymmetric (trigonometric) Calogero-Moser-Sutherland systems comes from the relation of these systems with conformal blocks and possible generalisation of these relations to the supersymmetric case [29].
Wyllard gave an ansatz for N = 4 supercharges in [45]. In general Wyllard's ansatz depends on two potentials F and W . He constructed su(1, 1|2) N particle Calogero-Moser Hamiltonian for a single value of the coupling parameter c = 1/N as bosonic part of his supersymmetric Hamiltonian with W = 0. Wyllard argued that his ansatz does not produce superconformal Calogero-Moser Hamiltonians for general values of c. Necessary differential equations for F and W were derived in [45]. Thus potential F satisfies generalised Witten-Dijkgraaf-Verlinde-Verlinde (WDVV) equations (in the form of [36]) as it was pointed out in [1]. Wyllard's potential F has the form F = γ∈A (γ, x) 2 log(γ, x), (1.1) where A is the root system A N −1 . Examples based on root systems A = G 2 , B 3 were also considered in [45]. Solutions F to WDVV equations of this type appear also in Seiberg-Witten theory [36] and in theory of Frobenius manifolds [10]. More generally, Veselov introduced the notion of a ∨-system in [44]. ∨-systems form special collections of vectors in a linear space, which satisfy certain linear algebraic conditions. A logarithmic prepotential (1.1) corresponding to a collection of vectors A satisfies WDVV equations if A is a ∨-system. The class of ∨-systems contains Coxeter root systems, deformations of generalized root systems of Lie superalgebras, special subsystems in and restrictions of such systems [18,42]. A complete description of the class remains open (see [19] and references therein).
Several attempts have been made to construct supersymmetric mechanics such that the corresponding Hamiltonian has bosonic potential of Calogero-Moser type with a reasonably general coupling parameter(s). Wyllard's ansatz for N = 4 supercharges was extended to other root systems in [23], [24] where solutions for a small number of particles were studied both for W = 0 and W = 0. In particular, su(1, 1|2) superconformal Calogero-Moser systems related to A = A 1 ⊕ G 2 , F 4 and subsystems of F 4 were derived. Superconformal su(1, 1|2) Calogero-Moser systems for the rank two root systems were derived in [3] via suitable action in the superspace. For the WDVV equations arising in the superfield approach we refer to [31].
A many-body model with D(2, 1; α) supersymmetry algebra with α = − 1 2 was considered in [12]. This model was obtained by a reduction from matrix model and it incorporates an extra set of bosonic variables ("U(2) spin variables") which enter the bosonic potential of the corresponding Hamiltonian. One-dimensional version of such a model was considered in [14] and, for any α, in [2], [13]. A generalisation of the many-body classical spin superconformal model for any value of the parameter α was proposed in [30]. Within D(2, 1; α) supersymmetry ansatz of [30] a class of bosonic potentials was obtained in [17]. The potential F has the form (1.1) for a root system A. Then W is a twisted period of the Frobenius manifold on the space of orbits corresponding to the root system A. Such polynomial twisted periods were described in [17], they exist for special values of parameter α. Although the corresponding bosonic potentials are algebraic this class does not seem to contain generalised Calogero-Moser potentials associated with A.
Recently a construction of type A N −1 supersymmetric (classical) Calogero-Moser model with extra spin bosonic generators and N N 2 fermionic variables (for any even N ) was presented in [32]. The ansatz for supercharges is more involved and extra fermionic variables appear due to reduction from a matrix model. A related quantum N = 4 supersymmetric spin A N −1 Calogero-Moser system was studied recently in [16]. Furthermore, a simpler ansatz for supercharges for the spin classical A N −1 Calogero-Moser system was presented in [33]. This model has 1 2 N N(N + 1) fermionic variables and the supersymmetry algebra is osp(N |2). Most recently classical supersymmetric osp(N |2) Calogero-Moser systems were presented in [34]; these models have nonlinear Hermitian conjugation property of matrix fermions and supercharges are cubic in fermions.
In the current work we present two constructions of supersymmetric N = 4 quantum mechanical system starting with an arbitrary ∨-system. In the case of a Coxeter root system A the bosonic part of the Hamiltonian is the Calogero-Moser Hamiltonian associated with A introduced by Olshanetsky and Perelomov in [41], which we get in two different gauges: the potential and potential free ones. In the latter case the Hamiltonian is not formally self-adjoint; this gauge comes from the radial part of the Laplace-Beltrami operator on symmetric spaces [4,26,41]. The superconformal algebra is D(2, 1; α) where α depends on the ∨-system and is ultimately related with the coupling parameter in the resulting Calogero-Moser type Hamiltonian. We use original ansatz for the supercharges [45], [23] based on the potentials F , W and we take W = 0. In the special case when α = −1 the superalgebra D(2, 1; −1) contains the superalgebra su(1, 1|2) as its subalebra, and our first ansatz on the su(1, 1|2) generators reduces to the one considered in [23,24]. It was emphasised in [24] that such quantum models with W = 0 are non-trivial with bosonic potentials proportional to squared Planck constant, though they were not considered in more detail in [24]. Thus we extend considerations in [24] for W = 0 to the case of superconformal algebra D(2, 1; α), and we get in this framework quantum Calogero-Moser type systems associated with an arbitrary ∨-system, which includes Olshanetsky-Perelomov generalisations of the Calogero-Moser system with arbitrary invariant coupling parameters.
We also consider generalised trigonometric Calogero-Moser-Sutherland systems related to a collection of vectors A with multiplicities. We include these Hamiltonians in the supersymmetry algebra provided that extra assumptions on A are satisfied which are similar to WDVV equations for the trigonometric version of the potential F . We show that these assumptions can be satisfied when A is an irreducible root system with more than one orbit of the Weyl group, that is BC N , F 4 and G 2 cases. A related solution of WDVV equations for the root system B N was obtained in [27].
The structure of the paper is as follows. We recall the definition of the Lie superalgebra D(2, 1; α) in Section 2. We give two types of representations of this superalgebra in Sections 3, 4. Starting with any ∨-system we get two corresponding supersymmetric Hamiltonians. In Section 5 we present them explicitly. We consider supersymmetric trigonometric Calogero-Moser-Sutherland systems in Section 6.
Let us recall the definition of the family of Lie superalgebras D(2, 1; α), which depends on a parameter α ∈ C (see e.g. [20,Section 20]). The algebra has 8 odd generators Q abc and 9 even generators T ab = T ba , I ab = I ba , J ab = J ba (a, b, c = 1, 2). Elements T ab , I ab and J ab generate three pairwise commuting sl(2) algebras.
For example, ǫ f (a Q |cd|b) = 1 2 ǫ f a Q cdb + ǫ f b Q cda . We also have relations for all a, b, c, d, e, f = 1, 2. Let us rename generators as follows: We will use ǫ ab and ǫ ab to lower and raise indices, e.g. Q a = ǫ ab Q b ,Q a = ǫ abQ b . We consider N (quantum) particles on a line with coordinates and momenta (x j , p j ), j = 1, . . . , N to each of which we associate four fermionic variables {ψ aj ,ψ j a |a = 1, 2}. We will also write x = (x 1 , . . . , x N ), p = (p 1 , . . . , p N ).
We assume the following (anti)-commutation relations (a, b = 1, 2; j, k = 1, . . . , N): Thus one can think of p k as p k = −i ∂ ∂x k . We introduce further fermionic variables by They satisfy the following useful relations: We will be assuming throughout that summation over repeated indices takes place (even when both indices are either low or upper indices) unless it is indicated that no summation is applied.
Let F = F (x 1 , . . . , x N ) be a function such that where F rjk = ∂ 3 F ∂xr∂x j ∂x k for any r, j, k = 1, . . . , N. We assume that all the derivatives F rjk are homogeneous in x of degree -1. Furthermore, we assume that F satisfies the following Witten-Dijkgraaf-Verlinde-Verlinde equations (WDVV) equations for any r, j, k, m, n = 1, . . . , N.
The following relations for arbitrary operators A, B, C will be useful: We are going to present two representations of D(2, 1; α) algebra using F .
Let us firstly check relations (2.3), (2.4) involving generators J ab and I ab .
Proof. We consider the commutator which implies the statement.
We will use the following relations: Thus by using the first relation in (3.11) [I 11 , I 12 ] = ψ k b ψ bk = iI 11 . Similarly, . Hence, by using the latter relation in (3.11) [I 22 , I 12 ] =ψ bkψk b = −iI 22 , and hence the statement follows.
In what follows, we will use the following relation: Lemma 3.4 (cf. [23]). Let Q abc , J ab be as above. Then the relations (2.4b) hold.
Proof. Firstly let us note that the sum of the last two terms in (3.14) is anti-symmetric in a and b and J ab = J ba . Therefore we have by applying (3.14) Therefore we get from (3.13) and (3.15) that as required in (2.4b). Further, we consider which coincides with the corresponding relation in (2.4b). The remaining relations can be proven similarly. (3.20) By reordering terms in (3.20) we obtain Note that F lmnψ l cψ an ψ cm = 0 if c is fixed such that c = a. Hence (3.21) can be rearranged as −2F lmnψ l aψ am ψ an which is also equal to −F lmnψ l bψ bm ψ an . Therefore Therefore, with the help of (3.13) we get which matches with (2.4c).
Let us now consider the generator Q 11a . Firstly, it is immediate that [I 11 , Q 11a ] = 0, as required. In addition, we have by (3.19) that as required for (2.4c). The remaining relations in (2.4c) can be checked similarly.
In the following theorem we will use the identity We will use the following relations. We have by (2.11) and (2.13) and similarly, where the Hamiltonian H is given by Similarly, using (3.26) we obtain Note thatψ al ψ n c F lnr p r + ψ r cψ ak F rkj p j = 1 2 δ ln δ ac F lnr p r . Then, after canceling out terms and simplifying we have In particular, we note that using the symmetry of F ljk we have that Using the symmetry ∂ r F ljk = ∂ l F rjk and F ljk = F kjl it follows from (3.31), (3.32) and (3.33) that the sum of expressions in (3.31) and (3.32) vanishes if a = c. Therefore we get from (3.31), (3.32), (3.33) that Note that here a = a. Therefore the right-hand side of (3.34) equals Therefore in total expression (3.30) becomes Finally, let us consider the term {B, B ′ }. We first show that By using (3.24) we obtain Then using the symmetry of F lmn under the swap of l and m we obtain Note that by (2.6), (2.8) we have Further on by (2.10) we have F rjk F rmn = F rnk F rmj and therefore some terms in the righthand side of (3.38), (3.39) enter the relation Then by using (3.38)-(3.40) and the symmetry of F rjk under the swap of r and j we obtain Note that for c = a we have C = 0, since F rjk F lmn δ jl ψ br ψ n cψ m bψ ak = 0 by using (2.10). Further on, if c = a then by using (2.10) we have which is equal to 1 4 F rjk F klm ψ br ψ j bψ l dψ dm because of relations (3.35). This proves that C = 0. Then the term {B, B ′ } takes the following form: Therefore, the statement follows. Proof. Firstly, we have that which is the corresponding relation (2.2). Lemma 3.8. Let Q abc , I ab , T ab , J ab be as above. Then relations (2.1) hold.
Proof. Firstly let us consider where a is complimentary to a. Note that we can assume now that a = f . Therefore Therefore by formula (2.9) as required for the corresponding relation (2.1).
Let us now note that Hence the right-hand side of (2.1) for {Q 21a , Q 12b } is By considering various values of a, b ∈ {1, 2}, expression (3.45) takes the form Note that by (2.12), (2.13) we have Note also that ψ ar ψ al ∂ r F lmn = 0 using the symmetry of ∂ r F lmn under the swap of r and l.
Then ψ ar ψ dl ψ m dψ cn ∂ r F lmn = 0 and hence Similarly, Therefore using the symmetry of F rjk under the swap of j and r, and that of F lmn under the swap of l and m we obtain Further, note that for any b ∈ {1, 2} we have by using (2.10) that F lmr F rjk ψ dl ψ m d ψ bj = 0. Hence the right-hand side of (3.50) vanishes. Therefore it follows that Therefore by considering various values of a, b ∈ {1, 2} and by using Lemma 3.8 and Theorem 3.6 we obtain the following: which are the corresponding relations (2.5).
Similarly we have By Lemma 3.4 we have Therefore by considering various values of a, b ∈ {1, 2} we obtain:
The second representation
Let now the supercharges be of the form (3.37)). Therefore in total, we get that and hence the statement follows. Proof. Firstly, we have that as required. Moreover we have [F rmn p r , x j p j ] = −iF rmn p r + ix j ∂ j F rmn p r = −2iF rmn p r . Then it is easy to see that [H, D] = iH, as required. Further on, [K, D] = − 1 2 [x 2 , x j p j ] = iK, which is the corresponding relation (2.2).
We note that since I and J keep the same form as in the first representation, the statement of the Lemmas 3.2, 3.3 hold.
and the right-hand side of (2.1) becomes (cf. (3.46)) {Q 21a , Q 12b } = x r p r ǫ ab + 4iαψ (arψbr) − 2i(1 + α)ǫ ab (ψ 2rψ1r − ψ 1rψ2r ) = −2iψ arψbr + x r p r ǫ ab + 2i(1 + 2α)ψ brψar , which is equal to (4.8) as required. The remaining relations can be checked similarly. Let us recall that from the proof of Lemma 3.9 (formula (3.53)) we have Therefore an analogue of (3.54) takes the form The proof of the lemma is the same as the proof of Lemma 3.10 for the first representation since I ab and J ab keep the same form, and the proof of commutation relations with H in Lemma 3.10 relies only on relations (2.1) which express H as the anticommutator of the supercharges Q a andQ a .
Hamiltonians
We now proceed to explicit calculations of Hamiltonians appearing in Theorem 3.6 and Theorem 4.1. We start with a Coxeter root system case.
Coxeter systems.
In this case we take R to be a Coxeter root system in V ∼ = R N [28]. More exactly, let R be a collection of vectors which spans V and is invariant under orthogonal reflections about all the hyperplanes (γ, x) = 0, γ ∈ R, where (·, ·) is the standard scalar product in V . We also assume that R can be decomposed as a disjoint union of its subsets R + and −R + such that each subsystem R + and R − contains no collinear vectors. Furthermore, let us assume that squared length (γ, γ) = 2 for any γ ∈ R, and that R is irreducible. Non-equal choices of length of roots in the cases when the Coxeter group has two orbits on R are covered by considerations in Subsection 5.2 below.
The corresponding function F has the form where λ ∈ C. It is established in [37], [44] that F satisfies generalized WDVV equations (2.10). Recall the following property.
where h is the Coxeter number of R.
Lemma 5.1 has the following corollary.
Lemma 5.2. Let F be given by (5.1). Then Proof. Let γ ∈ R have coordinates γ = (γ 1 , . . . , γ N ). By Lemma 5.1 we have The following identity will be useful below: It follows from the observation that the left-hand side is non-singular at all the hyperplanes (β, x) = 0, β ∈ R + . Let us choose now Then hλ = −(2α+1), so by Lemma 5.2 function F satisfies the required condition (2.9). Thus it leads to D(2, 1; α) superconformal mechanics with the Hamiltonians given by Theorems 3.6, 4.1. We now simplify these Hamiltonians for the root system case.
Theorem 5.3. Let function F be given by (5.1). Then the Hamiltonian H given by (3.27) is supersymmetric with the superconformal algebra D(2, 1; α), where α is given by (5.3). The rescaled Hamiltonian H 1 = 4H has the form where ∆ = −p 2 is the Laplacian in V and the fermionic term Proof. By formula (3.27) we have that where potential Let us firstly simplify U. We have Then because of identity (5.2). The statement follows from formulas (5.5), (5.6).
The following theorem can be easily checked directly.
Proposition 5.5. Hamiltonians H 1 , H 2 from Theorems 5.3, 5.4 satisfy gauge relation The proof follows immediately by making use of the identity (5.2).
Remark 5.6. We note that the Hamiltonian H 2 is not self-adjoint under hermitian involution defined by Note that since F rjk ψ k aψ r bψ bj = F rjk (ψ r bψ bj ψ k a −ψ r a δ kj ) we may express (Q a ) † in terms ofQ a (see (4.2)) as follows (Q a ) † =Q a − iF lmnψ l a δ nm . We then have (4.7). Then supersymmetry algebra constraint {Q a , (Q c ) † } = −2δ a c H leads to restrictions α = − 1 2 , or α = − h+2 4 . In both cases the bosonic part of the Hamiltonian H can be seen to be zero.
5.2.
General ∨-systems. Let us consider a finite collection of vectors A in V ∼ = C N such that the corresponding bilinear form Let us recall what it means that A is a ∨-system [44]. We can assume by applying a suitable linear transformation to A that for any u, v ∈ V . In this case A is a ∨-system if for any γ ∈ A and for any two-dimensional plane π ⊂ V such that γ ∈ π one has β∈A∩π (β, γ)β = µγ, for some µ = µ(γ, π) ∈ C.
Let F = F A (x 1 , . . . , x N ) be the corresponding function where λ ∈ C. Then F satisfies generalised WDVV equations (2.10) (see [44]). Furthermore, the condition Therefore this leads to D(2, 1; α) superconformal mechanics with the Hamiltonians given by Theorems 3.6, 4.1, which we present explicitly in the following theorem.
Theorem 5.7. Let function F be given by (5.7). Then the Hamiltonian H given by (3.27) is supersymmetric with the superconformal algebra D(2, 1; α), where α = − 1 2 (λ + 1). The rescaled Hamiltonian H 1 = 4H has the form where ∆ = −p 2 is the Laplacian in V and the fermionic term Furthermore, the Hamiltonian H given by (4.4) is also supersymmetric with the superconformal algebra D(2, 1; α), where α = − 1 2 (λ + 1) and the rescaled Hamiltonian H 2 = 4H has the form The proof is similar to the one in the Coxeter case. The following proposition can also be checked directly.
Trigonometric version
In this section we consider prepotential functions F = F (x 1 , . . . , x N ) of the form where A is a finite set of vectors in V ∼ = C N , c α ∈ C are some multiplicities of these vectors, and function f is given by so that f ′′′ (z) = coth z.
We are interested in the supercharges of the form Q a = p r ψ ar + iF rjk ψ br ψ j bψ ak , Q c = p lψ l c + iF lmn ψ l dψ dm ψ n c , a, c = 1, 2, which is analogous to the first representation considered in Section 3.
Function F should satisfy conditions for all r, j, m, n = 1, . . . , N but we no longer assume conditions (2.9). Then we have the following statement on supersymmetry algebra.
Theorem 6.1. Let us assume that F satisfies conditions (6.2). Then for all a, b = 1, 2 we have where the Hamiltonian H is given by Furthermore, the rescaled Hamiltonian H 1 = 4H has the form where ∆ = −p 2 is the Laplacian in V and the fermionic term The proof of the first part of the theorem is the same as the proof of Theorem 3.6 together with the proof of the relevant part of Lemma 3.8. The proof of formula (6.3) is similar to the proof of Theorem 5.3.
Let us now consider supercharges of the form Q a = p r ψ ar + iF rjk ψ br ψ j bψ ak , Q c = p lψ l c + iF lmnψ l dψ dm ψ n c , a, c = 1, 2, which is analogous to the second representation considered in Section 4. Then we have the following statement on supersymmetry algebra. Theorem 6.2. Let us assume that F satisfies conditions (6.2). Then for all a, b = 1, 2 we have where the Hamiltonian H is given by Furthermore, the rescaled Hamiltonian H 2 = 4H, has the form
The proof of the first part of the theorem is the same as the proof of Theorem 4.1 together with the proof of the relevant part of Lemma 4.4. Then formula (6.6) can be easily derived from the form (6.5) of H.
Let us now assume that A = R is a crystallographic root system, and that the multiplicity function c(α) = c α , α ∈ R is invariant under the corresponding Weyl group W . For a general root system R the corresponding function F does not satisfy equations (6.2). For example, if R = A N −1 then relations (6.2) do not hold. But for some root systems and collections of multiplicities relations (6.2) are satisfied.
In the rest of this section we consider the cases when a prepotential F satisfying (6.2) does exist. The corresponding root systems R have more than one orbit under the action of the Weyl group W. We start by simplifying the corresponding Hamiltonians H_1 given by (6.3). Proposition 6.3. Let us assume that the prepotential F given by (6.1) for a root system R with invariant multiplicity function c satisfies (6.2). Then the Hamiltonian (6.3) can be rearranged into the form (6.7) with fermionic term Φ̃ = Φ + const, where Φ is given by (6.4) and R_+ is a positive subsystem in R.
Indeed, it is easy to see that for the crystallographic root system R the sum over pairs β, α ∈ R with β ∼ α is non-singular at tanh(α, x) = 0 for all α ∈ R, hence it is constant. One can then show that the Hamiltonian H_1 given by (6.3) simplifies to the required form. We now show that solutions to equations (6.2) exist for the root systems R = BC_N, R = F_4 and R = G_2, with special collections of invariant multiplicities.
Let R_+ be a positive subsystem in the root system R. For a pair of vectors a, b ∈ V we define a 2-form B^{(a,b)}_{R_+} (see (6.8) below). The form B^{(a,b)}_{R_+} has good properties with regard to the action of the corresponding Weyl group W. Namely, the following statement takes place.
Proposition 6.4. The equalities (6.9) hold for any w ∈ W.
Proof. Let us choose a simple root α ∈ R_+. It is sufficient to prove the statement for w = s_α. Let us rewrite B^{(a,b)}_{R_+} accordingly. It is easy to see that for any β, γ ∈ R, B_{β,γ}(s_α a, s_α b) = B_{s_α β, s_α γ}(a, b) (6.10), since (u, s_α v) = (s_α u, v) for any u, v ∈ V. Applying s_α to equality (6.8) and using relation (6.10) proves the first equality in (6.9). The second equality in (6.9) follows by computing s_α B^{(a,b)}_{R_+} directly.

Let us derive some conditions for a function F to satisfy equations of the form (6.2). Let F_i be the N × N matrices of third derivatives of F, (F_i)_{lm} = ∂³F/∂x_i∂x_l∂x_m, and for any vector a = (a_1, . . . , a_N) ∈ V let us denote F_a = Σ_{i=1}^N a_i F_i. Theorem 6.5. Let a, b ∈ V. Then the equations [F_a, F_b] = 0 are equivalent to the condition (6.11). Indeed, the equations [F_a, F_b] = 0 are equivalent to Σ_{α,β∈R} c_α c_β B_{α,β}(a, b)(α, β) coth(α, x) coth(β, x) α ⊗ β = 0, which can easily be checked to be equivalent to (6.12). It is easy to see that the sum on the left-hand side of equality (6.12) is non-singular at tanh(α, x) = 0 for all α ∈ R_+, hence this sum is always constant. In an appropriate limit in a cone coth(α, x) → 1 for all α ∈ R_+, and therefore equality (6.12) is equivalent to the equality (6.13).

Let e_i, i = 1, . . . , N, be the standard orthonormal basis in V. We may express B^{(a,b)}_{R_+} = Σ_{i,j} g_{ij} e_i ∧ e_j for some scalars g_{ij} = g_{ij}(a, b). Then linear independence of the basis vectors and condition (6.11) give rise to N² equations g_{ij}(a, b) = 0. If A_{N−1} ⊂ R then by Proposition 6.4 we should have g_{ij}(a, b) = ±g_{σ(i)σ(j)}(σ(a), σ(b)) for any transposition σ ∈ S_N, which acts on the vectors a, b by the corresponding permutation of coordinates. This shows that condition (6.11) reduces to a single equation g_{ij} = 0 for any fixed i, j and general a, b ∈ V. For convenience we will write B_{e_i,e_j}(a, b) below as B_{ij}(a, b).

Theorem 6.6. Let R = BC_N. Let the positive half of the root system BC_N be {η e_i, 2η e_i (1 ≤ i ≤ N), η(e_i ± e_j) (1 ≤ i < j ≤ N)}, where η ∈ C^× is a parameter. Let r be the multiplicity of the vectors η e_i, let s be the multiplicity of the vectors 2η e_i, and let q be the multiplicity of the vectors η(e_i ± e_j). Then the function (6.14) satisfies conditions (6.2) if and only if r = −8s − 2(N − 2)q. The corresponding supersymmetric Hamiltonians given by (6.6), (6.7) take the forms (6.15) and (6.16), with Φ given by an expression involving d_{mtk} = d_{mtk}(ǫ) = δ_{mk} + ǫδ_{tk}, and Φ̃ = Φ + const.
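For concreteness, specialising the trigonometric prepotential F = Σ_{α∈R_+} c_α f((α, x)) to the BC_N data above yields the following expression, a sketch consistent with the stated multiplicities (normalisation conventions may differ from those of (6.14)):
\[
F \;=\; r\sum_{i=1}^{N} f(\eta x_i)\;+\;s\sum_{i=1}^{N} f(2\eta x_i)\;+\;q\sum_{1\le i<j\le N}\Bigl(f\bigl(\eta(x_i-x_j)\bigr)+f\bigl(\eta(x_i+x_j)\bigr)\Bigr),
\]
with f‴(z) = coth z as before.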
Proof. By Proposition 6.4, g_{ij} = 0 for all 1 ≤ i < j ≤ N if and only if r = −8s − 2(N − 2)q. The form of the Hamiltonians H_2, H_1 follows from Theorem 6.2 and Proposition 6.3, respectively. Then the statement follows.
Remark 6.7. We note that for the multiplicity s = 0 Theorem 6.6 is contained in [27]. Indeed, Theorem 2.3 in [27] states that the function F given by formula (6.14) with root system R = B N satisfies WDVV equations. It also follows from the proof of Theorem 2.3 in [27] that the corresponding metric is proportional to the standard metric δ ij . Therefore WDVV equations are equivalent to equations (6.2).
The analogous statement for R = F_4 (Theorem 6.8) is proved in the same way: by Proposition 6.4, g_{ij} = 0 for all 1 ≤ i < j ≤ 4 if and only if r = −2q or r = −4q, where r and q denote the invariant multiplicities of the two Weyl-group orbits of roots. The form of the Hamiltonians H_2, H_1 follows from Theorem 6.2 and Proposition 6.3. Then the statement follows.

Theorem 6.9. Let R = G_2 with positive roots α_1, . . . , α_6, where η ∈ C^× is a parameter. Let s be the multiplicity of the short roots α_i, i = 1, 2, 3, and let r be the multiplicity of the long roots α_j, j = 4, 5, 6. Then the corresponding function F satisfies conditions (6.2) if and only if s = −3r or s = −9r.

Proof. The coefficient at e_1 ∧ e_2 in the form B_{R_+} given by (6.8), (6.13) is a sum of terms 2c_{α_i} c_{α_j} (α_i, α_j) B_{α_i,α_j}(a, b)(α_i ∧ α_j, e_1 ∧ e_2), where (α_i ∧ α_j, e_1 ∧ e_2) = det(c_1, c_2) and c_k are the column vectors c_k = ((α_i, e_k), (α_j, e_k))^⊺, k = 1, 2.
By Proposition 6.4, g_{ij} = 0 for all 1 ≤ i < j ≤ 3 if and only if s = −3r or s = −9r. The form of the Hamiltonians H_2, H_1 follows from Theorem 6.2 and Proposition 6.3, respectively. Then the statement follows.
Remark 6.10. The bosonic part of the supersymmetric Hamiltonians (6.6), (6.7) becomes the Calogero-Moser Hamiltonian in the rational limit. For example, let us consider the case of the root system BC_N and introduce rescaled multiplicities s̃ = η²s, q̃ = η²q and r̃ = η²r in Theorem 6.6. Then in the limit η → 0 the bosonic parts of the Hamiltonians H_1 and H_2 given by (6.15), (6.16) become the rational B_N Hamiltonians H^{b,r}_1, H^{b,r}_2 with two independent coupling parameters, where l = 2((N − 2)q̃ + 2s̃). Thus the supersymmetric Hamiltonians (6.15), (6.16) can be viewed as an η-deformation of the rational superconformal Hamiltonians considered in Theorems 5.3, 5.4 for the root system R = B_N.
Concluding remarks
Since the work [45] there have been extensive attempts to define superconformal N = 4 Calogero-Moser type systems for sufficiently general coupling parameters and suitable superconformal algebras. Some low-rank cases were treated in [23], [24]. A number of works were devoted to superconformal extensions of Calogero-Moser systems where extra spin-type variables had to be present (see [15] for a discussion and a review). In the current work we presented superconformal extensions of the ordinary Calogero-Moser system with scalar potential, as well as its generalisations for an arbitrary ∨-system, which include the Olshanetsky-Perelomov generalisations of Calogero-Moser systems with arbitrary invariant coupling parameters. The superconformal algebra is D(2, 1; α), where the parameter α is related to the coupling parameter(s). It is crucial for our considerations that we deal with quantum rather than classical Calogero-Moser type systems.
We also presented supersymmetric non-conformal deformations of the Calogero-Moser type systems related to the root system B_N (which may be thought of as the Calogero-Moser system with boundary terms), as well as to some other exceptional root systems. It would be very interesting to see whether there are any relations between the considered systems and black holes (cf. [25] for the conjectural relation with supersymmetric Calogero-Moser systems, and e.g. [35], [39] and references therein for non-conformal deformations of AdS_2 black hole geometry).
All our considerations also extend to the non-self-adjoint gauge of the Calogero-Moser type Hamiltonians. There has been considerable interest in such non-self-adjoint but PT-symmetric bosonic Hamiltonians (see e.g. [22] and references therein). It would be interesting to see whether these Hamiltonians play a role in the context of supersymmetry.
It may also be interesting to clarify the integrability of the considered supersymmetric Hamiltonians. | 2019-02-27T20:42:54.000Z | 2018-12-06T00:00:00.000 | {
"year": 2018,
"sha1": "f7e7f49314ad0887c7ddb794ad409656a51e7e0d",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/JHEP02(2019)115.pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "f7e7f49314ad0887c7ddb794ad409656a51e7e0d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
256901073 | pes2o/s2orc | v3-fos-license | Locally stable degenerations of log Calabi-Yau pairs
We study the birational boundedness of special fibers of log Calabi-Yau fibrations and Fano fibrations. We show that for a locally stable family of Fano varieties or polarised log Calabi-Yau pairs over a curve, if the general fiber satisfies some natural boundedness conditions, then every irreducible component of the special fiber is birationally bounded.
Introduction
The notion of locally stable morphisms, first defined for surfaces in [KSB88] and defined in all dimensions in [Kol23], is very important in the moduli of varieties. By a locally stable degeneration of a set of varieties C, we mean the special fiber of a locally stable morphism whose general fiber is in C. It is natural to ask: if C is a bounded family, are all of its locally stable degenerations in a bounded family?
In this paper, we use singularities of log pairs and moduli of polarised log Calabi-Yau pairs to study the birational boundedness of special fibers of Fano fibrations and log Calabi-Yau fibrations. The boundedness of ǫ-lc Fano varieties is studied in [Bir19] and [Bir21], and the boundedness of (d, c, v)-polarised Calabi-Yau pairs is studied in [Bir23]. The first result shows that locally stable degenerations of these two classes of pairs are birationally bounded (see Definition 2.2).
Theorem 1.1. Fix a natural number d and positive rational numbers c, v, ǫ. Let X be a quasi-projective normal variety and f : (X, ∆) → C a locally stable morphism of relative dimension d over a smooth curve C. Suppose either (1) −(K_X + ∆) is ample over C, the general fiber (X_g, ∆_g) is ǫ-lc, and coeff(∆) ⊂ cN, or (2) K_X + ∆ ∼_{Q,C} 0 and there is a divisor N on X such that the general fiber (X_g, ∆_g), N_g is a (d, c, v)-polarised Calabi-Yau pair (see Definition 4.1). Then every irreducible component of X_0 is birationally bounded for any closed point 0 ∈ C.
If the morphism is not locally stable, we have a weaker result.
Theorem 1.2. Fix a natural number d and positive rational numbers c, v, ǫ. Then there exist a natural number l and a bounded family of projective varieties W → T such that the following holds. Let X be a normal quasi-projective variety, (X, ∆) a log canonical pair, and f : X → C a fibration of relative dimension d over a smooth curve C. Suppose either (1) −(K_X + ∆) is ample over C and the general fiber (X_g, ∆_g) is ǫ-lc, or (2) K_X + ∆ ∼_{Q,C} 0 and there is a divisor N on X such that the general fiber (X_g, ∆_g), N_g is a (d, c, v)-polarised Calabi-Yau pair. Then for any closed point 0 ∈ C, if P is a log canonical place of (X, ∆ + lct(X, ∆; f*0)f*0), there is a closed point t ∈ T and a finite dominant rational map W_t ⇢ P whose degree is a factor of min{l, (mult_P f*0)!}, where (mult_P f*0)! denotes the factorial of mult_P f*0.

2. Preliminary

2.1. Notations and basic definitions. We will use the same notation as in [KM98] and [Laz04].
A contraction is a projective morphism f : X → Z of quasi-projective varieties with f * O X = O Z . If X is normal, then so is Z, and the fibers of f are connected. A fibration is a contraction f : X → Z of normal quasi-projective varieties with dimX > dimZ.
In this paper we only consider locally stable morphisms over smooth varieties; the following definition comes from [Kol23, Corollary 4.55].
Definition 2.1. Let S be a smooth variety, (X, ∆) be a log canonical pair, and f : X → S be a morphism. We say that f : (X, ∆) → S is locally stable if the pair (X, ∆ + f * D) is semi-log-canonical for every snc divisor D ⊂ S.
Since S is smooth, D is a Cartier divisor; this implies that (X, ∆) is semi-log-canonical.

We will also use the following statement. Let f : X → Z be a contraction, (X, B) a pair, M a Q-divisor on X, and n a natural number, such that
• M − (K_X + B) is nef and big, and
• S is a non-klt centre of (X, B) with M|_S ≡ 0.
Then there is a Q-divisor Λ ≥ B such that
• (X, Λ) is log canonical over a neighbourhood of z := f(S), and
• n(K_X + Λ) ∼ (n + 2)M.
Lemma 2.4. Fix a natural number d and a positive rational number c. Then there is a natural number l, depending only on d and c, such that the following holds. Let f : X → C be a fibration over a curve C and (X, ∆) a log canonical pair such that
• coeff(∆) ⊂ cN,
• −(K_X + ∆) is ample over C, and
• the general fiber (X_g, ∆_g) of f is klt.
Then for any closed point s ∈ C and any log canonical place P of (X, ∆ + lct(X, ∆; f*(s))f*(s)), there exist a birational map Y ⇢ X and a Q-divisor Λ_Y satisfying the following properties:
• P is a log canonical place of (Y, Λ_Y),
• (Y, Λ_Y) is log canonical over a neighborhood of s,
• l(K_Y + Λ_Y) ∼_C 0, and
• the general fiber Y_g is isomorphic to X_g.

Proof. Let g : X′ → X be a dlt modification of (X, ∆ + lct(X, ∆; f*(s))f*(s)) such that P is a divisor on X′. Define ∆′ and D′ by K_{X′} + ∆′ ∼_Q g*(K_X + ∆), and K_{X′} + D′ ∼_Q g*(K_X + ∆ + lct(X, ∆; f*(s))f*(s)).
Because −(K_X + ∆) is ample over C, the divisor −(K_{X′} + D′) is semi-ample over C, so we can choose a general member B′ ∈ |−(K_{X′} + D′)/C|_Q such that (X′, D′ + B′) is a dlt pair.
Next, we run a (K_{X′} + D′ + δF′ + B′)-MMP with scaling of an ample divisor. By [Bir12, Theorem 1.8], this MMP terminates with a model f′′ : Y → C. Notice that the general fiber of (X, ∆ + B) is klt; that is, for a general point g ∈ C, the pair (X_g, ∆_g + B_g) is klt. Finally, since each step of the MMP is an isomorphism over the general point of C, we have X_g ≅ Y_g.

Lemma 2.5. Let (X, D′) → S be a locally stable morphism over a smooth variety S, and fix a Q-divisor D ≤ D′ such that K_X + D is Q-Cartier. Then the set {V | V is a log canonical center of (X_s, D_s) for some closed point s ∈ S} is bounded.
Proof. After passing to a stratification of S, we may assume that (X, Supp D) → S has a fibrewise log resolution ξ : (Y, D_Y) → (X, Supp D), so that ξ_s : Y_s → X_s is a log resolution of (X_s, Supp D_s) for any closed point s ∈ S, with D_{Y_s} defined by K_{Y_s} + D_{Y_s} = ξ_s*(K_{X_s} + D_s). It is easy to see that every log canonical center of (X_s, D_s) is dominated by a log canonical center of (Y_s, D_{Y_s}).
By construction, (Y, D_Y) is log smooth over S; denote its strata by V_i, i ∈ I. Then V_i → S is smooth for all i ∈ I. By an easy computation of discrepancies on log smooth pairs, any log canonical center of (Y_s, D_{Y_s}) is V_i|_{Y_s} for some i ∈ I. Then any log canonical center of (X_s, D_s) is isomorphic to ξ(V_i)|_{X_s} for some i ∈ I, so the set of families ξ(V_i) → S, i ∈ I, parametrizes all log canonical centers of (X_s, D_s), and the result follows.
Lemma 2.6. Let f : X → T be a flat morphism from a normal variety to a smooth curve T. Suppose π : S → T is a ramified cover, let Y → X ×_T S be the normalization of the main component, and let f_Y : Y → S be the projection.
Fix a closed point 0_T ∈ T, and let 0_S ∈ S be a preimage of 0_T in S. Suppose P is an irreducible component of f*0_T, and Q is an irreducible component of the preimage of P in Y such that f_Y(Q) = 0_S. Denote the ramification index of π along 0_S by r_S and the multiplicity of f*0_T along P by m_P. Then the degree of the finite morphism π_Q : Q → P is a factor of min{r_S!, m_P!}.
Proof. By assumption, we have a commutative diagram relating π_Y : Y → X, f_Y : Y → S, f : X → T, and π : S → T. Denote the ramification index of π_Y along the generic point of Q by r_Q, and write m_Q = mult_Q f_Y*(0_S). Next, we calculate the multiplicity of f_Y*π*(0_T) = π_Y*f*(0_T) along Q: by the definition of the ramification index, mult_Q π_Y*f*(0_T) = r_Q m_P, while mult_Q f_Y*π*(0_T) = r_S m_Q, so that r_Q m_P = r_S m_Q. From this relation one deduces that deg(π_Q) m_Q ≤ m_P. Since deg(π_Q) is both a factor of a positive integer ≤ r_S and a factor of a positive integer ≤ m_P, deg(π_Q) is a factor of min{r_S!, m_P!}.
Semistable reduction and Toroidal embedding
3.1. Toric varieties. We fix the following notation:
• N ≅ Z^n: a lattice;
• τ ≺ σ: the relation between cones τ and σ meaning that τ is a face of σ;
• X_Σ: the toric variety associated to the fan Σ;
• U_σ: the local affine chart of X_Σ associated to the cone σ in Σ;
• x_σ ∈ U_σ: the distinguished point associated to σ;
• O_σ: the T_N-orbit of x_σ under the T_N-action on X_Σ;
• N_σ: the sublattice of N generated as a subgroup by σ ∩ N;
• Span_R(σ): the real vector subspace spanned by σ;
• interior of σ: the topological interior of σ in Span_R(σ).
Let Σ′_σ be the set of cones in Σ′ whose interior is mapped to the interior of σ ∈ Σ. The index of ψ̃ over O_σ is denoted by Ind(σ). Let τ′ ∈ Σ′_σ and let {σ′_1, σ′_2, . . .} be the set of cones in Σ′_σ that contain τ′ as a face. The relative star constructed from these data will be called the relative star of τ′ over σ and will be denoted by Star_σ(τ′). A cone τ′ ∈ Σ′_σ is called primitive with respect to ψ if none of the proper faces of τ′ are in Σ′_σ. Finally, for a toric variety X_Σ we call the divisor D_Σ := X_Σ \ T the toric boundary of X_Σ.
Theorem 3.1. [HLY02, Proposition 2.1.4] Let ψ̃ : X_{Σ′} → X_Σ be a toric morphism induced by a map of fans ψ : Σ′ → Σ. Then:
• The image ψ̃(X_{Σ′}) of ψ̃ is a subvariety of X_Σ. It is realized as the toric variety corresponding to the fan Σ_ψ := Σ ∩ ψ(N′_R).
• The fiber of ψ̃ over a point y ∈ X_{Σ_ψ} depends only on the orbit O_σ, σ ∈ Σ_ψ, that contains y. Denote this fiber by F_σ; it can be described as follows.
Define Σ′_σ to be the set of cones σ′ in Σ′ whose interior is mapped to the interior of σ, and let Ind(σ) be the index of ψ̃ over O_σ. Then ψ̃^{−1}(y) = F_σ is a disjoint union of Ind(σ) identical copies of a connected reducible toric variety F^c_σ, whose irreducible components F^{τ′}_σ are the toric varieties associated to the relative stars Star_σ(τ′) of the primitive cones τ′ ∈ Σ′_σ. Remark 3.2. Here the term reducible toric variety means a reducible variety obtained by gluing a collection of toric varieties along some isomorphic toric orbits. For any toric variety X_Σ, it is well known that there is a refinement ψ : Σ′ → Σ, i.e. each cone of Σ is a union of cones in Σ′, such that ψ̃ : X_{Σ′} → X_Σ is a resolution of singularities.
By comparing the dimensions of the exceptional loci, it is easy to see the claim in codimension one: the irreducible components of F′^c_σ are the toric varieties associated to the relative stars, where F^{c1}_σ is the toric variety associated to the relative star Star_σ(σ′_1). Because every toric variety is birationally equivalent to P^r for some r ∈ N, the result follows.
3.2. Toroidal embedding. Given a normal variety X and an open subset U_X ⊂ X, the embedding U_X ⊂ X is called toroidal if for every closed point x ∈ X there exist a toric variety X_σ, a point s ∈ X_σ, and an isomorphism of complete local k-algebras Ô_{X,x} ≅ Ô_{X_σ,s} such that the ideal of X \ U_X maps isomorphically to the ideal of X_σ \ T_σ. In this paper we will assume that every irreducible component of X \ U_X is normal, that is, U_X ⊂ X is a strict toroidal embedding.
Proposition 3.5 ([KKMS73, page 195]). Let U ⊂ X be a toroidal embedding of varieties and x a closed point of X. Then there exists an affine toric variety X_σ and an étale morphism ψ from an open neighborhood of x ∈ X to X_σ, such that locally at x (for the Zariski topology) we have U = ψ^{−1}(T), where T is the big torus of X_σ.
A dominant morphism f : (U_X ⊂ X) → (U_B ⊂ B) of toroidal embeddings is called toroidal if for every closed point x ∈ X there exist local models (X_σ, s) at x and (X_τ, t) at f(x), and a toric morphism g : X_σ → X_τ, so that the corresponding diagram of completed local rings commutes, where f̂^# and ĝ^# are the algebra homomorphisms induced by f and g.
Definition 3.7. Let f : (X, D) → (Z, B) be a projective morphism between projective normal log pairs with connected fibers. We say f is semistable if
• the varieties X and Z admit toroidal structures U_X := X \ D ⊂ X and U_Z := Z \ B ⊂ Z,
• with this structure, the morphism f is toroidal,
• the morphism f is equidimensional,
• all the fibers of the morphism f are reduced, and
• X and Z are nonsingular.
Theorem 3.8 (Semistable Reduction). Let X → Z be a projective morphism between projective normal varieties and D ⊂ X a closed subset. Then there exist a proper, surjective, generically finite morphism of irreducible varieties b : Z′ → Z and a projective birational morphism of irreducible varieties a : X′ → (X ×_Z Z′)^main, together with divisors D′ ⊂ X′ and B′ ⊂ Z′, such that the induced morphism f′ : (X′, D′) → (Z′, B′) is semistable. Proof. This is a direct result of [ALT19, Theorem 4.7].
Lemma 3.9 ([AK00, Lemma 6.2]). Let f : (X, D) → (Z, B) be a semistable morphism. Let g : C → Z be a morphism such that C is nonsingular and g^{−1}(B) is a normal crossing divisor. Let X_C = C ×_Z X, with projections f_C : X_C → C and g_C : X_C → X. Then f_C : (X_C, g_C^{−1}(D)) → (C, g^{−1}(B)) is an equidimensional toroidal morphism with reduced fibers.
Lemma 3.10. Let X be a projective normal variety, D a reduced divisor on X, and U X := X \ D ⊂ X a toroidal embedding. Suppose ∆ ≤ D is a Q-divisor such that (X, ∆) is sub-log canonical.
If P is a log canonical place of (X, ∆), then P is birationally equivalent to P^r × V, where V is the image of P in X and r = dim X − dim V − 1.
Proof. Let P be a log canonical place of (X, ∆), and suppose x is a general point of the image of P on X. For the rest of the proof, we work Zariski locally near x by replacing X with an open neighborhood of x.
Let x be a general point of V ⊂ X and X_σ the affine toric variety given by Proposition 3.5. Let σ′ → σ be a subdivision such that X_{σ′} → X_σ is a resolution. Because π is étale, X_1 := X_{σ′} ×_{X_σ} X is a log resolution of (X, D), and we have the corresponding commutative diagram. Let D_1 be the strict transform of D on X_1 plus the h-exceptional divisor; then h : (U_1 := X_1 \ D_1 ⊂ X_1) → (U_X ⊂ X) is a toroidal morphism. By an easy computation of discrepancies on snc divisors, it is easy to see that P can be obtained by a sequence of blow-ups along strata of (X_1, D_1). We will show that such a morphism is, étale locally, equal to a toric morphism between toric varieties.
Suppose we have a sequence of blow-ups h_i : X_{i+1} → X_i, where D_{i+1} is the strict transform of D_i plus the h_i-exceptional divisor, so that P is a divisor on X_k. Next, we show that there are Cartesian diagrams such that
• the horizontal arrows are étale morphisms,
• σ_j is a subdivision of σ_1,
• X_{σ_j} → X_{σ_1} is the corresponding toric morphism, and
• near any closed point of g_j^{−1}(x_1), we have U_j = π_j^{−1}(T_j), where T_j is the big torus of X_{σ_j}, for all 1 ≤ j ≤ k.
Suppose the claim is true for j = i. Let X_{σ_{i+1}} → X_{σ_i} be the toric morphism determined by blowing up X_{σ_i} along the image of V_i on X_{σ_i}. Because blowing up is uniquely determined by local equations, and both X_{i+1} → X_i and X_{σ_{i+1}} → X_{σ_i} are obtained by blowing up the same subvariety étale locally, there is a natural étale morphism π_{i+1} : X_{i+1} → X_{σ_{i+1}} such that near any closed point of g_j^{−1}(x) we have U_{i+1} = π_{i+1}^{−1}(T_{i+1}), where T_{i+1} is the big torus of X_{σ_{i+1}}. Because the composition of X_{σ_{i+1}} → X_{σ_i} and X_{σ_i} → X_{σ_1} is a toric morphism, the claim is true for j = i + 1. Now we have the corresponding Cartesian diagram. By assumption, P is a divisor on X_k and π_k is étale near the general point of P. Then P is equal to the pull-back of a divisor P_{σ_k} on X_{σ_k}. Because σ_k → σ is a subdivision, by Lemma 3.4, f_σ|_{P_{σ_k}} is birationally equivalent to a P^r-bundle. Because the diagram is Cartesian, f|_P is also birationally equivalent to a P^r-bundle. Therefore, P is birationally equivalent to f(P) × P^r.
Moduli of polarised Calabi-Yau pairs
In this section we recall some definitions and results on moduli of stable pairs and polarised log Calabi-Yau pairs, see [Kol23], [KX20], [Bir22], and [Bir23]. We fix natural numbers d, n and positive rational numbers c, v.
Definition 4.1. A log Calabi-Yau pair is a semi-log canonical pair (X, ∆) such that K X +∆ ∼ Q 0.
A polarised log Calabi-Yau pair consists of a log Calabi-Yau pair (X, ∆) and an ample integral divisor N ≥ 0 such that (X, ∆ + uN ) is semi-log canonical for some real number u > 0. Fix a natural number d and positive rational numbers c, v.
A (d, c, v)-polarised log Calabi-Yau pair is a polarised log Calabi-Yau pair (X, ∆), N such that dim X = d, ∆ = cD for some integral divisor D, and vol(N) = v.

Lemma 4.4. There exist a positive rational number t and a natural number r, depending only on d, c, v, such that for any (d, c, v)-polarised log Calabi-Yau pair (X, ∆), N, the pair (X, ∆ + tN) is slc and r(K_X + ∆ + tN) is very ample without higher cohomology.

Proof. This is Lemma 7.7 in the first arXiv version of [Bir23].
The following definition comes from Chapter 7 in the first arXiv version of [Bir23]. Let t be as in the lemma above. To simplify notation, let Θ = (d, c, v, t, r, P^n). Let S be a reduced scheme. A strongly embedded Θ-polarised Calabi-Yau family over S is a (d, c, v)-polarised Calabi-Yau family f : (X, B), N → S together with a closed embedding g : X → P^n_S such that
• (X, B + tN) → S is a stable family,
• f = πg, where π denotes the projection P^n_S → S,
• letting L := g*O_{P^n_S}(1), we have R^q f_* L ≅ R^q π_* O_{P^n_S}(1) for all q, and
• for every s ∈ S, we have L_s ≅ O_{X_s}(r(K_{X_s} + B_s + tN_s)).
We denote the family by f : (X ⊂ P n S , B), N → S. Define the functor E s PCY Θ on the category of reduced schemes by setting E s PCY Θ (S) = {strongly embedded Θ-polarised slc Calabi-Yau families over S}.
By Proposition 7.8 in the first arXiv version of [Bir23], the functor E^s PCY_Θ has a fine moduli space, which is a reduced separated scheme S := E^s PCY_Θ, and a universal family (X ⊂ P^n_S, D), N → S.
Proof of Main Theorem
Lemma 5.1. Fix a natural number d and a positive rational number c. Suppose (X, ∆) is an ǫ-lc pair of dimension d such that −(K_X + ∆) is ample and coeff∆ ≥ c. Then (X, ∆) is log bounded.
Proof. By the main theorem of [Bir21], X is bounded. Then there exist a natural number n, two constants V_1, V_2 depending only on d and ǫ, and a very ample divisor H on X defining an embedding X ⊂ P^n such that H^d ≤ V_1 and H^{d−1} · K_X ≥ −V_2. Because coeff∆ ≥ c and −(K_X + ∆) is ample, we have H^{d−1} · Supp∆ ≤ (1/c) H^{d−1} · ∆ ≤ (1/c)(−H^{d−1} · K_X) ≤ V_2/c. By the boundedness of the Chow variety, both X and Supp∆ are parametrized by a subvariety of the Hilbert scheme. Then (X, ∆) is log bounded.
Proof. Let f : (X, ∆) → C be a fibration over a curve such that
• coeff∆ ⊂ cN,
• −(K_X + ∆) is nef and big over C, and
• the general fiber (X_g, ∆_g) of f is ǫ-lc Fano.
Fix a closed point 0 ∈ C. For any log canonical place P of (X, ∆ + lct(X, ∆; f*0)f*0), by Lemma 2.4, there exist a birational map Y ⇢ X and a log pair (Y, Λ_Y) such that
• P is a log canonical place of (Y, Λ_Y),
• (Y, Λ_Y) is log canonical over a neighborhood of 0, and
• the general fiber Y_g is isomorphic to X_g.
Let ∆_Y be the strict transform of ∆ on Y. Because the general fiber (X_g, ∆_g) is ǫ-lc, −(K_{X_g} + ∆_g) is ample, and coeff∆_g ≥ c, by Lemma 5.1, (X_g, ∆_g) is log bounded, and then (Y_g, ∆_{Y_g}) is also log bounded.
By log boundedness, there exist a natural number m and an open subset U ⊂ C such that −m(K_{Y_u} + ∆_{Y,u}) is very ample without higher cohomology for any u ∈ U. Choose a general member N ∈ |−m(K_Y + ∆_Y)|_U. By very ampleness, (Y_g, Λ_{Y,g} + tN_g) is log canonical for a positive rational number t. Hence (Y_g, Λ_{Y,g}), N_g is a polarised Calabi-Yau pair. We have thus transferred condition (1) of Theorem 1.2 to condition (2).
Next, we show that Theorem 1.2 implies Theorem 1.1. Suppose f : (X, ∆) → C is a locally stable morphism. By definition, X s = f * (s) is reduced and (X, ∆ + f * (s)) is log canonical for every closed point s ∈ C. Then every irreducible component of X s is a log canonical place of (X, ∆ + f * (s)).
Suppose Theorem 1.2 is true. Because mult_P f*(s) = 1 for every irreducible component P ⊂ X_s, there exist a bounded family W → T and a finite dominant rational map W_t ⇢ P whose degree is a factor of min{l, mult_P f*(s)} = 1. Then W_t ⇢ P is a birational map, which means P is birationally bounded.
Suppose (X, ∆) → C is a fibration, N is a divisor on X such that • K X + ∆ ∼ Q,C 0, and • the general fiber (X g , ∆ g ), N g is a (d, c, v)-polarised Calabi-Yau pair.
By Lemma 4.4, there exist t ∈ Q_{≥0} and r ∈ N such that (X_g, ∆_g + tN_g) is slc and r(K_{X_g} + ∆_g + tN_g) is very ample without higher cohomology. Then, by cohomology and base change, r(K_{X_U} + ∆_U + tN_U) is very ample over an open subset U ⊂ C, and it defines a closed embedding g : X_U ↪ P^n_U. Also, because (X_U, ∆_U + tN_U) → U is a stable family, f_U : (X_U ⊂ P^n_U, ∆_U), N_U → U is a strongly embedded polarised slc Calabi-Yau family over U. Since E^s PCY_Θ has a fine moduli space with the universal family (X ⊂ P^n_S, D), N → S, we have (X_U, ∆_U) ≅ (X, D) ×_S U, where U → S is the moduli map defined by f_U.
After passing to a stratification of S, we may assume that there is a fibrewise log resolution ξ : (Y, D_Y) → (X, D), where D_Y is defined by K_Y + D_Y = ξ*(K_X + D). By Theorem 3.8, there is a generically finite morphism S̄^o → S and a closure S̄^o ↪ S̄ such that the pull-back (Y, D_Y) ×_S S̄^o extends to a semistable morphism h : (Ȳ, D̄′_Y) → S̄ with Supp D″_Y ⊂ D̄′_Ȳ; by the property of weakly semistable morphisms, (Ȳ, Supp D″_Y) → S̄ is locally stable.
It is easy to see that there is an effective Q-divisor D̄^!_Ȳ which is supported on χ^{−1}(S̄ \ S̄^o) but does not contain the whole fiber over any generic point of S̄ \ S̄^o, such that K_Ȳ + D̄″_Y ∼_{Q,S̄} D̄^!_Ȳ.
Let X̃ be the normalization of X ×_C C̃ and denote the natural morphism X̃ → C̃ by π_X. Write α := lct(X, ∆; f*0). By the Hurwitz formula, there is a Q-divisor ∆̃_α such that K_X̃ + ∆̃_α ∼_Q π_X*(K_X + ∆ + αf*0). Suppose P is a log canonical place of (X, ∆ + αf*0). Let X′ → X be a dlt modification of (X, ∆ + αf*0) such that P is a divisor on X′, let X̃′ be the normalization of X′ ×_C C̃, and let P̃ be an irreducible component of the preimage of P on X̃′. By [Kol13, 2.41], P̃ is a log canonical center of (X̃, ∆̃_α). And by Lemma 2.6, the degree of the finite morphism P̃ → P is a factor of min{deg(π)!, mult_P f*0}. Define l to be the factorial of the degree of the finite morphism S̄ → S, which is clearly greater than or equal to deg(π)!; then min{deg(π)!, mult_P f*0} is a factor of min{l, mult_P f*0}. Thus we only need to prove that P̃ is birationally bounded.
Recall that Ȳ \ D̄′_Y ⊂ Ȳ is a toroidal embedding and Supp(D̄_Ȳ + Ȳ_0) ≤ D̄′_Y. By Lemma 3.10, P̃ is birationally equivalent to V̄ × P^r, where V̄ is the image of P̃ on Ȳ. It is easy to see that V̄ is a log canonical center of (Ȳ, D̄_Ȳ + Ȳ_0). By applying Lemma 6.2 to a local model of (Ȳ \ Supp D̄′_Ȳ ⊂ Ȳ) near a general point of V̄, we can see that there is an irreducible component Ȳ^1_0 of (D̄_Ȳ + Ȳ_0)^{=1}, which maps to 0̄, such that V̄ is a subvariety of Ȳ^1_0. To prove that P̃ is birationally bounded, we only need to prove that all log canonical centers of (Ȳ, D̄_Ȳ + Ȳ_0) lie in a bounded family.
Remark 5.3. In Theorem 1.2(1), if we further assume (X, ∆) to be klt, we will show that the condition −(K X + ∆) being ample over C can be replaced by −(K X + ∆) being nef and big over C.
Assume −(K_X + ∆) is nef and big over C. Because (X, ∆) is klt, by [BCHM10, Corollary 1.3.2], −(K_X + ∆) is semiample over C, hence defines a contraction h : X → Y over C. Let ∆_Y := h_*∆; then −(K_Y + ∆_Y) is ample over C, and a divisor P is a log canonical place of (X, ∆ + lct(X, ∆; f*0)f*0) if and only if it is a log canonical place of (Y, ∆_Y + lct(Y, ∆_Y; f_Y*0)f_Y*0). Then we can replace (X, ∆) by (Y, ∆_Y) and the result follows.
In Theorem 1.1(1), for the locally stable morphism f : (X, ∆) → C over a smooth curve, because the general fiber (X g , ∆ g ) is klt, by [Kol23, Proposition 2.14], (X, ∆) is a klt pair. Then for the same reason, the condition −(K X +∆) being ample over C can be replaced by −(K X +∆) being nef and big over C.
Boundedness of the boundary
In this section, we prove the following technical result, which we think is useful in the study of singularities of Fano fibrations. Let X be a normal quasi-projective variety, (X, ∆) a log canonical pair, and f : X → C a fibration of relative dimension d over a smooth curve C. Suppose
• coeff∆ ⊂ cN,
• −(K_X + ∆) is ample over C, and
• the general fiber (X_g, ∆_g) is ǫ-lc.
Then for any closed point 0 ∈ C and any divisor P over X which is a log canonical place of (X, ∆ + lct(X, ∆; f*0)f*0), there is a diagram and a Q-divisor Λ̃ on X̃ with the following properties:
• l(K_X̃ + Λ̃) ∼_C̃ 0.
• C̃ → C is finite of degree ≤ l.
• P is dominated by a divisor P̃ on X̃; that is, if X′ → X is a birational map which extracts P, then P is dominated by P̃ via the natural rational map X̃ ⇢ X′.
• P̃ is an irreducible component of Λ̃^{=1}.
• (X̃, Λ̃) is sub-log canonical in a neighborhood of P̃.
• There is a closed point t ∈ T such that (W_t, D_t) is crepant birationally equivalent to (P̃, Diff_P̃(Λ̃ − P̃)).
To prove this theorem, we need to generalize some results of Section 3 to the log pair case. First, we consider a toric variety with a boundary.
Lemma 6.2. Let (X_Σ, D = Σ_i d_i D_i) be a sub-pair where X_Σ is a normal toric variety and each D_i is torus-invariant. It is well known that if d_i ≤ 1 for all i, then (X_Σ, D) is sub-log canonical.
Proof. Let {B_1, . . . , B_j} be the set of linearly independent 1-dimensional cones in Σ′_σ that contain τ′ in their closure; the claim then follows from the proof of [CLS11, Proposition 11.4.24]. Because P is an orbit closure, P is normal and Diff_P(D′) is well defined. By Theorem 3.1, the general fiber P_g of ψ̃|_P : P → ψ̃(P) is the toric variety associated to the relative star Star_σ(τ′). Let {τ′_1, . . . , τ′_j} ⊂ Σ′_σ be the 1-dimensional cones whose corresponding divisors intersect P; then {τ′_1, . . . , τ′_j} corresponds to the 1-dimensional cones in Star_σ(τ′) whose corresponding divisors intersect P_g. Because P_g is toric, its singular locus is torus-invariant. By adjunction, Diff_P(D′)|_{P_g} is supported on the union of Supp D′ ∩ P_g and the singular locus of P_g, so Supp Diff_P(D′)|_{P_g} is torus-invariant.
By adjunction, (P_g, Diff_P(D′)|_{P_g}) is sub-log canonical and K_{P_g} + Diff_P(D′)|_{P_g} ∼_Q 0. Because P_g is a general fiber, by the properties of toric varieties we have coeff_{D_P}(Diff_P(D′)) = 1 for every ψ̃|_P-horizontal prime divisor D_P of Supp Diff_P(D′). Lemma 6.3. With the same notation as in Lemma 6.2, suppose P is a log canonical place of (X_Σ, D). Then there is an open subset U ⊂ Z := ψ̃(P) such that (P, Diff_P D′) ×_Z U is crepant birationally equivalent to (P^r, D_{∆_r}) × U, where P^r is the projective space of dimension r := dim P − dim Z, viewed as the toric variety associated to the standard simplex ∆_r, and D_{∆_r} is the corresponding toric boundary.
Proof. Recall that by Theorem 3.1, the general fiber of f | P : P → Z is the toric variety corresponding to the relative star Star σ (τ ′ ).
We denote the strict transform of P on X_{Σ″} still by P. Write K_{X_{Σ″}} + P + D″ ∼_Q f*(K_{X_Σ} + D). By adjunction and Lemma 6.2, the general fiber of (P, Diff_P D″) → Z is (X_{Star_σ(τ″)}, D_{Star_σ(τ″)}), and there is an open subset U′ ⊂ Z such that P_{U′} := P ×_Z U′ ≅ X_{Star_σ(τ″)} × U′. The birational morphism X_{Star_σ(τ″)} → P^r defines a birational morphism P_{U′} → P^r × U′. Let D_{P^r U′} be the pushforward of D″ on P^r × U′; then by Lemma 6.2, the restriction of D_{P^r U′} to the general fiber of P^r × U′ → U′ is D_{∆_r}, which is the union of r + 1 hyperplanes in P^r with transversal self-intersections.
Next we show that there is an open subset U ⊂ U′ such that (P^r × U′, D_{P^r U′}) ×_{U′} U ≅ (P^r, D_{∆_r}) × U. Denote the two projections by p : P^r × U′ → P^r and π : P^r × U′ → U′. Define L := p*O_{P^r}(1) ⊗ π*O_{U′}; then L is π-very ample and π_*L is a free sheaf of rank r + 1. Choose an open subset U ⊂ U′ such that the components of D_{P^r U′} ×_{U′} U form a basis of sections of π_*L; these sections then define a trivialization (P^r × U′, D_{P^r U′}) ×_{U′} U ≅ (P^r, D_{∆_r}) × U. Finally, because K_{P_U} + Diff_P D′|_{P_U} ∼_{Q,U} 0 and K_{P^r×U} + D_{P^r U′}|_{P^r×U} ∼_{Q,U} 0, and D_{P^r U′} is the pushforward of D″ on P^r × U′, the pair (P_U, Diff_P D′) is crepant birationally equivalent to (P^r, D_{∆_r}) × U.
Next, we generalize Lemma 3.10 to the log pair case and study the boundary part.
Lemma 6.4. Suppose X is a normal variety, U_X ⊂ X is a toroidal embedding, and there is a Q-divisor D such that (X, D) is sub-log canonical and Supp D ⊂ X \ U_X. Let f : Y → X be a birational morphism extracting a log canonical place P of (X, D), and write K_Y + P + D_Y ∼_Q f*(K_X + D). Define Z := f(P) and denote the normalizations of P and Z by P^n and Z^n. Then we have
• there is a Q-divisor D_{Z^n} on Z^n such that K_{P^n} + Diff_{P^n}(D_Y) ∼_Q (f|_P)*(K_{Z^n} + D_{Z^n}), and
• (P^n, Diff_{P^n}(D_Y)) is crepant birationally equivalent to (Z^n, D_{Z^n}) × (P^r, D_{∆_r}), where P^r is the projective space of dimension r := dim P − dim Z, viewed as the toric variety associated to the standard simplex ∆_r, and D_{∆_r} is the corresponding toric boundary.
Proof. Pick a general point x ∈ Z. By the proof of Lemma 3.10, locally near x we have a Cartesian diagram in which
• X_Σ and X_{Σ′} are toric varieties,
• h_σ is a birational toric morphism,
• π′ and π are étale, and
• h is a birational morphism extracting P. | 2023-02-17T06:42:24.109Z | 2023-02-16T00:00:00.000 | {
"year": 2023,
"sha1": "88200a65e13b6156610900579b1300224d86edc3",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "88200a65e13b6156610900579b1300224d86edc3",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
232076238 | pes2o/s2orc | v3-fos-license | Time-Varying Coefficient Model Estimation Through Radial Basis Functions
In this paper we estimate the dynamic parameters of a time-varying coefficient model through radial kernel functions in the context of a longitudinal study. Our proposal is based on a linear combination of weighted kernel functions involving a bandwidth, centered around a given set of time points. In addition, we study different alternatives of estimation and inference including a Frequentist approach using weighted least squares along with bootstrap methods, and a Bayesian approach through both Markov chain Monte Carlo and variational methods. We compare the estimation strategies mention above with each other, and our radial kernel functions proposal with an expansion based on regression spline, by means of an extensive simulation study considering multiples scenarios in terms of sample size, number of repeated measurements, and subject-specific correlation. Our experiments show that the capabilities of our proposal based on radial kernel functions are indeed comparable with or even better than those obtained from regression splines. We illustrate our methodology by analyzing data from two AIDS clinical studies.
Introduction
Statistical models for longitudinal data are powerful instruments of analysis when experimental units (subjects) are measured repeatedly over time in relation to a response variable along with static or time-dependent covariates. A very important feature of this kind of data that must be taken into account when fitting a statistical model, is the likely presence of serial correlation within repeated measurements on a given subject (observations between subjects are assumed to be independent). Typically, the main purpose of the analysis is to identify and characterize the evolution (mean tendency) of the response variable over time and quantify its association with the covariates. Parametric techniques for longitudinal data analysis have been exhaustively studied in the literature (see for example Molenberghs et al., 2014;Liu, 2015;Little et al., 2015, and references within). Though useful in many cases, questions about the adequacy of the assumptions of parametric models and the potential impact of model misspecification on the analysis often arise. For instance, one of the basic assumptions associated with parametric techniques, yet not always satisfied, establishes that the mean response must be a known function of both fixed and random effects, indexed by a set of unknown parameters. Thus, for many practical situations, parametric models may be too restrictive or even unavailable.
In order to overcome such difficulties, building on the contributions of Cleveland et al. (1991), Hastie and Tibshirani (1993), and Zeger and Diggle (1994), the work of Hoover et al. (1998) considered a nonparametric model that lets the parameters vary over time. Nonparametric models of this nature allow more flexible functional dependence between the response variable and the covariates, since they are based on time-dependent coefficients (smooth functions of time) instead of fixed unknown parameters. Due to their interpretability and flexibility, these models have been the subject of active research over the last twenty years. Early popular developments are given in Wu et al. (1998), Zhang et al. (1998), Fan and Zhang (1999), Wu and Chiang (2000), Cai et al. (2000), Fan and Zhang (2000), Lin and Carroll (2000), Lin and Ying (2001), Chiang et al. (2001), Rice and Wu (2001), Wu and Zhang (2002), and Huang et al. (2002). For applications and surveys, see Fan and Zhang (2008), Tan et al. (2012), Zhang (2013), and Wu and Tian (2018).
Specifically, consider the longitudinal dataset D = {(y_{i,j}, x_{i,j}, t_{i,j}) : j = 1, . . . , n_i, i = 1, . . . , n}, where y_{i,j} ≡ y_i(t_{i,j}) and x_{i,j} ≡ x_i(t_{i,j}) with x_i(t) = (x_{0,i}(t), x_{1,i}(t), . . . , x_{d,i}(t)) are the real-valued response variable and the (d + 1) column covariate vector, corresponding to measurement j of subject i, observed at time t_{i,j}, n is the number of subjects, n_i is the number of observations associated with subject i, and the total number of observations in the sample is N = Σ_{i=1}^n n_i. Measurement times are often distinct and irregularly spaced in a fixed interval of finite length. In order to evaluate the mean joint effect of time t and the covariates x(t) on the outcome y(t), we use the structured nonparametric model
y(t) = x(t)^⊤ β(t) + ε(t), (1)
where β(t) = (β_0(t), β_1(t), . . . , β_d(t)) is a (d + 1) column vector of real-valued nonparametric functions of time t, called dynamic coefficients or dynamic parameters, and ε(t) is a zero-mean stochastic process, independent of x(t), with covariance function γ(s, t) = Cov[ε(s), ε(t)]. The model in equation (1) is referred to as a (fixed-effects) time-varying coefficient model (TVCM). This model provides a parsimonious approach for characterizing time-varying association patterns of a set of dynamic predictors on the expected value of a functional response. Notice that this model has a natural interpretation since, for a fixed time point t, the TVCM (1) reduces to a multiple linear model with response variable y(t) and covariate vector x(t), for which a standard interpretation of the time-varying coefficients β_r(t), r = 0, 1, . . . , d, holds. In our experiments, we take x_0(t) ≡ 1 for all t, which means that β_0(t) is the intercept coefficient describing the baseline time-trend. Finally, note that a discretized version of the TVCM can be obtained by simply substituting t by t_{i,j} in equation (1), considering all data points given in D, in order to highlight the dependence on data when necessary.
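To fix ideas, here is a minimal Python sketch that simulates data with exactly this structure; the particular coefficient curves, error scale, and sample sizes below are illustrative assumptions, not values taken from this paper.

import numpy as np

rng = np.random.default_rng(42)
n = 50                                        # number of subjects
beta0 = lambda u: 2.0 + np.sin(2 * np.pi * u)   # illustrative intercept curve
beta1 = lambda u: 1.0 - 0.5 * u                 # illustrative slope curve

rows = []                                     # rows: (subject, time, x1, y)
for i in range(n):
    n_i = rng.integers(3, 10)                 # unbalanced repeated measurements
    t_i = np.sort(rng.uniform(0, 1, n_i))     # irregular design times
    x = rng.normal(size=n_i)                  # a time-dependent covariate x_1(t)
    eps = rng.normal(scale=0.3, size=n_i)     # errors with gamma(s,t) = sigma^2 1{s=t}
    rows += list(zip([i] * n_i, t_i, x, beta0(t_i) + beta1(t_i) * x + eps))

subj, t, x1, y = (np.array(c) for c in zip(*rows))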
In order to illustrate the full potential that a model such as a TVCM has, we provide below a fair revision on extensions of the model that have taken place over recent years. Motivated by applications rather than simply a desire to modify a statistical model, several extensions of the TVCM (1) have been proposed over the years in all sorts of directions, along with more complex structures and weaker assumptions. Some of these extensions typically share characteristics with each other. Here, we list some relevant instances in no particular order. A popular extension emerged naturally in order to efficiently capture both population and individual relationships. In that way, mixed-effects time-varying coefficient models extended TVCMs by dividing the error term into two parts, one part representing the subject-specific deviation from the population mean function, and the other representing the measurement error (Liang et al., 2003;Wu and Liang, 2004;Lu and Zhang, 2009;Sosa and Diaz, 2012;Chiou et al., 2012;Jeong et al., 2016); we consider a model of this sort in our first simulation study (see Section 5.1 for details). Another widespread extension took place when non-Gaussian responses where modeled directly, ranging from dichotomous and categorical outcomes to variables with skewed distributions, which provided a unified framework to do so. Thus, generalized time-varying coefficient models extended TVCMs by introducing a known link function to relate the dynamic linear predictor and the response process to each other (Biller and Fahrmeir, 2001;Cai et al., 2000;Senturk and Muller, 2008;Lu and Zhang, 2009;Senturk et al., 2013;Lu and Huang, 2017;Jeong et al., 2017). Also, more complex types of dynamic functional dependencies have been developed, such as the relationships provided in time-varying additive and nonlinear models (Fan et al., 2003;Qu and Li, 2006;Wang, 2007;Senturk and Muller, 2010;Wu and Tian, 2013). In addition, more adaptations were developed to deal with the same issues that non-varying models deal with. For instance, quantile regression Andriyana et al., 2018), variable selection and shrinkage estimation (Fan and Huang, 2005;Li and Liang, 2008;Wang et al., 2008;Wang and Xia, 2009;Wei et al., 2011), and even spatial modeling (Assuncao, 2003;Gelfand et al., 2003;Waller et al., 2007;Wu et al., 2010;Serban, 2011;Nobles et al., 2014;Jeong et al., 2016).
On the other hand, several smoothers can be used to estimate the dynamic coefficients of TVCM (1). The key idea behind the estimation process relies on rewriting each β r (t) through a linear expansion of parametric functions in order to make possible parametric-like inference as in standard models. In general, each smoother is indexed by a smoothing parameter vector that controls the trade-off between goodness-of-fit and model complexity. Thus, smoothing parameter selection criteria are in order. Some of the most popular smoothers include local polynomial smoothers, regression spline, smoothing spline, and P-splines (Wu and Zhang, 2006;Wu and Tian, 2018). Different smoothers have different strengths in one aspect or another. For example, smoothing spline may be good for handling sparse data, while local polynomial smoothers may be computationally advantageous for handling dense designs.
The purpose of this paper is twofold. First, in order to estimate the time-varying coefficients β r (t), we propose a linear smoother based on kernel functions, by treating them as if they were radial basis functions (see Buhmann, 2004, for a complete catacterization of radial basis funcions). This approach has been used in both the semiparametric regression (Ruppert et al., 2003) and statistical learning literature (Hastie et al., 2009;Harezlak et al., 2018) but, to the best of our knowledge, it has not been fully exploited yet in the context of longitudinal data analysis, apart from the work of Serban (2011) on space-time varying coefficients models. Our proposal applies to both time-invariant and time-dependent covariates as well as regular and irregular placed design times, and also allows for different amounts of smoothing for different coefficients. Second, since a radial kernel expansion resembles very closely an approximation using spline basis functions, we compare these smoothing alternatives with each other in terms of goodness-of-fit and prediction using both Frequentist and Bayesian inference frameworks. To that end, from a Frequentist perspective, we consider weighted least squares and bootstrap methods. From a Bayesian perspective, we consider Markov chain Monte Carlo (MCMC) along with variational methods. The Bayesian approach has become more popular in recent years (Biller and Fahrmeir, 2001;Waller et al., 2007;Hua, 2011;Memmedli and Nizamitdinov, 2012;Jeong et al., 2016;Lu and Huang, 2017;Franco-Villoria et al., 2019), but variational algorithms have not been explored to ease the computational burden under this framework.
The remainder of the paper is structured as follows: Section 2 introduces the estimation of time-varying coefficients through radial kernel functions and regression spline. Section 3 discusses different approaches to statistical inference. Section 4 discusses the choice of knots and smoothing parameter selection. Section 5 compares the estimation alternatives through an extensive simulation study. Section 6 illustrates our proposal by analyzing AIDS data coming from two clinical studies. Finally, Section 7 presents some concluding remarks and directions for future work.
Estimation using radial kernel functions
The main idea behind estimation through radial kernel functions consists in expressing each dynamic coefficient in model (1) as a linear combination of kernel functions by treating them as radial basis functions. A radial smoother can be constructed using the following set of radial basis functions:
{1, t, . . . , t^g, ξ(|t − κ_1|), . . . , ξ(|t − κ_k|)}, (2)
where ξ(·) is a kernel function, |·| is the Euclidean norm in R, and κ_1 < . . . < κ_k are k knots covering the time domain. The smoother performance strongly depends on the proper selection of both the location and the number of knots (see Section 4 for details). The basis degree g is usually less crucial and is typically taken as 1, 2, or 3 for computational convenience.
On the other hand, note that the first g + 1 basis functions of (2) are polynomials of degree up to g, and the others are all kernel functions, which satisfy the property ξ(t) = ξ(|t|). Such functions are known as radial functions (Buhmann, 2004). Different kinds of kernel functions are commonly used in practice, such as Gaussian or Epanechnikov kernels, among many others (see Wasserman, 2006, for a review). This choice is less significant in terms of smoothing.
Using the radial basis (2), we can express each time-varying coefficient β_r(t), r = 0, 1, . . . , d, as
β_r(t) = Ξ_r(t)^⊤ α_r, (3)
where Ξ_r(t) = (1, t, . . . , t^g, ξ(|t − κ_1|), . . . , ξ(|t − κ_{k_r}|))^⊤ and α_r = (α_{r,0}, . . . , α_{r,k_r+g})^⊤ are p_r × 1 column vectors with p_r = k_r + g + 1, composed of basis functions evaluated at time t and unknown parameters, respectively. Such a representation is able to accommodate a variety of shapes and degrees of smoothness for the dynamic coefficients without overfitting the data, since separate numbers of knots are allowed. This means that, for a fixed basis degree g, the numbers of knots k_0, . . . , k_d play the role of smoothing parameters.
In this way, the dynamic vector β(t) in model (1) becomes
β(t) = Ξ(t)^⊤ α, (4)
where Ξ(t) is the block-diagonal matrix built from Ξ_0(t), . . . , Ξ_d(t) and α = (α_0^⊤, . . . , α_d^⊤)^⊤. Note that α is a column vector of size p × 1, p = Σ_{r=0}^d p_r, whereas Ξ(t) is a rectangular matrix of size p × (d + 1). Now, substituting β(t) in model (1) for its equivalent expression given in (4), it follows that the TVCM (1) can be approximately written as
y_{i,j} = z_{i,j}^⊤ α + ε_{i,j}, (5)
where z_{i,j} = Ξ(t_{i,j}) x_{i,j}. Similar to α, each z_{i,j} is a p × 1 column vector of covariate values times radial basis functions. For the i-th subject, i = 1, . . . , n, we denote the response vector, the random error vector, and the design matrix as
y_i = (y_{i,1}, . . . , y_{i,n_i})^⊤, ε_i = (ε_{i,1}, . . . , ε_{i,n_i})^⊤, Z_i = (z_{i,1}, . . . , z_{i,n_i})^⊤. (6)
Consistently, we denote the response vector, the random error vector, and the design matrix for the whole dataset as y = (y_1^⊤, . . . , y_n^⊤)^⊤, ε = (ε_1^⊤, . . . , ε_n^⊤)^⊤, and Z = (Z_1^⊤, . . . , Z_n^⊤)^⊤, which allow us to express model (5) in matrix form as a standard linear model:
y = Zα + ε. (7)
Once an estimate for α, α̂ = (α̂_0^⊤, . . . , α̂_d^⊤)^⊤, is obtained, it is straightforward to get an estimate for β(t), β̂(t) = (β̂_0(t), . . . , β̂_d(t))^⊤, by simply letting β̂(t) = Ξ(t)^⊤ α̂. Therefore, our task reduces to estimating α in model (7) using an appropriate method.
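Continuing the simulated data above, the following sketch builds the basis (2) and the stacked design matrix Z of model (7) for d = 1; the Gaussian kernel and its scale are assumptions made for illustration.

import numpy as np

def radial_basis(t, knots, g=1, scale=0.2):
    """Basis (2): 1, t, ..., t^g plus Gaussian kernels xi(|t - kappa|)."""
    t = np.atleast_1d(t)
    polys = np.column_stack([t ** j for j in range(g + 1)])
    kerns = np.exp(-0.5 * ((t[:, None] - knots[None, :]) / scale) ** 2)
    return np.hstack([polys, kerns])

knots = np.linspace(0, 1, 8)                  # equally spaced knots
B = radial_basis(t, knots)                    # Xi_r block, shared here by r = 0, 1
# z_{ij} = Xi(t_ij) x_{ij}: stack covariate-weighted blocks (x_0(t) = 1)
Z = np.hstack([B, B * x1[:, None]])
p = Z.shape[1]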
Similarly, the key concept when working with spline functions is to represent each dynamic coefficient through a regression spline basis, such as the truncated power basis, B-spline basis, or wavelet basis, among others (see Ramsay et al., 2009, for a review). The B-spline basis is a powerful choice due to its simplicity and capability to capture local features of dynamic relationships. In this way, emulating the methodology described above, in order to estimate the time-varying coefficient β_r(t), we consider the following expansion based on truncated power functions:
β_r(t) = Φ_r(t)^⊤ α_r, (8)
where x^g_+ denotes the g-th power of the positive part of x, x_+ = max(0, x), and κ_1, . . . , κ_{k_r} are k_r knots (in increasing order) scattered in the range of interest. As before, note that Φ_r(t) = (1, t, . . . , t^g, (t − κ_1)^g_+, . . . , (t − κ_{k_r})^g_+)^⊤ and α_r = (α_{r,0}, . . . , α_{r,k_r+g})^⊤ are p_r × 1 column vectors, p_r = k_r + g + 1, composed of basis functions evaluated at time t and unknown parameters, respectively. Then, it is simple to obtain an estimate for β(t) as β̂(t) = Φ(t)^⊤ α̂, where Φ(t) is the block-diagonal matrix built from Φ_0(t), . . . , Φ_d(t) and α̂ is an estimate of α in model (7), whose design matrix Z is constructed from (6) using the truncated power basis. Finally, the reader should note that our proposal given in equation (3) is a direct reformulation of the expansion based on truncated power functions (which have been extensively investigated in the literature, as in Wu and Zhang, 2006, for example; we therefore use them as a baseline), obtained by using another kind of basis function. We argue that this is a sensible choice since radial functions (kernels in particular) have very desirable properties for representing (smoothing) all sorts of functional behaviors (e.g., the semiparametric regression expansions considered here based on radial functions are kernel machines within the reproducing kernel Hilbert space framework; see Harezlak et al., 2018, for example). That is why we believe that our approach constitutes a reasonable choice to represent dynamic coefficients in any TVCM. As a final comment, we note that, within a given inference paradigm, the "computational complexity" of either expansion is equivalent, because each basis is composed of 1 + g + K real-valued functions.
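For comparison, the truncated power basis underlying (8) differs from the radial one only in its tail functions; a minimal sketch:

import numpy as np

def truncated_power_basis(t, knots, g=1):
    """Basis for (8): 1, t, ..., t^g plus (t - kappa_j)_+^g for each knot."""
    t = np.atleast_1d(t)
    polys = np.column_stack([t ** j for j in range(g + 1)])
    trunc = np.maximum(t[:, None] - knots[None, :], 0.0) ** g
    return np.hstack([polys, trunc])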
Inference methods
According to the previous section, TVCM (1) is locally equivalent to standard linear model (7) in which it is required to estimate the parameter vector α. In what follows, we consider both Frequentist and Bayesian approaches to carry out statistical inference on α, and as a consequence, on β(t). First, we motivate a very popular sampling distribution as well as widely known classical bootstrap methods for performing statistical inference. Then, we consider information external to the dataset by means of a conjugate prior distribution, along with our proposal for quantifying uncertainty based on simulation and variational methods. We discuss in detail implications, challenges, and algorithms for each protocol, but focus our attention on our estimation approach embedded in the Bayesian paradigm.
Frequentist inference
From a likelihood point of view, in its simplest form, we can consider the sampling distribution
y | α, σ² ∼ N(Zα, σ²W^{−1}), (9)
where W = diag[W_1, . . . , W_n] and W_i = w_i I_{n_i} is the weight matrix for the i-th experimental unit, i = 1, . . . , n, which is equivalent to assuming ε | W, σ² ∼ N(0, σ²W^{−1}) in model (7), in a way that ε(t) ∼ GP(µ, γ) is a Gaussian process with µ(t) = 0 and γ(s, t) = σ² 1_{{s=t}} in model (1). The weights w_1, . . . , w_n are known positive constants such that Σ_{i=1}^n n_i w_i = 1, which quantify the relative importance of the experimental units. In our experiments, we follow Wu and Tian (2018) and consider the "subject uniform weight", w_i = 1/(nn_i), where each subject is inversely weighted by its number of repeated measurements n_i, so that subjects with fewer repeated measurements receive more weight than subjects with more repeated measurements. The above independence assumption is mathematically convenient and works well when longitudinal data tend to be sparse. However, in our experience, and also considering empirical evidence from both Wu and Zhang (2006) and Wu and Tian (2018), such an assumption can be robust to some deviations from data sparsity. Therefore, the sampling distribution (9) is an appealing choice in practice.
Under this setting, the resulting maximum likelihood estimator of α is given by
α̂ = (Z^⊤WZ)^{−1} Z^⊤Wy, (10)
which is equivalent to the estimator obtained as a result of minimizing the weighted least squares (WLS) criterion
(y − Zα)^⊤ W (y − Zα). (11)
Note that, according to the Gauss-Markov theorem, the estimator provided in (10) is the best linear unbiased estimator (BLUE) of α. Furthermore, it can be shown that an unbiased estimator of σ² is
σ̂² = (y − Zα̂)^⊤ W (y − Zα̂) / (N − p), (12)
where N = Σ_{i=1}^n n_i is the total number of observations, p = Σ_{r=0}^d p_r is the expansion dimension, and α̂ is the estimator of α given in (10).
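A sketch of the estimators (10) and (12) under the subject-uniform weights, continuing the example above (we solve a linear system rather than explicitly inverting Z^⊤WZ):

import numpy as np

counts = np.bincount(subj)                    # n_i per subject
w = 1.0 / (n * counts[subj])                  # subject-uniform weights, w_i = 1/(n n_i)
ZtW = Z.T * w                                 # Z^T W with W = diag(w)
alpha_hat = np.linalg.solve(ZtW @ Z, ZtW @ y)          # estimator (10)
resid = y - Z @ alpha_hat
sigma2_hat = (w * resid ** 2).sum() / (len(y) - p)     # estimator (12)

grid = np.linspace(0, 1, 101)                 # evaluate beta-hat on a grid
Bg = radial_basis(grid, knots)
beta0_hat = Bg @ alpha_hat[:Bg.shape[1]]      # beta_0(t): intercept block
beta1_hat = Bg @ alpha_hat[Bg.shape[1]:]      # beta_1(t): covariate block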
Under the Frequentist paradigm, confidence intervals can be computed based on either asymptotic distributions or bootstrap methods (Efron and Hastie, 2016). However, given the compound structure of longitudinal data, inferences based on asymptotic distributions are typically difficult to justify in practice, since they rely heavily on assumptions that are difficult to meet. Thus, we consider bootstrap methods, which can always be implemented based on the available data regardless of sample sizes and sampling distributions. See Appendix A.1 for a detailed description of the bootstrap algorithm. The main advantage of using a bootstrap procedure is that it does not rely on asymptotic distributions and can be used to construct confidence intervals. For instance, at a given time t, the 100(1 − α)% percentile-based confidence interval for β_r(t), r = 0, 1, . . . , d, is given by
(β̂_{r,α/2}(t), β̂_{r,1−α/2}(t)), (13)
where β̂_{r,α/2}(t) and β̂_{r,1−α/2}(t) are the α/2 and 1 − α/2 quantiles of the bootstrap samples β̂_r(t)^{(1)}, . . . , β̂_r(t)^{(B)}, which are computed based on α̂^{(1)}, . . . , α̂^{(B)} and a given set of basis functions. Other types of confidence intervals are available (e.g., normal-based confidence intervals; see Efron and Hastie, 2016, for a review). We highlight that the confidence intervals given above correspond to pointwise confidence sets that only work for β_r(t) at a given time t. In most practical situations, such pointwise inferences are sufficient. However, in some studies we might require a confidence band that simultaneously includes the true dynamic coefficient β_r(t) for a (typically large) range of time values. In such situations, we need to construct a simultaneous confidence band for β_r(t) for t within a given time interval.
We refer the reader to Wu and Tian (2018) for details about this matter.
Even though theoretical properties of bootstrap procedures in this setting have not been systematically investigated, we are quite confident about the coverage rates in this case, given previous simulations studies about this matter as in Hoover et al. (1998) and Wu and Chiang (2000) (see also Wu and Tian, 2018, and refereces therein for a comprehensive review and also more empirical evidence in this regard).
Bayesian inference
Under a Bayesian framework, in order to obtain an estimate for α under the sampling distribution (9), it suffices to consider the standard normal regression model since it can be easily obtained from (9) by means of a linear transformation on y based on a Choleski factorization of W −1 (see Faraway, 2014, for details). In what follows, we consider the sampling distribution (14), having in mind that a preprocessing step is required before fitting the model. In order to complete the model we choose the so-called independent Zellner's g-prior as a simple semiconjugate prior distribution on α and σ 2 to be used when there is little prior information available. Under this invariant g-prior, we let where a σ and b σ are known hyperparameters.
Regarding the hyperparameter elicitation, we need a prior distribution to be as minimally informative as possible in the absence of real external information. We recommend setting g = N , a σ = 2, and b σ = σ 2 as in (12). This choice of g makes g g+1 very close to 1, and therefore, we are practically centering α around α a priori. Similarly, the prior distribution of σ 2 is also weakly centered around σ 2 , since a σ = 2 implies an infinite variance on σ 2 a priori. Such a distribution cannot be strictly considered as a real prior distribution, as it requires knowledge of y to be constructed. However, it only uses a small amount of the information in y, and can be loosely thought of as the prior distribution of a researcher with unbiased but weak prior information (see Hoff, 2009, for a discussion). Refer to Appendix A.2 for details about the Gibbs sampler.
Even though the MCMC algorithm is straightforward in this case, inference may become impractical as the number of experimental units and the number of covariates grow. For this reason, we also implement a variational Bayes alternative that can potentially alleviate the computational burden in big data scenarios. See Appendix A.3 for details regarding the variational algorithm.
Location and number of knots
The smoothers' quality strongly depends on both knot locations and the number of knots. The degree of the expansion g is usually less crucial and it is often taken as 1, 2, or 3. In terms of knot location, we distinguish two widely-used alternatives. The first method locates equally spaced points in the range of interest, independently of the design time points. It is usually employed when the design time points are uniformly scattered in the range of interest. The second method locates equally spaced quantiles of the design time points as knots. It locates more knots where more design time points are scattered. These methods are equivalent when the design time points are uniformly scattered. However, in our experience, the equally spaced method to locate knots is very convenient due to both its simplicity and proclivity to work well even in all sort of situations.
Another essential feature that we need to handle in practice is how to choose the smoothing parameter vector p = (p 0 , . . . , p d ). A popular method to do so is the so called leave-onepoint-out cross-validation (PCV, Eubank et al., 2004). This approach aims to select a good smoothing parameter vector via trading-off the goodness-of-fit and the model complexity. The idea behind this criteria consists in choosing the smoothing parameter vector p that minimizes the expression where β is an estimate of β(t i,j ) using the entire dataset except the j-th measurement of the i-th experimental unit. It can be shown that expression (15) is equivalent to where tr(A) is the trace of the smoothing matrix A, which is a square matrix such that y = Ay. Even though the PCV criteria does not account for the within-subject correlation effectively, it is a suitable method since the computational performance is substantially better than that provided by other alternatives (e.g., leave-one-subject-out cross-validation; see Wu and Tian, 2018, for details). As discussed in Section 7, other alternatives relying on model-based knot introduction or deletion are available.
Simulation Study
In this section, we present two benchmark simulation scenarios to evaluate the performance of our proposed methodology and compare the different inferential methods. The first simulation scenario is inspired on an experiment originally proposed by Wu and Liang (2004) and Wu and Zhang (2006). The second experiment follows very closely a simulation study performed by Wu and Chiang (2000), Huang et al. (2002), and Wu and Tian (2018).
Simulation scenario 1
In order to test our methodology with challenging real-life like datasets, and also, evaluating the robustness of the model to typical deviations from the true data generating process, we consider in this experiment a mixed-effects time-varying coefficient model with no covariate information. Such a model takes a (fixed-effects) TVCM with d = 0 and x 0 (t) ≡ 1 for all t, and decomposes the error term into two random parts: the first one, which is subject-specific, describes the characteristics of each individual that deviate from the mean population behavior; whereas the second one, which handles directly pure random error, encompasses all those factors out of reach by the modeler (such as error measurement). Thus, we generate synthetic datasets as follows: where β 0 (t) is a known time-varying coefficient, υ i (t) = a i,0 + a i,1 cos(2πt) + a i,2 sin(2πt) is a subject-specific random effect, with (ai,0, ai,1, ai,2) . We assume that σ 2 1 = σ 2 2 = σ 2 = σ 2 , and therefore, the correlation between repeated measurements ρ within each experimental unit is bounded by (σ 2 0 + 2σ 2 ) −1 (σ 2 0 − σ 2 ) and (σ 2 0 + 2σ 2 ) −1 (σ 2 0 + σ 2 ). We consider three cases in order to simulate different correlation levels, namely, weak withinsubject correlation, σ 2 = 0.01 and σ 2 0 = 0.01, which corresponds to 0.00 ≤ ρ ≤ 0.67; medium within-subject correlation, σ 2 = 0.01 and σ 2 0 = 0.04, which corresponds to 0.50 ≤ ρ ≤ 0.83; and strong within-subject correlation, σ 2 = 0.01 and σ 2 0 = 0.09, which corresponds to 0.73 ≤ ρ ≤ 0.91.
Additionally, design times are simulated as t i,j = j/(m + 1), i = 1, . . . , n, j = 1, . . . , m, where m is a positive integer. In order to simulate unbalanced datasets for each subject, repeated measures are randomly removed with a rate r = 0.5; thus, we expect m(1 − r) repeated measurements per experimental unit and nm(1 − r) measurements in total. In addition, number an location of knots are chosen according to the PCV criteria and the equally spaced method described in Section 4, respectively. We generated 250 datasets with two dynamic parameters, β 0 (t) = 2e t and β 0 (t) = 1 + cos(2πt) + sin(2πt), and also, three sample sizes, n = 25, n = 50, and n = 100. Each time, once the number and location of knots are fixed, we fitted model (1) using both radial kernel and regression spline functions setting g = 2 as a degree, with Frequentist, Bayesian, and variational methods. Setting the prior distribution as discussed in Section 3.2, Bayesian and variational estimates are based on 2, 000 samples from the posterior distribution. For Bayesian inference, we use a burn-in period of 500 samples; whereas for variational inference, we use a negligible increased in ELBO of 1e-06. Such a setting showed no evidence of lack of convergence in any case. Figure 1: AMSE distribution of β 0 = 2 e t corresponding to 250 synthetic datasets generated according to TVCM (17). Scenarios are delimited by correlation level in rows (weak, medium, and high within-subject correlation) and sample size in columns (n = 25, n = 50, and n = 100). The model is fitted each time using both radial kernel (K) and regression spline (S) functions, with Frequentist (blue), Bayesian (black), and variational (green) methods.
In this case, the performance of an estimate is measured by means of the average mean square error (AMSE), which is defined as Figures 1 and 2 show the AMSE distribution of β 0 (t) = 2e t and β 0 (t) = 1 + cos(2πt) + sin(2πt), respectively, corresponding to 250 synthetic datasets generated according to TVCM (17), in each of nine scenarios delimited by correlation level (weak, medium, and high within-subject correlation) and sample size (n = 25, n = 50, and n = 100). In general, the AMSE distribution is quite consistent across inference paradigms, which is particularly evident in the first case. Such a behavior was somewhat predictable because the number of measurements, even for the smallest datasets, is big enough (about 375 observations) to allow the likelihood to overcome the prior distribution. Even though AMSEs are also n = 25 n = 50 n = 100 Weak corr. Figure 2: AMSE distribution of β 0 (t) = 1+cos(2πt)+sin(2πt) corresponding to 250 synthetic datasets generated according to TVCM (17). Scenarios are delimited by correlation level in rows (weak, medium, and high within-subject correlation) and sample size in columns (n = 25, n = 50, and n = 100). The model is fitted each time using both radial kernel (K) and regression spline (S) functions, with Frequentist (blue), Bayesian (black), and variational (green) methods.
very similar across basis functions, error rates are slightly smaller in the second case when Bayesian methods along with radial functions are employed.
Furthermore, Frequentist inferences are equivalent regardless of the smoothing approach. Also, we observe that the variational approximation to the posterior distribution under the Bayesian paradigm is very precise. This is the case because the mean field assumption is breaking negligible correlations in the posterior distribution. Moreover, as expected, estimates are more consistent as the sample size increases since the variability of the AMSE distribution decreases and its center remains stable. On the contrary, such variability increases as the within-subject correlation becomes higher, which strongly suggests that despite our approach not taking into account within-subject correlation directly, estimates are robust enough to produce accurate results. Figure 3: MADE distribution and dynamic parameter Bayesian estimates of β 0 (t), β 1 (t), and β 2 (t) (bold lines are the true coefficient functions and gray lines correspond to ten randomly selected estimates), corresponding to 250 synthetic datasets generated according to TVCM (18). Sample sizes are displayed in columns (n = 25, n = 50, and n = 100). The model is fitted each time using both radial kernel (K) and regression spline (S) functions, with Frequentist (blue), Bayesian (black), and variational (green) methods.
In order to measure the performance of an estimate fairly, we define the mean absolute deviation of errors (MADE) as , r = 0, . . . , d. Figure 3 shows the MADE distribution along with the dynamic parameter Bayesian estimates of the coefficients, corresponding to 250 synthetic datasets generated according to TVCM (17), in each of three scenarios delimited by sample size (n = 25, n = 50, and n = 100). Again, it is quite obvious the effect of the sample size on the error rates. This behavior is evident from both the decreasing variability of the MADE distribution and the consistent estimates of the coefficients around their true value. Clearly, these simulation results demonstrate that both estimation approaches independently of the inference paradigm, provide reasonably good estimators, at least for interior time points.
Execution time
Following a suggestion given by one of the referees, here, we provide a comparison in terms of execution time between our two competing approaches to carry out Bayesian inference, namely, MCMC methods and variational methods, i.e., simulation-based methods and optimization-based methods.
In this spirit, Table 1 contains mean running times (in milliseconds) using a single core of an AMD A12-9730P processor, when generating 2,000 samples of the posterior distribution for the model based on radial kernel functions (our proposal) using both MCMC and variational methods, for each synthetic dataset under all simulation settings considered in Section 5. We see that the variational approach clearly greatly outperforms its simulation-based counterpart in terms of execution time. Such an effect is particularly clearer for bigger samples sizes, where variational methods can even be 45 faster than MCMC methods. Lastly, note that MCMC execution times increase notoriously as the sample size grow, whereas variational execution times remain roughly constant.
Case study 1
Our first illustration is based on an AIDS clinical trial developed by the AIDS Clinical Trials Group 1 (ACTG). In this group, Fischl et al. (2003) evaluated two different 4-drug regimens containing indinavir with either efavirenz or nelfinavir for the treatment of 517 patients with advanced HIV disease (i.e., patients with high HIV-1 RNA levels and low CD4 cell counts). This study was a randomized, open-label study and initially planned to last 72 weeks but later increased to 120 weeks beyond the enrollment of the last subject. The randomization was carried out by using a permuted block design and stratified according to CD4 cell count and HIV-1 RNA level at screening, as well as previous antiretroviral experience. In addition, clinical assessments, HIV-1 RNA measurements, CD4 cell counts, and routine laboratory tests were performed before study entry, at the time of study entry, at weeks 4 and 8, and every 8 weeks thereafter. More details about design, subjects, treatments and outcome measurements of this study are given in Fischl et al. (2003).
Here, we model the CD4 cell count, which is an essential marker for assessing immunologic response of an antiviral regimen, in one of the two treatment arms. This group includes 166 patients treated with highly active antiretroviral therapy for 120 weeks, during which CD4 cell counts were monitored along with other important markers. Patients might not exactly follow the designed schedule, and missing clinical visits for CD4 cell measurements frequently occurred, which makes this dataset 2 (named ACTG 388) a typical longitudinal n = 25 n = 50 n = 100 Table 1: Mean running times (in milliseconds) using a single core of an AMD A12-9730P processor, when generating 2,000 samples of the posterior distribution for the model based on radial functions (our proposal) using both MCMC (MC) and variational (V) methods, for each synthetic dataset under all simulation settings considered in Section 5. dataset. The main goal in this study is to model the mean CD4 cell count trajectories over the treatment period for the entire treatment arm.
In this specific group of patients, the number of CD4 cell count measurements per patient varies from 1 to 18 observations, and the CD4 cell count ranges from 0 to 1,364. Figure 4 shows CD4 cell counts (in logarithmic scale) for each one of the n = 166 patients during the 120 weeks of treatment. Even though individual cell counts are quite noisy and there is evidence of some atypical trajectories associated with low counts, this plot suggests that cell counts tend to stabilize around the middle of the treatment. Thus, it is not possible to ensure that the antiviral treatment was quite effective since there are no apparent reasons to believe that CD4 cell counts profiles are either increasing continuously or at least remaining stable.
In order to estimate the mean trajectory of CD4 cell counts over the treatment period, we fit the TVCM under a Bayesian approach with the prior distribution given in Section 3.2, employing both Gaussian kernel and regression spline functions with g = 2, where y i,j is the CD4 cell count (in logarithmic scale) of the j-th measurement of the i-th patient, and β 0 (t) is a unknown time-varying parameter describing the mean dynamic trend of CD4 cell counts over time. Again, the number and location of knots are chosen according to the PCV criteria and the equally spaced method described in Section 4, respectively.. According to this criteria, the optimal number of knots are k K 0 = 4 and k S 0 = 8, respectively. Once the number and location of knots are fixed, Bayesian estimates are based on 2, 000 samples from the ACTG388Data1Arm.cfm. posterior distribution after a burn-in period of 500 iterations. Convergence was monitored by tracking the variability of the joint distribution of data and parameters using the multichain procedure discussed in Gelman and Rubin (1992). Estimates of β 0 (t) (in logarithmic and natural scale) along with their corresponding 95% credible intervals are shown in Figure 5. Both estimates are very similar, except for the small jump at the beginning of the treatment exhibited by the regression spline-based estimate. Such trajectories, which are very precise since the credible intervals are quite narrow, reveal that the mean CD4 cell counts increase quite sharply during the first 40 weeks of treatment, and continue to increase at a slower rate until about week 100, and then dropped towards the end of the study. This makes evident that under this antiviral regimen, the overall CD4 counts increased dramatically during the first 40 weeks, but the effect of the drug therapy fades over time and completely disappeared after about week 100, when cell counts begin to drop. Almost identical results were obtained by Wu and Zhang (2006); the only difference is that they concluded that the inflection point after which the CD4 cell count started to drop was on week 110. A residual analysis (not shown here) indicates that the model fits the data adequately because there are no signs of particular shapes, patterns or significant deviations.
Case study 2
We consider another AIDS clinical study carried out by the ACTG. In this case, Lederman et al. (1998) evaluated a highly active antiretroviral therapy containing zidovudine, lamivudine, and ritonavir, for the treatment of patients with moderately advanced HIV-1 infection. This study was designed to ascertain if administration of highly active antiretroviral therapy to patients with moderately advanced HIV-1 infection was associated with evidence of immunologic restoration. More details about design, subjects, treatments and outcome measurements of this study are given in Lederman et al. (1998).
The viral load (plasma HIV RNA level) and immunologic response (CD4 cell counts) are negatively correlated and their relationship is approximately linear during antiviral treatments. However, their relationship may not be a constant during the whole period of treatment Liang et al. (2003). Thus, the main goal in this study is to model the dynamic relationship between the viral load and the immunologic response over the treatment period, which plays an essential role in evaluating the antiviral therapy. Fifty-three patients were enrolled in the trial, out of which n = 46 received the treatment for at least 9 of the first 12 weeks and were therefore eligible for analysis. Intolerance of the treatment regimen was responsible for almost all treatment discontinuations. Patients might not exactly follow the designed schedule, and missing clinical visits occurred frequently, which makes this dataset 3 (named ACTG 315) quite unbalanced. Additional analyses of this and other trajectories, as well as more scientific findings of the study, can be found in Lederman et al. (1998), Connick et al. (2000, Liang et al. (2003), and Wu and Liang (2004).
After starting treatment, the plasma HIV RNA level and the CD4 cell count were measured simultaneously (both of them were reported in logarithmic scale) at days 0, 2, 7, 10, 14, 28, 56, and 86. Figure 6 shows the corresponding cell counts. The number of repeated measurements per subject varies from 4 to 8, and the total number of observations is 328.
Simple linear regressions of cell counts against plasma HIV RNA level at each visit (not shown here) evidence that the slope associated with the viral load changes over time because some days the slope is significantly different from zero. This simple observation motivates fitting a TVCM in order to characterize and quantify such relationship.
Once again, under a Bayesian setting, employing both Gaussian kernel and regression spline functions with g = 2, we fit the TVCM given by where y i,j and x 1,i (t ij ) are the viral load (in logarithmic scale) and the CD4 cell count (also in logarithmic scale) associated with the j-th measurement of the i-th patient, respectively. The time-varying coefficient β 1 (t) characterizes the dynamic relationship between the viral load and the immunologic response over the treatment period. Interestingly, according to PCV criteria, the optimal number of knots in both cases are k K 0 = k S 0 = k K 1 = k S 1 = 1. Following exactly the same setting as in the case study #1, the results we report are based on 2,000 samples obtained after a burn-in period of 500 iterations. Figure 7 shows estimates of the dynamic slope. Both estimated trajectories are quite smooth. Here, we see a significant negative correlation between viral load and immunologic response at the beginning of the treatment. Then, the relationship consistently attenuates until reaching zero about the fourth week. At that point, the negative correlation gradually strengthens again, and continuously to do so towards the end of the treatment period considered in this analysis. Almost identical results were obtained by Wu and Liang (2004).
Goodness-of-fit and predictive performance
The modeling literature has largely focused on both Akaike Information Criteria (AIC) and Bayesian Information Criteria (BIC) as a tool for model selection (e.g., see Wu and Zhang, 2006). However, under a Bayesian setting, the BIC is inappropriate for hierarchical models since it underestimates the complexity of the model. An alternative to BIC that addresses this issue is the Deviance Information Criterion (DIC), DIC = −2 log p(y | Υ) + 2p DIC , where Υ is the posterior mean of model parameters and p DIC = 2 log p(y | Υ) − 2 E [log p (y | Υ)] is the model complexity (see Gelman et al., 2013, for a discussion). Table 2 Table 2: DIC and AMSE to assess goodness-of-fit and out-sample predictive performance, respectively, of both radial kernel and regression spline functions.
On the other hand, in order to compare the ability of each alternative to predict missing observations, we evaluate their out-of-sample predictive performance by means of a crossvalidation (CV) experiment. Thus, for each combination of dataset and model, we performed an L-fold CV in which L randomly selected subsets of roughly equal size in the dataset are treated as missing and then predicted using the rest of the data. We summarize our findings in Table 2, where we report the average mean square error (AMSE) corresponding to the prediction of missing measurements in the datasets. In this context, the AMSE is a measure of how well a given model is capable of predicting missing observations. We can see from this table that both alternatives have comparable predictive capabilities.
Discussion
In this paper, we review two simple but powerful alternatives based on linear expansions to estimate time-varying coefficients: a new approach using radial kernel functions and a more standard method using regression spline functions. We framed the estimation procedure under both Frequentist and Bayesian inference paradigms using bootstrap techniques, Gibbs sampling and a variational Bayes method. From an empirical perspective, we provide two simulations studies. These experiments strongly suggest that either combination of basis representation and inference approach are comparable and mostly equivalent. From a practical perspective, the first case study shows that the overall CD4 counts increased dramatically during the first 40 weeks under a specific anti-viral treatment, but the effect of the drug therapy faded over time and completely disappeared after about week 100. On the other hand, the second case study evidences a strong negative correlation between the viral load and the immunologic response at the beginning of an anti-viral treatment, and then shows evidence of a weak correction about the fifth week, when gradually strengthened again and reached the largest value at the end of the treatment period.
As part of the revision process, one of the referees suggested that, in order to avoid inconsistencies in terms of exposition, the model needed to be presented in a general fashion as in a mixed-effects time-varying coefficient model (see Section 5.1). Even though we agree on the convenience of working with a more general model, we consider that our approach should be treated in terms of a "standard" TVCM as in Equation (1) since it makes exposition straightforward, given that our main contribution rely on an estimation protocol based on radial functions, along with inference strategies according to the Bayesian paradigm. Nonetheless, we exhort the reader to pursue such an extension employing the ideas discussed in this manuscript.
The estimation protocol presented here is susceptible of many extensions. For instance, to avoid the curse of dimensionality, the model can be extended to account for longitudinal inhomogeneity of varying coefficients via Bayesian basis selection or adaptive knot selection which is an integral part of the data generating mechanism. Another interesting extension involves the incorporation of specific working correlation matrices in the probabilistic structure of the random error using more convoluted covariance functions. Extensions to more complex situations, including multivariate or spatial data are also possible, and are the subject of future work.
Posterior summaries along with point and interval estimates can be approximated based on the Monte Carlo samples. As before, for a given t, the 100(1 − α)% percentile-based credible interval for β r (t), r = 0, 1, . . . , d, can be computed as in (13).
A.3 Variational Bayes algorithm
The Markov chain defined in the previous section is guaranteed to converge eventually to the posterior distribution p(α, σ 2 | Z, y) given in (19). Here, we consider the problem of finding a function q(·) in a family of functions closest to the posterior distribution p(·), according to a given dissimilarity measure. This idea is known as variational Bayes (see Ormerod and Wand, 2010, for a review).
Hence, the algorithm for obtaining the parameters in q(α) and q(σ 2 ) is the following: 1. Initialize b * > 0.
Posterior summaries can be obtained via Monte Carlo simulation using standard random number generation routines.
B Notation
Matrices and vectors with entries consisting of subscripted variables are denoted by a boldfaced version of the letter for that variable. For example, x = (x 1 , . . . , x n ) denotes an n × 1 column vector with entries x 1 , . . . , x n . We use 0 and 1 to denote the column vector with all entries equal to 0 and 1, respectively, and I to denote the identity matrix. A subindex in this context refers to the corresponding dimension; for instance, I n denotes the n × n identity matrix. The transpose of a vector x is denoted by x ; analogously for matrices. Moreover, if X is a square matrix, we use tr(X) to denote its trace and X −1 to denote its inverse. The norm of x, given by √ x x, is denoted by x . | 2021-03-02T02:15:35.301Z | 2021-02-27T00:00:00.000 | {
"year": 2021,
"sha1": "a69dec3a012e0840fa1b5b53f834afa28b7b43b9",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2103.00315",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "a69dec3a012e0840fa1b5b53f834afa28b7b43b9",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
192013659 | pes2o/s2orc | v3-fos-license | THE FANTASTIC SHAKESPEARE: CHARACTER’S PASSIONARY CONFOCALITY IN THE ASPECT OF RECEPTION
Based on J. Baudrillard’s methodology on the beginning of the era of hyperreality as the “world of simulation”, the article under discussion substantiates the expansion of science fiction horizons by means of “reversing the imaginary”. The latter notion is mostly marked with the inter-penetration of fictional worlds, which are genealogically revealed only in their connection with new genre forms. Particular emphasis in the “hyperreal indifference” of science fiction narratives has been laid on intertextual ties. The article updates the issue of intertextual potential of the personosphere of science fiction and fantasy, which, according to Tz. Todorov, presupposes “reader’s active integration into the world of characters”. In this way, the specifics of including the “fantasy” characters of Shakespeare’s plays into the intertextual space of science fiction has been analyzed. Much attention has been paid to the figure of William Shakespeare as a character in literary texts by American science fiction writer Clifford Simak (1904–1988) “The Goblin Reservation” (1968) and “Shakespeare’s Planet” (1976). Another emphasis has been laid on ______________________________________ © Тичініна А., Паранюк Д., 2018 Alyona Tychinina, Dan Paranyuk / The Fantastic Shakespeare 192 the peculiarities of synthesizing science fiction and fantasy that form the so-called “simulative hyperreality” by means of combining several models of personosphere – fairy, fantastic, fantasy, mystical, and other – in the creative activities of C. Simak. They function in accordance with the principle of combining the image fields, whose imagological vectors are constantly intersecting with each other. What is more, the personosphere has been attracted not by the protagonist, but by some confocal figure (a sage or a sentinel, according to C. Jung), who is absolutely neutral, however has a reliable “point of view”, thus winning reader’s receptive trust. In this case, W. Shakespeare is regarded as a confocal and, at the same time, passionary character, for he is presented as an imaginative nucleus of a personosphere, and not only as an intertextual phantasm (according to R. Barthes) or an atroponimic allusion. Therefore, this “penetration” of Shakespeare into science fiction may be considered as an essential intertextual ideologeme (according to J. Kristeva). Entering the world of other characters, his passionary status pushes away the center of the personosphere, thus generating the development of plot events. This is why the chronotope version, suggested by the American writer (whereby realistic, fantastic, fantasy and even mystical characters coexist quite peacefully), stands out as rather logical for Shakespeare’s timeless image, whose idiorhythmic nature is able to fit any context, ironically refuting the so-called “Shakespeare’s Question”. The article under studies also points out Shakespeare’s interrelations with a mystical anthropomorphic character Spirit, whose “traces” (in J. Derrida’s interpretation) frequently “run into” the figure of Shakespeare. Hence, it might be concluded that Shakespeare’s immanent presence strengthens the integrity of a literary text, as well as denounces the inferiority of its function in the personosphere, whereas in the aspect of reception, it intercepts the readers’ attention, shifting away the rest of the imaginative centers of the novel.
the peculiarities of synthesizing science fiction and fantasy that form the so-called "simulative hyperreality" by means of combining several models of personosphere -fairy, fantastic, fantasy, mystical, and other -in the creative activities of C. Simak. They function in accordance with the principle of combining the image fields, whose imagological vectors are constantly intersecting with each other. What is more, the personosphere has been attracted not by the protagonist, but by some confocal figure (a sage or a sentinel, according to C. Jung), who is absolutely neutral, however has a reliable "point of view", thus winning reader's receptive trust.
In this case, W. Shakespeare is regarded as a confocal and, at the same time, passionary character, for he is presented as an imaginative nucleus of a personosphere, and not only as an intertextual phantasm (according to R. Barthes) or an atroponimic allusion. Therefore, this "penetration" of Shakespeare into science fiction may be considered as an essential intertextual ideologeme (according to J. Kristeva). Entering the world of other characters, his passionary status pushes away the center of the personosphere, thus generating the development of plot events. This is why the chronotope version, suggested by the American writer (whereby realistic, fantastic, fantasy and even mystical characters coexist quite peacefully), stands out as rather logical for Shakespeare's timeless image, whose idiorhythmic nature is able to fit any context, ironically refuting the so-called "Shakespeare's Question". The article under studies also points out Shakespeare's interrelations with a mystical anthropomorphic character Spirit, whose "traces" (in J. Derrida's interpretation) frequently "run into" the figure of Shakespeare. Hence, it might be concluded that Shakespeare's immanent presence strengthens the integrity of a literary text, as well as denounces the inferiority of its function in the personosphere, whereas in the aspect of reception, it intercepts the readers' attention, shifting away the rest of the imaginative centers of the novel.
Despite J. Baudrillard"s ambiguous statement that "the «good old» SF imagination is dead" [12, p. 126]which has outlined the borders of "expanding universes" of classic science fiction [12, p. 128] in his work "Simulacra and Simulation" (1981)the scholar still dwells on the socalled "reversion of the imaginary": "when there is no more virgin ground left to the imagination, when the map covers all the territory, something like the reality principle disappears" [12, p. 129]. The French philosopher means, above all, the expansion of borders and the reconstruction of the science fiction discourse episteme. The latter has long been associated with keeping to certain criteria, which immanently contradicts the very essence of fiction. It is the time formation of space that J. Derrida referred to as Differance. At the same time, it is worth mentioning that at its basis lies the notion of "science fiction method", which only in genealogical respect may be divided into science fiction, post-apocalypses, (anti) utopia, the so-called "horror", as well as fantasy with its numerous modifications.
Since the notion of fantastic is usually defined in the correlation with the notions of real and imaginary [15], and science fiction "has always played upon the double, on artificial replication or imaginary duplication" [12, p. 131], J. Baudrillard speaks of the beginning of the era of hyperreality as the "world of simulation" [12, p. 129-130]. He is sure that "it is the hyperrealist indifference that constitutes the true «science-fictional» quality" [12, p. 132]. In literature, this idea might be implemented as an interaction, or even integration, of different worlds, numerous realities, as well as stratification of several chronological dimensions and genealogical dynamics from science fiction to fantasy. Undoubtedly, in such cases, the authors of fictional narrations most often appeal to the classic literary heritage.
Updating the paradigm of genre in the field of Literary Studies, O. Chervinska emphasizes on the sources of fantasy, which reach as far back as the Ancient times, "mostly denoting one of classic and ancient techniques of literary fantasizing" [10, p. 45]. The researcher is convinced that the very History of Literature "proves rather the metamorphic nature of a quite limited number of genre forms than the systematic enrichment with any genre-making experience" [10, p. 45]. At the same time, the phenomenon of intertextuality is an immanent quality of literature on the whole, and the metamorphic nature of the meta-genre of fantasy in particular [10, p. 46]. Together with the active application of reminiscences and literary allusions, the issue of the intertextual potential of personosphere is getting more and more important. L. Heckman, N. Nikoriak, S. Namestiuk, G. Khazagerov have investigated the issue in the context of various literary problems. Nevertheless, it lacks sufficient consideration at the level of literary science fiction and fantasy.
Taking into account Tz. Todorov"s theory regarding "readers" predictable integration into the world of characters" [15, p. 30] in science fiction and fantasy (for instance, the texts by P. Anderson, T. Pratchett, J. Crawley, N. Hayman, J. Tolkien, T. Williams), it is worth noting that this intertextual circle contains the characters of Shakespeare"s plays. The way the latter have been introduced there, is best described by the phrase "Shakespeare"s genius" [13, p. 60]. Shakespeare"s images, introduced into a science fiction context, are the objects of reconsideration. In addition, they are able to reflect "the secondary world" [7, p. 176]. Naturally, in such cases, science fiction authors most frequently use "fantasy" pretexts ("A Midsummer Night"s Dream", "Macbeth" and "The Tempest") due to the fact that "Shakespeare"s play always touches upon the most crucial issue of fantasythe issue of interaction between the bordering worlds and their inhabitants, particularly between the immortal creatures <…> and mortal humans" [7, p. 176]. Therefore, the personosphere of classic fantasy, more seldomthat of science fiction, is usually formed relying on the images of various magic creatures that perform the abovementioned functions (according to the terminology of V. Propp).
For example, E. Kanchura, while analyzing T. Pratchett"s alternative worlds, refers to Shakespeare"s comedy "A Midsummer Night"s Dream" and comes to a conclusion about the metatextual effect of "a double parody" [2, p. 275]. In other words, Apuleius -Shakespeare -Pratchett: A charming smile of Shakespeare"s elves turns into masters" grin at the mortal. Pratchett deprives the images of fairy-tale heroes, who administer happy destinies, of a romantic flare and reminds of the primary folklore reception of elves as an alien and incomprehensible folk [2, p. 278].
Thus, the transitive images of extraordinary creatures (whose presence determines the respective science fiction genre), removed by Shakespeare from mythological or folklore contexts and introduced into a literary space, constitute the basis of the personospere of classic and modern fantasy.
As a rule, the popularity of intertextual potential of the Great Bard"s literary texts is closely associated with the fact that W. Shakespeare (1564-1616) is presented (according to H. Bloom) not only as the center of the Canon, substantiated by "cognitive acuity, linguistic energy, and power of invention" [13, p. 46], but also as a creator of "an enormous number of metaphors that have entered the Western civilization and get permanently updated in various field of its activities" [9, p. 178]. N. Torkut points out that Shakespeare is becoming the founder of "new discourse" (term by M. Foucault) [9, p. 179]. "The name of Shakespeare or, to be more specific, the concept of Shakespeare, as a cultural metaphor that functions in a sociocultural field" [9, p. 179], determines the extratextual level of interpretative metaphorization. This may be regarded as a significant culturological indicator of the personosphere of a literary text.
However, W. Shakespeare, as an intertextual character of a science fiction metagenre, is an exceptional phenomenon. In the 60s-70-s of the XX century, his image was actively involved in the texts by an American science fiction writer Clifford Simak : the novel "The Goblin Reservation" (1968), later "Shakespeare"s Planet" (1976). The creative activities of this author mostly revealed the peculiarities of "contacts between the representatives of different galactic civilizations" [6, p. 471]. It is important that the above-mentioned period of literature is considered to be "the golden age" of science and social-philosophical fiction [3, p. 13]. It was mostly presented by the works of A. Azimov, C. Simak, H. Kuttner, T. Sturgeon, O. Stapledon, R. Heinlein, K. Chapek, as well as was marked with growing popularity of fantasy (H. Evers, M. Eliade, H. Lovecraft, G. Meyrink). In particular, American literature of this genre has faced an anthropological "turn to a human being", the activation of social-critical motives [3, p. 75], as well as the synthesis of basic elements of science fiction and fantasy, especially in the works by C. Simak.
O. Kovtun deals with the efficiency of combining science fiction and fantasy, as two reality models, related, above all, to the respective types of world perception, emphasizing on a slight difference between them. She substantiates this point of view by the fact that there exists a considerable number of works, where these two genre models are joined together, interpenetrate, and even "germinate" [3, p. 118-119], creating the so-called "simulative hyperreality" (according to J. Baudrillard). Not only C. Simak was engaged in synthesizing these two literary genres, but also V. Berestov, I. Varshavsky, H. Kuttner, A. and B. Strugatsky, R. Sheckley, J. Rowling, and others.
Keeping in mind J. Baudrillard"s concept of simulative hyperreality, this genre situation might be explained by the fact that science fiction appeals to the resurrection of the "historical" worlds of the past, trying to reconstruct in vitro and down to its tiniest details the various episodes of bygone days: events, persons, defunct ideologiesall now empty of meaning and of their original essence, but hypnotic with retrospective truth [12, p. 137].
Here, we might even speak of the so-called "simulation field": Models no longer constitute an imaginary domain with reference to the real; they are, themselves, an apprehension of the real, and thus leave no room for any fictional extrapolationthey are immanent, and therefore leave no room for any kind of transcendentalism [12, p. 137].
It would be expedient to note that the personosphere of C. Simak"s novels has been built in a very peculiar way. Despite the generally accepted genre canons, it includes both humans (fiction characters, real historic figures, artists, literary heroes), animals, anthropo-and zoomorphic simulacra, characters of science fiction type (inventors, biomechs, aliens) and fairy-tale-fantasy images (ghosts, goblins, fairies, trolls, magicians, dinosaurs, dragons). In fact, in "The Goblin Reservation", all these characters study on the Earth, so that the planet has turned into "a great galactic university": "Earth was the galactic melting pot, a place where beings from the thousand stars met and mingled to share their thoughts and cultures" [14]. Since the fantastic is viewed as "the border experience" [15, p. 80], C. Simak builds his personosphere in accordance with the principle of combining the image fields, whose imagological lines are constantly intersecting. What is more, the personosphere has been attracted not by the protagonist, but by some secondary figure (a sage or a sentinel, according to C. Jung), who is absolutely neutral, although has a reliable "point of view", thus winning the so-called "reader"s receptive trust".
It is interesting that in this context, William Shakespeare looks, at first sight, as a secondary, though passionary character. His image functions as an imaginative nucleus of the personophere and not only as an intertextual phantasm (according to R. Barthes) or an atroponimic allusion. Since passionarity of the secondary is "logically regarded as an internal constituent of crisis situations" [11, p. 228], it may be also extrapolated into the field of genres: here, we mean the crisis of science fiction. We refer to the peculiarity of a passionary character as to "the energetic surplus that exceeds, at a given moment, the needs of a certain individual or entirety" [11, p. 227]. Consequently, any character, as "a visible and identified individuality" (according to M. Bakhtin) with a passionary status, entering the world of other protagonists, shifts away the center of the personosphere and thus generates plot events.
Readers The very fact of Shakespeare"s presence goes beyond the frames of his biography, being supplemented with exact coordinates of his future quasi-lecture. In this way, the recipient becomes involved in the plot events and determines his further horizon of expectations, later specified by Shakespeare"s name on the commercial cloth of the museum. Shakespeare"s name is also closely related to the activities of the English Department at the Institute of Time, whose staff have proved that "the Earl of Oxford, not Shakespeare, had been the author of the plays" [14]. The so-called "Shakespeare"s Question", articulated by C. Simak, generates numerous interpretants of the image.
Due to the fact that science fiction discourse presupposes not only the existence of "some strange event causing a wide range of emotions of both a reader and a hero, but also a peculiar manner of reading <…>: it should be neither «poetic» nor «allegoric»" [15, p. 31], we interpret Shakespeare"s presence in C. Simak"s works in an intertextual manner. When the English Classic suddenly turns up in the future, everyone starts expressing respect for him, especially «an awful lot of creeps from English Lit [14], united by the common object of their scientific research -Shakespeare himself. In Simak"s text, Shakespeare"s indisputable authority is often considered as "a measure of all things", as a constant object of comparison of different historical epochs.
The American science fiction writer interprets the speculations about the difference between real and recorded history from the point of view of biased judgments and tendentiousness. He treats them as the collapse of "cozy little worlds" [14]. A vivid example of this is Shakespeare"s authorship. He has violated the peace and harmony at the Institute of Time and "is forced to make a sideshow out of history to earn a little money" [14]. That is the reason why Shakespeare"s promoted lecture on how he did not write his plays becomes a huge problem for the University Administration: [14].
William Shakespeare is not any easy man to handle. He wanted at once to go out and have a look at this new age of which he'd been told so much. Time had a rough time persuading him to change his Elizabethan dress for what we wear today, but they positively refused to let him go until he agreed to it. And now Time is sweating out what might happen to him. They have to keep him in tow, but they can't do anything, that will get his back up. They have sold the hall down to the last inch of standing room and they can't take the chance that anything will happen
Shakespeare"s figure covers an integral storyline canvas. The passionarity of this "confocal" character is revealed through the tendency to its symbolic disappearing and returning, which is of rather systematic nature. In this way, his escape turns into "Shakespeare circus we are putting on": "Can you envision the ruckus there would be if a man like Shakespeare should not be returned to his proper age" [14]. What is more, Simak"s Shakespeare does not even plan to attend his own lecture: "Forsooth, and if I did attend it, they would forthwith, once that I had finished, whisk me home again" [14], thus ascertaining the cunningness of his plan to stay in the timeless context. Particularly indicative is the scene when Shakespeare"s "confocal" image comes across "the world of principal characters". The relaxation of a man with "a white-toothed smile flashed above the beard", enthusiastic about the taste of ale ("stuff soft to the palate and pleasing to the stomach") [14], is narrowed down to his pondering over the attempt to stay in the present time for good. In addition, the plausibility of Shakespeare"s image is strengthened by the personal details from his biography, which produces an impression of realism of his mystified image upon the readers: "I left at home, said Shakespeare, a wife with a nagging tongue and I would be rather loath to return to her" [14].
It is interesting that Shakespeare expresses his thoughts in the spirit of language stylistics of the XVI century: "I deem me fortunate <…> to have fallen in with such rough and rowdy fellows" [14]. Hereby, he prohibits his companions to call him a bard, because "I be no more than an honest butcher and a dealer in the wool" [14]. In H. Bloom"s work we find an explanation for such a principled position of Simak"s Shakespeare: "Actors in Elizabethan England were, by statute, akin to beggars and similar lowlife, which doubtless pained Shakespeare, who worked hard to be able to go back to Stratford as a gentleman" [13, p. 45].
Thus, Shakespeare"s image is presented as a passionary intertextual ideologeme (according to J. Kristeva), which "materializing" at various levels of the text structure, expands over its whole trajectory and assigns it certain historical and social coordinates" [4, p. 136-137]. The image of Shakespeare, introduced by Simak into the hyperreal time and space, is marked with its own historical epoch. Nevertheless, due to the desire to stay in the future, it ironically proves the continuity of its being: "My teeth are bad <…> they hang loosely in the jaw and at times pain exceedingly. I have intelligence that hereabout are marvelous mechanics who can extract them with no pain and fabricate a set to replace the ones I have" [14]. The chronotope version, suggested by the American writer, whereby realistic, fantastic, fantasy and even mystical characters coexist quite peacefully, stands out as rather logical for Shakespeare"s timeless image, whereas its idiorhythmic nature is able to fit any context: "I hear tell that you have arrived at understanding with goblins and with fairies, which is a marvelous thing. And to sit at meat with a ghost is past all understanding, although one has the feeling here he must dig close at the root of truth" [14]. Shakespeare"s immanent presence "solidifies" the integrity of a literary text, as well as denies the inferiority of its functions in the personospere.
Relying on the concepts of L. Ginsburg, N. Tamarchenko, R. Wellek, and A. Warren, R. Dzyk concludes that "it is rather problematic to classify any character as secondary (inferior) in the aspect of reception", because the primary and the secondary are "relative notions and often interchange each other", thus "eliminating the status border" [1, p. 130-131]. The meeting of Shakespeare and mystic Ghost is particularly important for realizing the border between these two relative notions, as well as in determining the character"s status. The "traces" of anthropomorphic Ghost (in J. Derrida"s interpretation) frequently "run into" Shakespeare"s figure, indicating a hidden connection that exists between the two. It was not accidentally that H. Bloom also compared Shakespeare to "a spirit that permeates everywhere, that cannot be confined" [13, p. 52]. At first sight, the image of the creature with extraordinary abilities, introduced by C. Simak, seems to be a certain simulacrum: "The guy gets drunk on moonbeams. He can dance on rainbows. He has a lot of advantages <…> For one thing, he's immortal" [14]. However, when Ghost adds "From England" [14], reader"s receptive attention is directed to the figure of W. Shakespeare. The proof of this is the café visitors" chanting an ironic song: Hurrah for Old Bill Shakespeare; He never wrote them plays; He stayed at home, and chasing girls, Sang dirty rondelays [14]. Similar quasi-folklore intermedial inclusions that "by all possible means reproduce the wide-dimensionality of the world of variable realities" [8, p. 44], demonstrate, in this case, the "fermentability" of Shakespeare"s image in the holistic context of the novel. They point at the temperamental splash of the character"s passionary energy. The French philosopher is certain that simulative systems are related, above all, to the experience of science fiction, the latter "only being, most often, an extravagant projection of, but qualitatively not different from, the real world of production" [12, p. 156], with constant accumulation of mechanic or energetic abilities. In this way, the encounter of an active, mystified natural forcea simulacrum (Ghost) and a passionary "preliminary" (Shakespeare) is a display of enormous energetic power that alters the course of the narration. Shakespeare is not embarrassed by the conversation with Ghost (his immaterial beginning): "He accepted Ghost much more readily than would have been the case, say, with a twentieth-century man. In the sixteenth century they believed in ghosts and ghosts were something that could be accepted" [14]. This conversation may symbolize Ancient England as the inversion of Noah"s Ark: "A goodly country to the eye <…> but filled with human riffraff. There be poachers, thieves, murderers, footpads, and all sort of loathsome folk..." [14]. Substance-Ghost, visualized through "the sleeves of his robe, if robe it was", solemnly announces: «I am William Shakespeare's ghost!" [14], which frightens Shakespeare-Character: "If Shakespeare sees him following he'll set new records running" [14], though till that moment the talk and relaxation with Ghost did not bother the writer: "He never got the wind up until he found that Ghost was his ghost and then..." [14]. Eventually, Shakespeare"s figure, which has previously disappeared from the storyline, is found at a climax moment beside the most crucial problems: "The Artifact is gone and the museum is wrecked and Shakespeare has disappeared" [14].
The final scene of the novel, offered by C. Simak, distinctly proves the genre metamorphicality of this textfrom science fiction to fantasy: on a lawn, "facing one another, dancing to the music of the fairy orchestra, were Ghost and William Shakespeare" [14]. The latters" merger emphasizes the reincarnation idea of Shakespeare"s image in the novel, which makes up the so-called "architectonic ring". In fact, C. Simak applies a technique of metempsychosis in the text of "Shakespeare"s Planet". It is a remake of the previous novel that, on the contrary, starts with poet"s death and ends up in his transformation into a spiritualized skull, thus implying the cult tragedy "Hamlet". This enables us to state that W. Shakespeare, as C. Simak"s character, is a transitive image, an efficient paradigm of author"s creative method.
In addition, the intertextual specifics of the personosphere of C. Simak"s novel "The Goblin Reservation" is one of the most significant examples of genre metamorphicality, which describes the dynamics of transition from science fiction to the format of fantasy. Hence, the passionary energy of a confocal character (implicated into the pesonosphere and endowed with his personal point of view) does not only guide the plot development, but also shifts the previously assigned receptive vectors into the space of simulative hyperreality. The figure of W. Shakespeare, activated by C. Simak in the form of a character, is not a mere antroponimic allusion or intertextual phantasm. It is the center of the personosphere; it intercepts readers" attention and puts away all other imaginative centers of the novel. | 2019-06-19T13:25:14.177Z | 2018-12-28T00:00:00.000 | {
"year": 2018,
"sha1": "72f881f9b4c750b0e30c672219304305d99e9d0e",
"oa_license": "CCBY",
"oa_url": "http://pytlit.chnu.edu.ua/article/download/154756/pdf_10",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "29eed129190620690412e0382dc591b8c009b6df",
"s2fieldsofstudy": [
"Art"
],
"extfieldsofstudy": [
"Art"
]
} |
11273410 | pes2o/s2orc | v3-fos-license | Orientifold 4-plane in brane configurations and N=4 USp(2Nc) and SO(Nc) theory
We consider brane configurations in elliptic models which represent softly broken N=4 USp(2 N_c) and SO(N_c) theory. We generalize the notion of the O4 plane, so that it is compatible with the symmetry in the covering space of the elliptic models. By using this notion of the O4 plane, we find the curve for softly broken N=4 USp(2 N_c) and that for SO(N_c) theory as infinite series expansions. For the USp case, we can present the expansion as a polynomial.
Introduction
Supersymmetric Yang-Mills theory are interpreted as low-energy worldvolume theory on branes [1]. By considering brane configurations, some properties of SYM such as dualities are explained in terms of branes [2,3,4]. Conversely, we may know behaviors of branes by studying corresponding supersymmetric field theory.
In particular, N=2 super QCD is given by the worldvolume theory on D4-branes ending on NS5-branes in type IIA string theory. In M-theory, the NS5- and D4-branes are the same object, namely an M5-brane, which represents R^{1,3} × Σ [5]. Here Σ is a complex Riemann surface described by the Seiberg-Witten curve [6]. The N=2 Higgs branch has also been studied in terms of M-theory [7,8]. The softly broken N=4 SU(N_c) model has also been considered in [5]. This model is given by compactifying the direction along D4-branes which begin from and end on a single NS5-brane (elliptic models). This space is twisted, which means that even if we go around the compactified direction, we do not come back to the same point but to a point shifted along the NS5-brane. The mass of the matter in the adjoint representation is given by the magnitude of the shift.
The curve for N=2 SO(N_c) theory and that for USp(2N_c) theory have been derived [9,10] by introducing an orientifold 4-plane in the brane configurations. How to introduce orientifold 4-planes in elliptic models with twist has, however, remained an open question until now.
In this paper we consider brane configurations with the 'O4 plane' in elliptic models representing softly broken N=4 USp(2N_c) and SO(N_c) theories. In the conventional definition of O4-planes, a mass for the matter is not permitted. This is because the Z_2 projection of the O4-plane and the shift which corresponds to the mass are not globally compatible. So we need to generalize the O4-plane projection. To do this, we first of all consider the O4-plane in each fundamental region of the covering space of the elliptic models. The curves for softly broken N=4 USp(2N_c), SO(2N_c), and SO(2N_c + 1) are given by infinite series expansions; they are given by (3.5), (3.42), and (3.67) in the text, respectively. These equations are consistent with the decoupling limits. In the USp case, we can read off the symmetry which the 'O4 plane' induces in the compact space, since we can represent the curve as a polynomial. This is given by (3.26).
Although we have not represented the SO curves as polynomials in this paper, we may see that they possess the symmetry as well. In this 'O4 plane' case, the mass of the matter is given by the shift along the NS5-brane, as in the SU case.
Elliptic Models and the interpretation as the covering space
In this section we review, following [5], how the N=4 SU(N_c) theory is realized in the brane configuration.
This model is equivalent to what is provided by considering the integrable system [11,12,13].
In particular, D'Hoker and Phong [13] give the curve as an expansion in q, where q = e^{2πiτ} (2.1). Here τ is the complexified gauge coupling constant. This expansion of the curve in q may be interpreted as the brane configuration in the covering space.
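For reference, the expansion parameter and the standard complexified gauge coupling read as follows (the θ-angle convention is the usual one and is our assumption, since the extracted text does not spell it out):

q = e^{2\pi i \tau}, \qquad \tau = \frac{\theta}{2\pi} + \frac{4\pi i}{g^2}.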
First of all, we explain the space in which the M5-brane is embedded. We consider only a single NS5-brane and N_c D4-branes. In the IIA picture, the NS5-brane has its worldvolume in the x^0, x^1, x^2, x^3, x^4, x^5 directions and its location is specified by x^6, x^7, x^8, x^9. The D4-branes have their worldvolumes in the x^0, x^1, x^2, x^3, x^6 directions and their locations are specified by x^4, x^5, x^7, x^8, x^9. In the M-theory picture these branes are merely a single M5-brane; nonetheless we call them D4 or NS5 depending upon whether the M5-brane winds in the x^10 direction or not. The x^10 direction is compactified on an S^1 of radius R. We introduce holomorphic coordinates v = x^4 + ix^5 and s = x^6 + ix^10 (up to normalization), in terms of which y = e^s below. We compactify the x^6 direction on a circle of radius L. Let us consider the following configuration: the NS5-brane is localized on this circle, and N_c D4-branes begin from and end on this NS5-brane, extending along this circle.
Here E is a genus-one Riemann surface with an arbitrary complex structure. However, we should consider a C bundle over E, denoted X_m below. (Fig. 1: N_c D4-branes wrapping the x^6 circle, twisted by the shift m.) The softly broken N=4 SU(N_c) curve is interpreted as an N_c-fold cover of E, the N_c branches being the positions of the D4-branes in C. The NS5-brane appears as a simple pole in v. This description is equivalent to the integrable system [5].
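A minimal way to state this twisting (schematic, in our notation rather than quoted from the paper; normalizations of the radii are suppressed) is through the identifications defining X_m:

(v, x^6, x^{10}) \;\sim\; (v, x^6, x^{10} + 2\pi R), \qquad (v, x^6, x^{10}) \;\sim\; (v + m, x^6 + 2\pi L, x^{10}).

For m = 0 this reduces to the trivial product X_0 = C × E, while for m ≠ 0 going once around the x^6 circle shifts the fiber coordinate v by m, which is precisely the adjoint mass.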
We begin by saying that the explicit shape of the M5 brane is given by the integrable system.
If we set the double period (2ω_1, 2ω_2) = (1, τ), the curve is given by the spectral equation of the integrable system [13], where L(z) is the gauge-transformed Lax operator of the elliptic Calogero-Moser system [14]. This expression is faithfully represented by the brane configuration in the covering space of X_m (Fig. 2). The reason is that in the noncompact x^6 space the curve for the N=2 SU(N_c)^n gauge theory with bifundamental matter takes the form of a degree-(n+1) polynomial in y whose coefficients are the B_i(v) [5]. Here y = e^s, and the B_i(v) are polynomials of degree N_c in v. Dividing the curve by y^{[(n+1)/2]}, we obtain (2.16). (Notice that h_1(z + τ) = h_1(z) + β.) If we set B_l(v) = B(v − lm), the D4-branes located between a pair consisting of an NS5-brane and the next NS5-brane get shifted by m in the v direction when they move to the adjacent pair of NS5-branes. The mass of each bifundamental is m. Let n → ∞ and let the D4-branes located between each pair of NS5-branes be the same. Then the n copies of SU(N_c) reduce to a single SU(N_c), and the bifundamental matter becomes a hypermultiplet in the adjoint representation with mass m (plus a neutral singlet). This curve is equivalent to (2.14), and this configuration represents the covering space of the softly broken N=4 SU(N_c) theory in the elliptic model.
Orientifold 4-plane in elliptic models
Let us introduce an O4-plane into the elliptic model by considering the covering space. We treat the O4-plane as a nondynamical D4-brane which carries the appropriate R-R charge and which implements a Z_2 spatial projection [9,10]. Being nondynamical means that the O4-plane does not fluctuate. The difference in the worldsheet parity projection corresponds to the difference in R-R charge: if the R-R charge of the O4-plane is +1, the gauge group is USp, and if it is −1, the gauge group is SO.
It is clear that we cannot put an O4-plane, as the projection, into the elliptic model when the mass does not vanish. If we impose a (v ↔ −v) mirror symmetry on the configuration as a constraint, there is a conflict between this symmetry and the other symmetry (2.5). The reason is clear in the covering space (Fig. 2). Mirrors of the D4-branes around v = nm are produced around v = −nm in the n-th fundamental region, and around v = (n+1)m and v = −(n+1)m in the next one. We must, however, have D4-branes around v = (n+1)m and v = −(n−1)m in the next one, according to (2.5).
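The obstruction can be restated group-theoretically; a small sketch in our own notation (not the paper's): let σ be the candidate orientifold action on the fiber and T_m the twist accompanying one circuit of the x^6 circle,

\sigma(v) = -v, \qquad T_m(v) = v + m, \qquad \sigma T_m \sigma^{-1} = T_{-m} = T_m^{-1} \neq T_m \quad (m \neq 0).

A single global Z_2 projection would require σ to commute with the identification T_m, which fails for m ≠ 0; this is why the Z_2 will instead be imposed fundamental region by fundamental region.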
Hence we define the 'O4 plane' projection as the ordinary O4-plane projection applied in each fundamental region. This seems a natural generalization of the O4-plane, since we recover the normal O4-plane when m = 0.
Turning on m, each fundamental region is shifted by m along the v direction. This time, each O4-plane is shifted together with the other D4-branes; see Fig. 3. This configuration apparently corresponds to softly broken N=4 USp and SO theories with an O4-plane whose R-R charge is +1 and −1, respectively. In what follows we read off the curves from this configuration and check their consistency in the decoupling limits.
N = 4 USp(2N_c) theory
Let us consider the case where the gauge group is USp(2N_c). There are N_c D4-branes, N_c mirror D4-branes, and an O4-plane of R-R charge +1 in each fundamental region. Let k_a > 0, a = 1, ..., N_c, be the positions of the N_c D4-branes in the n = 0 fundamental region. The positions of the mirror D4-branes are −k_a. Hence H(v − nm) in (2.14) contains the factor ∏_{a=1}^{N_c} ((v − nm)^2 − k_a^2) from the D4-branes. This polynomial is further deformed by the O4-plane. There are two effects of an O4-plane of R-R charge +1 on H(v − nm) between two NS5-branes. One is to introduce an overall factor (v − nm)^2; the other is to introduce a constant shift which is independent of both v and s. For example [9,10], in the pure N=2 USp(2N_c) case, the curve with an O4-plane located at v = 0 carries such a v^2 factor, where y = e^s. The first v^2 is caused by the O4-plane under the following assumption [15]: if the orientifold crosses an NS5-brane, its charge changes sign. In our case, to be exact, the O4-plane does not cross, but the same term should be induced in each fundamental region. Hence H(v − nm) = (v − nm)^2 ∏_{a=1}^{N_c} ((v − nm)^2 − k_a^2) + a_n, where a_n = a_n(q, m, k_a). This is invariant under (3.1) for each n, as expected. The curve (2.14) is deformed accordingly for softly broken N=4 USp(2N_c). Let us now consider the term a_n. Notice that this curve should have all the symmetries that the curve (2.14) has for arbitrary N_c, because this configuration with the O4-plane can be seen as a special case of the N=4 SU(2N_c + 2) configuration (footnote 4). Since the curve must be invariant under (2.5), we have a_n = a_{n+1}, which means that a_n(q, m, k_a) = a(q, m, k_a). The curve (2.14) is also invariant under (n, m, s) → (−n, −m, −s + 2πiτ), among other transformations. The O4-plane is nondynamical. When m = 0, the effects other than the Z_2 projection of the O4-plane in each n-th fundamental region are canceled by those from the (n+1)-th and (n−1)-th fundamental regions. This means that a(q, m^2, k_a^2) = m^2 ã(q, m^2, k_a^2). The mass dimensions of (q, m, k_a, a) are (0, 1, 1, 2N_c + 2), respectively. Of course, ã must be invariant under Weyl transformations of USp(2N_c). These constraints fix the form of a up to a function f(q) such that f(q) → q^{1/2} for q → 0. This formula will be checked in the next subsection. We thus find the softly broken N=4 USp(2N_c) curve as the infinite series expansion (3.5).
Decoupling limits
We check the consistency of the curve (3.5) through the chain of decoupling limits, labeled (1)-(4) below.
The limit (4) has already been studied by D'Hoker and Phong [13]. It is known that when r_1 = r_2 the flavor group is gauged, so that we have the N=2 SU(r_1) × SU(r_2) theory with bifundamental matter. We will see a similar situation in the limit (2).
Therefore we need further checks for the case that the flavor symmetry is gauged.
Let us check the limit (1). We take all the D4-branes far away from the O4-plane. In this limit the curve simplifies; here v_0 is the center of the D4-branes (not including the mirrors) in the v-plane.
We also shift the origin, which induces v → v + v_0 and k_a → k_a + v_0. This yields the limiting form of the curve.
Since we do not yet require k_a = 0, we can absorb the last term into the first term by some redefinition of k_a, and the resulting expression is then the same as (2.14), the curve for N=4 SU(N_c).
(m − m − y_j)(m + m + y_j). (Notice that we ignore the immaterial factor of 2 which comes from the shifts of coordinates throughout this paper.)
We obtain the leading large-m and small-q behavior of H(v − nm). The limiting form of the curve follows, where ω = e^s. Since we need at least one NS5-brane, we keep y fixed as m → ∞, according to (3.14). For finiteness, we assume −6N_1 + 4N_2 − 6 ≤ 0. We obtain the limiting form of the curve as m → ∞ and q → 0. When −6N_1 + 4N_2 − 6 < 0, we get, as m → ∞, the curve for N=2 USp(2N_1) with N_2 flavors [16], as expected. To be more explicit, by y → B_+(v)^{-1} y we obtain the standard form. In particular, we take N_2 = 0, which means the pure N=2 USp(2N_1) theory. The constraint −6N_1 + 4·0 − 6 < 0 is satisfied, and this curve is valid in this limit as well.
The flavor group is gauged as in the SU case [13]; the result is the curve for N=2 USp(2N_1) × SU(N_2) with bifundamental matter. To check consistency, we try the limit (3).
We consider decoupling limits in which we require r_2 ≤ r_1, as in [13], while keeping Λ̃ fixed. We obtain the limiting form of the curve (3.18). Making the shift y → M^{k−r_1}(−y) and throwing an overall factor away, the curve takes its final form. Notice that the ỹ^4 term always vanishes for r_2 ≤ r_1 as M → ∞. This result is in agreement with that obtained in [13]. When r_2 < r_1, as M → ∞, this curve corresponds to the N=2 SU(r_1) theory with r_2 flavors. When r_1 = r_2, the curve becomes that of the N=2 SU(r_1) × SU(r_2) theory with bifundamental matter.
Solution in compact space and the symmetry
For the USp case, we may easily present the infinite-series curve (3.18) as a polynomial by using (2.7), (2.11), and (2.12). Taking care of the degree of H(v) in v, which is 2N_c + 2, we find the curve for softly broken N=4 USp(2N_c) as a polynomial, together with the symmetry (3.32). This appears to be the same symmetry as that induced by O6-planes extending along the corresponding directions. Nevertheless, the interpretation is totally different. Let us consider (3.32) in a little more detail. To begin with, the O4-plane is produced by dividing the v, x^7, x^8, x^9 directions by Z_2. The other directions form a product space globally. This means that when m = 0, since X_m = C × E globally, we may take the normal O4-plane. As seen in Fig. 3(a), this is trivial in the figure. At the level of the curve, when m = 0, H(v − nm) becomes H(v), which does not depend on n. Therefore (3.5) is merely H(v) = 0. Accordingly, (3.32) means simply v → −v. Now, our space X_m is C × E not globally but only locally. We take the generalized O4-plane as C/Z_2 × E locally (footnote 6). This operation is not, however, permitted globally in X_m except for m = 0. Therefore there must be some global effect on the E part, and this effect is in fact what is seen in (3.32).
We now give the explicit curve as a polynomial. For brevity, we use k rather than v. For N_c = 1, the curve is written in terms of m̃, defined through m and β. Let k → k + m̃ h_1 to represent the curve in terms of x, y. Here x is the Weierstrass ℘(z) function and y = (1/2)℘′(z). The above curve then takes a form in which a_2 is a constant that does not depend on z. All combinations of the h_n in the elliptic Calogero-Moser system can be written in terms of x and y.
Since USp(2) = SU(2), this curve must be equivalent to that for N=4 SU(2). Let us confirm this at least for m̃ → 0 and m̃ → ∞. First, for m̃ → ∞, we already know that (3.38) becomes the pure N=2 USp(2) curve by the argument in the previous subsection. It is known that the N=2 USp(2) curve is equivalent to the N=2 SU(2) one [16]. For m̃ → 0, throwing the overall factor away, (3.38) becomes equivalent to the curve for N=4 SU(2) with m = 0.
For any N_c, we may construct the curve for softly broken N=4 USp(2N_c) as a polynomial via (3.26), up to an ambiguity in f(q). This ambiguity is fixed by comparing the discriminants of the SU(2) and USp(2) curves, since f(q) is the same function for any N_c. The USp(2) curve is, however, too complicated for the discriminant to be extracted easily. In this subsection, we look for the softly broken N=4 SO(2N_c) curve as an infinite series expansion by repeating a similar procedure. In this case the R-R charge of the O4-plane is −1.
Therefore, the effect of the O4-plane between NS5-branes appears as a factor v^{−2}. Notice that, unlike in the USp case, there is no constant shift. For example, the pure N=2 SO(2N_c) curve carries this factor; to obtain the standard form, let y → v^{−2} y. In this way we find the curve for softly broken N=4 SO(2N_c), namely (3.42). As in the SU and USp cases, the flavor symmetry is gauged in the limit (2) for certain N_1, N_2.
For the limit (1), the factor v_0^{N_c−2} is thrown away, since it is independent of n. We obtain the limiting form of the curve; as in the USp case, (3.54) is required in order to keep at least one NS5-brane in the limiting configuration. We take A(v) and B(v) as products of factors (v ± y_j) (3.56). In this limit, the limiting form of the curve follows. When −6N_1 + 4N_2 + 6 < 0, (3.60) simplifies; letting v^2 B_+(v) y → y, we obtain the standard curve for N=2 SO(2N_1) with N_2 flavors [16,17]. In particular, let us take N_2 = 0, which satisfies the condition −6N_1 + 4·0 + 6 < 0. We then obtain the pure N=2 SO(2N_1) curve.
The SO(2N_c + 1) case
For the SO(2N_c + 1) case, the procedure is almost the same as for the SO(2N_c) case. The only difference is that we need one more D4-brane, which is not dynamical and which lies on the O4-plane. Therefore we multiply (3.43) by (v − nm) to obtain (3.66). The curve for the SO(2N_c + 1) case is then (3.67), and it also passes these consistency checks. We do not present the details, except for some comments on the limit (2), because the procedure is the same as in the other cases.
When −6N_1 + 4N_2 + 3 = 0, the flavor group would be gauged. However, there is no solution for integer N_1, N_2. Therefore, unlike in the other cases, the flavor group is not gauged.
Conclusions for the SO(N_c) case
We have found the curves for softly broken N=4 SO(N_c) as the infinite series expansions (3.42) and (3.67) for SO(2N_c) and SO(2N_c + 1), respectively. In this case, we have not recast these expansions into polynomials. Recall that H(v) is a polynomial when (2.14) is obtained from (2.7).
"year": 1998,
"sha1": "ca67ed1292f8ce6afe8c5271c77b7aa1dbcd4818",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-th/9803123",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "ca67ed1292f8ce6afe8c5271c77b7aa1dbcd4818",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
257804855 | pes2o/s2orc | v3-fos-license | Role of ADAM33 short isoform as a tumor suppressor in the pathogenesis of thyroid cancer via oncogenic function disruption of full-length ADAM33
Thyroid cancer is the most prevalent endocrine malignancy globally; however, its underlying pathogenesis remains unclarified. Reportedly, alternative splicing is involved in processes such as embryonic stem and precursor cell differentiation, cell lineage reprogramming, and epithelial-mesenchymal transitions. ADAM33-n, an alternative splicing isoform of ADAM33, encodes a small protein containing the 138 N-terminal amino acids of full-length ADAM33, which constitute a chaperone-like domain previously reported to bind and block the proteolytic activity of ADAM33. In this study, we report for the first time that ADAM33-n is downregulated in thyroid cancer. The results of cell counting kit-8 and colony formation assays showed that ectopic ADAM33-n in papillary thyroid cancer cell lines restricted cell proliferation and colony formation. Moreover, we demonstrated that ectopic ADAM33-n reversed the oncogenic function of full-length ADAM33 in cell growth and colony formation in MDA-T32 and BCPAP cells. These findings indicate the tumor suppressor ability of ADAM33-n. Altogether, our study findings present a potential explanatory model of how the downregulation of the oncogenic gene ADAM33 promotes the pathogenesis of thyroid cancer. Supplementary Information The online version contains supplementary material available at 10.1007/s13577-023-00898-3.
Introduction
Thyroid cancer accounts for approximately 2.5% of all diagnosed malignancies and 95% of all endocrine tumors, thereby making it the most predominant endocrine malignancy worldwide [1]. Based on different biological behaviors and pathological processes, thyroid cancer can be divided into the following four subtypes: papillary, follicular, undifferentiated, and medullary carcinoma [2]. Approximately 90% of thyroid cancers are differentiated, among which the most common histological subtype is papillary thyroid cancer (PTC). According to data from the National Cancer Institute in the United States, PTC accounts for the majority of the over 56,000 new thyroid cancer cases every year [1]. Nevertheless, it has a 98% cure rate if diagnosed early and treated appropriately. Although thyroid cancer is among the most curable malignancies, it requires research attention owing to its annually increasing incidence (~6.3%) and mortality (~0.8%) [2,3]. Studies have shown that KAP-1 [4], eIF5A2 [5], and MEIS2 [6] are involved in the progression of thyroid cancer. However, the exact pathogenesis underlying thyroid cancer remains unclear.
ADAM33 belongs to the ADAM family of membrane-anchored proteins that have a unique disintegrin and metalloprotease-containing domain structure [7]. The ADAM family is highly conserved among animals from Drosophila to mammalian species [8]. ADAM33 was initially cloned and characterized by Yoshinaka et al. in 2002 in mouse and human tissues [7]. They found that human ADAM33 is located on chromosome 20p13 and comprises 22 exons, which are ubiquitously expressed in tissues other than the liver. Furthermore, ADAM33 is expressed in bronchus tissue and bronchial smooth muscle cells, rendering it a highly susceptible gene involved in asthma and other airway disorders [9-11].
In 2009, Kim et al. demonstrated that ADAM33 contributes to the pathogenesis of gastric cancer by promoting the secretion of IL-18, thereby increasing cell migration and proliferation [12]. In 2016, Stasikowska et al. revealed that ADAM33 was overexpressed in laryngeal cancer and sinonasal inverted papillomas, suggesting that ADAM33 is potentially implicated in their tumorigenesis [13]. These observations indicate that ADAM33 may be oncogenic in multiple cancer types. However, an investigation of 212 breast cancer samples indicated that ADAM33 is silenced by DNA hypermethylation in breast cancer and that low ADAM33 levels are associated with short overall and metastasis-free survival [14]. This observation contrarily reveals that ADAM33 also possesses tumor suppressive function. Therefore, the relationship between these two entirely different functions of ADAM33 and their underlying mechanisms in cancers remain unclear.
In the present study, we aimed to investigate the function of ADAM33 in thyroid cancer. To this end, we intended to determine the effect of ectopic ADAM33 on papillary thyroid cancer cell lines and, for the first time, report the downregulated ADAM33 expression levels in thyroid cancer. The findings of our study provide information on the mechanism by which the oncogene ADAM33 contributes to the pathogenesis of thyroid cancer.
Patient data
From January 2016 to December 2019, 139 patients diagnosed with differentiated thyroid cancer and 91 normal controls were enrolled at the First Affiliated Hospital of Soochow University. The cohort included 113 papillary and 15 follicular thyroid cancer cases, plus 11 other cases. The basic clinical manifestations and baseline characteristics are summarized in Table 1. Briefly, patients aged 26-84 years (median, 54) were diagnosed and categorized into 104 stage I-II and 35 stage III-IV cases based on the TNM staging system [15]. Here, 83 (59.7%) patients were positive for the BRAF V600E mutation. The biopsies obtained through surgery were directly immersed in RNAlater™ Stabilization Solution (AM7021, Invitrogen, USA) for DNA and RNA extraction. All operations in this study were performed in accordance with the guidelines of the Declaration of Helsinki, and all experimental protocols were approved by the Ethics Committee of the First Affiliated Hospital of Soochow University (no. 2022192).
CCK8 assay for cell growth curve
The growth of MDA-T32 and BCPAP cells was determined using a cell counting kit-8 (CCK-8) assay from MedChem-Express (Cat: HY-K0301, Shanghai, China) following the manufacturer's instructions. Briefly, the cell suspension (100 μL/well) was seeded into a 96-well plate and maintained in an incubator for 24 h. Subsequently, 10 μL CCK-8 solution was added into each well, while being careful not to generate bubbles, and the plate was then incubated for 3 h. The absorbance of each well was measured using a Synergy LX Microplate Reader at 450 nm.
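The kit returns raw OD450 values; a minimal sketch of how such readings might be converted to relative viability, assuming background subtraction against a cell-free blank well (the function, names, and readings below are illustrative, not taken from the study):

import numpy as np

def relative_viability(od_sample, od_blank, od_control):
    # Relative viability (%) from CCK-8 OD450 readings.
    # od_sample / od_control: replicate absorbances of treated and
    # untreated wells; od_blank: medium plus CCK-8 without cells.
    signal = np.asarray(od_sample, dtype=float) - od_blank
    reference = np.asarray(od_control, dtype=float) - od_blank
    return 100.0 * signal.mean() / reference.mean()

# Illustrative triplicates (not measured data)
print(relative_viability([0.82, 0.85, 0.80], 0.10, [0.95, 0.98, 0.96]))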
RNA isolation and real-time quantitative polymerase chain reaction (PCR) [17]
The total RNA of the cell lines and tissues was extracted using TRIzol reagent (Cat: 15596026, Thermo Fisher Scientific, USA) according to the manufacturer's instructions. Next, 1 µg of total RNA was reverse-transcribed into first-strand complementary DNA (cDNA) using PrimeScript IV 1st strand cDNA Synthesis Mix (Cat: 6215A, Takara) following the manufacturer's instructions. The cDNA products were diluted 1:10 in ddH2O for real-time PCR. Thereafter, TB Green Premix Ex Taq (Cat: RR420Q, Takara) was used in a 10 µL final volume containing 1 µL of diluted cDNA. PCR involved the following steps: (1) initial denaturation at 96℃ for 5 min; (2) 40 cycles of denaturation at 96℃ for 15 s, annealing at 60℃ for 20 s, and extension at 72℃ for 20 s; and (3) melting curves with progressive heating from 65℃ to 95℃. The gene expression levels of the targets were normalized to that of GAPDH. The primers involved in the real-time PCR are presented in Table 2.
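The protocol above states only that targets were normalized to GAPDH; a common quantification consistent with this is the 2^(-ΔΔCt) method, sketched below with illustrative Ct values (whether the authors used exactly this model is an assumption on our part):

def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    # Relative expression by the 2^(-ΔΔCt) method (GAPDH as reference)
    dct_treated = ct_target_treated - ct_ref_treated
    dct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(dct_treated - dct_control)

# Illustrative Ct values (not from the study): ~4.9-fold upregulation
print(ddct_fold_change(24.1, 18.0, 26.5, 18.1))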
Colony formation [18]
MDA-T32 and BCPAP cell lines (10^3 cells/well) were seeded in six-well plates. All the experiments were repeated at least thrice. Every 3 days, the medium in each well was changed until visible colonies were formed. Thereafter, the colonies were fixed using absolute methanol for 15 min and stained using 0.5% crystal violet for 30 min. Finally, the stained colonies were visualized using a Leica microscope and then quantified using the ImageJ software.
Western blot analysis [19]
RIPA lysis buffer (Beyotime Institute of Biotechnology) was used to extract protein from the cells, and the protein concentration was determined using the BCA kit (Nanjing Jiancheng Bioengineering Inc.) according to the manufacturer's instructions. Protein (20 μg) was then separated using 10% SDS-PAGE and transferred onto PVDF membranes (MilliporeSigma). The membranes were blocked in 5% skimmed milk for 1 h at room temperature followed by incubation with primary antibodies against the following: ADAM33 (1:2000, Cat: PA5-103,573, Thermo Fisher Scientific) and GAPDH (1:1500, Cat: HRP-60004, ProteinTech Group, Inc.) at 4℃ overnight. The membranes were then incubated with HRP-labeled goat anti-rabbit secondary antibody (Abcam, Cat: ab7090; 1:5000) at room temperature for 1 h. Thereafter, an enhanced chemiluminescence kit (Thermo Fisher Scientific) was used to determine the protein expression.
Co-Immunoprecipitation (IP)
An HA-tagged ADAM33-n plasmid (ADAM33-n-HA) was constructed by inserting the full-length ADAM33-n cDNA into the pCAGGs vector (including a C-terminal HA tag). MDA-T32 and BCPAP cells were transfected with the blank and ADAM33-n-HA vectors for 48 h, respectively. Co-IP was subsequently carried out using the Pierce™ HA Tag IP/Co-IP Kit (26180, Thermo Fisher, MA, USA), following the manufacturer's protocol. The precipitated proteins and input were then separated by SDS-PAGE and analyzed by immunoblotting with an anti-ADAM33 antibody (ab113740, Abcam, Shanghai, China).
Data mining
The differential analysis of ADAM33 expression in 512 thyroid cancer samples from the public database of the Cancer Genome Atlas (TCGA) was performed using the online bioinformatics tool Gene Expression Profiling Interactive Analysis (GEPIA, http://gepia2.cancer-pku.cn/#index) [20] with a cutoff value of p < 0.01. Furthermore, the matched TCGA and Genotype-Tissue Expression project (GTEx) normal tissues were used as controls. The data on ADAM33 (including the short and full-length isoforms) expression levels in different human tissues were derived from the GTEx database (https://gtexportal.org/home/). The crystal structure of the catalytic domain of human ADAM33 (1R54) was cited from the publication of Orth et al. in 2004 [21].
Statistical analysis
All the data are expressed as mean ± standard deviation. GraphPad software version 8.0 (San Diego, USA) was used for statistical analysis. For two-sample comparisons, Student's t-test was used. One- or two-way analysis of variance was used for comparisons of more than two samples. We used the Wilson/Brown method in the receiver operating characteristic (ROC) analysis [22]. Statistical significance was set at p < 0.05.
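For the ROC analysis named above, a minimal sketch of how an ROC curve and AUC can be computed is shown below (labels and scores are illustrative toy data; the score sign is flipped because, as reported later, lower ADAM33 expression indicates tumor; the Wilson/Brown confidence intervals come from GraphPad and are not reproduced here):

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])        # 1 = tumor, 0 = normal
expression = np.array([0.4, 0.5, 0.3, 0.6, 1.1, 0.9, 1.0, 0.8])
scores = -expression                                # low expression -> tumor

fpr, tpr, thresholds = roc_curve(labels, scores)
print("AUC =", roc_auc_score(labels, scores))       # 1.0 for this toy data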
ADAM33 is downregulated in thyroid cancer
To investigate the role of ADAM33 in thyroid cancer, the GEPIA online tool was employed for the differential analysis of high-throughput RNA-seq data of 512 tumors, 59 tumor-related tissues from TCGA, and 317 normal controls from the GTEx database. The results showed that ADAM33 expression in thyroid cancer was substantially decreased compared with that in the two normal controls (Figure S1A). Therefore, we collected 139 thyroid cancer samples, including 113 papillary and 15 follicular subtypes, and 11 others, to validate these data using real-time PCR. Consistently, the results revealed that ADAM33 levels in tumors decreased to 49.6% (p < 0.001) of that in normal controls (Figure S1B). Subsequently, we performed ROC analysis to explore the possible clinical significance of ADAM33 expression levels.
The results of this analysis indicated that using ADAM33 expression to distinguish tumors from normal controls exhibited 77.7% (95% CI: 70.1-83.8%) sensitivity and 72.5% (95% CI: 62.3-80.6%) specificity, with an area under the curve (AUC) of 0.801 (95% CI: 0.750-0.862), highlighting the clinical diagnostic significance of ADAM33 (Figure S1C). Thus, it may be a potential therapeutic target for the treatment of thyroid cancer. Collectively, our findings demonstrate that the ADAM33 RNA level is decreased in thyroid cancer.
ADAM33 contributes to the pathogenesis of thyroid cancer
Considering the aberrant ADAM33 expression in thyroid cancer, we stably overexpressed its coding sequence in two PTC cell lines (MDA-T32 and BCPAP) using a doxycycline-inducible lentivector. The results of real-time PCR showed that ADAM33 mRNA expression levels increased to 5.3- and 7.1-fold of those in the control group after doxycycline treatment in MDA-T32 and BCPAP cells, respectively (Fig. 1A). Furthermore, ADAM33 protein levels showed a similar trend (Fig. 1B). Therefore, we used these two cell lines in a CCK-8 assay to determine the effect of ADAM33 expression on cell growth. We observed that, from the second day onward, the growth of doxycycline-treated MDA-T32 and BCPAP cells was significantly faster than that of the control group, suggesting that ectopic ADAM33 substantially promotes cell growth (Fig. 1C). We then constructed ADAM33 knockdown MDA-T32 and BCPAP cell lines to further validate these results, using a modified pLKO.1 plasmid, a widely used shRNA-delivering vector. In the MDA-T32 and BCPAP cells, ADAM33 mRNA expression levels were largely downregulated by three independent shRNA targets, as determined using real-time PCR (Fig. 1D). Additionally, ADAM33 protein levels were decreased in the sh-ADAM33 groups (Fig. 1E). Consistently, the CCK-8 assay results indicated that knockdown of ADAM33 decreased cell growth in MDA-T32 and BCPAP cells (Fig. 1F). On performing a colony formation assay using the cell lines with downregulated and overexpressed ADAM33, we observed that overexpression of ADAM33 by doxycycline treatment in MDA-T32 and BCPAP cell lines substantially increased the colony formation percentage from 12.1% to over 23.7% in a dose-dependent manner (Fig. 1G). Meanwhile, the knockdown of ADAM33 by shRNA remarkably restrained colony formation in the MDA-T32 and BCPAP cells (Fig. 1H). Taken together, our findings demonstrate that ADAM33 possesses oncogenic function in thyroid cancer cells in vitro.
A novel isoform of ADAM33 is aberrantly expressed in thyroid cancer samples
A contradiction exists between ADAM33 downregulation in thyroid cancer biopsy samples and its oncogenic function in thyroid cancer cells, and its underlying mechanism remains unelucidated. To this end, we systemically analyzed ADAM33 expression in 53 types of human tissue samples using RNA-seq data from the GTEx database. ADAM33 was ubiquitously and highly expressed in almost all human tissues except the brain (Fig. 2A). A previous study reported several ADAM33 mRNA splice variants in bronchial biopsies and embryonic lungs using PCR, which were further confirmed using western blotting [24]. These findings indicate that different ADAM33 isoforms may exhibit diverse functions in thyroid cancer. Therefore, we analyzed the expression pattern of ADAM33 in 53 different human tissues using the RNA-seq data from the GTEx database. As shown in Fig. 2B, the usage frequency of different exons in ADAM33 is inhomogeneous among tissue types, suggesting that alternative splicing isoforms of ADAM33 are common. Notably, the top two highly expressed isoforms were ENST00000466620 and ENST00000617732, and not the full-length ENST00000356518 (Fig. 2B and C). Concerning protein-coding potential, the transcript ENST00000466620 does not encode a protein, whereas ENST00000617732 codes for a protein of 138 amino acids. Therefore, we focused on the role of the transcript ENST00000617732 in thyroid cancer; this transcript was named ADAM33-n based on its amino acid sequence. We quantified the ADAM33-n expression level in the collected thyroid cancer biopsies and found that the aberration of ADAM33 in tumors is primarily attributable to the downregulation of ADAM33-n (Figure S1D). Moreover, we compared the expression levels of ADAM33-n and full-length ADAM33 (Figure S2), and the results showed that the ADAM33-n level was higher than that of the full-length transcript.
ADAM33 short isoform exhibits anti-oncogenic roles in thyroid cancer
To explore the role of ADAM33-n in the pathogenesis of thyroid cancer, we stably transfected the coding sequence of ADAM33-n into MDA-T32 and BCPAP cells using a doxycycline-inducible lentivector. After doxycycline treatment, real-time PCR results revealed that the ADAM33-n expression level was upregulated to 7.9- and 8.1-fold of that in the control in MDA-T32 and BCPAP cells, respectively (Fig. 3A). By contrast, we observed that doxycycline treatment of these two cell lines significantly inhibited cell growth compared with the PBS group, as determined using the CCK-8 assay (Fig. 3B). We further validated the findings using ADAM33-n knockdown cell lines. Therefore, we designed three independent shRNAs for the ADAM33-n transcript at its 3′-UTR, whose sequence is distinct from that of the other transcripts (Fig. 3C). These three shRNAs were stably transfected into MDA-T32 and BCPAP cells using the pLKO.1 plasmid. Real-time PCR results indicated that ADAM33-n was specifically knocked down without interfering with full-length ADAM33 expression (Fig. 3D). In line with the results in ADAM33-n-overexpressing cells, the CCK-8 assay revealed that downregulation of the short isoform of ADAM33 enhanced cell growth compared with the scramble group (Fig. 3E). Furthermore, we performed a colony formation assay using the ADAM33-n downregulated and overexpressed cell lines mentioned earlier. Our observations showed that ectopic ADAM33-n induced by doxycycline treatment substantially decreased the colony formation percentage from 30.8% to less than 24.5% in MDA-T32 and BCPAP cell lines in a dose-dependent manner (Fig. 3F). Meanwhile, downregulation of ADAM33-n using shRNA substantially promoted colony formation (Fig. 3G). Collectively, our data demonstrated that, unlike full-length ADAM33, ADAM33-n is a tumor suppressor in thyroid cancer cells in vitro.
[Fig. 1 caption (panels B-H): Western blot of ADAM33 with GAPDH as internal control; CCK-8 growth curves of cells treated with 0.1 ug/ml doxycycline; real-time PCR and Western blot for three independent shRNA targets versus scramble; colony formation of ADAM33 down- and over-expressed MDA-T32 and BCPAP cells; statistics by Student's t-test and one-/two-way ANOVA.]
ADAM33 short isoform interferes with the oncogenic function of full-length ADAM33
ADAM33 is a type I transmembrane zymogen glycoprotein belonging to the family of disintegrins and metalloproteases [25]. The ADAM33 protein comprises several domains, such as the pro-metalloprotease, cysteine-rich, disintegrin-like, transmembrane, EGF-like, and cytoplasmic domains, which facilitate many critical biological processes, including cell activation, adhesion, proteolysis, signaling, and fusion [26-29]. In the quiescent state, a chaperone-like prodomain in the amino-terminal extracellular fragment of ADAM33 binds to the metalloproteinase domain to inhibit the proteolytic activity of ADAM33 [21,30] (Fig. 4A). Therefore, we speculated that the ADAM33-n isoform may form a chaperone-like protein that directly binds to the metalloproteinase domain of full-length ADAM33 and that, unlike that of the chaperone-like prodomain, the inhibitory effect of ADAM33-n cannot be reversed (Fig. 4A). In addition, in order to explore the direct interaction between full-length ADAM33 and ADAM33-n, we expressed HA-tagged ADAM33-n in MDA-T32 and BCPAP cells and performed a Co-IP assay.
The results clearly showed the interaction between full-length ADAM33 and ADAM33-n (Fig. 4B). Thus, the ADAM33-n isoform may be a constitutive inhibitor of full-length ADAM33. To validate this hypothesis, we co-transfected ADAM33-n and full-length ADAM33 into MDA-T32 and BCPAP cells, with ADAM33-n overexpressed in a concentration gradient. The results of real-time PCR revealed that full-length ADAM33 and ADAM33-n levels increased to about 5.1-fold and 1.9- to 7.3-fold, respectively, of those in the control group (Fig. 5A and B). In the CCK-8 assay, we observed that the elevated cell growth conferred by ADAM33 overexpression in MDA-T32 and BCPAP cells was substantially reversed by ectopic ADAM33-n in a dose-dependent manner (Fig. 5C). Furthermore, the colony formation assay results revealed that ectopic ADAM33-n overcame the oncogenic effect of full-length ADAM33 in MDA-T32 and BCPAP cells in vitro (Fig. 5D). In addition, when we overexpressed or knocked down only ADAM33 in MDA-T32 and BCPAP cells, the expression of ADAM33-n mRNA was not significantly changed (Fig. 5E). Conversely, when we transfected only ADAM33-n into MDA-T32 and BCPAP cells, the overexpression of ADAM33-n failed to influence the endogenous full-length ADAM33 mRNA level (Fig. 5F). Our results demonstrate that the ADAM33 short isoform interferes with the oncogenic function of full-length ADAM33 without influencing its mRNA level.
Discussion
During the maturation of pre-mRNA precursors, alternative splicing is a critical posttranscriptional step that ensures that one gene produces multiple mature mRNAs, which are ultimately translated into different proteins [31,32]. The pervasive cellular process of alternative splicing expands the utilization efficiency of the genome and thus contributes to proteome complexity [33,34]. Among higher eukaryotes, alternative splicing is frequently implicated in modulating the patterns of gene expression that play a critical role in cell fate decisions [35]. However, aberrations or errors in alternative splicing often produce a deleterious impact on cells and can even lead to cell death as well as cancerization [36,37]. Alternative splicing events are regarded as key markers of tumor progression and prognosis, including those of bladder [38] and liver cancer [39]. Lin et al. performed survival analysis in 496 patients with PTC and found that 2799 splicing events harbor prognostic significance in distinguishing TNM stage, tumor stage, distant metastasis, and tumor status of papillary thyroid cancer [40]. In 2009, Kim et al. reported that ADAM33 is implicated in the pathogenesis of gastric cancer and that its overexpression results in increased cell migration and proliferation [12]. In 2017, Manica et al. showed that ADAM33 is downregulated in breast tumor samples (n = 212) and that its low levels are associated with triple-negative breast cancer, basal-like markers, and shorter overall survival [14]. In our study, the real-time PCR results of 139 thyroid cancer biopsy samples support that ADAM33 is downregulated in tumor tissues. This observation gives rise to the following question: how does the downregulation of an oncogene promote cell growth or proliferation? To investigate this, we systematically analyzed the alternative transcripts of ADAM33 using the RNA-seq data of 53 different human tissues from the GTEx database.
[Fig. 3 caption: ADAM33 short isoform exhibits anti-oncogenic roles in thyroid cancer. Panels A-G: real-time PCR of ADAM33-n after doxycycline induction; CCK-8 growth curves; location of the ADAM33-n-specific shRNAs in the 3'-UTR; real-time PCR for the three shRNA targets versus scramble; CCK-8 of knockdown cells; colony formation of ADAM33-n down- and over-expressed MDA-T32 and BCPAP cells; statistics by Student's t-test and one-/two-way ANOVA.]
We found that the alternative splicing isoform of ADAM33, ENST00000617732 (ADAM33-n), was ubiquitously and highly expressed in most human tissues, at levels comparable to those of full-length ADAM33. Therefore, we verified the real-time PCR primers for ADAM33 and found that the ones we used target exons 15 and 16 of ADAM33, indicating that they can detect the expression of almost all transcripts. Accordingly, we suspected that ADAM33-n expression may be dysregulated in thyroid cancer. After determining its expression in the collected thyroid cancer biopsies, we observed that ADAM33-n was downregulated in tumors compared with the normal controls. Additionally, we compared the expression levels of ADAM33-n and full-length ADAM33 (Figure S2), and the results showed that the ADAM33-n level is higher than that of the full-length transcript. These findings demonstrate that ADAM33-n is the dominant contributor to the aberration of ADAM33 expression in thyroid cancer.
Reportedly, alternative splicing networks occur in numerous processes, such as embryonic stem and precursor cell differentiation, cell lineage reprogramming, and epithelial-mesenchymal transitions [41-43]. For instance, MAP4K4 mRNA was reported to be alternatively spliced in papillary thyroid cancer samples by RBM17, which causes phosphorylation of downstream signaling pathways [44,45]. Structurally, ADAM33-n loses exons 5-12 of full-length ADAM33 and encodes only a small N-terminal peptide (amino acids 1-138) that retains the chaperone-like prodomain. As the chaperone-like prodomain of ADAM33 binds to the catalytic domain to inhibit its proteolytic activity, we hypothesized that ADAM33-n is a natural inhibitor of full-length ADAM33 and that the downregulation of ADAM33-n consequently restores the oncogenic function of ADAM33 in thyroid cancer (Fig. 4A). We observed that ectopic ADAM33-n overcame the effect of ADAM33 overexpression on cell growth and colony formation in MDA-T32 and BCPAP cells in a dose-dependent manner. Moreover, our results revealed that ectopic ADAM33-n in MDA-T32 and BCPAP cells failed to influence the expression level of endogenous ADAM33, implying that ADAM33-n may interfere with the oncogenic full-length ADAM33 at the protein level. This observation is consistent with our hypothesis. Therefore, we concluded that ADAM33-n acts as a tumor suppressor by blocking the oncogenic role of full-length ADAM33 in thyroid cancer.
[Fig. 5 caption (panels B-F): real-time PCR of ectopic ADAM33 and ADAM33-n in co-transfected MDA-T32 and BCPAP cells; CCK-8 growth curves; colony formation; expression of ADAM33-n when ADAM33 was overexpressed or knocked down, and of endogenous ADAM33 under an ADAM33-n concentration gradient; statistics by one-/two-way ANOVA.]
In summary, we found that ADAM33-n, an N-terminal isoform of ADAM33, is the dominant contributor to ADAM33 aberration in thyroid cancer. Unlike the full-length ADAM33, ectopic ADAM33-n inhibits cell growth and colony formation, indicating its tumor suppressor ability. Altogether, our study findings demonstrate how the downregulation of an oncogenic gene, ADAM33, promotes the pathogenesis of thyroid cancer, thereby indicating its potential as a therapeutic target. However, our study has some limitations. Further experiments are warranted to validate the function and relationship of ADAM33 and ADAM33-n and to substantiate the results based on tumor samples and clinical data.
Data availability
The data in this research are available upon request from the corresponding author.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/. | 2023-03-30T06:16:30.660Z | 2023-03-28T00:00:00.000 | {
"year": 2023,
"sha1": "78af4641a7ad69407d1fd3d77d847004538cd53d",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s13577-023-00898-3.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "bb1e4a32f1692eae8e9215fc61bed4723e2a2751",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
207899157 | pes2o/s2orc | v3-fos-license | PCL-ZnO/TiO2/HAp Electrospun Composite Fibers with Applications in Tissue Engineering
The main objective of the tissue engineering field is to regenerate the damaged parts of the body by developing biological substitutes that maintain, restore, or improve original tissue function. In this context, by using the electrospinning technique, composite scaffolds based on polycaprolactone (PCL) and inorganic powders were successfully obtained, namely: zinc oxide (ZnO), titanium dioxide (TiO2) and hydroxyapatite (HAp). The novelty of this approach consists in the production of fibrous membranes based on a biodegradable polymer and loaded with different types of mineral powders, each of them having a particular function in the resulting composite. Subsequently, the precursor powders and the resulting composite materials were characterized by the structural and morphological point of view in order to determine their applicability in the field of bone regeneration. The biological assays demonstrated that the obtained scaffolds represent support that is accepted by the cell cultures. Through simulated body fluid immersion, the biodegradability of the composites was highlighted, with fiber fragmentation and surface degradation within the testing period.
Introduction
Diseases, wounds, and traumas can lead to the damage and degeneration of tissues in the human body, which require treatments to facilitate their repair, replacement, or regeneration [1]. As an alternative to the transplantation procedure, tissue engineering aims to heal the affected parts by developing biological substitutes that restore, maintain, or improve the original functionality [2,3]. Usually, this field is based on the use of porous three-dimensional scaffolds so as to provide a suitable environment for cell adhesion, proliferation, and differentiation [4-6]. The biomimetic concept [6] was adopted for the design of most scaffolds, in terms of physicochemical properties as well as bioactivity, for superior tissue regeneration. A variety of scaffolds with appropriate features has been created by employing different materials, such as polymers, ceramics, and their composites [7-10].
Among the wide variety of techniques available for producing scaffolds, the electrospinning process is the most commonly employed, showing promising results for tissue engineering applications, including bone reconstruction [11-13]. The method is simple and ensures the fabrication of long and continuous fibers, whose diameter can be controlled over a wide range, from micrometers to nanometers [14], depending on the processing parameters and their optimization.
For the TiO2 synthesis, a titanium precursor and isopropyl alcohol (C3H8O) were used as starting materials, and distilled water was added for hydrolysis. The obtained precipitate was filtered, washed, dried, and calcined at 400 °C for 2 h, in the same atmosphere and heating/cooling conditions as previously described, so as to ensure the transition from titanium hydroxides or oxyhydroxides to the final oxide.
Fiber Preparation
The composites were prepared from an organic component, to which the inorganic powders were added one at a time. Thus, polycaprolactone ((C6H10O2)n, 80,000 Da, PCL, Sigma-Aldrich, Merck KGaA, St. Louis, MO, USA) was selected as the biodegradable matrix, chloroform (CHCl3, CF, Sigma-Aldrich) and N,N-dimethylformamide (C3H7NO, DMF, Sigma-Aldrich, Merck KGaA, St. Louis, MO, USA) as solvents, and the electrospinning technique as the procedure for generating one-dimensional structures; the volumetric ratio between CF and DMF was maintained at 4:1.
The precursor solutions for electrospinning were prepared in a two-stage approach. First, a suspension of inorganic powder in the solvent mixture was achieved by dispersing 0.5 g of solid in 10 mL of liquid, all being ultrasonicated for 5 min at 50% amplitude. Then, 1.6 g of polymer was dissolved in the previously prepared suspension, which led to a final solution with 16% PCL and 5% inorganic powder; this was maintained under magnetic stirring for 24 h for the purpose of PCL solubilization and general homogenization.
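As a quick consistency check of the stated final composition (interpreting the percentages as weight/volume with respect to the 10 mL solvent mixture, which the recipe implies but does not state explicitly):

powder_g, pcl_g, solvent_ml = 0.5, 1.6, 10.0
print(f"inorganic powder: {100 * powder_g / solvent_ml:.0f}% w/v")  # -> 5%
print(f"PCL:              {100 * pcl_g / solvent_ml:.0f}% w/v")     # -> 16%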
Each solution was loaded into a 2 mL syringe connected to a stainless steel 21 G needle with a 0.8 mm inner diameter. A direct-current high-voltage source was used to apply a voltage of 15 kV. For the fiber deposition, a static collector was fitted to the equipment, to which glass substrates used as fiber supports were attached. The distance between the nozzle and the collector was set at 25 cm, while the feed rate was 3 mL/h. The electrospinning process was performed at a room temperature of 18 °C and a relative humidity of 30%.
The biological evaluation was accomplished through in vitro tests: simulated body fluid (SBF) immersion for 14 days at 37 °C, the testing solution being prepared according to Kokubo [32], as well as optical fluorescence microscopy [33], MTT assay [34], and GSH assay [35] on mesenchymal stem cells. The biocompatibility was analyzed in accordance with the law in force and following standard procedures, after sample sterilization under UV irradiation for 30 min. In order to evaluate the cell proliferation and cytotoxicity of the obtained materials, the MTT biochemical assay was employed; this is a colorimetric method based on a reduction process correlated with enzymatic cell activity, for which the absorbance was read at 570 nm with a Tecan spectrophotometer. The cellular response to oxidative stress was estimated on the basis of the GSH assay, the luminescence being recorded with a Titertek-Berthold luminometer; this method indirectly detects and quantifies the amount of an antioxidant agent produced by the cells in different testing environments, giving information about the toxicological response and oxidative stress level. Furthermore, the viability of the cells in the presence of the investigated samples was assessed by fluorescence microscopy using the Red CMTPX fluorophore; the images were taken with a Carl Zeiss digital camera. A detailed description of the working protocols is available in the specification sheets of each testing kit [33-35].
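For orientation, the nominal ion concentrations of Kokubo's SBF, as commonly reported in the literature, are listed below alongside human blood plasma; these are reference figures under that assumption, and the authoritative recipe remains the one in [32]:

# Nominal ion concentrations (mM) of Kokubo's SBF vs. human blood plasma,
# as commonly reported; the recipe actually used here is that of ref. [32].
sbf_mM = {"Na+": 142.0, "K+": 5.0, "Mg2+": 1.5, "Ca2+": 2.5,
          "Cl-": 147.8, "HCO3-": 4.2, "HPO4 2-": 1.0, "SO4 2-": 0.5}
plasma_mM = {"Na+": 142.0, "K+": 5.0, "Mg2+": 1.5, "Ca2+": 2.5,
             "Cl-": 103.0, "HCO3-": 27.0, "HPO4 2-": 1.0, "SO4 2-": 0.5}
for ion in sbf_mM:
    print(f"{ion:8s} SBF {sbf_mM[ion]:6.1f}   plasma {plasma_mM[ion]:6.1f}")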
Physicochemical Characterization
The inorganic powders were investigated by XRD, SEM, and UV-Vis spectroscopy in order to evaluate their crystalline structure and morphology, as well as the band gap in the case of the two semiconductor oxides (ZnO and TiO2).
The XRD pattern of ZnO powder (Figure 1a) contains only the diffraction peaks corresponding to the crystalline planes of wurtzite-type ZnO with hexagonal symmetry. The situation is similar in the case of the TiO2 powder, for which the XRD pattern shown in Figure 1b indicates the obtaining of anatase-type TiO2 with tetragonal symmetry. On the other hand, the commercial HAp powder turned out to be highly crystalline and of hexagonal structure (Figure 1c). For all three inorganic materials, no diffraction maxima associated with secondary phases or impurities were distinguished.
Using Scherrer's formula [36], the average crystallite size of all three powders was calculated by averaging over the three most intense diffraction peaks. The resulting values are as follows: 49 nm for ZnO, 8 nm for TiO 2 , and 40 nm for HAp. As expected, ZnO presented a larger value than TiO 2 , since its higher calcining temperature promoted crystallite growth. However, all inorganic powders can be considered nanostructured, which favors the achievement of fibrous composites containing such zero-dimensional structures, as long as the agglomerations can be disaggregated into individual entities. Moreover, the obtained results correlate well with the information provided by the SEM images (Figure 1) in terms of crystallite-particle dimensionality.
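As an illustration of this calculation, a minimal sketch of the Scherrer estimate is given below, averaged over three peaks as in the text; the Cu Kα wavelength, shape factor K = 0.9, and the (2θ, FWHM) values are assumptions for demonstration, not data from this work.

```python
import numpy as np

CU_K_ALPHA_NM = 0.15406  # assumed X-ray wavelength (Cu K-alpha)

def scherrer_size_nm(two_theta_deg, fwhm_deg, k=0.9):
    """Crystallite size D = K*lambda/(beta*cos(theta)), beta in radians."""
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)
    return k * CU_K_ALPHA_NM / (beta * np.cos(theta))

# Hypothetical (2-theta, FWHM) pairs for the three most intense peaks:
peaks = [(31.8, 0.17), (34.4, 0.18), (36.3, 0.17)]
print(f"{np.mean([scherrer_size_nm(t, w) for t, w in peaks]):.0f} nm")
```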
Further, by employing the UV-Vis spectra and the Kubelka-Munk approach [37], the band gaps of ZnO and TiO 2 were both determined to be around 3.1 eV, slightly lower than the values reported in the scientific literature for similar nanostructures [37,38]. Briefly, using the reflectance data, the F(R) function was calculated and the (F(R)·E)^(1/2) function was plotted versus photon energy (E) in order to graphically estimate the band gap values; the Kubelka-Munk function is expressed as F(R) = (1 − R)^2/(2R), where R is the observed diffuse reflectance. Moreover, for the two oxides, the antimicrobial activity against two microbial strains was assessed, namely Staphylococcus aureus (Gram-positive model) and Escherichia coli (Gram-negative model). ZnO displayed an antimicrobial effect on both bacteria, the diameters of the inhibition zones being 8 and 7 mm, respectively. Briefly, the antimicrobial potential was assessed using the agar diffusion test: after 20 min of sterilization under UV irradiation, the powders were mixed with sterile saline solution, from which a defined volume was taken and placed on agar plates inoculated with the microorganism to be tested; the antibacterial effect was quantified by measuring the diameter of the inhibition zone after incubation at 37 °C for 24 h.
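A sketch of the graphical estimate described above, assuming the diffuse reflectance is available as arrays and that a linear-fit window on the absorption edge has been chosen by inspection (both assumptions, since the raw spectra are not reproduced here):

```python
import numpy as np

def band_gap_ev(wavelength_nm, reflectance, fit_window_ev):
    """Kubelka-Munk/Tauc-style band gap estimate from diffuse reflectance."""
    E = 1239.84 / np.asarray(wavelength_nm)        # photon energy, eV
    R = np.clip(np.asarray(reflectance), 1e-6, 1)  # guard against R = 0
    F = (1.0 - R) ** 2 / (2.0 * R)                 # F(R) = (1-R)^2 / (2R)
    y = np.sqrt(F * E)                             # (F(R)*E)^(1/2) ordinate
    lo, hi = fit_window_ev
    m = (E >= lo) & (E <= hi)                      # linear region near the edge
    slope, intercept = np.polyfit(E[m], y[m], 1)
    return -intercept / slope                      # extrapolation to y = 0
```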
The electrospun composites were first analyzed from the microstructural and compositional point of view, the corresponding images and EDX spectra being exhibited in Figure 2. In order to correctly evaluate the influence of the addition of the inorganic powders on the properties of the PCL fibers, the reference sample, without inorganic content, was also analyzed. The PCL-ZnO composite (Figure 2a) has quite high homogeneity due to the random distribution of ZnO particles among the PCL fibers, mainly near the intersection areas. There is also a certain tendency toward agglomeration, with aggregates of particles reaching dimensions up to 10 µm. It should also be emphasized that certain particles are embedded in the polymeric fibers, which leads to surface passivation and reduced sample efficiency in those types of determinations that rely on the active role of the surface. Additionally, the particle embedding leads to an increase in the fiber diameter, which normally ranges between 2 and 3 µm.
Regarding the PCL-TiO 2 composite (Figure 2b), the tendency of agglomeration and attachment in the form of aggregates to the polymeric fibers is higher than in the previous case, since the particles are this time nanometric in size; this aspect also has a negative effect on the sample homogeneity over large areas. Moreover, the change in the nature of the powder influences the fiber diameter, in the sense that the emergence of fibers with much smaller diameters, below 500 nm, is favored.
The third category of composites, PCL-HAp (Figure 2c), has a slightly modified morphology, with the inorganic particles distributed predominantly in the volume of the polymeric fibers, at different depths, and less on the surface. The homogeneity is relatively good this time, since the tendency of agglomeration is reduced. The fiber size does not undergo substantial changes with the addition of HAp, remaining in the 2-4 µm range.
Turning to the bare fibers (Figure 2d), the SEM image shows a network of one-dimensional polymeric structures, non-woven and randomly distributed in the plane of each layer, with an average diameter of approximately 2 µm. Compared to the composite samples, the flexibility of this fully polymeric sample is higher, a claim supported by the snake-like arrangement of the fibers.
To demonstrate the loading of the fibers with particles of different compositions, EDX spectra were employed. As expected, in addition to the elements specific to PCL (C and O), supplementary signals assigned to the elements of each inorganic powder appear (Zn, Ti, Ca, and P). The Au peaks are due to the sample preparation protocol for SEM investigation, namely the deposition of a nanometric layer of conductive material over the entire surface.
From the complex thermal analyses performed on the fibrous composite scaffolds and presented in Figure 3, it was found that in the 20-800 °C temperature range there is a total weight loss of 71% for PCL-ZnO, 40% for PCL-TiO 2 , and 73% for PCL-HAp. These losses were recorded below 400 °C, mainly in the 250-400 °C range, representing 94-99% of the total mass loss. The main loss is always accompanied by an exothermic effect centered between 350 and 450 °C, generated by the combustion of the organic component. However, in the case of PCL-ZnO and PCL-HAp, the weight losses occur in two stages: the first is endothermic and the second exothermic. The shift of the exothermic effect to higher temperatures when ZnO is present is most likely due to this oxide influencing the polymer stability. In the case of HAp, the endothermic effect can be correlated with the existence of less crystalline phases within the commercial product, which undergoes a dehydration process.
Given the use of a fixed concentration of inorganic powder, it was expected that the mass loss would be similar in all three situations. However, the differences are significant, both among the three types of composites and relative to the expected value (around 76%). Comparative analysis shows that the losses recorded for the samples containing ZnO and HAp are the closest to the theoretically calculated value, being only a few percent lower, a result that can be explained by the existence of a certain proportion of residual solvents in the fibers. On the other hand, the composite containing TiO 2 showed a much lower loss, which means that the concentration of inorganic powder in the final sample is higher than the designed one; this behavior can be associated with the stability of the precursor solution, as well as with the nanometric size of the TiO 2 particles.
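The comparison between designed and apparent inorganic content reduces to simple arithmetic: the residue left after combustion approximates the inorganic fraction. The ~24 wt% designed loading below is inferred from the stated ~76% expected loss, not quoted directly by the authors.

```python
designed_residue = 1.0 - 0.76  # expected inorganic fraction (~24 wt%)
measured_loss = {"PCL-ZnO": 0.71, "PCL-TiO2": 0.40, "PCL-HAp": 0.73}
for name, loss in measured_loss.items():
    residue = 1.0 - loss  # apparent inorganic fraction after burn-off
    print(f"{name}: residue {residue:.0%} vs designed {designed_residue:.0%}")
```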
Biological Characterization
Although the biodegradation time of PCL is well defined, in vitro studies on the biodegradability of PCL scaffolds obtained by the electrospinning technique are limited [39,40]. Thus, the fibrous scaffolds realized in this work were characterized by the SBF test, so as to assess their behavior in contact with the physiological environment for 14 days; the biodegradability of the composite materials was revealed through SEM images (Figure 4). A substantial change in the morphology of the fibers during the soaking period can be detected. The one-dimensional structures have multiple breaks, their surface acquiring a rough appearance, probably due to the chemical attack of the testing solution on PCL; this process involves reactions between the carboxyl groups on the polymeric chains and the cationic species in the testing solution, resulting in by-products that decrease the material stability in the aqueous medium. This aspect confirms that the degradation is triggered on the surface of the fibers and evolves towards the inner regions. Comparatively, the highest tendency to disintegration was observed in the case of the PCL-TiO 2 sample.

The composite fibers obtained by electrospinning were characterized in vitro using cellular assays, considering that it has been reported in the scientific literature that a concentration of oxide powder above a certain threshold may have a toxic effect on cells. Cell proliferation was assessed by the MTT assay (Figure 5a), while cell viability was determined by the GSH assay (Figure 5b) in association with optical fluorescence microscopy (Figure 4).

Regarding cell proliferation, the values recorded for the specimens and the control are graphically represented in Figure 5a and indicate that the tested materials do not have a cytotoxic effect, the absorbance showing higher values compared to the control (differences between 7 and 15%). In other words, a higher intensity of the recorded signal translates into more metabolically active viable cells and conserved cellular integrity. In all three cases, cell proliferation shows a significant increase from 24 to 72 h; thus, the cell number increases with incubation time, which suggests that all composites sustain cell proliferation.
From the point of view of oxidative stress, the results presented in Figure 5b confirm that the final scaffolds are supports accepted by the cells. Comparatively, the PCL-ZnO and PCL-HAp samples showed the lowest oxidative stress; PCL-TiO 2 exhibited a value similar to the control, but considering that it falls within the error limits, it remains a potential candidate for medical applications.
The optical fluorescence microscopy images from Figure 4 confirm the previous results, revealing that the investigated fibers have no cytotoxic effect; the cells are viable and have normal morphology.
Cellular viability is also demonstrated by the active metabolism of the cells, which incorporate the dye into the cytoplasm.
Conclusions
Using the electrospinning technique, composite scaffolds based on polycaprolactone and inorganic powders (zinc oxide, titanium dioxide, and hydroxyapatite) were successfully obtained. The inorganic components proved to be highly crystalline and composed of particles with dimensions in the nanometric or micrometric range, while the composite materials presented fibers with diameters below 5 µm and a relatively homogeneous distribution of the powders within the polymeric fibrous networks. By means of simulated body fluid soaking, the biodegradability of the composite materials was highlighted, noting a considerable tendency toward fragmentation and surface degradation during the 14 days of testing. Following the biological tests, it was found that the resulting fibrous composites represent supports accepted by the cell cultures, displaying significant cell proliferation. Taking into account both the biodegradability and the biocompatibility, it can be concluded that the proposed systems are candidates with considerable potential in the field of tissue engineering. Future improvements can be achieved by optimizing the processing parameters or by incorporating several types of mineral phases in order to achieve multifunctional composites.
"year": 2019,
"sha1": "e770c6369bac8bed9d803b14aa5c4cbb43e34237",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4360/11/11/1793/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9bdc674b373c771699a7d4cf301257b90b0b491b",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
Electrically Tunable Wafer-Sized Three-Dimensional Topological Insulator Thin Films Grown by Magnetron Sputtering
Three-dimensional (3D) topological insulators (TIs) are candidate materials for various electronic and spintronic devices due to their strong spin-orbit coupling and unique surface electronic structure. Rapid, low-cost preparation of large-area TI thin films compatible with conventional semiconductor technology is key to the practical applications of TIs. Here, we show that wafer-sized Bi2Te3 family TI and magnetic TI films with decent quality and well-controlled composition and properties can be prepared on amorphous SiO2/Si substrates by magnetron cosputtering. The SiO2/Si substrates enable us to electrically tune (Bi1-xSbx)2Te3 and Cr-doped (Bi1-xSbx)2Te3 TI films between p-type and n-type behavior and thus study the phenomena associated with topological surface states, such as the quantum anomalous Hall effect (QAHE). This work significantly facilitates the fabrication of TI-based devices for electronic and spintronic applications.
Three-dimensional (3D) topological insulators (TIs) have a bulk gap and gapless surface states including an odd number of Dirac cones in a surface Brillouin zone. 1,2 The topological surface states are spin-momentum-locked and are protected by time-reversal symmetry from perturbations such as structural defects, disorder and nonmagnetic impurities. Various exotic quantum effects have been observed or predicted in TI-based materials or structures, such as the quantum anomalous Hall effect (QAHE), topological magnetoelectric effect and chiral Majorana superconductivity. These effects can be used to develop low-energy-consumption electronic devices and topological quantum computers. Recently, TIs have also attracted much attention for their possible applications in spintronic devices. [3][4][5][6][7][8][9][10][11][12] Integrating TIs into the mature semiconductor technology is of key importance to realize their full potential for electronic or spintronic applications. Currently, 3D TI materials are mainly prepared by bulk Bridgman growth [13][14][15] , chemical vapor deposition (CVD) 16,17 or molecular beam epitaxy (MBE) [18][19][20] . It is difficult to obtain large-area 3D TI films with well-controlled properties by Bridgman growth, and CVD-grown TI films usually need to be transferred from the growth substrate onto other substrates for various electronic devices. On the other hand, the low growth rate and high cost (especially for large wafers) of MBE make it less favored for the mass production of TI-based materials and devices. Magnetron sputtering is a low-cost, high-yield growth method compatible with conventional semiconductor technology.
The method is particularly capable of fabricating thin films with complex structures and compositions, which is crucial for applications of TI materials in various devices. However, samples grown by magnetron sputtering are polycrystalline and usually have rather low carrier mobility, which restricts the applications of this technique to the growth of semiconductor materials. 21,22 In this study, we grew wafer-sized Bi 2 Te 3 family TI films on usual amorphous SiO 2 /Si substrates through magnetron sputtering and found that due to the layered structure of the materials, the films have decent quality with a carrier mobility up to 310 cm 2 /Vs, comparable to that of the films grown with MBE. Topological surface states were observed in the samples with angle-resolved photoemission spectroscopy (ARPES). The SiO 2 /Si substrates enable us to gate-tune the carrier density of TI films grown on them, and in (Bi,Sb) 2 Te 3 films with an appropriate Bi/Sb ratio, the ambipolar field effect is realized. With magnetron sputtering, we also realized the growth of magnetically doped TI Cr y (Bi x Sb 1-x ) 2-y Te 3 (CBST) films that show ferromagnetism with an out-of-plane easy magnetization axis. An anomalous Hall resistance (R AH ) up to 16.4 kΩ was observed at 2 K, promising the observation of the QAHE in the films. These results demonstrate magnetron sputtering as a suitable mass production method for growing TI-based films for various electronic and spintronic applications.
We grew (Bi 1-x Sb x ) 2 Te 3 films by cosputtering Bi 2 Te 3 and Sb 2 Te 3 alloy targets as well as Te targets on amorphous SiO 2 /Si(100) substrates, which were kept at 160 °C during growth. By modifying the Bi/Sb ratio in the films, one can control the density as well as the type of carriers in the films. The Bi/Sb ratio in the films was controlled by the sputtering powers of the Bi 2 Te 3 and Sb 2 Te 3 target guns. The Te target was also turned on at the same time to reduce the Te deficiency in the films. After growth, the films were annealed at 160 °C for 25 minutes. Figure 1a shows a photograph of a 10 cm (4 inch) diameter silicon wafer with a 10 nm thick Bi 2 Te 3 film grown on it, which maintains uniform properties across the whole film. Figure 1b displays a high-resolution transmission electron microscopy (HRTEM) cross-sectional image of a (Bi 1-x Sb x ) 2 Te 3 film, which shows the characteristic quintuple-layer structure of Bi 2 Te 3 . Although the film is polycrystalline, all the grains have the same c orientation (normal to the cleavage plane). The crystalline structure of the BST film was further confirmed by X-ray diffraction (XRD) measurements ( Figure 1d). The presence of only (0, 0, 3n) diffraction peaks suggests that the film plane is parallel to the cleavage plane. This postulation is reasonable because the material has a layered structure, which means that the cleavage surface has a quite low surface free energy. The above structural characterization results demonstrate that we can indeed obtain Bi 2 Te 3 family TI films with decent crystalline quality via magnetron sputtering.
We used ARPES to assess the electronic band structure of a 10 QL-thick Bi 2 Te 3 film grown by magnetron sputtering. Although the film is composed of domains of various in-plane crystalline orientations, the different domains have similar surface-state band dispersion near the Dirac point because of the isotropy of the topological surface states around it, which gives observable ARPES signals. We indeed identified energy bands with dispersion similar to that of the topological surface states of Bi 2 Te 3 (Figure 1b). This observation further confirms that we obtained Bi 2 Te 3 TI films with magnetron sputtering, although the randomly oriented grains significantly broaden the spectra.
For transport studies, we prepared three (Bi 1-x Sb x ) 2 Te 3 samples, BST1, BST2 and BST3, with x = 0.74, 0.72 and 0.58, respectively. Figure 2b shows the R xx -T curve of the BST2 film. At high temperatures (in the region of ~88-300 K), R xx increases as T is decreased, showing semiconductor-like behavior and indicating that E F is in the bulk band gap. The R xx -T curve displays metallic behavior in the intermediate temperature region (~7-88 K), which can be the result of reduced electron-phonon scattering of the surface states. 15,23,24 A second increase in R xx when T is lower than 7 K can be ascribed to quantum corrections to conduction, 15 behavior similar to that of MBE-grown films of similar composition and thickness. 23,24 Moreover, Figure 2c shows the Hall resistance of sample BST2 as a function of magnetic field, R xy (μ 0 H), at various gate voltages. The R xy -μ 0 H curves are always linear over the entire range of the magnetic field (±2 T); the positive slopes of R xy -μ 0 H as the gate voltage V g increases from -120 V to -15 V indicate that the dominant carriers are p-type, while the negative slopes as V g continues to increase show that the dominant carriers are n-type. The zero-field longitudinal resistance R xx and Hall coefficient R H extracted from the R xy -μ 0 H curves of samples BST1-3 are plotted in Figure 2d-f as a function of gate voltage. In the case of BST1 (BST3), R xx monotonically increases (decreases) with increasing V g , and R H has a positive (negative) sign, indicating that BST1 (BST3) is a p-type (n-type) semiconductor. On the other hand, in sample BST2, when V g is approximately 0 V, R H reverses its sign while R xx reaches its maximum value. These results clearly demonstrate the ambipolar field effect of Dirac dispersions and suggest that the Fermi level can be tuned across the Dirac point (DP) by controlling the ratio of Bi to Sb (Figure 2a) and the back-gate voltage. We note that for sample BST2, the maximum value of R xx , corresponding to the gate voltage at the charge neutral point, is on the same order as the maximum values of BST samples deposited by CVD or MBE, as listed in Table 1, and is also on the same order as the quantum resistance. The 2D carrier densities n 2D = 1/(eR H ) (where e is the elementary charge) can be obtained from the linear ordinary Hall resistance; the lowest electron density is n 2D = 2.7×10 12 cm -2 at V g = 60 V with a corresponding mobility of 270 cm 2 /Vs, while the lowest hole density is n 2D = 2.3×10 12 cm -2 at V g = -120 V with a corresponding mobility of 310 cm 2 /Vs. It can be concluded that magnetron sputtering is an effective method to prepare BST thin films with insulating bulk and low carrier densities, while the highly tunable Fermi level on SiO 2 allows us to explore the novel properties of topological surface states near the DP.
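As a sketch of how these numbers follow from the measured curves, using n 2D = 1/(eR H ) as in the text and taking the mobility from the zero-field sheet resistance via μ = 1/(n 2D eR xx ); the SI-unit inputs and the function name are illustrative assumptions:

```python
import numpy as np

E = 1.602e-19  # elementary charge in coulombs

def hall_analysis(B, R_xy, R_sheet):
    """B in tesla, R_xy and zero-field sheet resistance R_sheet in ohms."""
    R_H = np.polyfit(B, R_xy, 1)[0]            # Hall slope, ohm/T
    n_2d = 1.0 / (abs(R_H) * E)                # sheet carrier density, m^-2
    mu = 1.0 / (n_2d * E * R_sheet)            # Hall mobility, m^2/(V s)
    carrier = "p-type" if R_H > 0 else "n-type"
    return n_2d * 1e-4, mu * 1e4, carrier      # cm^-2, cm^2/(V s)
```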
After BST thin films with bulk insulating properties were successfully prepared, we focused on Cr-doped TI thin films grown by magnetron sputtering to investigate the possibility of realizing long-range ferromagnetic order and tuning E F inside the surface gap. One way to generate long-range ferromagnetic order in TIs is magnetic doping, and magnetically doped TI materials have been grown by CVD, 25 bulk Bridgman growth [26][27][28][29] and MBE 30 . Until now, however, magnetron sputtering has rarely been adopted for this purpose. In our experiment, CBST films were prepared by the cosputtering of Sb 2 Te 3 , Bi 2 Te 3 , Te and Cr targets at a substrate temperature of 175 °C. Figure 3 presents an example of the ferromagnetic properties observed in a 6 nm thick CBST film. At T = 2 K, the conventional butterfly-shaped R xx -μ 0 H curve and the nearly square-shaped Hall hysteresis loop with a coercive field of 340 Oe shown in Figure 3 indicate ferromagnetic order with an out-of-plane easy magnetization axis. Generally, the Hall resistance in the presence of the anomalous Hall effect is expressed as 30,31 R xy = R 0 H + R A M, where R 0 is the slope of the ordinary Hall background, H is the applied magnetic field, R A is the anomalous Hall coefficient, and M is the magnetization component in the perpendicular direction. Figure 4a,b present the longitudinal sheet resistance R xx and the Hall resistance R xy , respectively, as a function of H for a CBST thin film at various back-gate voltages. R AH , estimated from the zero-field intercept of the linear background at high magnetic field, and the zero-field resistance R xx (0) are plotted in Figure 4c. We observe that the R AH and R xx (0) data follow the same trend, both reaching their maximum at V g = -130 V and their minimum at V g = 150 V. It is worth noting that the maximum of R AH in Figure 4c is 16.4 kΩ, which exceeds 60% of the quantum Hall resistance (h/e 2 ). More importantly, the existence of robust ferromagnetic order in both p-type and n-type Cr-doped TI thin films grown by magnetron sputtering and the V g -independent coercive field shown in Figure 4b demonstrate the occurrence of carrier-independent bulk van Vleck magnetism, which is consistent with reports in the literature. 30,32 Nevertheless, the QAHE has not been observed in our Cr-doped TI films grown by magnetron sputtering. In the quantum anomalous Hall regime, R AH should reach a maximum and R xx should exhibit a dip at the charge neutral point. 20,33,34 This signature has not been observed in our CBST thin films at T = 2 K. The R AH /R xx (0) ratio of the CBST thin film, however, has a maximum of approximately 0.2 at V g = 20 V (see Supplementary Information S6), which exceeds the Hall angle of usual diluted magnetic semiconductors. 30 Therefore, the quantum anomalous edge states in CBST might contribute to carrier transport. We believe that it is possible to observe the QAHE in CBST thin films grown by magnetron sputtering if the sample quality is improved further by reducing the sputtering rate, controlling the amount of Cr more precisely and adjusting the annealing process, among other measures.
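A minimal sketch of the R AH extraction described above: fit the linear ordinary-Hall background on the high-field branch (where M is saturated) and take its zero-field intercept. The 1 T saturation threshold and the input arrays are illustrative assumptions.

```python
import numpy as np

def anomalous_hall_resistance(H, R_xy, h_sat=1.0):
    """Return (R_AH, R_0) from a Hall trace; h_sat marks the saturated branch."""
    H, R_xy = np.asarray(H), np.asarray(R_xy)
    mask = H >= h_sat                      # keep only the high-field branch
    r0, r_ah = np.polyfit(H[mask], R_xy[mask], 1)  # R_xy ~ R_0*H + R_AH
    return r_ah, r0                        # intercept = R_AH, slope = R_0
```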
In summary, we have demonstrated that it is possible to grow Bi 2 Te 3 family TI thin films on amorphous SiO 2 /Si substrates by magnetron sputtering. Our work provides a large-scale method to produce bulk insulating TI thin films with tunable transport properties. Both the ARPES data and the ambipolar field effect point to the conclusion that the topological surface states can very well exist in these sputtered, polycrystalline thin films. An unusually large value of 60% of the quantum anomalous Hall resistance is observed in Cr-doped BST films, and more efforts are needed to obtain an ideal quantum anomalous Hall insulator. The magnetron cosputtering growth provides a first yet important step towards the large-scale fabrication of multilayer films with complex structures for future spintronic devices.
Methods
Magnetron sputtering growth. Bi 2 Te 3 , BST and CBST thin films were grown on polished, thermally oxidized silicon substrates (300 nm SiO 2 /silicon) by magnetron sputtering. A cosputtering method with a high-purity (99.99%) Bi 2 Te 3 alloy target, an Sb 2 Te 3 alloy target and a Te target was adopted to control the films' composition. The BST thin films were deposited from the Bi 2 Te 3 and Sb 2 Te 3 targets by direct current (DC) sputtering and from the Te target by radio-frequency (RF) sputtering.
The CBST thin films were deposited from Bi 2 Te 3 , Sb 2 Te 3 and Cr (99.95%) targets by DC sputtering and from a Te target by RF sputtering. During growth, the Si(100) substrate was kept at 160 °C for BST thin films and at 175 °C for CBST thin films. The base pressure of the deposition chamber was below 3×10 -7 Torr, and the working argon pressure was set at 2×10 -3 Torr. After growth, the thin films were annealed for 25 minutes at the same temperature as the deposition. After the substrate cooled to room temperature, a 3-4 nm thick Al thin film was sputtered in situ on the topological insulator thin films; the Al naturally oxidized to form Al 2 O 3 after the sample was removed from the chamber and exposed to air.
Material characterization. To avoid possible contamination of the Bi 2 Te 3 thin films, a 2 nm thick Te capping layer was deposited on top of the films before they were removed from the high-vacuum growth chamber. The Bi 2 Te 3 thin films with the 2 nm Te capping layer were exposed to air for 2 hours before the ARPES measurements. ARPES data were collected by a Scienta R4000 analyzer at a sample temperature of 3 K. The structural characteristics of the BST films were investigated by using TEM (FEI Tecnai F20; acceleration voltage, 200 kV) and XRD (Rigaku TTR-III, Japan).
"year": 2020,
"sha1": "b7fd9b597a0be4055241572862e524b890979e4e",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1901.02611",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "b7fd9b597a0be4055241572862e524b890979e4e",
"s2fieldsofstudy": [
"Physics",
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
} |
Determining optimal medical image compression: psychometric and image distortion analysis
Background: Storage issues and bandwidth over networks have led to a need to optimally compress medical imaging files while leaving clinical image quality uncompromised. Methods: To determine the range of clinically acceptable medical image compression across multiple modalities (CT, MR, and XR), we performed psychometric analysis of image distortion thresholds using physician readers and also performed subtraction analysis of medical image distortion by varying degrees of compression. Results: When physician readers were asked to determine the threshold of compression beyond which images were clinically compromised, the mean image distortion threshold was a JPEG Q value of 23.1 ± 7.0. In Receiver-Operator Characteristics (ROC) plot analysis, compressed images could not be reliably distinguished from original images at any compression level between Q = 50 and Q = 95. Below this range, some readers were able to discriminate the compressed and original images, but high sensitivity and specificity for this discrimination was only encountered at the lowest JPEG Q value tested (Q = 5). Analysis of directly measured magnitude of image distortion from subtracted image pairs showed that the relationship between JPEG Q value and degree of image distortion underwent an upward inflection in the region of the two thresholds determined psychometrically (approximately Q = 25 to Q = 50), with 75 % of the image distortion occurring between Q = 50 and Q = 1. Conclusion: It is possible to apply lossy JPEG compression to medical images without compromise of clinical image quality. Modest degrees of compression, with a JPEG Q value of 50 or higher (corresponding approximately to a compression ratio of 15:1 or less), can be applied to medical images while leaving the images indistinguishable from the original.
Background
Medical images are increasingly displayed on a range of devices connected by distributed networks, which place bandwidth constraints on image transmission. As medical imaging has transitioned to digital formats such as DICOM and archives grow in size, [1] optimal settings for image compression are needed to facilitate long-term mass storage requirements.
One definition of optimal medical image compression is a degree of compression that decreases file size substantially but produces a degree of image distortion that is not clinically significant. A more conservative definition of optimal image compression would require a degree of image distortion that cannot be perceived by the viewer at all. Other methods that have been used to distinguish degrees of medical image compression include pixel analysis and blinded measurements of diagnostic accuracy [2].
We assessed the crossover point for distortion of grayscale medical images (CT, MR, and XR modalities) by JPEG compression according to two different definitions: (1) the point at which distortion is clinically significant to the viewer and (2) the point at which any distortion can be reliably discriminated by the viewer. We additionally performed analysis of subtracted images to correlate the accumulation of increasing error pixel burden at lower JPEG Q values with the thresholds determined psychometrically.
Methods
Test Images
40 fully anonymized test images without any identifying features in DICOM format were subjected to JPEG compression as described in detail below using ImageJ64 software (version 1.45, http://rsbweb.nih.gov/ij/index.html). Single representative images with or without pathological features were chosen across a range of modalities and body regions, including CT, MR, and XR imaging modalities (Figure 1, Additional file 1). Clinically standard window/level settings for each modality/body region were chosen for presentation. All images were grayscale, at 8-bit depth (0 to 255 gray values), at source pixel dimensions (minimum pixel dimensions 512 x 512, maximum pixel dimensions 2328 x 2320).
Image Viewing by Clinicians
Because this study aimed to determine thresholds for image distortion by JPEG compression during viewing of images in a range of clinical contexts (e.g. on a personal or clinical office computer, using a web browser, or using a portable electronic device), test images were displayed to subjects using Macintosh and Windows PCs and using both image analysis software and HTML5compatible web browsers. Because background lux levels can impact radiological image interpretation [3,4], background lux levels were measured using a Mastech MS8229 lux meter and maintained throughout viewing in the range of 25-100 lux.
For the presentation of continuous 100 to 1 JPEG Quality image stacks, images were presented using ImageJ64 software on a Macintosh computer with LCD screen dimensions of 1280 x 800 pixels, with images rendered at full size up to the screen resolution. Image stacks consisted of 100 images created by successively compressing an original single DICOM image into the full range of JPEG compression from JPEG Quality 100 to 1. Viewers were instructed to view the entire range of image compression from JPEG Quality 100 to 1 by scrolling through the image stack continuously using left/right arrows on the computer keyboard or scroll gestures on the computer touchpad. Viewers did not have feedback as to the degree of compression while performing this task; determinations were made solely on the basis of image appearance.
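A minimal sketch of how such a Q = 100 to 1 stack could be generated. The study used ImageJ, so this Pillow-based version, with its illustrative file names, is an equivalent reconstruction rather than the authors' actual tooling:

```python
from pathlib import Path
from PIL import Image

src = Image.open("source_image.png").convert("L")  # 8-bit grayscale source
out = Path("stack")
out.mkdir(exist_ok=True)
for q in range(100, 0, -1):                        # JPEG Quality 100 down to 1
    src.save(out / f"q{q:03d}.jpg", format="JPEG", quality=q)
```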
For the presentation of pairwise image comparisons, images were displayed using LCD monitors with screen resolutions of 1280 x 800 to 1280 x 1024 pixels with image presentation by way of an HTML5-compatible web browser (Google Chrome version 15) with images displayed at full size up to the screen resolution. For each pairwise comparison, viewers used the left/right arrows on the computer keyboard to rapidly switch back and forth between the two images being compared.
Clinicians in the study were practicing physicians with board certification in their primary medical specialty (Radiology, Neurology, Neurosurgery, Pulmonary/Critical Care Medicine, and Internal Medicine). A total of 8 clinicians participated in the continuous compression experiment, and a total of 10 clinicians participated in the pairwise image comparison experiment. Clinician subjects were blinded to all aspects of study design and any indicators of image compression other than intrinsic image characteristics.
Psychometric Measurements
Viewers assessed distortion thresholds in two different experiments: (1) determination of clinically important distortion by assessment of continuous JPEG compression from JPEG Q Value 100 to 1, and (2) determination of the level of compression that can be reliably perceived by the viewer, by assessment of a range of differently compressed image pairs.
For the continuous assessment of JPEG compression, viewers scrolled through stacks of 100 images constructed as described above with a range of JPEG compression from JPEG Q Value 100 to 1. Viewers were asked to determine the approximate point at which the image was felt to be distorted to any clinically meaningful extent, and the Q Value corresponding to this point was recorded. Viewers were allowed as much time as needed to make this determination. Each viewer assessed 40 image stacks.
For the pairwise comparison of images, viewers were shown 7 pairs of images for 8 images randomly chosen from the overall set of 40 images. For each image pair, one image was JPEG Quality = 100 and the other image was JPEG Quality = (5, 20, 35, 50, 65, 80, or 95). Each viewer was shown 7 image pairs presented in randomly chosen order and asked to determine which image of each pair (also presented in randomly chosen order) was the lower quality image. Viewers were instructed to choose an image even if they could not tell the images apart (to guess if required), and also to indicate whether they felt that their choice was a guess or not.
Random choices for image selection and order of image presentation were made with the use of a true random number generator (www.random.org).
For ROC plot analysis, sensitivity and specificity were calculated based on correct or incorrect identification of "image 0" or "image 1" from each image pair. Because the presentation of image pairs was chosen by random number generator, the labeling of "image 0" or "image 1" for ROC analysis was randomly chosen and the subject's response of "image 0" or "image 1" was determined by whether the subject correctly identified the compressed image or not.
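In code, this bookkeeping reduces to counting agreement between the randomly assigned label of the compressed image and the reader's response; the trial tuples below are toy values for illustration:

```python
# (randomly assigned label of the compressed image, reader's response)
trials = [("1", "1"), ("0", "0"), ("1", "0"), ("0", "0"), ("1", "1")]
tp = sum(t == r == "1" for t, r in trials)          # compressed "1" called "1"
fn = sum(t == "1" and r == "0" for t, r in trials)  # compressed "1" missed
tn = sum(t == r == "0" for t, r in trials)          # compressed "0" called "0"
fp = sum(t == "0" and r == "1" for t, r in trials)  # compressed "0" missed
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
```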
Image Pixel Difference Measurements
To determine the degree of absolute pixel differences between compressed images and a source JPEG Q Value 100 image, we performed subtraction of whole images across the range of JPEG compression from Q Value 99 to 1 using ImageJ64 software. Each successively compressed image was subtracted from the source image, yielding a stack of difference images from (Q Value 100-99) to (Q Value 100-1). Measurements were taken of the total density of difference pixels across each image in the stack, and this operation was then performed on all 40 images viewed by the subject as described above. The mean ± standard deviation total image pixel differences across the 40 images were displayed after normalization to the maximal difference in each image stack.
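A sketch of the same measurement outside ImageJ, assuming the compressed stack from the earlier sketch is on disk; int16 arithmetic avoids 8-bit wraparound during subtraction:

```python
import numpy as np
from PIL import Image

def load(path):
    return np.asarray(Image.open(path).convert("L"), dtype=np.int16)

ref = load("stack/q100.jpg")                       # Q = 100 reference frame
totals = [np.abs(load(f"stack/q{q:03d}.jpg") - ref).sum()
          for q in range(99, 0, -1)]               # Q 99 down to 1
normalized = np.array(totals, dtype=float) / max(totals)  # scale to max diff
```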
The conduct of this study was fully compliant with the World Medical Association (WMA) Declaration of Helsinki. Fully anonymized images without any identifying features were shown to physicians who volunteered their own time to participate. No identifying data about the individual physicians was used, stored, or transmitted as part of the study. Based on these specific study characteristics, the study was exempt from IRB review. Exempt status was confirmed by the Kaiser Foundation Research Institute IRB.
Psychometric Experiment 1: Clinically important distortion
When physician readers were asked to determine the degree of compression beyond which images were clinically compromised, the mean image distortion threshold was a JPEG Q value of 23.1 ± 7.0 ( Figure 2). The distribution of the data about this mean Q value was approximately Gaussian (Figure 2A, mean = 23 ± 8.1). The task in this experiment was a subjective one (determination of the point at which the reader felt the image was unacceptably distorted), so not surprisingly, there was variation in crossover point values from reader to reader ( Figure 2B, P < 0.001, Kruskal-Wallis test). Despite this, the highest crossover point for any image or reader was a JPEG Q value of 44.
Psychometric Experiment 2: ROC plot analysis of discrimination between compressed and original image pairs
In ROC plot analysis, compressed images could not be reliably distinguished from original images at any compression level between Q = 50 and Q = 95 ( Figure 3A). For Q values 50, 65, 80, and 95, the 95 % confidence intervals (CI) of the sensitivity and specificity estimates each crossed the line of unity where (sensitivity = [1 -specificity]), indicating no reliable discrimination between image pairs (Figure 3A). At a Q level of 20 or 35, discrimination between the compressed and original images improved beyond chance (sensitivity and specificity increased and the 95 % CI no longer crossed the line of unity, Figure 3A). However, high sensitivity and specificity for image discrimination was only encountered at the lowest JPEG Q value tested (Q = 5, Figure 3A).
As viewers were additionally asked in this experiment to record whether they felt that their choice was a guess, we also analyzed the relationship between JPEG Q value and the rate at which readers guessed or made the incorrect choice ( Figure 3B). Consistent with the ROC plot analysis, the rate of guessing or incorrect choice rose steeply across the Q = 5 to Q = 50 range, then plateaued ( Figure 3B).
Direct analysis of distortion pixels by image subtraction
To determine whether basic features of the JPEG compression algorithm might potentially explain the thresholds encountered in the psychometric experiments above, we performed software analysis of the magnitude of image distortion in subtracted image pairs across the full range of JPEG compression from Q = 99 to Q = 1. A visual demonstration of the effect of image subtraction to reveal error pixels is shown in Figure 4. Direct measurements of subtracted images showed that, as expected, the degree of total pixel error increased across the full range from Q = 99 to Q = 1 ( Figure 5). However, this increase in the degree of pixel error had a low slope at Q values above 50, and only at higher levels of compression did the slope show an upward inflection ( Figure 5). About 25 % of the error pixel accumulation occurred between Q = 50 and Q = 99, while the remaining 75 % of error pixel accumulation occurred between Q = 1 and Q = 50.
Discussion
Our data show that lossy JPEG compression can be applied to medical images without clinical image compromise. More subtle lossy JPEG compression (Q values of 50 or higher, roughly a compression ratio of 15:1 or less) can be applied without giving expert viewers the ability to reliably distinguish between the compressed image and the original.
The medical literature on JPEG image compression has typically presented data on compression ratios (e.g. 8:1 or 30:1). However, the software control of compression in the JPEG standard allows for direct manipulation only of Q values, not compression ratio; the compression ratio varies from image to image at a given Q value, depending on the complexity of the source image [5][6][7]. Since the relationship between Q value and compression ratio for a given image cannot be known a priori, it is more reasonable to present data on Q values, assuming software adherence to the standards of the Independent JPEG Group (www.ijg.org).
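The image dependence of the ratio is easy to demonstrate directly: compress each image at the same Q and compute the ratio of uncompressed to compressed bytes, as in this hypothetical helper for 8-bit grayscale sources:

```python
import io
from PIL import Image

def compression_ratio(img: Image.Image, q: int) -> float:
    raw_bytes = img.width * img.height  # one byte per pixel for 8-bit gray
    buf = io.BytesIO()
    img.convert("L").save(buf, format="JPEG", quality=q)
    return raw_bytes / buf.tell()       # same Q, different ratio per image
```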
Previous work in this field has focused on relatively subtle degrees of medical image compression. For example, based on a review of the literature on compression of medical images, one group recommended a range of JPEG compression from 5:1 to 8:1, and another review of prior studies recommended this same range of compression. [8] Similarly, consensus-based approaches have yielded estimates of acceptable compression from 5:1 to 15:1 [9]. Another group tested higher degrees of compression following their own literature review [10], but they were unable to perform ROC analysis because the chosen range of compression ratios was too conservative [11]. Of note, in the same study, JPEG compression appeared to perform better than JPEG 2000 compression at the higher levels of compression tested. Some work has suggested that higher degrees of compression may be acceptable. For example, one study examined the impact of JPEG 2000 compression on interpretation of mammographic digital images and found that images with compression ratios up to 60:1 were not distinguishable from source images [12].
Our study has limitations. We chose to focus on CT, MR, and XR modalities, all of which are grayscale, and therefore one cannot necessarily extrapolate our results to other imaging modalities, particularly color images. We also chose an approach to determine thresholds of clinically acceptable compression and the ability of readers to discriminate a compressed and original image; therefore, we did not specifically examine the ability of readers to distinguish pathology from normal anatomy, which represents a fundamentally different task.
From the data presented here and data from prior studies, [8,9,[11][12][13][14][15] it is reasonable to conclude that a modest degree of JPEG compression is acceptable for routine clinical use of medical images.
Conclusion
It is possible to apply lossy JPEG compression to medical images (including CT, MR, and XR modalities) without significant compromise of clinical image quality. Regardless of whether one uses a threshold of clinically acceptable quality or a threshold of inability to distinguish the compressed image from the original, use of a JPEG Q value of 50 to 100 (an approximate compression ratio of 15:1 or lower) can be viewed as generally safe. Within the range of JPEG Q values from 50 to 100, trade-offs between quality and file size should be assessed based on the specific application or clinical need.
Additional file
Additional file 1: The supplemental video demonstrates the effect of continuously increasing JPEG compression from Q = 100 to Q=1 and then decreasing JPEG compression back up to Q=100 for an abdominal CT scan. The continuous range of Q values shown in the video is similar to what subjects viewed in Psychometric Experiment 1 (see Results), but in the actual experiment, the viewer was able to actively control the process of scrolling through the stack of images.
Competing interests
The author is Co-Founder and Chief Medical Officer of Interconnect Medical, LLC, a company that designs web-based software for sharing medical imaging.
"year": 2012,
"sha1": "5cca2b55dd7c54800b7d25c4cb81f526305076b7",
"oa_license": "CCBY",
"oa_url": "https://bmcmedimaging.biomedcentral.com/track/pdf/10.1186/1471-2342-12-24",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3c070b5c2ee930411676cb3f2a6f4e410c314c12",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |