The Application of Color Doppler Ultrasound in the Evaluation of the Efficacy of 595-nm Pulsed Dye Laser Combined with 755-nm Long-Pulse Alexandrite Laser in the Treatment of Hybrid Infantile Hemangiomas

Purpose: We used color Doppler ultrasound to conduct an objective evaluation of 595-nm pulsed dye laser (PDL) combined with 755-nm long-pulse alexandrite sequential laser treatment for hybrid infantile hemangiomas (IH).

Patients and Methods: A total of 116 cases of hybrid IH were selected for this study. The interval between laser treatments was around 4 weeks, and the end point was either 6 laser treatments or complete removal of the tumor. All children underwent color Doppler ultrasonography at the 0th, 1st, and 6th months of treatment. Children were grouped by gender, age (<6 months, ≥6 months), thickness (<8 mm, ≥8 mm), and location (face and neck, trunk, and extremities). The volume of each IH was calculated from the color Doppler ultrasound measurements. The ratio of the volume after treatment to the volume before treatment was defined as the A-value, and treatment was considered effective when the A-value was <75%.

Results: In total, 74 cases (63.79%) had effective outcomes. Overall, the sample showed a statistically significant reduction in IH volume after 6 months of laser treatment (P < 0.001). Treatment in the <6 months group was more effective than in the ≥6 months group (P < 0.001), treatment in the thickness <8 mm group was more effective than in the thickness ≥8 mm group (P < 0.001), and there was no significant difference in efficacy among the three location groups (P > 0.05). The reduction in blood flow was greater in the group with effective outcomes than in the group with ineffective outcomes (P < 0.001).

Conclusion: Color Doppler ultrasound can be applied to the diagnosis of hybrid IH and to the evaluation of treatment timing and outcomes, and it can help clinicians recognize hybrid IH with greater accuracy. The earlier the intervention for hybrid IH, the better the outcome.

Introduction

Infantile hemangioma (IH) is one of the most common benign vascular tumors in children. Statistically, the incidence of IH ranges from 2% to 10%.1 There has been a significant and steady increase in the incidence of IH over the past 30 years, attributable to the increasing incidence of prematurity and low birth weight.1,2 The growth of IH is not linear: most IH are not present at birth, but appear within 1 month after birth and then proliferate. The first 6 months after birth, especially the first 3 months, constitute the early proliferative phase of IH; the late proliferative phase is 6-9 months after birth, while a few IH may continue to proliferate until 2 years of age.3 The regression phase is slower and may take 3-10 years.4 Current clinical treatments for IH include systemic pharmacotherapy, topical pharmacotherapy, laser therapy, surgery, and injection therapy. Among laser therapies, the pulsed dye laser (PDL) is the device most commonly used to treat IH. Because of its wavelength, PDL has high absorption coefficients for oxyhemoglobin (OHB) and deoxyhemoglobin (DHB). It has been extensively used to treat small IH lesions early, or to clear residual lesions during the regression phase. However, because the penetration depth of PDL is only 1-2 mm, it has limited effectiveness in treating thicker IH and may not be able to stop the proliferative growth of deeper IH.5
In numerous studies, the 755-nm long-pulse alexandrite laser, whose penetration depth is 50-75% greater than that of the PDL, has been proven safe and effective in the treatment of deeper IH.6,7 The clinical presentation of IH depends on its depth. Superficial IH is located in the superficial dermis and appears as a red lobulated plaque. Deep IH is located in the deep dermis and/or subcutaneous tissue and appears as a greenish-blue subcutaneous mass. Hybrid IH involves both layers of the dermis, sometimes extending into the subcutaneous tissue, and exhibits clinical features of both categories.8 IH is typically diagnosed by the clinician's subjective visual observation. Color Doppler ultrasound is a convenient, practical and non-invasive examination that has become a routine diagnostic method for IH. It distinguishes the various types of IH, measures the diameter, thickness and transverse diameter of the lesion, and shows the internal echo and blood flow signal of the mass. It allows clinicians to choose the best treatment modality based on the depth and thickness of the IH and to determine when to discontinue treatment.9,10 Therefore, in this study, we objectively evaluated the efficacy of 595-nm PDL combined with 755-nm long-pulse alexandrite sequential laser treatment for hybrid IH using regular color Doppler ultrasound.

Study Design

This is a retrospective study of children with hybrid infantile hemangioma who visited the dermatology department of the Second Affiliated Hospital and Yuying Children's Hospital of Wenzhou Medical University for treatment from August 1, 2019 to August 1, 2022. This study complies with the Declaration of Helsinki and was approved by the ethics committee of the Second Affiliated Hospital and Yuying Children's Hospital of Wenzhou Medical University (2022-K-126-01), and informed consent was obtained from the guardians of all recruited children.

Patients

The inclusion criteria for this study were as follows: 1. hybrid low- or medium-risk infantile hemangiomas meeting the relevant diagnostic criteria of the Diagnostic and Treatment Guidelines of Hemangioma and Malformation of Vessels, 2019 edition;11 2. solitary infantile hemangiomas in children within 1 year of age; 3. first visit to our hospital, without any previous intervention; 4. children who underwent regular color Doppler ultrasound examinations during the treatment; 5. children who were treated continuously for 6 months. The exclusion criteria were as follows: 1. infantile hemangiomas previously treated with topical and/or oral β-adrenergic receptor inhibitors, lasers, injections or surgery; 2. infantile hemangiomas with PHACE, LUMBAR, or other hemangioma and vascular malformation-related syndromes; 3. infantile hemangiomas with medium- or high-risk features prone to ulceration, highly disfiguring damage, or functional impairment; 4. children with severe organic pathologies (eg, cardiopulmonary disorders); 5. children whose treatment regimen changed during the 6-month study period.

Treatment Options

Basic information about the children was gathered at the first visit, including age, birth weight, gender and color Doppler ultrasound findings of the IH. The interval between laser treatments was around 4 weeks, and the end point was either 6 laser treatments or complete removal of the tumor. The Vbeam 595-nm pulsed dye laser (Candela Laser, Inc.) and the 755-nm long-pulse alexandrite laser (Candela Laser, Inc.)
were used to treat the patients sequentially according to their specific conditions. The 595-nm PDL was applied with the following settings: wavelength 595 nm, energy density 6.0-7.0 J/cm2, spot diameter 5-10 mm, pulse width 0.45-20 ms, dynamic cooling device (DCD) with 30-40 ms jets and 20-30 ms intervals. The 755-nm long-pulse alexandrite laser was applied with the following settings: wavelength 755 nm, pulse width 3 ms, energy density 45-60 J/cm2, spot diameter 6-8 mm, DCD with 20 ms jets and 20 ms intervals. Cold compresses were applied for 15-30 minutes after laser treatment to reduce post-laser pain and swelling. Fusidic acid cream (Bright Future) was then applied topically for 7 days to prevent infection. Photographs of the lesions were taken with a camera (Canon PowerShot G11) before each treatment for efficacy observation.

Ultrasonography

The MyLab Twice eHD color Doppler ultrasound diagnostic instrument with an LA-523 linear array probe at 4-13 MHz was used to evaluate the children's IH at the 0th (before laser treatment), 1st (after one laser treatment) and 6th months of treatment. Color Doppler ultrasound was performed with the child in a quiet state. If necessary, sedation (10% chloral hydrate enema, 0.5-1.0 mL/kg/dose) was used, and the child received color Doppler ultrasound 20 minutes after falling asleep. During ultrasonography, the type (superficial, deep or hybrid), size (diameter, thickness and transverse diameter), morphology, internal echogenicity, depth from the body surface and color Doppler flow imaging (CDFI) of the IH were determined.

Efficacy Evaluation

The volume of the IH was estimated as diameter × thickness × transverse diameter × (π/6).12 The effect of treatment was determined from the size of the IH measured by color Doppler ultrasound. The ratio of the volume after treatment to the volume before treatment was defined as the A-value. Treatment results were categorized as Grade 1 (A-value ≥100%), Grade 2 (75% ≤ A-value < 100%) and Grade 3 (A-value <75%). As the criterion for clinical efficacy evaluation, treatment outcomes were defined as effective (Grade 3) or ineffective (Grades 1 and 2). On CDFI, the outcome was defined as reduced blood flow when the blood flow signal changed from hypervascularity to hypovascularity or when the blood flow signal decreased by more than 50% compared with the first visit.
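These definitions are straightforward to script. The following is a minimal sketch (ours, not part of the study protocol) of how the volume estimate and the grading could be implemented, assuming measurements in millimetres; the function names are illustrative.

```python
import math

def ih_volume(diameter_mm: float, thickness_mm: float, transverse_mm: float) -> float:
    """Estimated IH volume: diameter x thickness x transverse diameter x (pi/6)."""
    return diameter_mm * thickness_mm * transverse_mm * math.pi / 6

def a_value(volume_before: float, volume_after: float) -> float:
    """Ratio of post-treatment volume to pre-treatment volume, as a percentage."""
    return 100 * volume_after / volume_before

def grade(a: float) -> int:
    """Grade 1: A >= 100%; Grade 2: 75% <= A < 100%; Grade 3 (effective): A < 75%."""
    if a >= 100:
        return 1
    return 2 if a >= 75 else 3

# Example with hypothetical measurements (mm):
v0 = ih_volume(20.0, 8.0, 15.0)   # before treatment
v6 = ih_volume(15.0, 6.0, 12.0)   # after 6 months
a = a_value(v0, v6)
print(f"A-value = {a:.1f}%, grade = {grade(a)}")  # effective iff grade == 3
```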
Side Effects

Parents were instructed to observe and photograph the child's IH regularly after each laser treatment, and they were asked about the occurrence of local adverse events (eg, erythema, blister, scab, erosion, ulceration, scar, hyperpigmentation and hypopigmentation) and systemic adverse events (eg, asthma, bradycardia, hypotension, hypoglycemia, gastrointestinal disorders, sleep disorders and diarrhea) at each clinic visit. All adverse events were documented in detail.

Statistical Analysis

All statistical analyses were performed using SPSS 26.0 for macOS (IBM Corp., Armonk, NY). Normality was tested using the Shapiro-Wilk method. Samples conforming to a normal distribution were analyzed using the independent-samples t-test; samples not conforming to a normal distribution were analyzed using the paired Wilcoxon signed-rank test or the Kruskal-Wallis H-test; the chi-square test or Fisher's exact test was used for count variables. P < 0.05 was considered statistically significant.

Results

A total of 116 children with hybrid IH were included in this study (Table 1). There were 56 males (48.28%) and 60 females (51.72%), a male-to-female ratio of 1:1.07; 68 cases (58.62%) were in the <6 months group and 48 cases (41.38%) in the ≥6 months group; 46 cases (39.66%) were located in the face and neck region, 38 cases (32.76%) in the trunk region, and 32 cases (27.59%) in the extremity region. The hybrid IH thickness measured by color Doppler ultrasound before treatment was used to divide the children into two groups, with thickness <8 mm defined as the shallower group and thickness ≥8 mm as the deeper group: 72 cases (62.07%) in the shallower group and 44 cases (37.93%) in the deeper group. The A-value of the total sample after 1 laser treatment was 95.94% ± 13.1% (mean ± SD), and 30.17% of the children had an increase in volume compared to the pre-treatment period. The A-value of the overall sample after 6 months of sequential laser treatment was 64.68% ± 24.86%. In total, 74 children had Grade 3 (63.79%), 38 children had Grade 2 (32.76%), and 4 children had Grade 1 (3.45%). A total of 74 cases had an effective outcome (Figure 1), including 42 cases in which color Doppler ultrasound suggested decreased blood flow and 32 cases with unchanged blood flow; 42 cases had an ineffective outcome, including 8 cases with decreased blood flow and 34 cases with unchanged blood flow. The 4 children with increased volume were treated with oral propranolol or local sclerotherapy injection at the end of the study, and all achieved effective relief. The difference in IH volume before and after 1 laser treatment in the overall sample was not statistically significant (P = 0.418 > 0.05). The overall sample showed a statistically significant reduction in IH volume after 6 months of laser treatment compared to pre-treatment (P < 0.001). The difference in efficacy between the gender groups was not statistically significant (P = 0.468 > 0.05). Treatment was more effective in the <6 months group, with a statistically significant difference (χ2 = 22.837, P < 0.001) (Table 2). The efficacy of treatment in the shallower group was superior to that in the deeper group, with a statistically significant difference (χ2 = 16.260, P < 0.001) (Table 3). There was no statistically significant difference in efficacy among the three location groups (P = 0.400 > 0.05). Blood flow reduction was more frequent in the group with effective outcomes than in the group with ineffective outcomes, with a statistically significant difference (χ2 = 15.535, P < 0.001) (Table 4).
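The comparison in Table 4 can be checked directly from the counts reported above (effective: 42 decreased / 32 unchanged; ineffective: 8 decreased / 34 unchanged). A minimal sketch, assuming SciPy is available; disabling the Yates continuity correction reproduces the reported χ2 = 15.535.

```python
from scipy.stats import chi2_contingency

# Rows: effective / ineffective outcome; columns: decreased / unchanged blood flow.
table = [[42, 32],
         [8, 34]]

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.2e}")  # chi2 ≈ 15.535, p < 0.001
```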
None of the children in this study had systemic adverse events. In all, 28 children (24.14%) had adverse reactions: 9 had hyperpigmentation, 7 had hypopigmentation, 2 had scarring, and 12 had blisters; one child had a blister combined with a scar and one had hyperpigmentation combined with a scar (Table 5). The children with hyperpigmentation and hypopigmentation had largely recovered within 3 months of follow-up after the last treatment. The scars in the 2 affected children did not resolve significantly during the follow-up period after the end of treatment. Among the 12 children with blisters, excluding the 2 with combined scarring or hyperpigmentation, the remaining 10 had only temporary skin lesions, and the blisters resolved after 1 week of topical fusidic acid cream.

Discussion

This study retrospectively analyzed the efficacy of 595-nm PDL combined with 755-nm long-pulse alexandrite laser in 116 cases of hybrid IH after 6 months of sequential treatment in our dermatology department, evaluated objectively by color Doppler ultrasound. IH may occur anywhere on the body but primarily on the head, face and neck.13 There is a female predominance in the development of IH, and the male-to-female ratio is generally considered to be between 1:2 and 1:5.14 In this study, the face and neck region predominated, consistent with the epidemiological results. The male-to-female ratio was nearly 1:1 in this study, which may be linked to the small sample size. Oral propranolol is considered the current treatment of choice for IH, but it is not ideal for localized, solitary IH because of its side effects, such as hypoglycemia and cardiac burden.15 Topical β-blockers, on the other hand, can only act on superficial IH due to the limitations of topical medications. In 1983, Anderson et al16 proposed the theory of selective photothermolysis, and laser therapy for localized, thicker IH has become a popular research direction over the past few years. Studies have shown that plasma concentrations of vascular endothelial growth factor (VEGF), a growth factor that promotes vascular proliferation, are significantly reduced after laser treatment, with ultrasound suggesting reduced blood flow.17 However, many children with IH are not sensitive to laser treatment.18,19 In this study, four children increased in volume after laser treatment; they were treated with oral propranolol or local sclerotherapy injections, which provided effective relief. Therefore, laser treatment can be combined with other treatment modalities to obtain better results for IH. Asilian et al18 also demonstrated that timolol combined with 595-nm PDL had better efficacy and a shorter treatment duration than 595-nm PDL alone. The high absorption coefficients for OHB and DHB are one of the mechanisms of PDL in treating IH, and PDL allows rapid resolution of residual lesions during the regression phase. Kessels et al20 randomized infants with IH into two groups, an observation group and a PDL group; at the 12-month follow-up, the PDL group showed a significant improvement in color compared to the control group. However, due to the low penetration depth of PDL, only 1-2 mm, it has a limited effect on thicker IH and may not be able to prevent the proliferation and growth of the deeper components of IH.5 The 755-nm long-pulse alexandrite laser penetrates the skin 50-75% deeper than 595-nm PDL and helps treat deeper IH with fewer adverse effects.6,7 Accordingly, in this study, 595-nm PDL combined with 755-nm long-pulse alexandrite sequential laser treatment of hybrid IH showed good efficacy and safety. Moreover, the therapeutic effect correlated with the age of the child and the thickness of the IH: younger children had better clinical efficacy than older children, and shallower lesions responded better than deeper ones.
This also suggests that early active intervention for IH can lead to better therapeutic outcomes. In line with this, Jiang et al21 verified that early intervention with 595-nm PDL combined with 755-nm long-pulse alexandrite laser can reduce the incidence of IH sequelae. The incidence of adverse reactions during treatment was low, with only 28 cases (24.14%) showing hyperpigmentation, hypopigmentation, scarring or blistering, and no systemic adverse effects. More interestingly, 30.17% of the children had an increase in volume after 1 laser treatment compared to the pre-treatment period, which may be because these lesions were still in the proliferative phase. At the same time, laser treatment has certain drawbacks. Compared to topical timolol and oral propranolol, laser treatment is a greater financial burden for some families. Also, as mentioned above, laser treatment remains limited in its scope of action compared to oral propranolol and local injections, and can only be used to treat lower-risk IH. In the case of superficial IH, clinicians usually judge the therapeutic effect by visual observation of changes in the size, color and other features of the IH. The most common evaluation method has been the internationally accepted four-grade classification proposed by Achauer et al.22 However, the efficacy of treatment for hybrid or deep IH is hard for dermatologists to evaluate precisely and subjectively, because its depth cannot be visualized. Therefore, in recent years, many clinicians have adopted color Doppler ultrasound for the routine diagnosis and therapeutic evaluation of IH.23 Color Doppler ultrasound is a quick, noninvasive examination that determines the blood flow within the IH, assesses the depth of IH involvement and differentiates the types of IH: superficial IH involves only the skin and mucous membranes, deep IH does not affect the skin or mucous membranes, and hybrid IH involves both. Color Doppler ultrasound also helps dermatologists determine when treatment can be stopped.9,10 As previously indicated, there are various treatment options for IH. Dermatologists can use color Doppler ultrasonography to assess the volume, depth and CDFI of an IH, helping them choose appropriate treatments that take into account the child's condition as well as the family's concerns. In the proliferative phase of IH, ultrasound shows a hypoechoic localized mass and an enhanced blood flow signal. In contrast, in the regression phase of IH, the tissue is gradually replaced by fibrous and fatty tissue,24 and ultrasound shows decreased volume, increased echogenicity and a weakened blood flow signal.25 Color Doppler ultrasound has mainly been used to evaluate and predict the efficacy of oral propranolol for deeper IH, whereas evaluation of the effectiveness of treatment for superficial IH has relied primarily on the clinician's visual observation. However, He et al26 suggested that spectral ultrasound could predict the efficacy of topical timolol in treating IH. As an objective detection modality, color Doppler ultrasound can also evaluate shallower IH with greater accuracy. Therefore, regular color Doppler ultrasound is a reliable aid for the diagnosis, treatment timing and outcome evaluation of IH, and can provide valuable additional information.
In terms of the relationship between the outcome of IH and the change in blood flow signal, there was a greater reduction in blood flow in the group with effective outcomes, which reflects that color Doppler ultrasound can serve as an important objective evaluation modality and supports the effectiveness of this treatment protocol. Consistent with this study, Babiak-Choroszczak et al27 found that color Doppler ultrasound showed a reduction of the blood flow signal within the lesion as IH subsided during oral propranolol treatment, and that the blood flow signal correlated with basic fibroblast growth factor (bFGF) during and after treatment. This study has some limitations. First, the study did not include a control group or a control treatment group, so it was impossible to separate the actual effect of laser treatment from the spontaneous regression of IH. Second, as a retrospective study, selection bias may affect the generalizability of the results. Third, the study period was short, and the subjects were not followed for an extended period of time.

Conclusion

Color Doppler ultrasound can be applied quickly and objectively to diagnose hybrid IH and to assess treatment timing and outcome, and it can help clinicians recognize hybrid IH more accurately. The 595-nm PDL combined with the 755-nm long-pulse alexandrite laser is safe and effective in treating hybrid IH. Moreover, the effect of treatment correlates with the age of the child and the thickness of the IH: younger children have better clinical efficacy than older children, and shallower lesions respond better than deeper ones. The earlier the intervention for hybrid IH, the better the outcome.
Supersolvability of built lattices and Koszulness of generalized Chow rings

We give an explicit quadratic Gröbner basis for the generalized Chow rings of supersolvable built lattices, with the help of the operadic structure on geometric lattices introduced in a previous article. This shows that the generalized Chow rings associated to minimal building sets of supersolvable lattices are Koszul. As another consequence, we get that the cohomology algebras of the components of the extended modular operad in genus 0 are Koszul.

Introduction

In [14] Feichtner and Yuzvinsky defined algebras FY(L, G) for every pair of a geometric lattice L and a building set G ⊂ L (such a datum (L, G) will be called a built lattice). In the realizable case those algebras are the cohomology rings of the wonderful compactifications introduced by De Concini-Procesi [9]. If G is equal to L \ {0̂} we get the so-called combinatorial Chow ring of L. Those rings are known to satisfy very strong properties such as Poincaré duality or even the Kähler package (see Adiprasito-Huh-Katz [1] for combinatorial Chow rings and Pagaria-Pezzoli [22] for general Feichtner-Yuzvinsky rings). An important property which is still a largely open question for Feichtner-Yuzvinsky algebras is Koszulness. In plain English, Koszulness means that the algebra in question has a weight grading such that the algebra is generated by elements of weight 1, the relations between elements of weight 1 are generated in weight 2 (i.e. the algebra is quadratic), the relations between relations are generated in weight 3, and so on. Koszulness is a particularly interesting property to ask of the cohomology ring of a formal space because it allows a direct computation of other rational homotopy invariants such as the rational homotopy Lie algebra (see Berglund [4]). Since the wonderful compactifications of hyperplane arrangements are known to be formal, it is natural to ask which Feichtner-Yuzvinsky algebras are Koszul, a question raised by Dotsenko in [11]. A classical way to prove the Koszulness of a given algebra is to find a quadratic Gröbner basis for it. Feichtner and Yuzvinsky computed explicit Gröbner bases for the Feichtner-Yuzvinsky rings, but those bases are almost never quadratic. In fact, the Feichtner-Yuzvinsky rings themselves are not necessarily quadratic. One of the first results proving the Koszulness of some Feichtner-Yuzvinsky algebras was given by Dotsenko, who proved that the Feichtner-Yuzvinsky algebras associated to the complete graphs with the building set of connected subgraphs are Koszul. In a nutshell, Dotsenko introduced an explicit order on the generators of the Feichtner-Yuzvinsky rings and then used the operadic structure on this collection of rings to construct a bijection between the algebraic normal monomials associated to this order and the relations of degree 2, and the operadic normal monomials obtained in a previous work via Gröbner bases for operads (Dotsenko-Khoroshkin [12]). By a dimension argument this implies that the relations of weight 2 form a quadratic Gröbner basis of the Feichtner-Yuzvinsky rings in question. More recently, Mastroeni-McCullough [21] proved that the combinatorial Chow rings are all Koszul, using the notion of Koszul filtrations. In [23] Stanley introduced the class of "supersolvable" lattices.

Definition 1.1 (Stanley, [23]).
A lattice L is called supersolvable if it admits a maximal chain ∆ such that for every chain K in L, the sublattice generated by ∆ and K is distributive.

Supersolvable lattices have very nice properties in general. In particular, we have the following classical result.

Theorem 1.2 (Yuzvinsky, [27]). The Orlik-Solomon algebra of a supersolvable lattice admits a quadratic Gröbner basis.

In this article we prove a similar result for Feichtner-Yuzvinsky algebras. We first introduce a notion of supersolvability for built lattices (which coincides with the usual supersolvability when taking the maximal building set) and we prove the following theorem.

Theorem 1.3. The Feichtner-Yuzvinsky algebra of a supersolvable built lattice admits a quadratic Gröbner basis and is therefore Koszul.

In order to prove this result we will generalize the strategy of Dotsenko, using the extended operadic structure introduced in [8]. As a consequence we obtain the following result for minimal building sets.

Theorem 1.4. Let L be a supersolvable lattice. The algebra FY(L, G_min) associated to the minimal building set admits a quadratic Gröbner basis and is therefore Koszul.

Stanley [23] proved that the geometric lattices associated to chordal graphs (i.e. graphs such that every cycle of length at least four has a chord) are supersolvable. Alternatively, one can associate to a graph G a built lattice (L_G, G_G) where L_G is the lattice associated to G and G_G is the building set of connected closed subgraphs of G. Stanley's original argument also shows that (L_G, G_G) is a supersolvable built lattice. This implies by Theorem 1.3 that its Feichtner-Yuzvinsky algebra admits a quadratic Gröbner basis. Since complete graphs are chordal we recover the result of Dotsenko. In [17], Losev and Manin introduced moduli spaces of stable curves with marked points of two types, where the points of the first type are not allowed to coincide with any other point, and the points of the second type are allowed to coincide among themselves. Those moduli spaces form the components of an object called the "extended modular operad", introduced by Losev and Manin in the sequel [18]. In [20], Manin asked whether the cohomology algebras of those moduli spaces are Koszul. By considering the family of chordal graphs G_{m,n}, where G_{m,n} has m + n vertices, the first m vertices adjacent to every other vertex and the last n vertices adjacent only to the first m vertices, one obtains the following result.

Theorem 1.5. The cohomology algebras of the components of the extended modular operad in genus 0 have quadratic Gröbner bases and are therefore Koszul.

In Section 2 we introduce the combinatorial objects needed to understand the rest of the article and we recall some of their known properties. In Section 3 we prove Theorem 1.3 and we deduce Theorem 1.4. In Section 4 we turn our attention to supersolvable built lattices associated to chordal graphs, which leads to Theorem 1.5. Finally, in Section 5 we take a step back and give some general comments for further research.

Geometric lattices and matroids

Definition 2.1 (Lattice). A finite poset L is called a lattice if every pair of elements in L admits a supremum and an infimum. The supremum of two elements G_1, G_2 is denoted by G_1 ∨ G_2 and called their join, while their infimum is denoted by G_1 ∧ G_2 and called their meet.

Remark 2.2. Since L is assumed to be finite, having suprema and infima for pairs of elements implies having suprema and infima for any subset S of L, which will be denoted by ⋁S and ⋀S respectively. As a consequence, every lattice admits an upper bound (the supremum of S = L) and a lower bound (the infimum of S = L), which will be denoted by 1̂ and 0̂ respectively.

Definition 2.3 (Geometric lattice).
A finite lattice (L, ≤) is said to be geometric if it satisfies the following properties:
• For every pair of elements G_1 ≤ G_2, all the maximal chains of elements between G_1 and G_2 have the same cardinality. (Jordan-Hölder property)
• The rank function ρ : L → N, which assigns to any element G of L the cardinality of any maximal chain of elements from 0̂ to G (not counting 0̂), satisfies the inequality ρ(G_1 ∨ G_2) + ρ(G_1 ∧ G_2) ≤ ρ(G_1) + ρ(G_2) for all G_1, G_2 in L. (Semimodularity)
• Every element in L can be obtained as the supremum of some set of atoms (i.e. elements of rank 1). (Atomicity)

For any geometric lattice L we will denote by At(L) its set of atoms. For any element F in L we will denote by At_≤(F) the set of atoms of L which are below F. One of the reasons to study this particular class of lattices is that the intersection poset of any hyperplane arrangement is a geometric lattice. In fact, one may think of geometric lattices as a combinatorial abstraction of hyperplane arrangements. In addition, this object is equivalent to the datum of a loopless simple matroid via the lattice of flats construction, and therefore it has connections to many other areas of mathematics (graph theory for instance). Let us describe in more detail the correspondence between simple loopless matroids and geometric lattices. There are several equivalent definitions of matroids; we refer to [26] for more details.

Definition 2.4 (Matroids via independent subsets). A matroid is a pair of a finite set E and a set I of subsets of E (the "independent" subsets) satisfying the axioms
• The empty set belongs to I.
• For any I in I, every subset of I belongs to I.
• For any I, J in I, if #J > #I there exists an element a in J and not in I such that I ∪ {a} is independent.

Definition 2.5 (Matroids via closure operator). A matroid is a pair of a finite set E and a map σ : P(E) → P(E) (the "closure operator") satisfying the axioms
• For any X ∈ P(E) we have X ⊆ σ(X).
• For any X ⊆ Y in P(E) we have σ(X) ⊆ σ(Y).
• For any X ∈ P(E) we have σ(σ(X)) = σ(X).
• For any X ∈ P(E) and a, b ∈ E, if a belongs to σ(X ∪ {b}) \ σ(X) then b belongs to σ(X ∪ {a}). (Exchange)

Definition 2.6 (Matroids via circuits). A matroid is a pair of a finite set E and a set C of subsets of E (the "circuits") satisfying the axioms
• The empty set is not a circuit.
• No circuit is properly contained in another circuit.
• For any two distinct circuits C_1, C_2 and any e ∈ C_1 ∩ C_2, there exists a circuit contained in (C_1 ∪ C_2) \ {e}. (Circuit elimination)

One can replace the last axiom by the following stronger version, which we will use later in this article:

(1) For any two distinct circuits C_1, C_2, any e ∈ C_1 ∩ C_2 and any f ∈ C_1 \ C_2, there exists a circuit containing f and contained in (C_1 ∪ C_2) \ {e}. (Strong circuit elimination)

One passes from the independent subset definition to the circuit definition by defining a circuit as a minimal dependent subset. One passes from the circuit definition to the closure definition by putting σ(X) := X ∪ {e ∈ E | there exists a circuit C with e ∈ C ⊆ X ∪ {e}}. A matroid (E, I) is said to be simple loopless if every subset of E of cardinality at most two is independent. A flat of a matroid M = (E, σ) is a subset F ⊆ E such that σ(F) is equal to F. The set of flats of M, denoted by L_M and ordered by inclusion, is a geometric lattice with meet given by intersection. Conversely, if L is a geometric lattice then the datum (E, σ), where E is the set of atoms of L and σ is the map defined by σ(X) := At_≤(⋁X), is a simple loopless matroid. Those two constructions are inverse to each other on simple loopless matroids. In the sequel we will freely identify an element of a geometric lattice with the set of atoms below this element. For instance, if G_1 and G_2 are two elements of some geometric lattice L then G_1 ∪ G_2 will mean At_≤(G_1) ∪ At_≤(G_2). Finally, notice that by definition, for any subset S ⊆ L with L some geometric lattice we have

(2) At_≤(⋁S) = σ(⋃_{G ∈ S} At_≤(G)),

where σ is the closure operator of the associated matroid.
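To make the circuit-to-closure passage concrete, here is a small sketch (ours, not from the paper) implementing the closure operator σ from a list of circuits and computing the flats, on the graphical matroid of a triangle; the names are illustrative.

```python
from itertools import combinations

def closure(circuits, ground, X):
    """sigma(X) = X plus every e admitting a circuit C with e in C <= X + {e}."""
    X = set(X)
    grown = True
    while grown:  # iterate in case newly added elements enable new circuits
        grown = False
        for e in ground - X:
            if any(e in C and C <= X | {e} for C in circuits):
                X.add(e)
                grown = True
    return frozenset(X)

# Graphical matroid of a triangle: the three edges form the unique circuit.
ground = {"a", "b", "c"}
circuits = [frozenset("abc")]

# Flats = subsets fixed by the closure operator; ordered by inclusion they
# form the geometric lattice of flats of the matroid.
flats = sorted(
    {closure(circuits, ground, S)
     for k in range(len(ground) + 1)
     for S in map(set, combinations(sorted(ground), k))},
    key=lambda F: (len(F), sorted(F)),
)
for F in flats:
    print(set(F) or "{}")  # {}, {a}, {b}, {c}, {a,b,c}
```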
Here is a list of some important well-known geometric lattices.

Example 2.7.
• If X is any finite set, the set P(X) of subsets of X ordered by inclusion is a geometric lattice, with join the union and meet the intersection. It is the intersection lattice of the arrangement of coordinate hyperplanes in C^X. Those geometric lattices are called boolean lattices and denoted by B_X.
• If X is any finite set, the set Π_X of partitions of X ordered by refinement is a geometric lattice. It is the intersection lattice of the so-called braid arrangement, which consists of the diagonal hyperplanes {z_i = z_j} in C^X. Those geometric lattices are called partition lattices.
• If G = (V, E) is any graph one can construct the graphical matroid M_G associated to G and then consider L_G, the lattice of flats associated to M_G (see [26] for the details of this construction). Those lattices are said to be graphical. This family of geometric lattices contains the two previous ones, because B_X is the lattice associated to any tree with edge set X and Π_X is the lattice associated to the complete graph with vertex set X. For any graph G = (V, E) the geometric lattice L_G is the intersection lattice of the hyperplane arrangement consisting of the hyperplanes {z_u = z_v} for {u, v} ∈ E.

A fundamental fact about geometric lattices is that an interval of a geometric lattice is again a geometric lattice (see [26]). In the rest of this article every lattice will be assumed to be geometric.

Building sets and nested sets

The following definition is due to De Concini-Procesi [9] in the realizable case, and to Feichtner-Yuzvinsky [14] in general.

Definition 2.8 (Building set). Let L be a geometric lattice. A building set G of L is a subset of L \ {0̂} such that for every element X of L the morphism of posets

(4) ∏_{G ∈ max G_≤X} [0̂, G] → [0̂, X]

induced by the join is an isomorphism (where max G_≤X is the set of maximal elements of G ∩ [0̂, X]). The elements of max G_≤X are called the factors of X in G.

Definition 2.9 (Built lattice). The datum of a lattice L and a building set G of L will be called a built lattice. If G contains 1̂ we say that (L, G) is irreducible.

The definition of a building set makes sense for a larger class of posets, as shown in [14], but in this paper we will restrict ourselves to the case of geometric lattices. In this particular context, building sets are geometrically motivated by the construction of wonderful compactifications of hyperplane arrangement complements. In a nutshell, building sets are sets of intersections of a hyperplane arrangement that one can successively blow up in order to obtain a wonderful compactification of its complement (see [9] for more details). Each blowup creates a new exceptional divisor, so the wonderful compactification is equipped with a family of irreducible divisors indexed by G. This family of divisors forms a normal crossing divisor when G is a building set. There are a few key examples to keep in mind throughout this story.

Example 2.10.
• Every lattice L admits a maximal building set G_max := L \ {0̂}.
• Less trivially, every lattice L also admits a unique minimal building set, which consists of all the elements G of L such that [0̂, G] is not a product of proper subposets.
• From the definition one can see that a building set of some lattice L must contain all the atoms of L. If L is a boolean lattice (see Example 2.7) then its set of atoms is in fact a building set (the minimal one). This fact characterizes boolean lattices.
• If L is the lattice of partitions of some finite set (see Example 2.7) then the subset of partitions with only one block having at least two elements is a building set of L. This is the minimal building set of L.
• If L is a graphical lattice (see Example 2.7) then the set of elements of L corresponding to sets of edges which are connected forms a building set of L.
We will denote this built lattice associated to a graph G by (L_G, G_G). This family of examples contains the two previous ones (by considering totally disconnected graphs for the former and complete graphs for the latter).
• Alternatively, if G = (V, E) is a graph one can consider the boolean lattice B_V. This lattice has a building set made up of the "tubes" of G, that is, sets of vertices of G such that the induced subgraph on those vertices is connected. This leads to the notion of graph associahedra introduced in [6].

A key fact about building sets is that any interval [G_1, G_2] in some built lattice (L, G) admits an "induced" building set, which we describe now. We start by introducing a useful notation.

Notation 2.11. For any element G of some lattice L and a subset X of L, we denote by G ∨ X the set of elements of L which can be obtained as the join of G and some element of X.

Definition 2.12 (Induced building set). Let (L, G) be a built lattice and let [G_1, G_2] be an interval of L. The induced building set on [G_1, G_2] is the set Ind_{[G_1,G_2]}(G) := (G_1 ∨ G) ∩ ([G_1, G_2] \ {G_1}). We will often write Ind(G) instead of Ind_{[G_1,G_2]}(G) if the interval can be deduced from the context.

Proposition 2.13. The set Ind_{[G_1,G_2]}(G) is a building set of the interval [G_1, G_2].

Proof. The proof can be found in [5] (Lemma 2.8.5).

Definition 2.14 (Nested set). Let (L, G) be a built lattice. A subset S of G is called a nested set if for every antichain A in S which is not a singleton, the join of the elements of A does not belong to G.

The nested sets of a built lattice (L, G) form an abstract simplicial complex denoted N(L, G). We denote by N_irr(L, G) the set of nested sets of (L, G) containing the maximal elements of G. Geometrically, nested sets correspond to sets of divisors in the wonderful compactification which have a nontrivial intersection.

Supersolvable built lattices

Recall that a lattice L is said to be distributive if for every triple X, Y, Z ∈ L we have the equality X ∧ (Y ∨ Z) = (X ∧ Y) ∨ (X ∧ Z). A pair of elements (X, Y) in a lattice L is called modular if for every Z ≤ Y we have (Z ∨ X) ∧ Y = Z ∨ (X ∧ Y). An element X in a lattice L is said to be modular if for every Y both the pairs (X, Y) and (Y, X) are modular. The following definition is due to Stanley [23].

Definition 2.16 (Supersolvable lattice). A lattice L is said to be supersolvable if there exists a maximal chain M of elements of L such that for every chain K in L the sublattice generated by M and K is distributive.

Stanley proved that for geometric lattices (or more generally semimodular lattices) we have the following equivalence.

Proposition 2.17 (Stanley [23]). A geometric lattice is supersolvable if and only if it has a maximal chain of modular elements.

In this article we only consider geometric lattices and we will mostly use the above equivalent characterization.

Fact 2.18. Supersolvability is a hereditary condition (meaning it is stable under taking intervals): if G is some element of a supersolvable lattice L with maximal chain of modular elements 0̂ = G_0 < G_1 < ... < G_n = 1̂, then the chains (G_i ∧ G)_{0 ≤ i ≤ n} and (G_i ∨ G)_{0 ≤ i ≤ n} are maximal chains (with possibly multiple occurrences) of modular elements of [0̂, G] and [G, 1̂] respectively (see [23]).

We introduce the following variant for built lattices.

Definition 2.19 (Supersolvable built lattice). A built lattice (L, G) is said to be supersolvable if it admits a maximal chain 0̂ = G_1 < ... < G_n = 1̂ of modular elements in G such that for any element G in G, the element G_i ∧ G belongs to G ∪ {0̂} for all i ≤ n. By Fact 2.18, supersolvability for built lattices is a hereditary condition (meaning it is stable under passing to intervals with their induced building sets).
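To make the condition of Definition 2.19 concrete, the following minimal sketch (ours, not from the paper) checks it by brute force on the boolean lattice B_4, anticipating Example 2.20 below; all names are illustrative and lattice elements are encoded as frozensets of atoms, so that join is union and meet is intersection.

```python
from itertools import combinations

def subsets(s):
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# Boolean lattice B_4: join = union, meet = intersection.
L = subsets({1, 2, 3, 4})

def is_modular_pair(x, y):
    """(x, y) is a modular pair: (z | x) & y == z | (x & y) for every z <= y."""
    return all((z | x) & y == z | (x & y) for z in L if z <= y)

def is_modular(x):
    return all(is_modular_pair(x, y) and is_modular_pair(y, x) for y in L)

def is_supersolvable_chain(chain_, bset):
    """Definition 2.19: a chain of modular elements of the building set whose
    meets with every element of the building set stay in bset + {0}."""
    return all(is_modular(g) for g in chain_) and all(
        (gi & g) in bset or not (gi & g)
        for gi in chain_ for g in bset)

atoms = [frozenset({i}) for i in (1, 2, 3, 4)]
good = set(atoms) | {frozenset({1, 2}), frozenset({1, 2, 3}), frozenset({1, 2, 3, 4})}
chain_ = [frozenset({1}), frozenset({1, 2}), frozenset({1, 2, 3}), frozenset({1, 2, 3, 4})]
print(is_supersolvable_chain(chain_, good))   # True: this chain witnesses supersolvability

bad = set(atoms) | {frozenset({1, 2, 3}), frozenset({2, 3, 4}), frozenset({1, 2, 3, 4})}
# {1,2,3} & {2,3,4} = {2,3} is not in bad + {0}: any chain through {1,2,3} fails.
chain2 = [frozenset({1}), frozenset({1, 2, 3}), frozenset({1, 2, 3, 4})]
print(is_supersolvable_chain(chain2, bad))    # False
```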
Example 2.20. Let B_4 be the boolean lattice of subsets of {1, 2, 3, 4}. If we put G = At(B_4) ∪ {{1, 2}, {1, 2, 3}, {1, 2, 3, 4}}, then {1} < {1, 2} < {1, 2, 3} < {1, 2, 3, 4} is a maximal chain of modular elements in G (all the elements of B_4 are modular), and the meet of any element of this chain with any element of G belongs to G ∪ {0̂}, so (B_4, G) is supersolvable. On the contrary, if one puts G = At(B_4) ∪ {{1, 2, 3}, {2, 3, 4}, {1, 2, 3, 4}}, then any maximal chain of elements in G must contain either {1, 2, 3} or {2, 3, 4}, and the meet {1, 2, 3} ∧ {2, 3, 4} = {2, 3} does not belong to G ∪ {0̂}, so (B_4, G) is not supersolvable.

One can immediately see that if L is a supersolvable lattice then (L, G_max) is a supersolvable built lattice. In Section 3 and Section 4 we will introduce other large classes of supersolvable built lattices. Let (L, G) be a supersolvable built lattice with some chosen maximal chain of modular elements ω = {0̂ = G_1 < ... < G_n = 1̂}. For any G in G and any G′ in Ind_{[G,1̂]}(G) we denote by d_{ω,G}(G′) the coatom in the maximal chain of modular elements induced by ω on [G, G′] (see Fact 2.18). An element of the form d_{ω,G}(G′) will be called an initial segment of G′ relative to G. In practice we will drop ω from the notation, and if G is equal to 0̂ we also drop it from the notation. In the sequel, whenever we introduce a supersolvable built lattice we implicitly choose a particular maximal chain of modular elements of this built lattice. To conclude this subsection we prove a small general lemma which will be useful later on.

Lemma 2.21. Let L be a geometric lattice, G a modular element of L and C a circuit in L. At least one of the following propositions is true:
• no element of C is below G;
• C is contained in At_≤(G);
• for any H in C ∩ At_≤(G) there exists a circuit C′ containing H and such that C′ \ {H} is contained in C \ At_≤(G).

Proof. Assume the first two propositions are not true. Let us denote I := C ∩ At_≤(G) and J := C \ I. Let H be any element of I, which is not empty by assumption. Since J is not empty, I is independent, and by modularity of G the element σ(J) ∧ G has rank at least #I. Since σ(I \ {H}) only has rank #I − 1 we must have H ∈ σ(J). By Formula (2) this implies that we have the desired circuit C′.

The Feichtner-Yuzvinsky rings

Definition 2.22. Let (L, G) be a built lattice. The Feichtner-Yuzvinsky ring of (L, G) is the quotient FY_aff(L, G) := k[x_G, G ∈ G]/I_aff, with all the generators in degree 2, and I_aff the ideal generated by the elements ∑_{G ≥ H} x_G for every atom H, and the elements ∏_{G∈X} x_G for every set X ⊂ G which is not nested.

In the realizable case, the ring FY(L, G) is the cohomology ring of the wonderful compactification associated to the building set G (see [9] for the computation of the cohomology ring). Those rings were generalized to arbitrary built lattices by Feichtner and Yuzvinsky in [14]. The Feichtner-Yuzvinsky rings admit another important presentation, with generators h_G for G ∈ G and one relation for every G ∈ G and every antichain A in G such that ⋁A is equal to G. The change of variables between this presentation and the defining presentation is given (up to sign conventions) by h_G = ∑_{G′ ≥ G} x_{G′}. This presentation appeared first in [13] for the braid arrangement and in [2] for general maximal building sets. It is widely used in [22]. In this article we will use exclusively this presentation. In [14] the authors address the issue of finding a Gröbner basis for FY_aff(L, G) (see [3] for a reference on Gröbner bases), and they show that, for any linear order on generators refining the reverse order on G, although the elements defining I_aff do not form a Gröbner basis in general, one can still describe a fairly manageable Gröbner basis.

Theorem 2.24 (Feichtner-Yuzvinsky). The elements of the form (∏_{G∈S} x_G) · (∑_{G″ ≥ G′} x_{G″})^{rk(G′) − rk(⋁S)}, with S any nested set and G′ any element of G satisfying G′ > ⋁S, together with the usual ∏_{G∈X} x_G for every non-nested set X, form a Gröbner basis of FY_aff(L, G) for any linear order on generators refining the reversed order of L. The normal monomials with respect to this Gröbner basis are the monomials of the form x_{G_1}^{m_1} ⋯ x_{G_n}^{m_n}, where the G_i's form a nested set S and for every i ≤ n we have m_i < rk(G_i) − rk(⋁{G ∈ S | G < G_i}).

Proof. The proof can be found in [14].

Those Gröbner bases are almost never quadratic.
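As a sanity check of the defining presentation, one can feed I_aff for a tiny example to a computer algebra system. The sketch below (ours, not from the paper) builds the ideal for the boolean lattice B_3 with its maximal building set and asks SymPy for a Gröbner basis, assuming sympy is installed; whether the resulting basis is quadratic depends on the chosen monomial order, in line with Theorem 2.25 stated just below.

```python
from sympy import groebner, symbols

# Boolean lattice B_3 on atoms {1,2,3}, maximal building set:
# all nonempty subsets, one variable x_S per element.
G = ["1", "2", "3", "12", "13", "23", "123"]
x = dict(zip(G, symbols(" ".join("x" + g for g in G))))

above = lambda a: [g for g in G if a in g]          # elements >= atom a
incomparable = [(g, h) for g in G for h in G
                if g < h and not (set(g) <= set(h) or set(h) <= set(g))]

# I_aff: one linear relation per atom, one quadratic relation per minimal
# non-nested set (for the maximal building set, nested sets are chains,
# so the minimal non-nested sets are the incomparable pairs).
ideal = [sum(x[g] for g in above(a)) for a in "123"]
ideal += [x[g] * x[h] for g, h in incomparable]

gb = groebner(ideal, *x.values(), order="grevlex")
print(max(p.total_degree() for p in gb.polys))  # max degree in this Groebner basis
```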
The rest of the paper will be devoted to proving the following theorem.

Theorem 2.25. Let (L, G) be a supersolvable built lattice. Then FY(L, G) admits a quadratic Gröbner basis.

The proof of this theorem will be carried out in Subsections 3.1 and 3.2. The following proposition is a small step toward Theorem 2.25 which we will need later on.

Proposition 2.26. Let (L, G) be a supersolvable built lattice. Then the algebra FY(L, G) is quadratic.

Proof. It is enough to prove that the nested set complex of (L, G) is flag, meaning that for any antichain G_1, ..., G_n in G with n ≥ 2, if G_1 ∨ ... ∨ G_n belongs to G then there exist i ≠ j ≤ n such that G_i ∨ G_j belongs to G. Assume on the contrary that there is an antichain G_1, ..., G_n such that G_1 ∨ ... ∨ G_n belongs to G but no join G_i ∨ G_j does. By restricting to a smaller interval we can assume G_1 ∨ ... ∨ G_n = 1̂. Using the building set isomorphism (4) we see that for every pair i ≠ j, either G_i or G_j is below d(1̂). As a consequence there is at most one integer i ≤ n such that G_i is not below d(1̂) (in fact there is exactly one such i). By reordering let us assume that this integer is n. By atomicity there exists an element X < G_n such that G_1, ..., G_{n−1}, X is a new counterexample to the flagness of the nested set complex of (L, G). We get a contradiction by reiterating this process.

The operadic structure on built lattices

In order to prove Theorem 2.25 we will use the operadic structure on built lattices introduced in [8]. Let us quickly summarize this construction, referring to the latter article for more details. From now on, all nested sets are assumed to contain the maximal elements of the building set they live in. For any built lattice (L, G) and any nested set S in G we have maps of algebras

(5) from FY(L, G) to the tensor product of the Feichtner-Yuzvinsky rings of the intervals determined by S, a generator h_{G′} being sent into the factor indexed by G, where G is the minimal element of S such that G′ < G.

In [8] we show that the collection of Feichtner-Yuzvinsky algebras {FY(L, G)} together with the above morphisms can be formalized as an operad over a certain Feynman category (Kaufmann-Ward [16]), denoted LBS, having as objects the built lattices and as morphisms the nested sets. The key ingredient is the definition of an associative composition of nested sets: for every nested set S ∈ N_irr(L, G) and every collection of nested sets (S_G)_{G∈S}, with each S_G a nested set of the corresponding interval equipped with its induced building set Ind(G), one can define S ∘ (S_G)_G ∈ N_irr(L, G), and the operation ∘ is associative (this is the composition of morphisms in LBS). In the case of the maximal building set, the composition of nested sets is just the concatenation of chains. The operad FY^∨ = {FY^∨(L, G)}, with structural morphisms given by the linear duals of the morphisms (5), admits a quadratic presentation with one generator of top degree (the degree map) in each arity (each built lattice (L, G)). The "monomials" in FY^∨ are all possible operadic products of those generators. Since we have only one generator in each arity, the monomials living in some Feichtner-Yuzvinsky ring FY(L, G) are in bijection with the nested sets of (L, G). In [8] we construct a Gröbner basis machinery for operads over LBS, via the introduction of a notion of shuffle LBS-operads, governed by another Feynman category having as objects the directed built lattices (built lattices together with a total order on the atoms of L). One can compute an (operadic) Gröbner basis for FY^∨ and describe the associated (operadic) normal monomials as follows. Let (L, G, ⊳) be a directed built lattice. The total order ⊳ on atoms defines an EL-labelling λ_⊳ by labelling each cover relation X ≺ Y with the ⊳-smallest atom H such that X ∨ H = Y. We refer to [25] for a reference on EL-labellings. For any two elements X < Y in L and k some positive integer less than rk(Y) − rk(X), we define ω^k_{X,Y,λ_⊳} to be the truncation at height k of the unique maximal chain with increasing λ_⊳-labels between X and Y.
More precisely, if this unique maximal chain is X ≺ X_1 ≺ ... ≺ X_{rk(Y)−rk(X)} = Y, then ω^k_{X,Y,λ_⊳} is the chain X ≺ X_1 ≺ ... ≺ X_k. If there is no ambiguity on the EL-labelling we will drop it from the notation. If the maximal chain is not truncated, i.e. k = rk(Y) − rk(X), we simply omit the superscript. In addition, to any maximal chain ω = 0̂ ≺ X_1 ≺ ... ≺ X_n in L one can associate a nested set τ(ω). Notice that this is well defined even if the X_i's do not belong to the building set G, because every X_i is an atom in [X_{i−1}, 1̂] and therefore must belong to the induced building set on the interval [X_{i−1}, 1̂]. Finally, in [8] we show that the operadic normal monomials are represented by the nested sets obtained by composing a nested set S′ with truncated increasing maximal chains ω^{k_G}_{τ_{S′}(G),G}, where we have k_G < rk([τ_{S′}(G), G]) − 1 for all G in S′, except for k_{1̂} which is less than or equal to rk([τ_{S′}(1̂), 1̂]).

The main results

This section is devoted to the proof of Theorem 2.25. In the first subsection we define an order on the generators of the Feichtner-Yuzvinsky algebras of supersolvable lattices and we compute the normal monomials of weight 2 associated to this order. In the next subsection we define a bijection between the algebraic normal monomials associated to the latter order and the operadic normal monomials introduced in Subsection 2.5. The construction of this bijection will be done by induction, using the operadic structure. By a dimension argument this bijection will show that the algebraic normal monomials form a basis of the Feichtner-Yuzvinsky algebras, which will show that the set of weight 2 relations forms a Gröbner basis of the Feichtner-Yuzvinsky algebras.

The order on generators and the normal monomials of weight 2

Definition/Proposition 3.1. Let (L, G) be a supersolvable built lattice. The transitive closure of the relations (6), defined for all k and all G, G′ ∈ G with G′ ≤ G and G′ not an initial segment of G, is antisymmetric and thus defines a partial order.

Proof. One can define an explicit total order containing the relations (6). For any element G in G let us denote by w(G) the word whose letters are the elements of At_≤(G) written in increasing order. We define a total order on G, also denoted ⊳, by putting G ⊳ G′ ⇔ w(G) is smaller than w(G′) for the lexicographic order.

In the sequel, whenever we introduce a supersolvable built lattice we implicitly choose an associated total order on G as in the above proof.

Proposition 3.2. Let α = h_{G_1} h_{G_2} be a monomial of weight 2 and let G := G_1 ∨ G_2. The monomial α is normal if and only if one of the three following conditions is verified.
• The element G does not belong to G.
• The element G belongs to G and G_1 is an initial segment of G_2 = G.
• The element G belongs to G, G_2 is not comparable to G_1, G_2 is not covered by G, and we have G_1 = d^k(G) where k is the maximal integer satisfying d^k(G) ∨ G_2 = G.

Proof. We have an obvious bijection between the monomials described in the above proposition and the normal monomials given by Theorem 2.24. By a dimension argument it is enough to prove that the normal monomials of weight 2 with respect to ⊳ are included in the monomials described in the proposition. If G_1 ⊳ G_2, the join G belongs to G and G_2 is covered by G, then h_{G_1} h_{G_2} is the leading term of a relation of weight 2. Notice that the normal monomials do not really depend on ⊳ but only on the chosen maximal chain of modular elements.
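The explicit total order used in the proof of Definition/Proposition 3.1 is easy to implement. A minimal sketch (ours), assuming that elements of the building set are given as sets of atoms labelled by integers:

```python
def word(g):
    """w(G): the atoms below G written in increasing order."""
    return tuple(sorted(g))

# Total order on the building set: compare atom-words lexicographically.
# Python compares tuples lexicographically, so sorting by word() realizes it.
building_set = [{1}, {2}, {3}, {1, 2}, {1, 2, 3}, {2, 3}]
for g in sorted(building_set, key=word):
    print(sorted(g))
# (1) < (1,2) < (1,2,3) < (2) < (2,3) < (3)
```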
We next come to an important lemma which highlights a first connection between our normal monomials and supersolvability.

Lemma 3.3. Let (L, G) be a supersolvable built lattice, and let G_1 and G_2 be two non-comparable elements of G such that G_1 ⊳ G_2, G_1 ∨ G_2 ∈ G, G_2 is not covered by G_1 ∨ G_2, and G_1 is an initial segment of G_1 ∨ G_2.

The forward implication is always true, but for the converse we need the supersolvability hypothesis. For instance, consider L the graphical lattice associated to a 5-cycle and number the edges (i.e. the atoms) from 1 to 5. If we pick G_1 = {1, 2, 3} and G_2 = {4, 5} in the maximal building set, then we obtain a counterexample to the converse.

Proof. With the hypotheses on G_1 and G_2 we have a chain of equivalences; the fourth equivalence comes from the fact that by supersolvability G_1 is modular in the interval [0̂, G_1 ∨ G_2].

A bijection between algebraic normal monomials and operadic normal monomials

The operadic normal monomials do not depend on the particular choice of ⊳ but only on the choice of the maximal chain of modular elements of (L, G). Such a choice being implicit, we will drop ⊳ from all notations.

From operadic normal monomials to algebraic normal monomials

We first define maps Φ by induction on the rank of L. Let (L, G) be a supersolvable built lattice. For any element G ≠ 1̂ in G and any algebraic monomial α = ∏_{G′∈I} h_{G′} in FY([G, 1̂], Ind(G)) we define an associated algebraic monomial in FY(L, G), where for any G′ ∈ G, i_{G′,G} is the biggest integer such that d^{i_{G′,G}}(G′) ∨ G = G′, and for any G′ ∉ G, the element G′^⊥ is the factor of G′ in G different from G. Finally, for any operadic normal monomial S in some supersolvable built lattice (L, G, ⊳), with G the maximal element of S′ for the order ⊳, we define by induction (on both the cardinality of S and the rank of L) the map Φ, initialized on empty nested sets. One can check that G ∨ S_G is an operadic monomial, so our map is well defined. This map sends an operadic normal monomial to some algebraic monomial, which will turn out to be normal, but we will not need this fact.

From algebraic normal monomials to operadic normal monomials

We are concerned with finding an inverse for Φ. Let us define a candidate Ψ on ANM(L, G) by induction on the rank of L and the weight of the monomial. We will drop the built lattice from the notation if it can be deduced from the context. We initialize the induction on monomials of weight zero. Let α = ∏_i h_{G_i} be some algebraic normal monomial in FY(L, G) and let us denote by G the maximum of the G_i's with respect to ⊳. If G = 1̂ then by Proposition 3.2 we see that all the G_i's except G are below d(1̂), so we define Ψ(α) from Ψ_{[0̂,d(1̂)],Ind(G)}(α/h_{1̂}), viewed as a normal monomial in (L, G). If G is different from 1̂, let us denote by G ∨ α_G the monomial ∏_{G_i ≠ G} h_{G∨G_i}. One can check that this defines an operadic normal monomial. As a side remark, let us remind the reader that we are in fact using again the (co)operadic structure on the Feichtner-Yuzvinsky rings. We must prove that the monomial G ∨ α_G is normal in FY([G, 1̂], Ind(G)). This is implied by the following lemma (Lemma 3.4), which is the technical core of the article. The statement is not true in general without the supersolvability condition, as shown by the following example. Let L be the graphical lattice associated to a 6-cycle with edges {1, ..., 6}, and consider suitable elements G_1, G_2, G_3 in the maximal building set. If a built lattice has small rank it can happen that it satisfies Lemma 3.4 without being supersolvable (see Subsection 5.1).

Proof. The statement is obvious when two of the G_i's are comparable, so we can assume that the elements G_1, G_2, G_3 are not comparable. We make a disjunction on whether G_i ∨ G_j belongs to G for i, j ≤ 3.
In this case, by the proof of Proposition 2.26 the element G_1 ∨ G_2 ∨ G_3 does not belong to G; and if G_1 ∨ G_2 ∨ G_3 is equal to G_1 ∨ G with G ∈ G and G_1, G nested, we immediately get G_1 ∨ G_2 = G ∈ G, contradicting the initial hypothesis. Let us show that {G_1 ∨ G_3, G_2 ∨ G_3} is a nested antichain in Ind(G), as in the previous case. By contradiction, assume that G_1 ∨ G_2 ∨ G_3 belongs to G. By restriction we can assume G_1 ∨ G_2 ∨ G_3 = 1̂. By nestedness this implies d^k(1̂) = G_2, which contradicts G_1 ⊳ G_2. If G_1 ∨ G_2 ∨ G_3 belongs to Ind(G) but not to G we immediately get a contradiction as in the previous case. This case is similar to the previous one. Once again let us show that {G_1 ∨ G_3, G_2 ∨ G_3} is a nested antichain in Ind(G). By contradiction, assume that G_1 ∨ G_2 ∨ G_3 belongs to G. By restriction we can assume that we have G_1 ∨ G_2 ∨ G_3 = 1̂. By assumption there exist integers k_1 and k_2 associated to G_1 and G_2; let us denote k := min(k_1, k_2), and assume k_1 ≥ k_2, the other case being symmetric. By modularity of d^k(1̂) and the definition of k we obtain a contradiction with the fact that G_1 ∨ G_2 does not belong to G. If G_1 ∨ G_2 ∨ G_3 belongs to Ind(G) and not to G we immediately get a contradiction as in the previous cases. We can either have G_1 ∨ G_2 ∨ G_3 ∉ G or the contrary. In the first case the building set isomorphism immediately gives the result. In the second case we can assume G_1 ∨ G_2 ∨ G_3 = 1̂. By assumption there exists an integer k such that d^k(1̂) ∧ (G_1 ∨ G_2) = G_1. We will prove an equality of which one inequality is obvious; let us show the other inequality. According to Lemma 3.3 it is enough to prove Inequality (9). An atom H below G_1 ∨ G_3 is either below G_1 or below G_3 by nestedness, and similarly for G_2 ∨ G_3. As a consequence, an atom below G_1 ∨ G_3 and below G_2 ∨ G_3 is either below G_3 or below G_1 ∧ G_2, which is below d(G_1) by Lemma 3.3. One must also prove that d(G_1) is not below G_2 ∨ G_3: otherwise, by nestedness d(G_1) would be either below G_2 or below G_3. In the first case we immediately obtain that G_2 is covered by G_1 ∨ G_2, which is a contradiction. In the second case we get G_1 ∧ G_3 = 0̂, which contradicts the fact that G_1 ∨ G_3 does not belong to G. In this case we necessarily have G_1 ∨ G_2 ∨ G_3 ∈ G. Let us prove that G_1 ∨ G_3 is an initial segment of G_1 ∨ G_2 ∨ G_3; it is enough to prove that G_1 is an initial segment of G_1 ∨ G_2 ∨ G_3. By restriction we can assume G_1 ∨ G_2 ∨ G_3 = 1̂. Since G_1 is an initial segment of G_1 ∨ G_2, there exists an integer k_1 such that d^{k_1}(1̂) ∧ (G_1 ∨ G_2) = G_1. The integer k_1 is greater than or equal to k_2, because the opposite inequality would imply G_2 ≤ G_1. Let us prove the desired equality, of which one inequality is obvious. Let us now prove Inequality (9). By supersolvability the lattice generated by G_1 = d^{k_1}(1̂), G_3 and G_2 ∨ G_3 is distributive. This implies, as in the other cases, the desired conclusion. In this last case we necessarily have G_1 ∨ G_2 ∨ G_3 ∈ G. As always, by restriction we can assume G_1 ∨ G_2 ∨ G_3 = 1̂. By assumption there exists an integer k_2 such that d^{k_2}(1̂) ∧ (G_1 ∨ G_2) = G_1 and an integer k_3 such that d^{k_3}(1̂) ∧ (G_1 ∨ G_3) = G_1. Let us denote k := max(k_2, k_3). We will prove an equality of which the opposite inequality is obvious; this implies that G_1 ∨ G_3 is an initial segment of 1̂. Let us now prove Inequality (9). By supersolvability the lattice generated by G_1 = d^{k_1}(1̂), G_3 and G_2 ∨ G_3 is distributive.
This implies the desired identity, and as in the other cases one can check the remaining condition.

Proof of the main theorem

In this subsection we give the proof of our main theorem and some immediate corollaries.

Proof. By dimension it is enough to prove that the map Φ is a left inverse of Ψ, which we will do by induction. The base cases are obvious. Let α = ∏_{G′∈I} h_{G′} be a normal algebraic monomial with maximal element G with respect to ⊳. If G = 1̂, then every element G′ ∈ I is of the form d^k(1̂) for some k. In this case we can compute the composite explicitly, which proves reciprocity. If G ≠ 1̂, by induction one can prove that the maximal element (for ⊳) of Ψ(α) is G. We then have the claimed expression by definition, and by normality of α and maximality of G we get Supp_G(G ∨ α) = α_G, which concludes the proof by induction.

We have the immediate corollary. When restricting our attention to the maximal building set we get the following.

Corollary 3.7. Let L be a supersolvable lattice. The combinatorial Chow ring FY(L, G_max) admits a quadratic Gröbner basis.

In the next subsection we shall see that this is also true for the minimal building set.

Minimal building sets of supersolvable lattices

A lattice is said to be irreducible if it is not a product of proper subposets. The minimal building set G_min of a lattice L is the set of elements G of L such that [0̂, G] is irreducible. We have the key proposition.

Proof. It is enough to prove that if L is a supersolvable irreducible lattice, then d(1̂) is irreducible. In fact, here d(1̂) can be any modular coatom (it does not need to be part of a maximal chain of modular elements). We will prove the contrapositive of this statement. Assume that d(1̂) decomposes as a product [0̂, G₁] × ... × [0̂, Gₙ]. Denote by R the set of atoms which are not below d(1̂). We have the following lemma.

This implies H ≤ H₁ ∨ H₃, and therefore d(1̂) ∧ (H₁ ∨ H₃) is also equal to H. If the three atoms are different, then they must form a circuit, and thus they must all belong to some same Gᵢ. This concludes the initialization. Let us now assume that all the atoms d(1̂) ∧ (H₁ ∨ H₂) are below G₁, for instance. Let C be a circuit of arbitrary length, containing a unique element H not in R. Let H₁, H₂ be two atoms in C different from H. By the initialization part there exists a circuit through some atom H′ of d(1̂). If H′ is equal to H, then we have H ∈ G₁. If not, by Axiom (1) one can construct a circuit C′, containing H and not containing H₁, such that C′ is included in C ∪ {H′}. If C′ does not contain H′, then we are done by induction. If C′ contains H′, then since d(1̂) is modular, by Lemma 2.21 there exists a circuit C″ containing some element H″ in d(1̂) and such that C″ \ {H″} is contained in C′ ∩ R. By induction the atom H″ belongs to G₁. If H″ = H, we are done. Otherwise, by Axiom (1) there exists a circuit C‴ containing H, contained in C′ ∪ {H″} and not containing some element in C′ ∩ R. Reiterating this process, we get a circuit containing H and some elements in G₁, which proves that H belongs to G₁.

From this we deduce the second lemma.

Proof. Let H be some element contained in a circuit C contained in G₁ ∪ R ∪ {H}. If H is in R, we are done. If H is the unique element of C under d(1̂), then by the previous lemma we are also done. Otherwise, using Lemma 2.21 together with Axiom (1) (as we did in the previous lemma) gives us a circuit contained in d(1̂), containing H, with every other element in G₁. This proves that H belongs to G₁.

Finally, we get the concluding lemma.

Proof.
We will prove that every circuit is either contained in G₁ ∪ R or in some Gᵢ with i ≥ 2. Let C be a circuit in L. If C is contained in d(1̂), then the result comes from the isomorphism [0̂, d(1̂)] ≃ [0̂, G₁] × ... × [0̂, Gₙ]. If C ⊄ d(1̂) and C ∩ d(1̂) is a singleton, then by the previous lemma we have C ⊂ G₁ ∪ R. If C ⊄ d(1̂) and C ∩ d(1̂) is not a singleton, pick H any atom in C ∩ d(1̂). By iterating Lemma 2.21 as in the previous proof, we obtain a circuit C′ containing H, contained in d(1̂), and containing some elements in G₁; the previous lemma implies that this circuit is contained in G₁, which proves the result.

The above proposition and Theorem 2.25 imply the following theorem.

Theorem 3.12. Let L be a supersolvable lattice. The algebra FY(L, G_min) has a quadratic Gröbner basis and is therefore Koszul.

Proof. Any supersolvable lattice L decomposes as a product of irreducible supersolvable lattices L ≃ L₁ × ... × Lₙ. We then have a corresponding factorization of FY(L, G_min), and we can conclude by Proposition 3.8 and Theorem 2.25.

Chordal graphs

In [23] Stanley proved that if G is a chordal graph (meaning every cycle of length at least four in G has a chord), the geometric lattice associated to G is supersolvable. This result is based on the following lemma by Dirac [10].

Lemma 4.1 (Dirac, [10]). Every chordal graph admits a vertex v such that the graph induced by the neighbors of v is a complete graph.

Such vertices are called "simplicial". If we remove a simplicial vertex from a chordal graph, the graph we obtain is chordal, and it gives a coatom in the lattice associated with the original graph. This means we can reiterate the process and get a maximal chain in the lattice associated with a chordal graph. One can then check that this maximal chain contains only modular elements. We have a "built" variant of this result. Let us remind the reader that in Example 2.10 we have defined a built lattice (L_G, G_G) for every simple graph G, with L_G the usual graphical matroid associated to G and G_G the building set of connected subgraphs of G.

Lemma 4.2. Let G be a connected chordal graph. The built lattice (L_G, G_G) associated to G is supersolvable.

Proof. Let us choose a maximal chain of modular elements as in the last paragraph. By construction those elements are connected subgraphs of G. Let G′ be a closed connected subgraph of G. For any integer k less than the rank of G, the element d^k(1̂) ∧ G′ can be obtained from G′ by successively removing simplicial vertices of G′, and therefore it is connected.

The components of the extended modular operad

In [17], Losev and Manin introduced new moduli stacks L_{g,S} for stable curves of genus g with painted marked points indexed by S of two types (say "black" and "white"), where the points of type black are allowed to coincide and the points of type white are not. Those stacks are the components of the so-called "extended modular operad" (see [18]). We will deduce from Corollary 4.3 the following result.

Proof. It is part of the folklore that if S is a (colored) set with m white points and n black points and * is some chosen white point, then the moduli space L_{0,S} is isomorphic to the wonderful compactification of the graphical arrangement with respect to the building set of connected subgraphs (see 2.10). The corresponding graph, denoted G_{m−1,n}, has m + n − 1 vertices, with the first m − 1 vertices connected to every other vertex and the last n vertices connected only to the first m − 1 vertices. We notice that G_{m,n} is a chordal graph for every m and n, and therefore we can conclude by Corollary 4.3.
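The simplicial-vertex elimination used in the chordality arguments above is easy to test mechanically. The following Python sketch (a minimal illustration using networkx; the helper names are ours, not from any cited software) checks chordality by greedily removing simplicial vertices, exactly the process used to build a maximal chain of modular elements:

```python
import networkx as nx  # assumed available

def is_simplicial(G, v):
    """A vertex is simplicial if its neighborhood induces a complete graph."""
    nbrs = list(G.neighbors(v))
    return all(G.has_edge(u, w) for i, u in enumerate(nbrs) for w in nbrs[i + 1:])

def perfect_elimination_order(G):
    """Return an elimination order if G is chordal, else None.

    Dirac's lemma guarantees that a chordal graph always has a simplicial
    vertex and that removing it leaves a chordal graph, so this greedy
    removal succeeds exactly on chordal graphs."""
    H = G.copy()
    order = []
    while len(H):
        v = next((u for u in H if is_simplicial(H, u)), None)
        if v is None:
            return None  # some cycle of length >= 4 has no chord
        order.append(v)
        H.remove_node(v)
    return order

# A 4-cycle is not chordal; adding one chord makes it chordal.
C4 = nx.cycle_graph(4)
assert perfect_elimination_order(C4) is None
C4.add_edge(0, 2)
assert perfect_elimination_order(C4) is not None
```

In particular, the n-cycles for n ≥ 4 fail this test, which is consistent with the non-supersolvable examples discussed elsewhere in the text.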
Let us summarize here the main line of arguments giving the stated isomorphism. In the sequel [20], Manin remarked that the moduli stacks L_{g,S} are part of the formalism of Hassett spaces introduced by Hassett in [15]. In the latter article, the author introduces the moduli problem of curves with weighted points, where one fixes a "weight data" consisting of a vector (g, w) = (g, w₁, ..., wₙ) ∈ ℕ × ((0, 1] ∩ ℚ)ⁿ and one then seeks to parametrize the nodal curves of genus g with n marked points (sᵢ)_{i≤n} which are allowed to coincide "up to their weights", meaning that if the points sᵢ, i ∈ I coincide then we require ∑_{i∈I} wᵢ ≤ 1, and satisfying a (weighted) stability condition (see [15]). If the first p weights are 1 and the last n − p weights are small enough (precisely ∑_{i>p} wᵢ < 1), then the above condition means exactly that the first p points cannot coincide with any other point and the last n − p points can coincide only among themselves, and we recover the painted moduli problem of Losev and Manin. Hassett proved that there exists a Deligne–Mumford stack M̄_{g,w} representing the above weighted moduli problem. In genus 0 the stability condition can be simply stated: for any irreducible component T of the nodal curve, we require ∑_{i : sᵢ∈T} wᵢ + #{nodes of T} > 2. In addition, in genus 0 the moduli stack M̄_{0,w} is a smooth projective scheme (called a Hassett space). If the weights are either 1 or very small, then we call those Hassett spaces "heavy/light". Indices with weight 1 are called heavy and the other indices are called light. Work of Cavalieri–Hampe–Markwig–Ranganathan [7] shows that a "heavy/light" Hassett space is a tropical compactification of the projective complement M_{0,w} of the same graphical arrangement {zᵢ = zⱼ | i ≠ * heavy, j ≠ * heavy or light} (with * some chosen heavy index). To put it in a nutshell, this means that there exists an embedding of M_{0,w} in a torus Tⁿ together with a fan Σ in ℝⁿ having support the tropicalization of M_{0,w} and such that the closure of M_{0,w} in the toric variety X(Σ) is the Hassett space M̄_{0,w} (we refer to [19] for an introduction to tropical geometry). The fan Σ introduced in [7] is none other than the Bergman fan associated to the built lattice (L_{G_{m,n}}, G_{G_{m,n}}) (see [14] for the definition of the Bergman fan of a built lattice). In [24], Tevelev has shown that the tropical compactification of a projective hyperplane arrangement complement along the Bergman fan of some building set G of the corresponding lattice can in fact be identified with the wonderful compactification of De Concini and Procesi along the same building set G, which is the stated isomorphism.

Towards a classification of Koszul Feichtner–Yuzvinsky algebras

We would like to emphasize the fact that we know plenty of Feichtner–Yuzvinsky algebras FY(L, G) which admit quadratic Gröbner bases even though the built lattice (L, G) is not supersolvable, especially in low rank. For instance, if C₄ and C₅ are respectively the 4- and 5-cycles, then the lattices L_{C₄} and L_{C₅} are not supersolvable, but the built lattices (L_{C₄}, G_{C₄}) and (L_{C₅}, G_{C₅}) are so small that they still satisfy the key Lemma 3.4, and therefore their Feichtner–Yuzvinsky algebras are Koszul. However, for the wonderful presentation and for the order used in this article, the examples computed so far suggest to the author that the supersolvability condition should be close to necessary in high enough rank.
For instance, if Cₙ is the n-cycle, then one can easily check that for n ≥ 6 the relations of weight 2 do not form a Gröbner basis of the algebra FY(L_{Cₙ}, G_{Cₙ}) with respect to the order considered in this article. We still do not know whether FY(L_{C₆}, G_{C₆}) is Koszul. In order to produce a quadratic Gröbner basis of this algebra one would either need to consider a different order, or even a different presentation (which should also be different from the classical presentation, since one can show that no order on monomials induces a quadratic Gröbner basis for the classical presentation; the argument is completely analogous to that of Dotsenko [11] for the case of the building set of connected subgraphs of the complete graphs).

Let us also highlight the fact that even the question of quadraticity of Feichtner–Yuzvinsky algebras is not completely clear. We know that building sets having a flag nested set complex give quadratic Feichtner–Yuzvinsky algebras, but this condition is not necessary, as shown by the following example. Consider C₄ the 4-cycle with edges numbered from 1 to 4. The set of flats is a building set of L_{C₄} which has a non-flag nested set complex, since {2, 3, 4} is not nested and does not contain any proper subset which is not nested. However, the Feichtner–Yuzvinsky algebra of this built lattice is the algebra generated by h_{1̂} and h_{{1,2}}, with relations in which the first is a consequence of the last two, which are quadratic; hence the algebra is quadratic. This "pathology" has to do with the fact that the minimal building set of L_{C₄} (which is just the atoms together with the maximal element) does not have a flag nested set complex.

Proposition 5.1. Let L be a lattice such that (L, G_min) has a flag nested set complex. If G is a building set of L such that FY(L, G) is quadratic, then the nested set complex of (L, G) is flag.

Proof. Assume that we have non-comparable elements G₁, ..., Gₙ, with n ≥ 3, such that we have G := ⋁ᵢ Gᵢ ∈ G and Gᵢ ∨ Gⱼ ∉ G for all i ≠ j. If G is irreducible, then decomposing the elements G₁, ..., Gₙ into their irreducible factors immediately yields a contradiction to the flag-ness of the nested set complex of (L, G_min). If G is not irreducible, to lighten the notation let us assume G = 1̂ (just restrict to the interval [0̂, G]). We have some decomposition L ≃ [0̂, F₁] × ... × [0̂, Fₚ] with irreducible [0̂, Fᵢ]'s and p ≥ 2. Let j be some index at most p. By isomorphism (10) we have Fⱼ = ⋁ᵢ (Fⱼ ∧ Gᵢ). If we decompose the elements Fⱼ ∧ Gᵢ as the join of their factors in G, we can see that there are at most two indices i such that we have Fⱼ ∧ Gᵢ ≠ 0̂ (otherwise we get a new family of non-comparable elements contradicting the flag-ness of N(L, G), but this time with join Fⱼ, which is irreducible). In addition, there cannot be two such indices, because if say Fⱼ = (G_{i₁} ∧ Fⱼ) ∨ (G_{i₂} ∧ Fⱼ) with i₁ ≠ i₂, then Fⱼ is an element of G below G_{i₁} ∨ G_{i₂} which is neither below G_{i₁} nor below G_{i₂}, which contradicts the fact that we have G_{i₁} ∨ G_{i₂} ∉ G. In conclusion, for each j there is exactly one i such that we have Gᵢ ∧ Fⱼ ≠ 0̂, and this implies that we in fact have Fⱼ ≤ Gᵢ. By using isomorphism (10) one more time, we get that each Gᵢ is a join of some Fⱼ's, and this forms a partition of the Fⱼ's. Finally, if FY(L, G) is a quadratic algebra, then the relation (h_{1̂} − h_{G₁}) ⋯ (h_{1̂} − h_{Gₙ}) can be written as a sum of relations of weight 2, multiplied by monomials.
One of the terms of this sum will be of the form h_{G′₁} h_{G′₂} h_{1̂}^{n−2}, with G′₁ and G′₂ two elements in G with join 1̂. By isomorphism (10) we have G′₁ = (G₁ ∧ G′₁) ∨ ... ∨ (Gₙ ∧ G′₁), and similarly for G′₂. If there are at least three indices i such that we have Gᵢ ∧ G′₁ ≠ 0̂, then decomposing the elements Gᵢ ∧ G′₁ in G yields a new obstruction to the flag-ness of N(L, G), and we can conclude by induction. If there are two indices i₁ ≠ i₂ such that we have G′₁ ∧ G_{i₁} ≠ 0̂ and G′₁ ∧ G_{i₂} ≠ 0̂, then G′₁ contradicts the fact that we have G_{i₁} ∨ G_{i₂} ∉ G. Finally, we get G′₁ = G_{i₁} and G′₂ = G_{i₂} for some i₁, i₂, which contradicts G_{i₁} ∨ G_{i₂} ∉ G. As we know from Proposition 3.8, if a lattice L is supersolvable, the nested set complex associated to the minimal building set is flag.

Conceptualizing the proofs of Koszulness

It would be very beneficial if one could explain in a more conceptual way the strategy for proving the Koszul property introduced by Dotsenko and extended in this paper. In this direction, it could be of interest to check whether an analogous strategy could reprove the following classical theorem of Yuzvinsky. The corresponding (co)operad would be the cooperad of Orlik–Solomon algebras introduced in [8]. Having this other example may lead to a better understanding of the phenomena at play and perhaps give new applications. It would also be interesting to find an operadic characterization of supersolvable lattices, which would explain why they behave so well with respect to the operadic structure.
Small allelic variants are a source of ancestral bias in structural variant breakpoint placement

High-quality genome assemblies and sophisticated algorithms have increased sensitivity for a wide range of variant types, and breakpoint accuracy for structural variants (SVs, ≥ 50 bp) has improved to near basepair precision. Despite these advances, many SVs in unique regions of the genome are subject to systematic bias that affects breakpoint location. This ambiguity leads to less accurate variant comparisons across samples, and it obscures true breakpoint features needed for mechanistic inferences. To understand why SVs are not consistently placed, we reanalyzed 64 phased haplotypes constructed from long-read assemblies released by the Human Genome Structural Variation Consortium (HGSVC). We identified variable breakpoints for 882 SV insertions and 180 SV deletions not anchored in tandem repeats (TRs) or segmental duplications (SDs). While this is unexpectedly high for genome assemblies in unique loci, we find read-based callsets from the same sequencing data yielded 1,566 insertions and 986 deletions with inconsistent breakpoints also not anchored in TRs or SDs. When we investigated causes for breakpoint inaccuracy, we found sequence and assembly errors had minimal impact, but we observed a strong effect of ancestry. We confirmed that polymorphic mismatches and small indels are enriched at shifted breakpoints and that these polymorphisms are generally lost when breakpoints shift. Long tracts of homology, such as SVs mediated by transposable elements, increase the likelihood of imprecise SV calls and the distance they are shifted. Tandem Duplication (TD) breakpoints are the most heavily affected SV class with 14% of TDs placed at different locations across haplotypes. While graph genome methods normalize SV calls across many samples, the resulting breakpoints are sometimes incorrect, highlighting a need to tune graph methods for breakpoint accuracy. The breakpoint inconsistencies we characterize collectively affect ~5% of the SVs called in a human genome and underscore a need for algorithm development to improve SV databases, mitigate the impact of ancestry on breakpoint placement, and increase the value of callsets for investigating mutational processes.
The human reference genome (International Human Genome Sequencing Consortium, 2001; Schneider et al., 2017) hosts annotations including genes (Frankish et al., 2021; O'Leary et al., 2016), regulatory regions (ENCODE Project Consortium, 2012; ENCODE Project Consortium et al., 2020), and repeats (Bailey et al., 2002; Benson, 1999; Smit, 2013-2015), and it has become a universal coordinate system for describing genetic alterations across populations (Abel et al., 2020; Audano et al., 2019; Beyter et al., 2021; Collins et al., 2020; Ebert et al., 2021; International HapMap Consortium et al., 2007; Karczewski et al., 2020; Sudmant et al., 2015; The 1000 Genomes Project Consortium, 2015) and diseases (ICGC/TCGA Pan-Cancer Analysis of Whole Genomes Consortium, 2020; Taliun et al., 2021; Turner et al., 2017). New high-quality references are emerging for humans (Nurk et al., 2022) and a growing number of other species (Alonge et al., 2020; Ferraj et al., 2023; Jebb et al., 2020; Li et al., 2023; Mao et al., 2021; Mouse Genome Sequencing Consortium et al., 2002), which play fundamental roles in modern genomics.

Variant discovery is largely based on aligning reads or assemblies to a reference genome. This is used to identify single nucleotide variants (SNVs), small insertions and deletions (indels), and structural variants (SVs) including insertions and deletions ≥ 50 bp, inversions, complex rearrangements, and chromosomal translocations. Imprecise SV breakpoints affect comparisons across samples, and while new methods are improving comparisons (Ebert et al., 2021; English et al., 2022; Kirsche et al., 2021), error-free merging across many haplotypes has not yet been attained. Additionally, breakpoint features such as microhomology and nearby variants in-cis are important signatures for predicting mechanisms of formation (Beck et al., 2015; Carvalho and Lupski, 2016; Carvalho et al., 2011; Vogt et al., 2014). Repetitive sequences often mediate SVs and can make the determination of precise breakpoints challenging.

While contiguous high-accuracy assemblies are becoming routine, we find that SV breakpoints are still inconsistently placed across phased haplotypes, and many breakpoints do not represent the true site of rearrangements, potentially impeding downstream analyses. To quantify the effect on modern long-read variant discovery approaches, we re-analyze a recent callset from 64 phased haplotypes recently released by the Human Genome Structural Variation Consortium (HGSVC) (Ebert et al., 2021). With pangenomes recently released by the Human Pangenome Reference Consortium (HPRC) (Liao et al., 2023), we identify discordance between linear- and graph-based reference approaches. We determine reasons why breakpoints can differ between assemblies and suggest approaches for improving both mechanistic inference and variant comparisons across samples.

Breakpoint offsets are prevalent in long-read SV callsets

We examined breakpoint placement for SVs across 64 phased haplotypes derived from 32 diverse samples released by the HGSVC (Ebert et al., 2021).
In that study, variants were called independently on each assembled haplotype against the GRCh38 reference using minimap2 (Li, 2018) and merged to a multi-haplotype, nonredundant callset. For each pair of haplotypes (2,016 combinations of 64 haplotypes), we find an average of 20% of insertions and 15% of deletions have different breakpoints between the pair. When SVs anchored in tandem repeats (TRs) and segmental duplications (SDs) are excluded (Methods), we find 4.4% of insertions, and fewer deletions, have inconsistent breakpoints. The assemblies we are investigating (average N50 > 19.5 Mbp) are capable of spanning full-length human TEs, which cluster around 300 bp and 6 kbp.

Inconsistent breakpoints in unique regions affect a small number of variants per haplotype pair, but the effect across multiple haplotypes and samples is greater. In the merged callset across all 64 haplotypes, we find 5.9% of insertions and 3.1% of deletions in unique loci disagree on breakpoint location (Table 1). While many of these differences are small (insertions vary by a median of 2.2 bp and deletions by 4.9 bp), this results in a non-trivial effect on SV representation (Fig 1A, Table 1).

Finally, the number of distinct breakpoints for each variant does not scale linearly with the number of haplotypes harboring the SV (AC: allele count) (Fig 1B). This suggests that variant breakpoints are placed consistently across many haplotypes but are affected for a subset of haplotypes.

Diversity is the main driver of differential breakpoint placement

To examine whether random sequence errors may affect SV quality, we compared CLR (21 genomes) with HiFi (11 genomes) in the HGSVC callset. We find a marginally significant enrichment for differential breakpoints in CLR vs HiFi for insertions (4.40% vs 4.29%, p = 0.025, Student's t-test) and no enrichment for deletions (1.75% vs 1.77%, p = 0.52, Student's t-test), which we confirmed with permutation tests (p = 0.012 insertions, p = 0.74 deletions, 100,000 permutations).
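A minimal sketch of the label-permutation logic used for these confirmations, under assumed data structures (a dictionary of pairwise offset proportions and a haplotype-to-group labeling); this is our illustration, not the study's actual pipeline code:

```python
import random

def permutation_test(pairs, labels, n_perm=10_000, seed=0):
    """pairs: {(hap_i, hap_j): offset_proportion}; labels: {hap: group}."""
    rng = random.Random(seed)
    haps = sorted(labels)

    def stat(assign):
        # difference in mean offset proportion: cross-group minus within-group
        same = [v for (i, j), v in pairs.items() if assign[i] == assign[j]]
        diff = [v for (i, j), v in pairs.items() if assign[i] != assign[j]]
        return sum(diff) / len(diff) - sum(same) / len(same)

    observed = stat(labels)
    shuffled = [labels[h] for h in haps]
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)  # permute group labels across haplotypes
        if stat(dict(zip(haps, shuffled))) >= observed:
            hits += 1
    return observed, (hits + 1) / (n_perm + 1)
```

The same routine applies whether the labels encode sequencing technology (CLR vs HiFi) or superpopulation, as in the ancestry analysis below.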
We next asked whether sequence variation affected placement. To investigate this, we stratified the callset by ancestry across 64 HGSVC haplotypes derived from all five 1000 Genomes ancestral superpopulations, composed of African, admixed American, East Asian, European, and South Asian ancestry. We observed that variant breakpoints differed more often when a pair of samples were derived from different superpopulations for insertions (4.49% vs 3.99%, p = 2.44×10⁻⁴⁰, Welch's t-test, Cohen's D = 0.73) (Fig 1C). Deletions were also increased, but the effect did not reach significance (1.76% vs 1.71%, p = 0.069, Welch's t-test) (Fig 1C). Furthermore, there is a noticeable increase in offset distance when haplotypes are derived from different ancestral backgrounds (Fig 1D); we confirmed these results with permutation tests (p < 1×10⁻⁵ insertions, p = 0.041 deletions, 10,000 permutations). Our results suggest that allelic polymorphisms in or near SVs are a driver of breakpoint differences, which reveals a source of ancestral bias in modern SV callsets.

Figure 1 (caption, panels B–D): (B) The number of unique breakpoints for each variant (vertical axis) does not scale with the number of haplotypes (horizontal axis). A blue line represents the x = y diagonal. Scatterplot points were jittered in each axis uniformly from -0.5 to 0.5 to show density. (C) For any pair of haplotypes, the proportion of offset SVs is stratified by same superpopulation (green) or different superpopulation (violet). The difference in means is significant for both insertions and deletions (Student's t-test of means), but a greater effect is seen for insertions. Notches indicate a 95% confidence interval around the median. n.s.: not significant, *: 1e-3 < p ≤ 1e-2, **: 1e-4 < p ≤ 1e-3, ***: p ≤ 1e-4. (D) A cumulative distribution of breakpoint offset for all haplotype pairs. Most variants in both haplotypes share the same breakpoint (upper y-axis). For variants with at least 1 bp offset (lower y-axis), the cumulative proportion of matched calls decreases with increasing breakpoint distance. When both samples come from a different superpopulation (violet), larger differences between breakpoints are observed than when haplotypes come from the same superpopulation (green). When both haplotypes come from African samples (gold), breakpoint distances are elevated, but to a lesser extent than different ancestral backgrounds.

Breakpoint offsets are more prevalent with TE-mediated SVs

Transposable elements (TEs) create tracts of homology throughout the genome resulting in TE-mediated rearrangements (TEMRs) (Balachandran et al., 2022; Han et al., 2008; Sen et al., 2006). TEs from the same family have highly similar sequences, and so there are many choices for breakpoint placement along TE copies (Fig. 2A). While TEs may provide the homology necessary for duplications and deletions by non-allelic homologous recombination (NAHR), most exhibit only short tracts of breakpoint homology and appear to be mediated by other repair processes (Balachandran et al., 2022). Therefore, accurately placing SV breakpoints within TEMRs is essential for understanding the mutational mechanisms underlying their formation (Morales et al., 2015).

In the merged HGSVC SV callset, we find that 112 SV insertions and 119 SV deletions with differential breakpoints in unique loci are likely TEMRs (8.5% and 20.4% of differential variants, respectively) (Methods). We find TEMR insertions were significantly enriched for offset breakpoints (odds ratio (OR) = 4.18, p = 3.17×10⁻²⁵, Fisher's exact test (FET)), as were TEMR deletions (Fig 2B).

Figure 2 (caption, partial): Notches indicate a 95% confidence interval around the median. Red arrows and numbers indicate the number of outlier points above the horizontal axis maximum. n.s.: not significant, *: 1e-3 < p ≤ 1e-2, **: 1e-4 < p ≤ 1e-3, ***: p ≤ 1e-4.
Tandem duplications are heavily affected by differential breakpoints

Tandem duplications (TDs) are a common SV type where a duplicate copy is inserted adjacent to its template. TDs may be driven by existing homology, such as NAHR, or occur in regions with little to no homology (Arlt et al., 2009; Lee et al., 2007; Li et al., 2020; Menghi et al., 2016; Willis et al., 2017). With short reads, TDs are detected by elevated copy number of the duplicated sequence combined with paired-end evidence at the duplication breakpoint, revealing the duplicated reference region (Alkan et al., 2011). However, long-read callers often call TDs as insertions, especially when assemblies are used. A TD is highly homologous with itself, posing a significant problem for alignment algorithms because there are many valid choices for the breakpoint placement. If the breakpoint is not placed on one end of a reference copy, the insertion sequence contains a chimera of both duplication copies and the true breakpoint is embedded somewhere inside the insertion sequence (Fig 2C). Annotating a TD should be as easy as re-aligning the insertion sequence to the reference and determining if it maps adjacent to the insertion breakpoint; however, rotated TDs align in two separate fragments (Fig. 2D), and current alignment programs often miss one or both fragments. To better annotate SVs as TDs, we re-aligned SV sequences with BLAST (Altschul et al., 1990) (Methods). We found 1,843 SV insertions were TDs, of which 261 (14.2%) were shifted and rotated, leading to the duplication mapping to two separate BLAST records on each side of the insertion site. We found 17 reference TDs with a deleted copy, of which 8 (47.1%) were shifted and rotated, mapping to both sides of the deletion SV.

We find that TD insertions are more likely to have differential breakpoints (OR = 0.55, p = 1.45×10⁻⁹, FET), but the effect on TD deletions is small (OR = 0.05, p = 3.14×10⁻⁷, FET). Because homology runs across the full length of a TD, we observe greater average offset distances for insertions (9.37 bp TD vs 2.20 bp non-TD, p = 1.07×10⁻¹³, Welch's t-test, Cohen's D = 0.45). A large increase in distance for deletions failed to reach significance (741.9 bp TD vs 12.8 bp non-TD, p = 0.19, Welch's t-test, Cohen's D = 2.23) (Fig. 2E). These results appear to suggest that TD deletions are highly susceptible to breakpoint shifts; however, the low number of these events and the large range of offsets across all deletions make these observations difficult to validate.
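The rotation logic described in this section can be illustrated with a simple string test: an insertion is a (possibly rotated) tandem duplication of an adjacent reference copy exactly when it is a cyclic rotation of that copy, i.e. a substring of the copy doubled. This Python sketch is our simplification of the BLAST-based procedure (Methods); the interface is an assumption:

```python
def classify_td(ins_seq, ref, ins_pos):
    """Return 'exact', 'rotated', or None for an insertion called at ins_pos."""
    n = len(ins_seq)
    right_copy = ref[ins_pos:ins_pos + n]
    left_copy = ref[ins_pos - n:ins_pos] if ins_pos >= n else ''
    for copy in (right_copy, left_copy):
        if len(copy) != n:
            continue
        if ins_seq == copy:
            return 'exact'    # breakpoint placed on a copy boundary
        if ins_seq in copy + copy:
            return 'rotated'  # true breakpoint embedded inside the call
    return None
```

The doubled-string trick captures why a rotated TD aligns as two fragments: the two halves of the insertion match opposite ends of the single reference copy.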
Small polymorphisms surround offset SV breakpoints

Differential breakpoints occur in regions with tracts of homology, such as TEMRs and TDs. In cases of perfect homology, the actual breakpoint could have occurred anywhere in the homologous region. By convention, many aligners, such as minimap2 (Li, 2018), push breakpoints to the left, yielding more consistent variant calls across haplotypes. As we have observed, variation in breakpoint placement increases when haplotypes are derived from different ancestral superpopulations; therefore, we reasoned that small allelic polymorphisms around SV breakpoints might influence alignments.

To identify small polymorphisms at SV breakpoints that might influence alignments, we extracted the offset region around breakpoints from each haplotype assembly and compared them (Fig 3A) (Methods). For SV insertions in unique regions with shifted breakpoints, we find on average 5.0 small variants on the left breakpoint vs 5.2 on the right breakpoint (p = 1.62×10⁻¹⁰, Welch's t-test) (Fig 3B).

When polymorphisms occur in homologous regions, it creates differences between the haplotype and the reference sequence, which penalizes the alignment score for the haplotype more diverged from the reference. If the variant can be shifted across a homologous region, a better alignment score may be achieved by moving the breakpoint such that small polymorphisms are pushed into the unaligned insertion sequence. As a consequence, we observe a large peak of small polymorphisms on the upstream breakpoint shifted 1 bp inside the insertion (Fig 3B). Not only does this drive breakpoint disagreements, but these small polymorphisms near SVs are systematically removed from variant callsets.
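As a toy stand-in for the swalign-based window comparison described in Methods, Python's difflib can localize mismatches and indels between the two extracted haplotype windows; the interface and window bookkeeping here are our assumptions, not the study's code:

```python
from difflib import SequenceMatcher

def window_variants(win_right, win_left):
    """Return non-matching opcodes between two haplotype windows.

    win_right/win_left: sequence windows extracted around the right-most
    and left-most breakpoint placements (with 50 bp flanks, as above).
    Each opcode is (tag, i1, i2, j1, j2); positions can then be binned
    into left flank, breakpoints, differential region, and right flank.
    """
    sm = SequenceMatcher(None, win_right, win_left, autojunk=False)
    return [(op, i1, i2, j1, j2)
            for op, i1, i2, j1, j2 in sm.get_opcodes() if op != 'equal']
```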
For each SV, we find that the number of breakpoints 225 increases the number of different microhomology calls for insertions (ρ = 0.72, p < 1×10 -100 , 226 Spearman rank-order correlation (Spearman)) and deletions (ρ = 0.87, p < 1×10 -100 , Spearman) 227 indicating that breakpoint changes affect homology annotations in almost all cases (Fig. S1). 228 For insertions with consistent breakpoints (n = 6,855), microhomology annotations varied by 229 2.16 bp on average, which rises to 21.91 bp on average with inconsistent breakpoints (n = 725) 230 (p = 9.46×10 -15 , Welch's t-test, Cohen's D = 0.43). We see a similar effect on deletion 231 microhomology, which varies by 0.01 bp across haplotypes with consistent breakpoints (n = 232 3,399) and rises to 19.27 bp across haplotypes with inconsistent breakpoints (n = 172) (p = 233 1.01×10 -16 , Welch's t-test, Cohen's D = 1.77) (Fig. S2). 234 As a result of imprecise breakpoints, actual breakpoint homologies necessitate manual 235 reconstruction, which is tedious task and cannot easily scale with modern whole-genome 236 analyses. Therefore, precise mechanisms are difficult to annotate at scale. For example, while 237 SVs mediated by mobile elements with at least 85% identity are generally thought to be 238 mediated by non-allelic homologous recombination (NAHR) (Lam et al., 2010), a closer 239 examination of breakpoints using modern long-read data shows that at least 20% have 240 breakpoint features inconsistent with NAHR (Balachandran et al., 2022). 241 242 . CC-BY-ND 4.0 International license available under a (which was not certified by peer review) is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. It is made The copyright holder for this preprint this version posted June 26, 2023. ; https://doi.org/10. 1101 Read-based approaches have less consistent breakpoints 243 We examined the effects of offset breakpoints from aligned reads using PBSV. In our callset, 244 11,906 SV insertions and 5,501 SV deletions in unique loci were callable across the HGSVC 245 samples. We find that 13% of insertions and 18% of deletions are offset across samples when 246 called from read alignments, which is higher than 6% insertions and 3% deletions we observe 247 from assembly-based callsets for the same SVs. We hypothesize that assemblies are more 248 consistent because a single polished representation of the region is aligned where individual 249 reads may be subject to more systematic bias, for example read errors and SVs on the edges of 250 individual reads. 251 While short-read callers typically rely on read alignments, some produce breakpoint assemblies 252 and may improve breakpoint accuracy. A recent study of TEMRs (Balachandran et al., 2022) 253 finds that MANTA (Chen et al., 2016) places SVs more accurately than other short-read callers, 254 which may be a result of breakpoint assemblies MANTA performs. 255 256 Pangenomes 257 Pangenome graphs are constructed from multiple haplotypes and can be used to negate 258 differences in alignments. The Pangenome Graph Builder (PGGB) (Garrison et al., 2023) 259 constructs graphs from multiple haplotypes simultaneously, and the Minigraph-Cactus (MC) 260 approach iteratively adds haplotypes to a graph (Hickey et al., 2022). Both were featured in the 261 recent pangenome drafts constructed from 94 phased assemblies derived from 47 diverse 262 samples recently released by the Human Pangenome Reference Consortium (HPRC) (Liao et 263 al., 2023). 
Across unique loci, we identified all SVs that were present in more than one haplotype and matched an SV identified by MC (4,851 insertions, 3,240 deletions). We find that the MC representation at one example locus implies apparent point mutations of the kind interpreted as signatures of DNA repair (Beck et al., 2019; Carvalho and Lupski, 2016; Deem et al., 2011), although no point mutations were actually generated by this SV (Fig 4A). This pattern was observed frequently in the MC callset.

Many differences in the PGGB SVs are attributable to different breakpoint choices among largely equivalent representations. For example, a 162 bp VNTR expansion (27 bp motif) with one imperfect reference copy is inserted to the right of the reference copy rather than the left (Fig S3). More importantly, we find a distinct pattern of PGGB deleting and re-inserting the same bases when calling variants in loci without clean breakpoints. In one example, minimap2 represents a 101 bp net gain as a 109 bp insertion with three deletions totaling 8 bp, PGGB calls a 118 bp insertion with a single 17 bp deletion, and MC calls a 105 bp insertion, a 5 bp insertion, two deletions totaling 9 bp, and a SNP (Fig 4B). Further inspection of the breakpoints shows that 13 bases deleted by PGGB are re-inserted as part of the SV insertion (Fig 4C). This SV insertion sequence does not align to the human reference, but is present in Pan troglodytes on chromosome 2 and is also in other primate genomes. Therefore, the insertion is likely ancestral and the deletion became the reference allele by chance. The minimap2 representation of this locus appears to be the most likely biological explanation for this event, with small template switches within the replication fork, which is characteristic of some repair mechanisms (Carvalho et al., 2013), most notably MMBIR (Hastings et al., 2009). Given that the insertion is ancestral, the deletion and re-insertion of bases is less likely.

In addition to creating different representations of SVs, the area between breakpoints is often filled with small variants that are annotated differently across the haplotypes, which may impact the interpretation of variants. These different breakpoints intersect coding sequences for 26 genes on average in MC and 5 genes on average in PGGB, with additional discrepancies in UTRs and ncRNAs (Table 2). For example, we find a 180 bp insertion in ESYT3 where minimap2 and PGGB place the breakpoint in an intron, but MC places it in an exon (Fig 5).
Discussion

Advances in long-read sequencing coupled with new phased assembly and variant detection methods have increased the number of detectable SVs dramatically, from fewer than 10,000 to more than 25,000 per diploid genome, and these advances continue to rival short-read technology by reducing costs, increasing availability, and improving read quality. In addition to detecting more SVs, long reads also capture the full SV sequence, which is important for detailed analyses of non-reference sequences and has already proven to be transformative in mobile element characterization (Ebert et al., 2021; Ferraj et al., 2023).

Assemblies have improved breakpoint accuracy; however, systematic errors still exist where breakpoints span homologous regions. This effect is especially large for tandemly duplicated sequence and SVs anchored in TEs. Since a majority of SVs have some form of homology around breakpoints (Ebert et al., 2021; Lam et al., 2010), the effect of differential breakpoints is potentially large even outside highly repetitive sequences. However, modern aligners along with assembly-based SV detection consistently place SVs, effectively reducing potential biases, but not eliminating them.

While errors in sequence and assembly do contribute to breakpoint differences, the most significant driver is the presence of small allelic changes near SV breakpoints, largely due to ancestral differences. This includes variation within insertions, which accumulate polymorphisms over generations. Short reads are subject to known reference biases where distant haplotypes align less confidently to alternate reference alleles (Brandt et al., 2015; Degner et al., 2009). Although small polymorphisms are spanned by much longer flanking sequences with long reads and assemblies, this reference bias now manifests as differential breakpoints.

On a large scale, these small differences have little effect on callset quality because modern variant merging and comparison tools do allow for imprecise breakpoints; however, they do impact breakpoint annotations. This impedes precise mechanistic inferences, since shifting a breakpoint changes microhomology annotations by an average of more than 20 bp and leads to a lack of polymorphisms flanking SVs. These polymorphisms can be signatures of the DNA repair causing the rearrangement (Beck et al., 2019; Carvalho and Lupski, 2016; Deem et al., 2011). As a result, callsets are still imprecise and incomplete, even within unique loci, despite being covered by long, contiguous, high-quality assemblies.

While pangenome graphs normalize SV loci across samples, additional developments are required to improve breakpoint precision. PGGB agrees with minimap2 for many SVs; some method tuning could potentially improve PGGB for SV breakpoints in unique loci, whereas more substantial improvements may be required for MC. As graph methods mature, they hold promise for calling variants at scale across divergent haplotypes.
Importantly, rare and somatic variants will not be in graph references based on population samples, and calling variants against the closest reference path will face many of the same challenges as methods based on linear references. Improving breakpoints in graph representations and linear references will ultimately increase the utility of pangenome references.

In this study, we investigate breakpoint disagreements in unique regions of the human genome, where long reads and assemblies span rearrangements with megabases of flanking sequence and few assembly errors. Complex genomic loci are dense with repeats and breakpoint homology, and our results suggest that these loci present with larger breakpoint discrepancies. While these loci have added complexity from larger, more frequent, and more complex rearrangements as well as more collapsed reference loci (Vollger et al., 2022), more rigorous methods for precise rearrangement breakpoints may help solve these regions more effectively.

Both simple and complex loci provide a rich opportunity for new methods to improve alignments, variant calling, and variant annotation. While current sequencing data captures these events with few errors, the limitations of current methods lead to systematic biases that affect the accuracy of variant calls and limit their utility for detailed downstream analyses. While long reads continue to gain in length and fidelity, the tools used to analyze them must keep pace.

Methods

Statistical analysis

Summary statistics, such as mean and SD, were computed with Python numpy (v1.22.4), and statistical tests including Student's t-test, Welch's t-test, F-test, and Fisher's exact test were carried out with scipy (v1.9.3). All tests were two-tailed. F-tests were used to determine whether a Student's t-test was carried out (F-test p-value ≥ 0.01) or a Welch's t-test (F-test p-value < 0.01). P-values less than 1.0×10⁻¹⁰⁰ are reported as p < 1.0×10⁻¹⁰⁰. Extremely low p-values less than the smallest floating point value Python can represent (approximately 1×10⁻³⁰⁸ on our system) are also reported as p < 1×10⁻¹⁰⁰ in this manuscript.
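The test-selection rule above can be written compactly. scipy exposes no single-call variance F-test, so the sketch below (our code, using the 0.01 cutoff stated in the text) computes it from the F distribution before choosing between Student's and Welch's t-test:

```python
import numpy as np
from scipy import stats

def compare_means(a, b, alpha_var=0.01):
    a, b = np.asarray(a, float), np.asarray(b, float)
    f = np.var(a, ddof=1) / np.var(b, ddof=1)
    # two-tailed p-value for the variance-ratio F statistic
    p_f = 2 * min(stats.f.cdf(f, len(a) - 1, len(b) - 1),
                  stats.f.sf(f, len(a) - 1, len(b) - 1))
    equal_var = p_f >= alpha_var  # Student's if variances look equal, else Welch's
    return stats.ttest_ind(a, b, equal_var=equal_var)
```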
Microhomology. The number of unique breakpoints was compared to the number of unique microhomology calls per merged variant. Neither the number of unique breakpoint locations nor the number of unique microhomology calls models a normal distribution (p < 1×10⁻¹⁰⁰, scipy.stats.normaltest, based on D'Agostino and Pearson's test), so we computed correlation using the Spearman rank-order correlation coefficient.

Genome reference

We use the hg38-NoALT reference published with the HGSVC callset (Ebert et al., 2021) (ftp://ftp.1000genomes.ebi.ac.uk/vol1/ftp/data_collections/HGSVC2/technical/reference/20200513_hg38_NoALT/). This reference is the full primary assembly of the human genome build 38 (GRCh38/hg38) including unplaced and unlocalized contigs, but it does not include patches, alternates, or decoys.

We acquired the version 2 (Freeze 4) merged callset from HGSVC (Ebert et al., 2021). We retained the same 32 population samples, excluding the trio children used in the HGSVC publication. Frequencies and allele counts were adjusted to exclude child samples in the merged callset. We removed variants on unplaced and unlocalized contigs of the reference, including only variants on primary chromosome scaffolds. A merging bug in SV-Pop allowed for some long-range intersects, and we removed these merged variants. To accomplish this filtering, we required either (a) the maximum offset is less than or equal to the merged SV length, or (b) the maximum offset difference was less than 400 bp (200 bp in either direction) and the maximum SV length difference was not greater than 50% of the maximum SV length. These parameters mirror the expected results from the merging process without the long-range bug.

We obtained the Tandem Repeats Finder (TRF) (Benson, 1999) and RepeatMasker (Smit, 2013-2015) annotations from the UCSC Genome Browser (retrieved 2023-01-27, tracks "simpleRepeat" and "rmsk", respectively) (Kent et al., 2002). From TRF, we used all loci. From RMSK, we used all loci annotated as "Low_complexity" or "Simple_repeat". RMSK and TRF records within 200 bp were merged with BedTools merge (v2.30.0) (Quinlan and Hall, 2010).

For TE annotations, we retained only records with repeat class "LINE", "SINE", or "LTR" and with a minimum size of 100 bp. For deletions, we intersected the reference locations for each event independently (i.e. upstream breakpoint location and downstream breakpoint location) and annotated deletions as TEMRs if (a) both breakpoints intersected a TE annotation of the same type (e.g. Alu, ERV1, ERVK, L1, L2, etc.), and (b) each side of the breakpoint intersected a different TE (i.e. distinct TE events). For SV insertions, we intersected the reference breakpoint with the RMSK track. We additionally obtained RepeatMasker annotations run on the merged callset by HGSVC (Ebert et al., 2021). Alignment records were discarded if 50% or more of the record intersected the TR and RMSK filter. We further filtered BLAST hits to include only records that mapped within 10% of the SV length from the insertion site or deletion breakpoints (e.g. for a 1 kbp INS, a 100 bp window around the insertion site), with a minimum of 100 bp for small SVs. For deletions, we removed the deletion sequence alignment (i.e. remapping produces an alignment over the deletion). Alignments less than 30 bp were also excluded. Some redundant overlapping alignments remained and appeared to be driven by small TRs that were not in the reference; these were removed by keeping only the longest record if records overlapped by 80% or more. The same 80% overlap filter was applied both in reference space, using aligned reference coordinates, and in SV sequence space, using coordinates from the SV sequence (i.e. the first base of the SV sequence is position 0). We selected SVs where the total number of aligned bases on each side of the breakpoint was within 90% of the total SV size, ensuring records with large gaps spanning more bases than were aligned did not contribute to the SV size calculation. We did not explicitly select records for the expected alignment pattern (i.e. upstream SV sequence mapping downstream of the SV breakpoint and downstream SV sequence mapping upstream of the SV), although all the records left after the filtering process did exhibit this pattern.
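A minimal sketch of the 80% overlap deduplication just described, with the overlap fraction taken relative to the shorter record (our assumption; the interval format is illustrative):

```python
def overlap_frac(a, b):
    """Fraction of the shorter interval covered by the overlap of a and b."""
    ov = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    return ov / min(a[1] - a[0], b[1] - b[0])

def dedup_hits(hits, min_frac=0.8):
    """Keep the longest record among hits overlapping by >= min_frac."""
    kept = []
    for hit in sorted(hits, key=lambda h: h[1] - h[0], reverse=True):
        if all(overlap_frac(hit, k) < min_frac for k in kept):
            kept.append(hit)
    return kept
```

The same routine can be applied twice, once with reference coordinates and once with SV-sequence coordinates, as in the text.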
Our goal was to identify small differences between haplotypes that cause variant breakpoints to be placed differently. For each haplotype pair, we selected SV insertions and deletions with breakpoints placed at different sites and with breakpoints in unique loci (not TR or SD). We extracted the haplotype sequence from around the assembly, including a 50 bp flank on each side, and we extended one end appropriately to add flank so that, in the absence of other small variants, both sequences should start on the same base.

The sequences were aligned so that the right-most variant was the reference and the left-most variant was the query, although either order should produce similar results. Sequences were aligned with the "swalign" Python package (v0.3.7) using a global alignment with match, mismatch, and gap scoring parameters. Aligned ("M" CIGAR operations) records were transformed to match/mismatch ("=" and "X" CIGAR operations), and using the known flanks added to each, we assigned variants to left flank, left breakpoint (intersecting the breakpoint), differential region, right breakpoint (intersecting the breakpoint), and right flank, along with their relative position in each category.

Microhomology

Microhomology is the span of perfectly matching bases at each end of a breakpoint, for example, the perfect homology at sites of ectopic recombination (i.e., NAHR), homology-directed repair, replication-based repair, or alt-EJ. We measured homology at breakpoints using an algorithm in PAV previously validated as part of a TEMR project (Balachandran et al., 2022), where the region upstream of an SV sequence is matched with the downstream reference or contig, and the region downstream of the SV sequence is matched with the upstream reference or contig.
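A simplified version of this homology measurement (ours, not PAV's implementation): count perfectly matching bases between each end of the SV sequence and the flank on the opposite side of the breakpoint, and sum the two spans:

```python
def microhomology(sv_seq, up_flank, down_flank):
    """up_flank/down_flank: sequence immediately before/after the SV."""
    right = 0  # SV prefix matching sequence just downstream of the SV
    while (right < len(sv_seq) and right < len(down_flank)
           and sv_seq[right] == down_flank[right]):
        right += 1
    left = 0   # SV suffix matching sequence just upstream of the SV
    while (left < len(sv_seq) and left < len(up_flank)
           and sv_seq[-1 - left] == up_flank[-1 - left]):
        left += 1
    return left + right
```

This also makes clear why TDs must be excluded: for a tandem duplication the whole SV sequence matches the adjacent reference copy, so the span counts the entire duplication as homologous.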
To compare haplotypes more consistently, we computed SV homology for insertions against the upstream and downstream contig where the SV was called, and against the reference for deletions. We excluded all TD variants from homology because estimating breakpoint homology using this method counts whole TD copies as homologous.

Graph genome comparisons.

Figure S3: Ambiguous breakpoints for SVs in degenerate tandem repeats. The true breakpoint for this 162 bp expansion is difficult to identify even though tandem repeats in this locus were too diverged or too small to yield a tandem annotation. Despite this divergence, breakpoints were still not consistently placed, the optimal location is difficult to identify, and all three methods chose different breakpoints.
Fano varieties with torsion in the third cohomology group

We construct first examples of Fano varieties with torsion in their third cohomology group. The examples are constructed as double covers of linear sections of rank loci of symmetric matrices, and can be seen as higher-dimensional analogues of the Artin-Mumford threefold. As an application, we answer a question of Voisin on the coniveau and strong coniveau filtrations of rationally connected varieties.

Introduction

If X is a nonsingular complex projective variety, the torsion subgroup of the integral cohomology group H^3(X, Z) is an important stable birational invariant. It was introduced by Artin and Mumford in [2], where they used the invariant to show that a certain unirational threefold is not rational.

For rationality questions, perhaps the most interesting class of varieties is that of Fano varieties, that is, smooth varieties with ample anticanonical divisor. In dimension at most 2, these are all rational, with H^3(X, Z) = 0. In dimension 3, there are 105 deformation classes of Fano varieties [20,21,23], and direct inspection shows that in each class the group H^3(X, Z) is torsion free. Beauville asked on MathOverflow whether the same statement holds for Fano varieties in all dimensions [1].(1) In this paper, we answer the question in the negative.

Theorem 1.1. For each even d >= 4, there is a d-dimensional Fano variety X of Picard rank 1 with H^3(X, Z) = Z/2.

As a consequence, by [7,15], the variety X is rationally connected but not stably rational. We do not know if it is unirational.

The X in the theorem is a complete intersection in a double cover of the space of rank <= 4 quadrics in P^{d/2+2}. The families of maximal linear subspaces of these quadrics give Brauer-Severi varieties over X, and via the isomorphism Br(X) = Tors H^3(X, Z), the associated Brauer class maps to the nonzero element in H^3(X, Z). Our examples can be regarded as higher-dimensional analogues of the Artin-Mumford threefold from [2], whose construction is closely related to that of our X (see Section 4.3).

Starting in dimension 6, the Fano varieties we consider have a further exotic property.

Theorem 1.2. When d >= 6, the d-dimensional Fano variety X from Theorem 1.1 has the property that the coniveau and strong coniveau filtrations differ. More precisely,

(1.1)  Ñ^1 H^3(X, Z) = 0 ≠ N^1 H^3(X, Z) = H^3(X, Z).

(1) An incorrect counterexample is proposed in the answer to [1]; see Section 4.4.

The two coniveau filtrations Ñ^c H^l(X, Z) ⊆ N^c H^l(X, Z) of H^l(X, Z) were introduced in the paper [4]. The subgroups of the filtrations contain the cohomology classes in H^l(X, Z) obtained via pushforward from smooth projective varieties (resp. possibly singular projective varieties) of codimension at least c. In the case c = 1, l = 3, they are described as follows. The group N^1 H^3(X, Z) consists of classes in H^3(X, Z) supported on some divisor of X. Its subgroup Ñ^1 H^3(X, Z) consists of pushforwards f_* β of classes β ∈ H^1(S, Z) via proper maps f : S → X where S is nonsingular of dimension dim X − 1. An inequality of the two coniveau filtrations is particularly interesting for c = 1 because for each l >= 0, the quotient

(1.2)  N^1 H^l(X, Z) / Ñ^1 H^l(X, Z)

is a stable birational invariant for smooth projective varieties [4, Proposition 2.4].
While the examples of [4] show that this quotient can be non-zero in general, it is known to be zero for certain classes of varieties. Voisin [30] proved that for a rationally connected threefold, any class in H^3(X, Z) modulo torsion lies in Ñ^1 H^3(X, Z). Tian [27, Theorem 1.23] strengthened this to show that H^3(X, Z) = Ñ^1 H^3(X, Z) for any rationally connected threefold. Theorem 1.2 shows that the quotient (1.2) can be nonzero for rationally connected X of higher dimension, answering a question of Voisin (see [30, Question 3.1] and [4, Section 7.2]).

The paper is organised as follows. Section 2 begins with background on the geometry of symmetric determinantal loci and their double covers. In Section 2.2, we explain how these symmetric determinantal loci and their double covers are GIT quotients of affine space by an action of an orthogonal similitude group. In Section 2.3 (more specifically Definition 2.13), we define the main examples in Theorem 1.1 as linear sections of the double covers of symmetric rank loci.

In Section 3, we use the presentation of the double symmetric rank loci as GIT quotients to show that their smooth part has non-trivial torsion classes α ∈ H^3(X, Z). Taking a linear section and applying a generalised Lefschetz hyperplane theorem then proves Theorem 1.1, restated more precisely as Theorem 4.1. In Section 4, we study some special examples appearing in our construction and compute their geometric invariants, in particular the "minimal" example of a 4-dimensional Fano variety.

In the final Section 5 we prove Theorem 1.2, restated precisely as Theorem 5.3. The key point is that the mod 2 reduction of the generator α of H^3(X, Z) satisfies α^2 ≠ 0 (mod 2), which implies that α is not of strong coniveau 1 by a topological obstruction described in [4].

We would like to thank N. Addington, O. Benoist, J. Kollár, S. Schreieder, F. Suzuki and C. Voisin for useful discussions. The work on this paper was begun at the Oberwolfach workshop Algebraic Geometry: Moduli Spaces, Birational Geometry and Derived Aspects in the summer of 2022. J.V.R. is funded by the Research Council of Norway grant no. 302277.

1.1. Notation. We work over the complex numbers C. We use the notation for projective bundles where P(E) consists of lines in E. By a Fano variety we mean a nonsingular projective variety with ample anticanonical bundle.

Symmetric determinantal loci and related varieties

Here we survey basic facts on symmetric determinantal loci. Some of these are well known; we in particular follow the works of Hosono-Takagi [13, Section 2] and Tyurin [28].

Let V = C^n. We identify P(Sym^2 V^∨) with the space of quadrics in P(V) and let Z_{r,n} ⊂ P(Sym^2 V^∨) denote the subset of quadrics of rank r. Z_{r,n} is a quasi-projective variety; its closure Z̄_{r,n} parameterizes the quadrics of rank <= r and is defined by the vanishing of the (r+1) × (r+1)-minors of a generic n × n symmetric matrix. These give a nested chain of subvarieties

Z̄_{1,n} ⊂ Z̄_{2,n} ⊂ ... ⊂ Z̄_{n,n} = P(Sym^2 V^∨),

where Z̄_{1,n} is the 2nd Veronese embedding of P^{n−1}, and Z̄_{n−1,n} is the degree n hypersurface defined by the determinant.
Proposition 2.1. The variety Z_{r,n} is irreducible of dimension

(2.1)  dim Z_{r,n} = rn − r(r−1)/2 − 1.

This proposition can be checked using the incidence variety Z̃_{r,n} parameterizing (n − r − 1)-planes contained in the singular loci of quadrics. For [L] ∈ Gr(n − r, V), the fiber of the first projection π_1 can be identified with the space of quadrics in P(V/L) ≃ P^{r−1}, so π_1 is a P^{r(r+1)/2−1}-bundle over Gr(n − r, V). It follows that Z̃_{r,n} is nonsingular, and its dimension is given by (2.1). Moreover, it is straightforward to check that the second projection gives a desingularization π_2 : Z̃_{r,n} → Z̄_{r,n}. For the claim about the singular locus, see [13, Section 2].

• Z̄_{1,5} is the 2nd Veronese embedding of P^4 in P^{14}; it is a fourfold of degree 16.

2.1. Double covers. We will only be interested in the case when the rank r is even. In this case, we can define a double cover which is ramified exactly over the locus Z̄_{r−1,n}, of codimension n − r + 1 in Z̄_{r,n}. The construction is based on the classical fact that for a quadric Q of rank r in n variables, the variety of (n − r/2 − 1)-planes in Q ⊂ P^{n−1} is isomorphic to the orthogonal Grassmannian OG(r/2, r), which has two connected components.

The formal construction of W_{r,n} from this observation starts with the incidence variety U_{r,n} of (2.2). Taking the Stein factorisation of the projection U_{r,n} → Z̄_{r,n}, we get a new variety W_{r,n} and morphisms (2.3), where η has connected fibres and σ is finite. The fibre of η at a general point of W_{r,n} is isomorphic to a connected component of OG(r/2, r). The morphism σ is a double cover, ramified exactly along Z̄_{r−1,n} (see [13, Proposition 2.3]).

For the remainder of the paper, we will let H be the pullback of the polarization from P(Sym^2 V^∨).

2.2. (Double) symmetric determinantal loci as GIT quotients. In this section, we explain how the varieties Z̄_{r,n} and W_{r,n} can be presented as GIT quotients of affine spaces, which is a key ingredient in the cohomology computations needed in Theorems 1.1 and 1.2. Let r be even, let S = C^r, and let ω_S ∈ Sym^2 S^∨ be a nondegenerate quadratic form. The orthogonal similitude group GO(S) ⊂ GL(S) consists of the linear automorphisms of S which preserve ω_S up to scaling. In other words, an invertible linear map φ : S → S lies in GO(S) if there exists a χ(φ) ∈ C* such that for all v ∈ S,

ω_S(φ(v), φ(v)) = χ(φ) ω_S(v, v).

The map χ : GO(S) → C* defined by this relation is a group homomorphism, and we have an exact sequence

1 → O(S) → GO(S) → C* → 1.

The group GO(S) naturally acts on the orthogonal Grassmannian OG(r/2, S). The variety OG(r/2, S) has two connected components, and the action of GO(S) on this two-element set gives an exact sequence

1 → GO(S)° → GO(S) → Z/2 → 1,

where GO(S)° is connected. We further have SO(S) = O(S) ∩ GO(S)°, and an exact sequence

1 → SO(S) → GO(S)° → C* → 1.

Consider now the affine space Hom(V, S) ≃ A^{rn}. The group GO(S) acts on Hom(V, S) via

GO(S) × Hom(V, S) → Hom(V, S),  (φ, f) ↦ φ ∘ f.

We have a morphism of affine spaces τ : Hom(V, S) → Sym^2 V^∨, defined by, for any f ∈ Hom(V, S) and v, w ∈ V,

τ(f)(v, w) = ω_S(f(v), f(w)).

Let CZ_{r,n} ⊂ Sym^2 V^∨ be the subset of Sym^2 V^∨ corresponding to quadratic forms of rank r, so that Z_{r,n} = CZ_{r,n}/C*. The set τ^{−1}(CZ_{r,n}) ⊂ Hom(V, S) consists of the f : V → S such that f*ω_S has rank r.
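In matrix terms (our own rephrasing, not notation from the paper): fixing a basis of S in which ω_S has invertible Gram matrix A, the map τ is matrix congruence,

\[
\tau(f) \;=\; f^{\top} A\, f \;\in\; \operatorname{Sym}^{2} V^{\vee},
\qquad
\operatorname{rank}\,\tau(f) \;=\; \operatorname{rank}\bigl(\omega_S|_{\operatorname{im} f}\bigr),
\]

so τ(f) has rank r exactly when f is surjective; that is, τ^{−1}(CZ_{r,n}) is the locus of surjective maps f : V → S.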
Lemma 2.5. The group GO(S)° acts freely on τ^{−1}(CZ_{r,n}) and on τ^{−1}(CZ_{r−1,n}).

Proof. The previous lemma shows that GO(S)° acts freely on τ^{−1}(CZ_{r,n}). So let f ∈ τ^{−1}(CZ_{r−1,n}), and let φ ∈ GO(S) be an element which fixes f. We will show that φ is the identity. Since f*ω_S has rank r − 1, we may find a basis v_1, ..., v_n of V such that f(v_1), ..., f(v_{r−1}) are orthonormal and f(v_i) = 0 for i >= r. We can then choose a vector e ∈ S such that f(v_1), ..., f(v_{r−1}), e is an orthonormal basis for S. Since φ fixes f, we have φ(f(v_i)) = f(v_i) for each i. As φ fixes the orthonormal vectors f(v_1), ..., f(v_{r−1}) and preserves ω_S up to the scalar χ(φ), we get χ(φ) = 1, so φ ∈ O(S) and φ preserves the line spanned by e. This implies that φ(e) = ±e, and then the fact that φ ∈ GO(S)° forces φ(e) = e. This means that φ is the identity element.

The set of maps f of rank r − 2 has codimension 2(n − r + 2), while the set of f with rank r − 1 has codimension n − r + 1. The further requirement that P(f(V)) is tangent to the quadric gives codimension n − r + 2.

Lemma 2.8. The GIT quotient Hom(V, S)^{ss} // GO(S) is isomorphic to Z̄_{r,n}.

Proof. Let R be the coordinate ring of Hom(V, S). The GIT quotient Hom(V, S)^{ss} // GO(S) is given by Proj R^{O(S)}, where the ring R^{O(S)} is graded by the action of GO(S), an action which factors through χ : GO(S) → C*. Any linear function x on Sym^2 V^∨ pulls back to an invariant function τ*(x) ∈ R^{O(S)}, and the first fundamental theorem of invariant theory for orthogonal groups says that these τ*(x) generate R^{O(S)} [24, p. 390]. This shows that Hom(V, S)^{us} = τ^{−1}(0), and moreover that τ gives a closed embedding Hom(V, S)^{ss} // GO(S) → P(Sym^2 V^∨). It is easy to see that its image is Z̄_{r,n}.

Thinking of χ as a character of GO(S)°, we get a GO(S)°-linearisation of O_{Hom(V,S)}. The associated GIT semistable locus in Hom(V, S) is the same as for the GO(S)-linearisation, since GO(S)° has finite index in GO(S).

Lemma 2.9. The GIT quotient Hom(V, S)^{ss} // GO(S)° is isomorphic to W_{r,n}.

The open subset τ^{−1}(CZ_{r,n}) // GO(S)° ⊂ Hom(V, S)^{ss} // GO(S)° is isomorphic to σ^{−1}(Z_{r,n}) ⊂ W_{r,n} by the following construction. Fix an r/2-dimensional isotropic linear subspace L ⊂ S. Recall the variety U_{r,n} from (2.2) and define a morphism sending f to the pair consisting of the quadric τ(f) and the linear subspace P(f^{−1}(L)). This morphism is GO(S)°-invariant, and one checks that it gives a bijection between the GO(S)°-orbits in τ^{−1}(CZ_{r,n}) and the points of σ^{−1}(Z_{r,n}). The birational map ψ fits in a commutative diagram with these quotient and Stein factorisation maps. Let K be the function field of Hom(V, S)^{ss} // GO(S)°, identified with the function field of W_{r,n}. Since these two varieties are normal and finite over Z̄_{r,n}, they are both equal to the relative normalisation of Z̄_{r,n} in Spec K, and so ψ extends to an isomorphism of varieties.

Proposition 2.10. Étale locally near a point p ∈ σ^{−1}(Z_{r−2,n}), the pair (W_{r,n}, p) is isomorphic to (C × A^M, (0, 0)) where C is the affine cone over the Segre embedding of P^{n−r+1} × P^{n−r+1}, 0 ∈ C is the singular point, and M = dim W_{r,n} − dim C.

Proof. We use the isomorphism Hom(V, S)^{ss} // GO(S)° ≃ W_{r,n}. Let f ∈ Hom(V, S)^{ss} be a point whose orbit maps to σ^{−1}(Z_{r−2,n}) under this isomorphism. Then f ∈ τ^{−1}(CZ_{r−2,n}), and we can choose a basis v_1, ..., v_n for V such that the elements f(v_1), ..., f(v_{r−2}) are orthonormal in S, and we extend this sequence to a basis of S by adding vectors e_1, e_2 such that ω_S(e_1, e_1) = ω_S(e_2, e_2) = 0 and ω_S(e_1, e_2) = 1. The isotropic subspaces of <e_1, e_2> are <e_1> and <e_2>. Reordering the e_i, we may assume that f(v_{r−1}), ..., f(v_n) are all contained in <e_1>. After linearly transforming the v_i, we may assume that f(v_{r−1}) = γe_1 for some γ ∈ C and f(v_i) = 0 for i >= r. There are now two cases to consider: γ = 0 and γ ≠ 0. Let us write A_i(j) for a T-representation of dimension i with weight j. We then have an isomorphism of T-representations describing the normal space N_f to the orbit. The Luna étale slice theorem implies that étale locally near the orbit of f, the variety Hom(V, S)^{ss} // GO(S)° is isomorphic to N_f // T for some M. The quotient N_f // T is isomorphic to C × A^M with C the cone over P^{n−r+1} × P^{n−r+1}, so this completes the proof.
Corollary 2.11. The variety W_{r,n} is singular exactly along σ^{−1}(Z̄_{r−2,n}).

Proof. The local structure along σ^{−1}(Z̄_{r−2,n}) is given by Proposition 2.10, and the claim follows since the singular locus is closed.

2.3. Linear sections of double symmetric determinantal loci. The varieties appearing in Theorems 1.1 and 1.2 will be constructed by taking general linear sections of the double cover W_{r,n}, i.e., complete intersections

(2.5)  X = W_{r,n} ∩ H_1 ∩ ... ∩ H_c,

where the H_i are general divisors in |H|. In other words, X is a ramified double cover of a linear section of Z̄_{r,n}. We are particularly interested in the case when X is also a Fano variety. This can happen only when r < 6:

Lemma 2.12. Let X denote a general linear section of W_{r,n}. If 6 <= r <= n, then either X is singular, or K_X is base-point free.

Proof. Write X as in (2.5) for divisors H_i ∈ |H|. As the H_i are general, and W_{r,n} is Gorenstein with canonical singularities, it follows that the same holds for X. By Proposition 2.3 and adjunction, the canonical divisor is given by

K_X = (c − rn/2) H|_X.

Therefore, if c > rn/2, X is of general type, and for c = rn/2, it is Calabi-Yau. If c < rn/2, we note that

dim σ^{−1}(Z̄_{r−2,n}) − c >= (r−2)n − (r−2)(r−3)/2 − rn/2 = n(r−4)/2 − (r−2)(r−3)/2,

which is non-negative for our choices of r and n. This means that X meets the singular locus of W_{r,n}, and hence it must be singular.

By Lemma 2.12, we obtain Fano varieties as linear sections of W_{r,n} only when r = 2 or r = 4. The case r = 2 gives W_{2,n} = P^{n−1} × P^{n−1}, and many linear sections of P^{n−1} × P^{n−1} are indeed Fano, but these varieties do not have interesting cohomology groups from the point of view of this paper.

We therefore focus on the case r = 4. In this case the existence of the double cover σ : W_{4,n} → Z̄_{4,n} is explained as follows. A smooth quadric surface in P^3 contains two families of lines; thus a quadric of rank 4 in n variables contains two families of (n−2)-planes, each parameterised by a P^1. Thus W_{4,n} parameterises quadrics plus a choice of one of the two families.

The dimensions of the first few rank loci Z̄_{i,n} follow from (2.1); in particular dim W_{4,n} = dim Z̄_{4,n} = 4n − 7 and dim Z̄_{2,n} = 2n − 2. By Corollary 2.11, the double cover W_{4,n} is singular along σ^{−1}(Z̄_{2,n}), which has codimension 2n − 5 in W_{4,n}. By (2.4), the canonical divisor of W_{4,n} equals

K_{W_{4,n}} = −2nH.

Definition 2.13. Given n >= 4 and c >= 0, let X_{n,c} be a general complete intersection

(2.6)  X_{n,c} = W_{4,n} ∩ H_1 ∩ ... ∩ H_c.

The varieties X in Theorems 1.1 and 1.2 are X_{n,2n−1} with n >= 5 and n >= 6, respectively.
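For the reader's convenience, the adjunction arithmetic for the main examples (our own computation from the formulas above):

\[
K_{X_{n,c}} \;=\; \bigl(K_{W_{4,n}} + cH\bigr)\big|_{X_{n,c}} \;=\; (c - 2n)\,H\big|_{X_{n,c}},
\qquad\text{so}\qquad
K_{X_{n,2n-1}} \;=\; -H,
\]

and dim X_{n,2n−1} = (4n − 7) − (2n − 1) = 2n − 6, so these examples are Fano whenever they are nonsingular.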
Cohomology computations

Let X^{sm}_{n,c} be the smooth part of X_{n,c}. In this section we compute the low-degree cohomology of X^{sm}_{n,c}. In Proposition 3.1 we compute the low-degree cohomology of BGO(4)°, and in Proposition 3.5 we show that this agrees with the low-degree cohomology of X^{sm}_{n,c}. We summarise the consequences for the cohomology of X^{sm}_{n,c} in Corollary 3.6. In order to prove Theorem 1.1, we want a non-zero 2-torsion cohomology class of degree 3, and for Theorem 1.2, the class should furthermore have a non-zero square modulo 2 (this will be explained in Proposition 5.2).

Cohomology of BSO(4). The cohomology rings with integer coefficients of the classifying spaces BSO(n) were computed by Brown [6] and Feshbach [10]. For n = 4, the ring is given by

H*(BSO(4), Z) = Z[e, p, ν]/(2ν),

where e is the Euler class (of degree 4), p is the Pontrjagin class (degree 4), and ν is a 2-torsion class of degree 3. Thus the low-degree cohomology groups of BSO(4) are given by

H^0 = Z,  H^1 = H^2 = 0,  H^3 = Z/2 · ν,  H^4 = Z · e ⊕ Z · p.

The cohomology ring of BSO(4) with Z/2-coefficients is given by

H*(BSO(4), Z/2) = Z/2[w_2, w_3, w_4],

where w_2, w_3, w_4 ∈ H*(BSO(4), Z/2) denote the Stiefel-Whitney classes [19].

3.3. Cohomology of hyperplane sections of W^{sm}_{r,n}. Let S be a quadratic r-dimensional vector space and let L ⊆ P(Sym^2 V^∨) be a codimension c linear subspace. We analyse the natural homomorphism (3.2) from H^l(BGO(S)°, Z) to H^l(W^{sm}_{r,n} ∩ L, Z) and show that it is an isomorphism in low degrees. To define the homomorphism, begin with the pullback maps coming from the quotient presentation, with τ and CZ_{r−2,n} as defined in Section 2.2. By Lemma 2.5 and Corollary 2.11, the variety W^{sm}_{r,n} is isomorphic to (Hom(V, S) − τ^{−1}(CZ_{r−2,n}))/GO(S)°, where the group action is free, so we get an isomorphism

H^l_{GO(S)°}(Hom(V, S) − τ^{−1}(CZ_{r−2,n}), Z) = H^l(W^{sm}_{r,n}, Z).

Finally, we have the pullback homomorphism H^l(W^{sm}_{r,n}, Z) → H^l(W^{sm}_{r,n} ∩ L, Z), and composing these maps gives (3.2).

Lemma 3.2. Let G be an algebraic group acting on an affine space A^N. Let Z ⊂ A^N be a closed, G-invariant subset of codimension c, and let U = A^N − Z. Then the natural homomorphisms H^l_G(pt, Z) → H^l_G(U, Z) are isomorphisms for l < 2c − 1, and injective for l = 2c − 1.

Proof. The Leray-Serre spectral sequence for equivariant cohomology [18, p. 501] has E_2-page H^i_G(pt, H^j(U)) and converges to H^{i+j}_G(U). Since H^j(U) = 0 for 0 < j <= 2c − 2, there are no non-trivial differentials whose domain is of degree (i, j) with i + j <= 2c − 2. The claim of the lemma follows from this.

Proof. Combine Lemma 2.7 and Lemma 3.2.

Lemma 3.4. Let L ⊆ P(Sym^2 V^∨) be a generic codimension c linear subspace. The homomorphisms H^l(W^{sm}_{r,n}, Z) → H^l(W^{sm}_{r,n} ∩ L, Z) are isomorphisms in low degrees.

Proof. The generalised Lefschetz theorem of Goresky-MacPherson [11, Thm p. 150] states that we have isomorphisms on the level of homotopy groups in low degrees. Combining this with the Hurewicz theorem gives the statement for cohomology groups.

Proposition 3.5. Let L ⊆ P(Sym^2 V^∨) be a generic codimension c subspace. The homomorphisms (3.2) are isomorphisms in low degrees. If moreover c <= 4n − 13, then the square of the non-zero class in H^3(X^{sm}_{n,c}, Z) does not vanish modulo 2.

The varieties X_{n,c}

We now analyse a few particularly interesting choices of n and c.

Theorem 4.1. The variety X is nonsingular of dimension 2n − 6 with K_X = −H, and hence Fano. It has Picard number 1 and H^3(X, Z) = Z/2.

Proof. The singular locus in W_{4,n} has dimension 2n − 2 by Proposition 2.1 and Corollary 2.11, so X is nonsingular by Bertini's theorem. The proof of Lemma 2.12 gives K_X = −H. Finally, H^3(X, Z) is computed in Corollary 3.6.

Proposition 4.2. The fourfold X is a Fano variety with invariants
(1) Pic(X) = ZH, with H^4 = 10.
(...)

Proof. (1), (2) and (4) (...)

4.2.1. Homological projective duality. In the paper [25], the second named author studies derived categories of linear sections of the stack Sym^2 P^{n−1} from the perspective of homological projective duality [17]. When n is odd, the paper defines a noncommutative resolution Y_n of W_{n−1,n}, and shows that linear sections of this noncommutative resolution are related to dual linear sections of Sym^2 P^{n−1} in precisely the way predicted by HP duality, which strongly suggests that Y_n is HP dual to Sym^2 P^{n−1}.

Specialising to the case n = 5 and linear sections of the appropriate dimensions gives the following result. Let V = C^5, let L_1, ..., L_9 be general hyperplanes in P(Sym^2 V), and let L be their intersection. Let L^⊥ be the orthogonal complement.

In this language, X = L ×_{P(Sym^2 V)} W_{4,5}. Since X avoids the singular locus of W_{4,5}, the noncommutative resolution Y_5 of W_{4,5} is equivalent to W_{4,5}, and the main theorem of [25] applies. On the other side of the HP duality we find the surface S of (4.1), which is the intersection of 6 general (1,1)-divisors in Sym^2 P(V^∨) = Sym^2 P^4. The following is a slight amplification of the main result of [25].

Proposition 4.3. The category D(X) admits a semiorthogonal decomposition

D(X) = <D(S), E_1, E_2, E_3, E_4>,

where the E_i are exceptional objects.

The amplification consists in the fact that [25] only proves that D(S) embeds as a semiorthogonal piece in D(X). The fact that the orthogonal complement is generated by 4 exceptional objects is not difficult to show using the techniques of the paper.
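For reference, the standard Hochschild homology facts that drive Corollary 4.5 below, in our own restatement (each exceptional object contributes one copy of C in degree 0, and the second isomorphism is Hochschild-Kostant-Rosenberg):

\[
\mathrm{HH}_k(X) \;\cong\; \mathrm{HH}_k(S) \,\oplus\, \begin{cases}\mathbf{C}^{4} & k = 0\\ 0 & k \neq 0,\end{cases}
\qquad
\mathrm{HH}_k(X) \;\cong\; \bigoplus_{q-p=k} H^{p}(X, \Omega_X^{q}).
\]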
Lemma 4.4. The surface S in (4.1) is smooth of degree 35 with respect to the embedding S ⊂ P(Sym^2 V^∨). It has Hodge numbers h^{1,0}(S) = 0, h^{2,0}(S) = 9 and h^{1,1}(S) = 65.

Proof. The map P^4 × P^4 → Sym^2(P^4) induces an étale double cover π : T → S where T is a general complete intersection of 6 symmetric (1,1)-divisors in P^4 × P^4. In particular, T is simply connected by the Lefschetz theorem. Furthermore, we find that K_S^2 = 35 and hence χ_top(S) = 85 by Noether's formula. From this we find that h^{1,1}(S) = 65.

Corollary 4.5. With S and X as above, we have (...)

Proof. The semiorthogonal decomposition in Proposition 4.3 gives the relation of Hochschild homology groups displayed in the previous section. Expressing Hochschild homology via Hodge numbers, and using the fact that h^{0,i}(X) = 0 since X is Fano, gives the result.

Example 4.6. The fact that Tors H^3(X, Z) ≠ 0 can be seen as a consequence of the fact that the conic bundle η : U_{4,5} → W_{4,5} does not admit a rational section.

To see this, recall that U_{4,5} is a projective bundle over the Grassmannian G = Gr(3, V). Explicitly, U_{4,5} = P(E) where E is the rank 9 vector bundle appearing as the kernel of the natural map S^2(V^∨ ⊗ O_G) → S^2(U^∨), and where U is the universal subbundle of rank 3. Now, if D ⊂ P(E) is the divisor determined by a rational section of η, D is linearly equivalent to a divisor of the form aL + bG, where L = O_{P(E)}(1) and G is the pullback of O_{Gr(3,V)}(1). We must also have D · L^{13} = 10 (as the 1-cycle L^{13} is represented by 10 fibers of P(E) → W_{4,5}). On the other hand, using the Chern classes of S^2(U^∨), we compute that D · L^{13} = −20b, contradicting the condition that b is an integer.

This shows that the Brauer group of W^{sm}_{4,5} is non-trivial. In our case, we may identify the Brauer group with Tors H^3(W^{sm}_{4,5}, Z) because H^2(W^{sm}_{4,5}, Z) = Z is generated by algebraic classes [3, Proposition 4]. Finally, Lemma 3.4 shows that H^3(W^{sm}_{4,5}, Z) → H^3(X, Z) is an isomorphism, so the latter group has non-trivial torsion part as well.

For an alternative approach to the absence of rational sections, see Claim A.2 in the Appendix. (Most surfaces in P^4, including general 6 × 5 determinantal surfaces, have only 1-parameter families of 5-secants.)

4.3. The case c = 2n − 2. Let X = X_{n,2n−2}. Then X has dimension 2n − 5, isolated singularities in σ^{−1}(Z_{2,n}) ∩ X, and K_X = −2H. Let X̃ → X be the blow-up at the singular points. Then the exceptional divisor E is a disjoint union of components E_1, ..., E_s, all of which are isomorphic to P^{n−3} × P^{n−3}, by Proposition 2.10.

By Corollary 3.6, we have H^3(X^{sm}, Z) = Z/2. Since X^{sm} ≃ X̃ − E, we get a pullback map H^3(X̃, Z) → H^3(X^{sm}, Z). This map is an isomorphism by the exact sequence

H^1(E, Z) → H^3(X̃, Z) → H^3(X^{sm}, Z) → H^2(E, Z),

using also that H^1(E, Z) = 0 and H^2(E, Z) is torsion free.

Proposition 4.7. For each n >= 4, X̃ is a smooth projective variety of dimension 2n − 5 with Tors H^3(X̃, Z) ≠ 0. The variety X̃ is unirational, but not stably rational.

Proof. Only the unirationality remains to be proved. The incidence variety U_{4,n} of (2.3) is a P^{2n−2}-bundle over the Grassmannian Gr(n − 2, V). This means that if X is a complete intersection of 2n − 2 divisors in W_{4,n}, the preimage U_X = η^{−1}(X) is birational to Gr(n − 2, V). Therefore U_X is rational, and hence X is unirational.
Example 4.8. When n = 4, X is a double cover of P^3 branched along a singular quartic surface. This is the example famously studied by Artin and Mumford in [2], and for which they prove Proposition 4.7. Here X has 10 ordinary double points and the blow-up X̃ contains 10 exceptional divisors isomorphic to P^1 × P^1.

4.4. The case n = 4, c < 6. The Artin-Mumford examples of X_{4,6} can also naturally be generalised to X_{4,c} with c < 6. We will explain that, at least when c = 4 or 5, these do not have torsion in H^3 in their smooth models (correcting a claim made in a MathOverflow answer [1]).

The singular locus of X_{4,c} has codimension 3 and is a smooth Enriques surface or a smooth genus 6 curve when c = 4 and c = 5, respectively. There is a resolution π : X̃ → X_{4,c} obtained by blowing up the singular locus, where the exceptional divisor is a P^1 × P^1-bundle over the singular locus.

Proposition 4.9. With X̃ as above, we have that the group H^3(X̃, Z) is torsion free for c = 5 and 0 for c = 4.

Proof. To show that H^3(X̃, Z) is torsion free, we first remark that H^3(X, Z) has no torsion by Corollary 4.11 below. Next, we consider the Leray spectral sequence associated to the blow-up π : X̃ → X, with E_2-page H^p(X, R^q π_* Z) converging to H^{p+q}(X̃, Z). Let S ⊂ X be the singular locus. We have R^0 π_* Z_{X̃} = Z_X, R^1 π_* Z = 0, R^2 π_* Z_{X̃} = F and R^3 π_* Z = 0, where F is a rank two local system. More explicitly, we have F = R^2 π_* Z_E, and since E is a P^1 × P^1-bundle over S, this means F = R^0 f_* Z_{S'}, where f : S' → S is the étale double cover of S corresponding to the two families of lines in each fibre of E → S. By Corollary 4.11, we have H^3(X, Z) = 0, and so the only non-vanishing term of the E_2-page of the spectral sequence contributing to H^3(X̃, Z) is H^1(S, F) = H^1(S', Z). Running the spectral sequence then gives

H^3(X̃, Z) = H^1(S', Z).

Since H^1(S', Z) is torsion free, the same is true for H^3(X̃, Z). When c = 4, the variety S is an Enriques surface, so that S' is either a K3 surface or two copies of S; in either case H^1(S', Z) = 0, which gives H^3(X̃, Z) = 0.

In the argument above, we used the following version of the weak Lefschetz hyperplane theorem for singular varieties.

Proposition 4.10. Let V be a projective variety of dimension n + 1 and let D be an ample divisor which is disjoint from the singular locus sing(V). Then the natural maps H^i(V, Z) → H^i(D, Z) are isomorphisms for i < n and surjective for i = n.

Proof. Letting U = V − D, the relative cohomology sequence takes the form (...). Now, using that U is affine of dimension n + 1, the cohomology groups H^i(U, Z) vanish for all i > n + 1, by Artin's vanishing theorem.

Corollary 4.11. Let σ : X → P^n be a ramified double cover. Then for each i < n, (i) H^i(X, Z) = H^i(P^n, Z), and (ii) H_i(X, Z) is torsion free.

Proof. Note that X can be defined by an equation of the form z^2 = f(x_0, ..., x_n) in the weighted projective space V = P(1, ..., 1, d/2). Thus X is an ample divisor, disjoint from the one singular point of V. Thus the conditions of Proposition 4.10 hold, and we find that H^j(X, Z) = H^j(V, Z) when j < n. The cohomology of V is computed in [14, Theorem 1], which gives claim (i), and claim (ii) follows by the Universal Coefficient theorem.

Proof of Theorem 1.2

In this section we state and prove a precise version of Theorem 1.2. We first recall some general background on the coniveau filtrations on cohomology of algebraic varieties, referring to [4] for details. We restrict ourselves to the case of cohomology with integral coefficients H^i(X, Z) on a smooth projective variety X over C.
A cohomology class α ∈ H^l(X, Z) is said to be of coniveau >= c if it restricts to 0 on X − Z where Z is a closed subset of codimension at least c in X. These classes give the coniveau filtration N^c H^l(X, Z) ⊂ H^l(X, Z). Equivalently, viewing H^l(X, Z) as H_{2n−l}(X, Z) via Poincaré duality, a class α ∈ H_{2n−l}(X, Z) is of coniveau >= c if and only if α = j_* β for some β ∈ H_{2n−l}(Y, Z), where j : Y → X is the inclusion of a closed algebraic subset of X of codimension at least c. So for example, N^c H^{2c}(X, Z) consists of exactly the algebraic classes in H^{2c}(X, Z).

A class α ∈ H^l(X, Z) is said to be of strong coniveau >= c if α = f_* β where f : Z → X is a proper morphism, Z is a smooth complex variety of dimension at most n − c, and β ∈ H^{l−2c}(Z, Z). These classes give the strong coniveau filtration Ñ^c H^l(X, Z). We have Ñ^c H^l(X, Z) ⊂ N^c H^l(X, Z) for every c. Moreover, the quotient

N^c H^l(X, Z) / Ñ^c H^l(X, Z)

is a birational invariant among smooth projective varieties [4]. This invariant is particularly interesting for rationally connected varieties X. In this case, all cohomology classes are of coniveau >= 1:

Proposition 5.1. Let X be a rationally connected smooth projective complex variety. Then for any l > 0, N^1 H^l(X, Z) = H^l(X, Z).

In [30, Question 3.1], Voisin asked whether Ñ^1 H^l(X, Z) = N^1 H^l(X, Z) for X a rationally connected variety, i.e., whether all cohomology classes are of strong coniveau 1 (see also [4, Section 7.2]). In the same paper, she proved that any class in H^3(X, Z) modulo torsion is of strong coniveau 1. This was extended by Tian [27, Theorem 1.23], who proved that H^3(X, Z) = Ñ^1 H^3(X, Z) for any rationally connected threefold. Our Fano varieties give the first rationally connected examples where the two coniveau filtrations are different.

Proposition 5.2. Let X be a smooth projective variety, and let α ∈ Ñ^1 H^3(X, Z). Then the mod 2 reduction of α^2 vanishes in H^6(X, Z/2).

Proof. This is a special case of [4, Proposition 3.5].

Here is the precise version of Theorem 1.2:

Theorem 5.3. For n >= 6, the variety X_{n,2n−1} from Definition 2.13 is a Fano variety of dimension 2n − 6 with K_X = −H, such that

Ñ^1 H^3(X, Z) = 0 and N^1 H^3(X, Z) = H^3(X, Z) = Z/2.

Proof. Let X = X_{n,2n−1}. The computation of dim X, H^3(X, Z) and K_X is part of Theorem 4.1. Since X is Fano, it is rationally connected, so Proposition 5.1 gives N^1 H^3(X, Z) = H^3(X, Z). Corollary 3.6 shows that the class α ≠ 0 ∈ H^3(X, Z) is such that the mod 2 reduction of α^2 is non-zero. Proposition 5.2 then implies α ∉ Ñ^1 H^3(X, Z), so Ñ^1 H^3(X, Z) = 0.

Remark 5.4. We can obtain examples of other rationally connected varieties where Ñ^c H^l ≠ N^c H^l for any c >= 1 and l >= 2c + 1 by taking appropriate products with projective spaces (see e.g. [4, Theorem 4.3]).

Remark 5.5 (The Artin-Mumford example). In light of Theorem 1.2, it is natural to ask whether the 2-torsion class α ∈ H^3(X, Z) of the Artin-Mumford example has strong coniveau >= 1, i.e., whether the birational invariant (1.2) is zero. It turns out that this is indeed the case: inspecting Artin-Mumford's 'brutal procedure' in [2, p. 82-83] shows that the class α is obtained from a cylinder map H^1(C, Z) → H^3(X, Z) from an elliptic curve C. In other words, α is the pushforward of a class in H^1 from some ruled surface S over C. Note that this can also be seen as a special case of [27, Theorem 1.23].

5.1. Open questions. We conclude with two open questions regarding the two coniveau filtrations:

Question 1. Are there rationally connected varieties X with Ñ^1 H^l(X, Z) ≠ N^1 H^l(X, Z) for some l > 0 and torsion free H^l(X, Z)?

Question 2. Are there rationally connected varieties of dimension 4 or 5 where Ñ^1 H^l(X, Z) ≠ N^1 H^l(X, Z) for some l > 0?
Remark 5.6. Let X = X_{5,9} be the fourfold from Section 4.2. Then we don't know if the generator α of H^3(X, Z) has strong coniveau >= 1. We can show, however, that α^2 = 0 in H^6(X, Z/2), so the topological obstruction of Proposition 5.2 vanishes. To see this, we use the fact that the third integral Steenrod square Sq^3_Z : H^p(X, Z) → H^{p+3}(X, Z) is naturally identified with the third differential d_3 in the Atiyah-Hirzebruch spectral sequence of topological K-theory, with E_2-page

H^p(X, K^q(pt)) = H^p(X, Z) for q even, and 0 otherwise,

converging to K^{p+q}(X). Now H*(X, Z) has torsion only in degrees 3 and 6, with torsion part Z/2 in each of these degrees. It also has torsion Z/2 ⊕ Z/2 in its topological K-theory, by Proposition 4.3 (because S is a general type surface with fundamental group Z/2). This implies d_3 = 0, since otherwise the Atiyah-Hirzebruch spectral sequence would produce strictly less torsion in the K-theory of X.
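Schematically, the argument of Remark 5.6 can be summarised as follows (our own restatement; it uses the standard facts that Sq^n x = x^2 for a mod 2 class x of degree n, and that the mod 2 reduction of Sq^3_Z is Sq^3):

\[
\bar{\alpha}^{2} \;=\; \mathrm{Sq}^{3}\bar{\alpha} \;=\; \overline{\mathrm{Sq}^{3}_{\mathbb{Z}}\,\alpha} \;=\; \overline{d_{3}(\alpha)} \;=\; 0 \quad \text{in } H^{6}(X, \mathbb{Z}/2),
\]

where the last equality holds because the torsion count in K^*(X) forces d_3 = 0.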
Resistance Patterns Selected by Nevirapine vs. Efavirenz in HIV-Infected Patients Failing First-Line Antiretroviral Treatment: A Bayesian Analysis

Background: WHO recommends starting therapy with a non-nucleoside reverse transcriptase inhibitor (NNRTI) and two nucleoside reverse transcriptase inhibitors (NRTIs), i.e. nevirapine or efavirenz, with lamivudine or emtricitabine, plus zidovudine or tenofovir. Few studies have compared resistance patterns induced by efavirenz and nevirapine in patients infected with the CRF01_AE Southeast Asian HIV subtype. We compared patterns of NNRTI- and NRTI-associated mutations in Thai adults failing first-line nevirapine- and efavirenz-based combinations, using Bayesian statistics to optimize use of the data.

Methods and Findings: In a treatment cohort of HIV-infected adults on NNRTI-based regimens, 119 experienced virologic failure (>500 copies/mL), with resistance mutations detected by consensus sequencing. Mutations were analyzed in relation to demographic, clinical, and laboratory variables at the time of genotyping. The Geno2Pheno system was used to evaluate second-line drug options. Eighty-nine subjects were on nevirapine and 30 on efavirenz. The NRTI backbone consisted of lamivudine or emtricitabine plus either zidovudine (37), stavudine (65), or tenofovir (19). The K103N mutation was detected in 83% of patients on efavirenz vs. 28% on nevirapine, whereas Y181C was detected in 56% on nevirapine vs. 20% on efavirenz. M184V was more common with nevirapine (87%) than efavirenz (63%). Nevirapine favored TAM-2 resistance pathways whereas efavirenz selected both TAM-2 and TAM-1 pathways. Emergence of TAM-2 mutations increased with the duration of viral replication (OR 1.25-1.87 per month increment). In zidovudine-containing regimens, the overall risk of resistance across all drugs was lower with nevirapine than with efavirenz, whereas in tenofovir-containing regimens the opposite was true.

Conclusions: TAM-2 was the major NRTI resistance pathway for CRF01_AE, particularly with nevirapine; it appeared late after virological failure. In patients who failed, there appeared to be more second-line drug options when zidovudine was combined with nevirapine, or tenofovir with efavirenz, than with the alternative combinations.

Introduction

The World Health Organization (WHO) currently recommends starting antiretroviral (ARV) combination regimens with a non-nucleoside reverse transcriptase inhibitor (NNRTI) and two nucleoside reverse transcriptase inhibitors (NRTIs), i.e. nevirapine (NVP) or efavirenz (EFV), with lamivudine (3TC) or emtricitabine (FTC), plus zidovudine (ZDV) or tenofovir (TDF) [1]. The combination most commonly used in resource-limited countries is a fixed-dose formulation containing nevirapine, lamivudine and either stavudine (d4T) or zidovudine, and efficacy and drug failure are monitored for most subjects by clinical or, if available, CD4 criteria. Maintaining a failing first-line regimen which includes two drugs with low genetic barriers to resistance, such as nevirapine or efavirenz, plus lamivudine as one of the NRTIs, poses a risk of accumulation of resistance mutations. This can, in turn, limit therapeutic drug options for second-line therapies [2,3,4,5,6,7,8,9]. In addition, the pattern of drug-resistance mutations may differ according to the particular drug combinations used and the circulating HIV-1 subtypes.
Although a large database analysis comparing the NNRTI resistance patterns induced by efavirenz and nevirapine was recently published [10], there have been few studies performed in homogeneous groups of patients [11]. With regard to subtype, in subjects infected with HIV-1 subtype B, the thymidine analogue mutations pathway 1, or TAM-1 (including mutations M41L, L210W and T215Y), is probably more frequent than the TAM-2 pathway (including mutations D67N, K70R, T215F and K219E/Q) [12,13,14], although systematic studies of these pathways have not been done. In subtype C virus, Novitsky and colleagues [15] reported a distinct TAM pathway in patients failing ZDV/ddI-containing HAART. Similarly, there may be different pathways for NVP or EFV resistance mutations which may impact on the success of second-generation NNRTIs.

The predominant subtype in Thailand is CRF01_AE, and there are few published studies analyzing the resistance mutation patterns that develop during virologic failure in this important subtype, prevalent throughout East and Southeast Asia [8,16,17,18]. Nationwide access to antiretroviral treatment in Thailand began in 2002, with coverage gradually increasing to more than 200,000 HIV-infected patients receiving combination antiretroviral drugs, usually beginning with one of the locally manufactured fixed-dose combinations, (d4T or ZDV)+3TC+NVP [19]. In case of toxicity, NVP is replaced by EFV.

The primary objective of this study was to describe and compare the patterns and frequencies of NNRTI- and NRTI-associated mutations emerging on nevirapine- and efavirenz-based HAART in Thai HIV-infected adults failing their first-line treatment, using Bayesian statistical methods, with a view toward supporting decisions regarding subsequent salvage treatment choices. Secondary objectives were to assess factors associated with more frequent occurrence of NNRTI and NRTI resistance mutations and to compare clusters of mutations observed under nevirapine and efavirenz at failure.

Patient characteristics

A total of 138 subjects with virologic failure were identified, 19 of whom (13%) showed neither NNRTI nor NRTI mutations and were assumed to be non-compliant with their treatment. These 19 were not considered further in this analysis. Of the 98 remaining subjects who initiated first-line nevirapine-based HAART, 10 had nevirapine replaced by efavirenz within 2-4 weeks for toxicity reasons. Of 21 subjects who initiated efavirenz-based HAART, 1 had efavirenz replaced by nevirapine. Thus, 89 subjects were on nevirapine- and 30 subjects on efavirenz-based HAART at the time of virologic failure and showed at least one resistance mutation. Their demographic, clinical (including NRTI backbone) and laboratory data at the time of genotyping are described in Table 1. The estimated length of time from HAART initiation to virologic failure was about 220 days, and the duration of failure before genotypic resistance testing was about 90 days; these two intervals were similar between the 2 groups. d4T and 3TC were more often used with nevirapine (P < 0.001 and P < 0.001), while ZDV, TDF and FTC were more often used with efavirenz (P = 0.006, P = 0.007 and P = 0.001), which supports the need for statistical adjustments with respect to the NRTI backbone used.

Pattern of resistance mutations

The frequency of NNRTI resistance mutations among the nevirapine- and efavirenz-based treatment groups is shown in Figure 1A and that of NRTI resistance mutations in Figure 1B.
In the NVP-based treatment group, 100% had virus with one or more NNRTI mutations: Y181C/I was present in 56% (18% as the sole mutation), G190A/S in 30% (4%) and K103N in 28% (18%). Among the efavirenz-based group, 93% had virus with one or more NNRTI mutations: K103N was present in 83% (32% as the sole mutation). One fourth of samples had Y181C/I or G190A mutations. The most prevalent NRTI mutations in both NNRTI groups were M184V/I (93% in nevirapine and 66% in efavirenz). Four percent (4 of 89) of the nevirapine-based treatment group and 32% of the efavirenz-based treatment group had virus with no NRTI resistance mutations. Of the nine K65R mutations observed, all were found in patients on NVP-based treatment, 6 among patients on TDF and 3 among those on d4T.

Number of NRTI and NNRTI resistance mutations

The number of NRTI and NNRTI resistance mutations per subject was not significantly different between the efavirenz and nevirapine study subjects. However, tenofovir, when used in the backbone (in comparison to d4T), was found to be associated with a lower occurrence of NRTI mutations when combined with efavirenz (OR = 0.58, 90%-CI = [0.15, ...]). Longer duration of failure was associated with more frequent occurrence of NRTI mutations in patients on nevirapine-based treatment, with about 5% additional risk of any NRTI resistance mutation per additional month of virologic failure (OR = 1.05, 90%-CI = [1.02, 1.08], PP[OR>1] = 99.9%). On average, this corresponds to one new NRTI resistance mutation every 20 additional months spent in failure. No strong evidence was found for an effect of viral load at genotyping. There was also no effect of time to virologic failure.

Model-based analysis, mutation by mutation

Viral load in the first sample after failure, failure duration and NRTI backbone were the most predictive variables for this analysis. These variables were selected by the statistical model-building procedure, which systematically favored random-effects instead of fixed-effects models, and were therefore included in the final model as mutation-specific variables. Time to failure was not selected by the model-building procedure and was hence excluded from the analysis. The analysis, adjusted for the effects of failure duration, viral load and NRTI backbone, is presented in Figure 2.

The NRTI backbone was found to influence the emergence of some resistance mutations. More specifically, TDF, when compared with d4T, was found to be strongly associated with greater risk of mutations Y115F and K65R (PP[OR>1] > 99%), and, with less clear evidence (PP[OR>1] > 92%), with greater risk of mutations Y181I, V179F, and A62V (Figure 2B). TDF was also associated with lower risk of mutation M184V (PP[OR<1] = 99%) and, with less significance, of mutation V75I (PP[OR<1] = 89%). When ZDV was compared to d4T (Figure 2C), no significant associations were seen, but in each of the 32 mutations investigated, ZDV posed a higher risk of mutations, with posterior probabilities up to 80% (Supporting Information S1). Longer duration of virologic failure was found to be significantly associated with higher risk of all TAM-2 mutations (D67N, K70R, K219Q/E, T215F) and the T215Y mutation, with ORs ranging from 1.25 to 1.87 per month increment.

Cluster analysis

The cluster analysis was performed to identify patterns of mutation occurrence based on the correlation structure of the data. The outcomes of the cluster analysis are displayed as dendrograms in Figure 3.
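For readers who want to reproduce this style of display, the sketch below (ours, with synthetic correlations and an illustrative subset of mutation labels, not the study data) builds a dendrogram from a (1 - correlation) distance using the nearest-neighbour (single) linkage described in the Methods; scipy and matplotlib are assumed to be available.

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.cluster.hierarchy import dendrogram, linkage
    from scipy.spatial.distance import squareform

    # Synthetic stand-in for the model-derived, covariate-adjusted
    # correlation matrix between resistance mutations.
    labels = ["M41L", "D67N", "K70R", "M184V", "T215F", "K219Q"]
    rng = np.random.default_rng(0)
    A = rng.uniform(-0.2, 0.9, size=(len(labels), len(labels)))
    corr = np.clip((A + A.T) / 2, -1.0, 1.0)
    np.fill_diagonal(corr, 1.0)

    # Distance = 1 - correlation; 'single' = nearest-neighbour linkage.
    dist = squareform(1.0 - corr, checks=False)
    Z = linkage(dist, method="single")
    dendrogram(Z, labels=labels)
    plt.show()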
Overall, both NNRTI and NRTI resistance mutations appear to be substantially less inter-correlated for efavirenz-based treatment as compared with nevirapine-based treatment. In both the efavirenz and nevirapine groups, inter-correlations were weaker for NNRTI (Figure 3A) than for NRTI mutations (Figure 3B). The cluster D67N-K70R-K219Q-M184V, the first three of which are in the TAM-2 pathway, contained the NRTI mutations most likely to occur together in both treatment groups (Figure 3B). With NVP, the next two mutations, K219E and T215F, complete the TAM-2 cluster, whereas with EFV, the three TAM-1 mutations (M41L, L210W, and T215Y) appear next, along with K219E from the TAM-2 pathway.

Predicted drug resistance patterns during ARV failure

Based on the measured sequences, the best-predicted phenotype and a resistance probability score for each drug that might be used in subsequent treatment were derived using the Geno2Pheno system [20]. The NRTI backbone (observed at the time of failure) was the only factor, besides NNRTI choice, influencing the resistance patterns observed on NVP- versus EFV-based regimens, as evidenced by the model-building process. Figure 4 displays boxplots showing the WinBUGS-generated posterior distributions of resistance probabilities for each drug, comparing nevirapine-based and efavirenz-based HAART, with NRTI backbones containing d4T, ZDV or TDF at the time of failure.

Among those in failure while receiving a d4T-containing backbone (Figure 4A), patients' viruses were predicted to be resistant to abacavir (ABC) and 3TC, while they remained susceptible to d4T, ZDV and TDF, whether they had been on NVP or EFV. Likewise, they were quite uniformly resistant to both NNRTIs (Supporting Information S2). With a ZDV-based backbone (Figure 4B), the program predicted marginally lower resistance to d4T, ZDV and TDF in those receiving NVP than in those on EFV-based regimens. Interestingly, some susceptibility to EFV persisted in those who had failed on NVP (Supporting Information S3). With TDF, the situation showed sharper contrasts (Figure 4C). Those failing on EFV retained full susceptibility to TDF (zero resistance probability), whereas the nevirapine-based HAART group had a 55% chance of being resistant (Supporting Information S4). Likewise, some susceptibility was predicted for ABC and 3TC in the EFV-based group, but little was seen in those failing on NVP. In contrast, moderate susceptibility to EFV and, to a lesser extent, NVP was retained by those who failed while on NVP, whereas there was essentially complete resistance to both drugs in those on EFV-based regimens.

Discussion

This study, using standard and model-based Bayesian analytic methods, presents the first detailed comparison of the ARV resistance patterns found during virologic failure for NVP- vs. EFV-based combination regimens in a group of subjects infected with CRF01_AE strains of HIV-1. Our findings emphasize differences from, and similarities to, the patterns seen during failure in patients infected with other subtypes. The clearest differences found between treatment groups were in specific resistance mutations or clusters rather than in overall total numbers of mutations.
Our analysis offers strong evidence that, in contrast to the mutation patterns published for other subtypes [12,13,14,15,21], viruses in individuals infected with CRF01_AE and experiencing virologic failure while receiving NVP-based HAART favor TAM-2 resistance pathways (K70R, D67N, T215F, K219Q/E) rather than TAM-1 (M41L, L210W, T215Y), whereas those receiving regimens containing EFV appear to select both TAM-2 and TAM-1 pathways (Figures 1B, 2A and 3B). As suggested by previous studies of mainly subtype B viruses [10,22,23,24,25,26], and as shown in both the raw percentages (Figures 1A and 1B) and the adjusted Bayesian analysis (Figure 2A), mutations 101E, 181C and 190A were preferentially selected by nevirapine, while 103N, 106M, and 225H were preferentially selected by efavirenz. Wallis, in her study of subtype C, found similar differential distributions of 101E, 181C, 190A and 106M but found almost equal proportions of 103N and 225H mutations selected by EFV and NVP [11]. Reuman et al. also found that K103N, V106M and P225H were among the 16 NNRTI mutations preferentially selected by EFV, and K101E, Y181C and G190A among the 12 mutations preferentially selected by NVP. However, in their analysis of covariation of NNRTI resistance mutations, the K103N-P225H pair, as well as the Y181C-G190A and K101E-Y181C pairs, significantly covaried in sequences from individuals experiencing EFV, while the pair V108I-Y181C covaried in the NVP group and the pair K101E-G190E covaried in both groups [10]. In our study, the pair 101E-190A of NNRTI mutations was closely correlated only in the nevirapine group.

Less expected was our finding that the major NRTI mutations M184V and K65R were preferentially selected in the presence of nevirapine (Figures 1B and 2A), while TAM-1 mutations were almost never selected (only in 2% of patients treated with nevirapine). In contrast, the TAM-1 mutations 215Y and 210W were preferentially selected in the presence of efavirenz.

The NRTI backbone was shown to influence the resistance patterns (see Figures 2B and 2C). Our model-based analysis suggested that tenofovir, in addition to selecting K65R, also strongly selected Y115F (Figure 2B), a mutation rarely observed in other subtypes, and then only with the use of a triple-NRTI regimen [27,28,29]. In contrast, we confirmed the observation that K65R has an antagonistic effect with TAMs [30], since only 1 of 9 patients in our study had both 65R and one TAM. We also found that tenofovir use was associated with a significantly lower rate of the M184V mutation than d4T, as illustrated in Figure 2B, while this was not observed in the 903 study, which evaluated the efficacy and safety of tenofovir vs. stavudine when combined with 3TC and efavirenz in antiretroviral-naive patients [31]. The trend (seen in 26 of 32 mutations analyzed) towards higher rates of both NRTI and NNRTI mutations observed with zidovudine in comparison to d4T (Figure 2C) is consistent with observations by Wallis in South Africa [11] and Bocket in France [12].

It is likely that the accumulation of mutations resulting in drug resistance follows specific pathways rather than random sequences. Duration of virologic failure may be associated with the order of mutation occurrence, and therefore with the timing of some mutations. Our analysis, shown in Figure 2D, confirmed that the M184I mutation occurs before M184V, suggested that K103N is an early mutation, and showed clearly that all the TAM-2 pathway mutations, as well as 215Y, occur as time spent on virologic failure increases.
Overall, high viral load at genotyping was not seen to favor mutation occurrence after failure. Conversely, the clear negative association between viral load at genotyping and the occurrence of mutations M184V and V108I can be interpreted as an effect of these mutations on viral fitness. Impairment of HIV fitness in viruses containing the 184V mutation is well known [32]. There is one report of lower viral loads with the 108I mutation, and this analysis confirms that finding [33]. The actual fitness of virus with this mutation has not been investigated.

In our analysis of resistance and susceptibility to a range of available drugs, failure on efavirenz-based HAART seemed to impair any further use of efavirenz and nevirapine, while this was not systematically the case with nevirapine (Figures 4A-C). The Geno2Pheno software analysis predicted that resistant virus selected by nevirapine may still be susceptible to efavirenz and even, although to a lesser extent, to nevirapine. EFV sensitivity is likely due to the Y181C mutation, as reported by other groups [34,35]. The persistent nevirapine susceptibility suggests that nevirapine might be successfully recycled (perhaps after the elapse of some time) in resource-constrained environments. Overall it appears that, in terms of salvage treatment options, zidovudine should preferably be associated with nevirapine rather than with efavirenz, whereas tenofovir is better associated with efavirenz, consistent with the current DHHS guidelines [36]. The synergistic association of tenofovir and efavirenz is also supported by the fact that efavirenz was shown to protect against K65R and Y115F.

One obvious limitation of this analysis is the cross-sectional nature of the data and methods used. Only one genotype assessment was used per patient, so that the dynamics and timing of resistance mutations could not be investigated. The clusters identified could therefore not be imputed to some time ordering. Although the genotype data were of good and consistent quality, as they originated from a single quality-controlled laboratory, some missing information could nevertheless alter the precision and accuracy of some estimates. In addition, the Geno2Pheno predictions were analyzed as raw data, without accounting for possible small numbers and any uncertainty in the model, since such information was not available on the Max-Planck-Institute Informatik platform. Finally, it should be re-emphasized that our comparative assessment was made on subjects who had failed their first-line treatment. Such a study design is less robust than a prospective, randomized design would have been, but focuses only on events at virologic failure. To complete and deepen this assessment, a similar model-based assessment could be developed on the population at the start of treatment, integrating failure rate and timing of failure. Such an analysis would be strengthened by the use of longitudinal data and methods, which could provide a clearer view of resistance mutation pathways over time to support optimal monitoring and medical decisions throughout treatment. Nevertheless, our model-based evaluation allowed the comparative assessment of resistance mutation patterns between the nevirapine-based and efavirenz-based treatment groups, disentangling the concurrent effects of NRTI backbone and other factors and accounting for correlations between mutations.
The Bayesian tools enabled the statistical inference of such models and provided comprehensive outputs and measures of uncertainty attached to the results. The analysis not only confirmed well-established patterns already observed in other studies with other subtypes [8,10,13,22,23,24,25,26,37], but also pointed to less-known or new features to be considered for optimal treatment and future research.

At therapy initiation, study subjects were antiretroviral-naïve except for prophylaxis of mother-to-child transmission of HIV. HIV RNA levels and CD4 cell counts were measured at treatment initiation, at three months, at six months and every six months thereafter. Virologic failure was defined as an HIV RNA concentration greater than 500 copies/mL after 6 months of HAART. For Table 1, the date of virologic failure was defined as the midpoint between the last viral load <500 copies/mL and the first viral load >500 copies/mL. In the model-based analysis, the duration of virologic failure was estimated taking into account the dates of the last viral load <500 copies/mL and the first viral load >500 copies/mL, as well as the frequency of blood sampling.

Measurement of plasma HIV-1 RNA

Plasma HIV-1 RNA levels were quantified using the standard (limit of detection, 400 copies/mL) or the ultrasensitive (limit of detection, 50 copies/mL) protocol of the Cobas Amplicor HIV-1 Monitor RNA test, version 1.5 (Roche Molecular Systems Inc., Branchburg, USA).

Genotypic resistance testing

HIV-1 resistance testing for the RT gene was performed using the ViroSeq HIV-1 Genotyping system (Celera Diagnostics, Alameda, USA) according to the manufacturer's instructions, or using the consensus technique of the Agence Nationale de Recherches sur le SIDA (AC11 Resistance Study Group PCR and Sequencing Procedures, http://www.hivfrenchresistance.org/ANRS-procedures.pdf). The first round of nested PCR was performed on extracted RNA, with the Titan One Tube kit (Roche Diagnostics) and the set of MJ3 and MJ4 primers. The second-round PCR used the set of A35 and NEI135 primers. PCR products were purified using the QIAquick PCR Purification kit (QIAGEN). In both techniques, sequencing products were then loaded onto the automated genetic analyzer 3100 (Applied Biosystems, Foster City, CA, USA). Sequences were aligned using the ViroSeq or SeqScape software (Applied Biosystems). RT mutations were identified from the International AIDS Society-USA drug resistance mutations list.

Descriptive statistics and demographics

Demographic, laboratory, and clinical characteristics at the time of genotyping, such as CD4 count, HIV RNA, CDC stage, and RT mutation frequencies, were compared between the NVP- and EFV-based HAART groups. Categorical variables were compared using Chi-square or Fisher's exact tests; reported P-values were two-tailed. Continuous variables were compared using the Wilcoxon rank-sum test.

Comparison of resistance mutation counts at failure

The total number of NRTI or NNRTI resistance mutations was compared between the nevirapine and efavirenz groups. To investigate possible effects on the NRTI and NNRTI mutation counts, a bivariate binomial-logistic model was fitted to the numbers of NRTI and NNRTI mutations, accounting for their correlation using a patient-specific random effect. The logistic regression component permitted an adjustment for the treatment used (nevirapine vs. efavirenz and NRTI backbone), failure duration, time to failure, and the viral load at genotyping.
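The following is a much-simplified sketch of such a bivariate random-effects model, written in PyMC rather than the WinBUGS software actually used, with synthetic data and hypothetical panel sizes. It is meant only to illustrate the structure (two binomial outcomes coupled by a shared patient-level random effect), not to reproduce the paper's model.

    import numpy as np
    import pymc as pm

    rng = np.random.default_rng(1)

    # Synthetic stand-ins: covariates would be NNRTI drug, NRTI backbone,
    # failure duration, time to failure, and viral load at genotyping.
    n_pat, n_cov = 119, 5
    X = rng.normal(size=(n_pat, n_cov))
    panel = np.array([15, 17])                # assumed NRTI / NNRTI panel sizes
    Y = rng.integers(0, 5, size=(n_pat, 2))   # mutation counts per patient

    with pm.Model():
        # Shared patient-specific random effect: this couples the NRTI and
        # NNRTI counts into a bivariate model.
        sigma_u = pm.HalfNormal("sigma_u", 1.0)
        u = pm.Normal("u", 0.0, sigma_u, shape=n_pat)

        a = pm.Normal("a", 0.0, 2.0, shape=2)                  # outcome intercepts
        beta = pm.Normal("beta", 0.0, 1.0, shape=(n_cov, 2))   # covariate effects

        logit_p = a + pm.math.dot(X, beta) + u[:, None]
        pm.Binomial("y", n=panel, p=pm.math.sigmoid(logit_p), observed=Y)

        idata = pm.sample(1000, tune=1000, chains=2, random_seed=1)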
Mutation-by-mutation analysis
A multivariate logistic regression model was fitted to any NRTI or NNRTI mutations observed. Covariates considered were NNRTI treatment (nevirapine or efavirenz), backbone drug (TDF vs. ZDV vs. d4T), failure duration, viral load at failure, and time to failure. These covariates were assumed to have either mutation-specific effects (random or fixed) or constant effects across mutations. These choices were based on the performance of the fitting algorithms (robust convergence) and on statistical model selection criteria (the Deviance Information Criterion [38]), favoring models which better reproduce the observed data with the most parsimonious parameterization. Odds ratios (ORs) were derived for each of these factors.

Analysis of correlations between resistance mutations (cluster analysis)
We performed cluster analysis in order to analyze the multivariate correlation patterns between the resistance mutations observed. The model described in the previous paragraph allowed the derivation of a correlation matrix of all NRTI and NNRTI mutations, adjusted for backbone, viral load, and duration-of-failure effects. Based on this adjustment, a distance, defined as (1 − correlation), was used to describe the clustering of mutations. The nearest-neighbor algorithm was used as the linkage method to determine in what order clusters join with each other. Results are displayed for both NRTI and NNRTI mutations using a correlation tree or dendrogram, which lists all mutations and indicates at what level of similarity (or correlation) any two clusters joined together. The main features of the dendrograms for nevirapine-based and efavirenz-based treatments were then described and compared.
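A minimal R sketch of the clustering step just described, assuming a patients-by-mutations binary matrix; here the raw pairwise correlation stands in for the model-adjusted correlation matrix used in the actual analysis, and the mutation data are illustrative only:

```r
set.seed(1)
mut <- matrix(rbinom(200 * 6, 1, 0.3), nrow = 200,
              dimnames = list(NULL, c("M184V", "K65R", "Y181C",
                                      "K103N", "V108I", "G190A")))

d    <- as.dist(1 - cor(mut))          # distance = 1 - correlation
tree <- hclust(d, method = "single")   # nearest-neighbor linkage
plot(tree, main = "Mutation dendrogram (illustrative)")
```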
Phenotype inference
The nucleotide sequences of the region coding for the reverse transcriptase were submitted to the web-based user interface of the Max-Planck-Institute Informatik Geno2Pheno website [20]. Based on the alignment of the uploaded genotype sequence with the HXB2 reference and on machine learning approaches, the Geno2Pheno system derives the best-predicted phenotype and a resistance probability score for a list of drugs. Probability scores were made available for each viral sequence for the following antiretroviral drugs: the NRTIs ZDV, ddI, d4T, 3TC, emtricitabine (FTC), ABC, and TDF; and the NNRTIs NVP and EFV.

Comparative assessment of drug resistance
For the purpose of modeling, the distributions of Geno2Pheno-predicted resistance probabilities were dichotomized. The resulting binary data were analyzed by a Bernoulli/logistic regression to compare nevirapine- versus efavirenz-based HAART, with adjustment for NRTI backbone at failure and with all significant or relevant covariates included in the model-building phase. Once the model was fitted to the data, the predictive drug resistance distributions were compared between efavirenz and nevirapine, and between the d4T-, ZDV- and TDF-containing backbones.

Bayesian statistical inference and model-building
Bayesian inference was used to fit the statistical models described above [39]. Briefly, in this framework, prior information about the quantities of interest was combined with the observed data to derive a posterior distribution on these quantities, using Markov chain Monte Carlo algorithms. In all analyses, only noninformative priors were used. Bayesian inference is appropriate for mixed-effect and/or nonlinear models like the ones used here, especially in the case of small sample sizes [39,40]. The 90% credibility intervals (CI) attached to point estimates are presented and can be directly interpreted as the range within which there is a 90% chance that the quantity of interest lies. Point estimates were reported as posterior means. For odds ratio estimates, the posterior probability of being greater than 1 (PP[OR>1]) or lower than 1 (PP[OR<1]) was reported in addition to credibility intervals. To visualize posterior distributions, posterior medians, inter-quartile ranges, and 90%-credibility intervals were presented as boxplots using ad hoc routines coded in Matlab Release 14. The model-building phase was consistently implemented as follows: first, the model was fitted to the data without any covariate; then, the resulting estimates were used as initial values. The WinBUGS software (version 1.4, [41]) was used both for the model-building assessment and for the final models.

Supporting Information
Supporting Information S1. Posterior probabilities [OR>1] for each mutation. Posterior probabilities are provided for each NNRTI and NRTI resistance mutation. (DOC)
Supporting Information S2. Resistance probabilities with a d4T backbone. Probabilities of virus being resistant to 3TC, ABC, EFV, NVP, TDF, d4T and ddI (95% confidence interval) among patients failing a d4T-containing backbone in combination with NVP or EFV. (DOC)
Supporting Information S3. Resistance probabilities with a ZDV backbone. Probabilities of virus being resistant to 3TC, ABC, EFV, NVP, TDF, d4T and ddI (95% confidence interval) among patients failing a ZDV-containing backbone in combination with NVP or EFV. (DOC)
Supporting Information S4. Resistance probabilities with a TDF backbone. Probabilities of virus being resistant to 3TC, ABC, EFV, NVP, TDF, d4T and ddI (95% confidence interval) among patients failing a TDF-containing backbone in combination with NVP or EFV. (DOC)
Orographic mechanical and surface thermal effects of the Tibetan–Iranian Plateau on extratropical intraseasonal waves in boreal summer: numerical experiments

The intensity and location of boreal summer extratropical intraseasonal oscillations along the subtropical westerly jet (EISO-SJ) are crucial in triggering and distributing extreme events over Eurasia. Based on numerical experiments, this study distinguishes the orographic mechanical and surface thermal forcing of the Tibetan–Iranian Plateau (i.e. TIP-MF and TIP-TF) on EISO-SJ. The TIP-MF primarily modulates the amplitude of EISO-SJ, which strengthens over the upstream and weakens over the downstream. Comparatively, the TIP-TF not only reduces/increases the intensity of EISO-SJ over the TIP upstream/downstream, but also significantly migrates the track of EISO-SJ northward. Further analysis demonstrates that the changes in the westerly jet, eddy energy propagation, and energy conversion are consistent with the track and amplitude changes of EISO-SJ. This study indicates that variations in TIP surface sensible heating on interannual and global-change time scales, as well as the terrain uplift of the TIP in the paleoclimate, can influence mid-latitude subseasonal variation.

Introduction
Atmospheric intraseasonal oscillations prevail over extratropical Eurasia in boreal summer (Yang et al 2015, Hannachi et al 2017, Stan et al 2017, Zhu et al 2023), primarily featuring a zonal quasi-biweekly wave train with eastward propagation along the subtropical westerly jet (SJ) (Fujinami and Yasunari 2004, Yang et al 2014). Numerous studies have identified the importance of extratropical intraseasonal oscillations along the SJ (EISO-SJ) in causing regional subseasonal variations and frequently triggering extreme meteorological events, such as heatwaves (Schubert et al 2011, Gao et al 2018) and flooding (Dugam et al 2009, Li et al 2021). Meanwhile, EISO-SJ has been shown to significantly affect local subseasonal prediction skill (Qi and Yang 2019, Liu et al 2020, Yan et al 2021, 2022) and even to provide a window of opportunity for East Asian subseasonal prediction (Zhu et al 2023). Therefore, understanding the factors affecting the variations and features of EISO-SJ is crucial for the subseasonal community.

The Tibetan Plateau has remarkable impacts on atmospheric circulation through thermal (Yeh 1950, Wu and Liu 2000, 2016, Wu et al 2012) and mechanical effects (Boos and Kuang 2010, Park et al 2012). In terms of the thermal forcing of the Tibetan Plateau, numerical studies found that the warming Tibetan Plateau causes the increasing trend of summer frontal rainfall in the East Asian region by exciting two Rossby wave trains over the upper-level westerly jet stream and the low-level southwesterly monsoon (Wang et al 2008). Additionally, the thermal forcing of the Tibetan Plateau modulates the decadal (Duan et al 2011, Wang and Li 2019) and interannual variations (Ueda and Yasunari 1998, Bansod et al 2003, Hsu and Liu 2003, Ullah et al 2021) of the atmospheric circulation and Eurasian temperature/precipitation. On the intraseasonal timescale, Zhu and Guan (1997) found, using numerical experiments, that anomalous surface sensible heat fluxes of the Tibetan Plateau have a significant effect on the intensity and propagation speed of extratropical atmospheric intraseasonal oscillations; however, that finding was based on a single case analysis using a simple two-layer atmospheric circulation model.
Liu et al (2007) found that the thermal forcing of the Tibetan Plateau produces the atmospheric quasi-biweekly oscillation over the Tibetan Plateau using a global primitive-equation theoretical model, although this result, too, reflected an individual case. As for the mechanical forcing of the Tibetan Plateau, previous studies mainly focused on its effects on the formation of the Asian summer monsoon systems (Boos and Kuang 2010, Cane 2010, Park et al 2012). Yang et al (2019, 2020) showed that the mechanical forcing of the Tibetan Plateau facilitates the northward propagation of tropical boreal summer intraseasonal oscillations. The Iranian Plateau, although smaller in area and altitude than the Tibetan Plateau, also has significant impacts on atmospheric circulation through thermal (Zhang et al 2002) and mechanical effects (Zarrin et al 2011). Liu et al (2017) have emphasized that the Tibetan Plateau and Iranian Plateau are not only geographically adjacent but also have mutual influences and feedbacks. Therefore, numerous studies treat the Tibetan–Iranian Plateau (TIP) as a whole in their research (Wu et al 2012, Zhou et al 2016, He et al 2019). The TIP is located along the pathway of EISO-SJ, and its thermal and mechanical effects on EISO-SJ have not yet been clarified.

Recently, the Global Monsoons Model Intercomparison Project (GMMIP), one of the endorsed MIPs in the Coupled Model Intercomparison Project Phase 6 (CMIP6), was launched, aiming to understand the behavior of monsoon circulations as well as the TIP's thermal and mechanical effects on monsoon variations (Zhou et al 2016, He et al 2019). Within this project, the Chinese Academy of Sciences (CAS) Flexible Global Ocean-Atmosphere-Land System model (FGOALS-f3-L) (He et al 2019, 2020) and the First Institute of Oceanography-Earth System Model 2.0 (Song et al 2020) completed a series of experiments with and without the TIP's terrain height and surface sensible heating, but only FGOALS-f3-L provides daily circulation output. Therefore, this study uses it to investigate the thermal and mechanical effects of the TIP on EISO-SJ.

This paper is organized as follows: section 2 presents the data, numerical experiments, and methods. Section 3 shows the simulated performance of EISO-SJ in CAS FGOALS-f3-L, and the orographic mechanical and surface thermal effects of the TIP on EISO-SJ are displayed in section 4. Section 5 discusses the underlying physical mechanisms. Conclusions are provided in section 6.

Data, numerical experiment and method
Observed daily atmospheric circulation fields are retrieved from ERA-Interim, provided by the European Centre for Medium-Range Weather Forecasts (Dee et al 2011), with a 1.5° × 1.5° horizontal resolution. The historical record covers the period between 1980 and 2014. The global atmospheric general circulation model used in this study is the CAS FGOALS-f3-L, which was developed at the Institute of Atmospheric Physics/State Key Laboratory of Numerical Modeling for Atmospheric Sciences and Geophysical Fluid Dynamics. Three sets of experiments are carried out, named the AMIP, GMMIP amip-TIP (hereafter GMMIP-TIPnoTH) and GMMIP amip-TIPnosh (hereafter GMMIP-TIPnoSH), respectively. Their detailed configurations are introduced in the description paper of the AMIP and GMMIP datasets (He et al 2019, 2020), and a brief summary is given in table 1.
In general, the GMMIP-TIPnoTH and GMMIP-TIPnoSH have configurations similar to the AMIP, except that the TIP's terrain height is removed in the former experiment by setting the topography above 500 m to 500 m over the TIP region, and the TIP's surface sensible heating is removed in the latter experiment by cutting off the sensible heating over the same TIP region as in GMMIP-TIPnoTH. Therefore, the AMIP can be treated as the control run, and the GMMIPs as the sensitivity runs. Unless otherwise specified, the TIP's mechanical forcing (TIP-MF) mentioned in this study refers to the orographic blocking of the TIP, and the TIP's thermal forcing (TIP-TF) refers to the surface sensible heating of the TIP. The selected period of the experimental data is consistent with that of ERA-Interim, and the simulated data have a raw horizontal resolution of C96 (about 1.0° × 1.0°). To be more comparable with ERA-Interim, a bilinear interpolation scheme is used to interpolate the raw data to the regular 1.5° longitude-latitude grid.

The quasi-biweekly component of a particular variable is obtained in two steps: (I) subtracting the climatological mean and the first three harmonics, and (II) applying a Butterworth bandpass (8-25 d in this study) filter. Empirical orthogonal function (EOF) analysis is used to extract EISO-SJ. In detail, EISO-SJ is retrieved by regressing the quasi-biweekly 250 hPa meridional wind (V250) onto the first principal component, obtained from an EOF analysis of the boreal summer quasi-biweekly V250 over the SJ region (25-55° N, 15° W-130° E), following Zhu et al (2023). Note that compared with Zhu et al (2023), only EOF1 is analyzed and discussed in this study, because EOF2 exhibits sensitivity changes similar to those of EOF1 in response to the numerical experiments (figure not shown). A t-test is used to determine whether the regression coefficients and the differences between the AMIP and GMMIPs are significant. Since filtered data are often not independent, the effective degrees of freedom for the significance tests are estimated following Bretherton (1999).

Wave activity flux is used to investigate the energy propagation and dispersion (Takaya and Nakamura 2001); its horizontal components can be computed as

$$\mathbf{W} = \frac{1}{2|\overline{\mathbf{U}}|}\begin{pmatrix} \bar{u}\left(\psi_x'^{\,2} - \psi'\psi_{xx}'\right) + \bar{v}\left(\psi_x'\psi_y' - \psi'\psi_{xy}'\right) \\ \bar{u}\left(\psi_x'\psi_y' - \psi'\psi_{xy}'\right) + \bar{v}\left(\psi_y'^{\,2} - \psi'\psi_{yy}'\right) \end{pmatrix},$$

where W represents the horizontal wave-activity flux, U is the wind velocity, u and v are the zonal and meridional winds respectively, ψ denotes the stream function, subscripts denote partial derivatives, and a bar and a prime denote the summer basic state and the quasi-biweekly component. Barotropic energy conversion (CK) and baroclinic energy conversion (CP) are calculated to probe the underlying physical mechanisms, with formulas as follows (Xu et al 2020):

$$\mathrm{CK} = \frac{\overline{v'^2} - \overline{u'^2}}{2}\left(\frac{\partial \bar{u}}{\partial x} - \frac{\partial \bar{v}}{\partial y}\right) - \overline{u'v'}\left(\frac{\partial \bar{u}}{\partial y} + \frac{\partial \bar{v}}{\partial x}\right),$$

$$\mathrm{CP} = \frac{f}{S}\left(\overline{u'T'}\,\frac{\partial \bar{v}}{\partial p} - \overline{v'T'}\,\frac{\partial \bar{u}}{\partial p}\right),$$

where f is the Coriolis parameter, p is the pressure, and S is the static stability, defined as

$$S = \frac{R\bar{T}}{C_p\, p} - \frac{\partial \bar{T}}{\partial p},$$

in which C_p is the specific heat and R is the gas constant of dry air. An overbar denotes the summer mean state, and a prime the quasi-biweekly component. A positive value of CK/CP represents energy conversion from the mean flow to the quasi-biweekly perturbation by barotropic/baroclinic processes (Kosaka and Nakamura 2006, 2010).
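As an illustration of step (II) above, a minimal R sketch of the quasi-biweekly band-pass filtering using the 'signal' package; the daily series, the fourth-order filter, and the zero-phase application are assumptions of the sketch, not the authors' exact implementation:

```r
library(signal)

v250_anom <- rnorm(3650)                 # stand-in daily V250 anomalies (10 summers)
nyq <- 0.5                               # Nyquist frequency of daily data (cycles/day)
bf  <- butter(4, c(1/25, 1/8) / nyq, type = "pass")  # retain 8-25 d periods
v250_qbw <- filtfilt(bf, v250_anom)      # zero-phase quasi-biweekly component
```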
Simulated EISO-SJ in CAS FGOALS-f3-L
Before examining the thermal and mechanical effects of the TIP on EISO-SJ, it is necessary to assess the ability of the AMIP experiment in CAS FGOALS-f3-L to reproduce the characteristics of the SJ and EISO-SJ. Figure 1(a) shows the observed zonal wind in the upper troposphere (i.e. U250), which exhibits that the main body of the Eurasian SJ is located along the latitudes of 30-50° N, with the SJ axis at about 35-40° N and two Eurasian SJ cores lying near the Caspian Sea and to the north of the Tibetan Plateau. The primary features of the SJ are well simulated by the AMIP, except that the SJ shifts slightly northward and the intensity of the SJ core around the Caspian Sea is slightly underestimated (figure 1(b)). Figures 1(c)-(f) display the features of EISO-SJ in observation and in the AMIP experiment, respectively. Compared with the observation, the AMIP-simulated EISO-SJ reproduces well the locations of the positive and negative anomalous centers along the wave train, i.e. the centers of the anomalous northerlies are over the central North Atlantic, the central Mediterranean, the Caspian Sea-Lake Balkhash region, and East Asia, while the southerly anomalies are over the south of Great Britain, the Black Sea-Caspian Sea region, and the northeast of the Tibetan Plateau (figures 1(c) and (d)). Moreover, the phase speed (~3.3 m s⁻¹), group speed (25.1 m s⁻¹), and wavelength (~4400 km) of EISO-SJ shown in observation are well reproduced in the AMIP experiment (figures 1(e) and (f)). Note that for the AMIP-simulated EISO-SJ, the European part is weaker than observed, while the Asian part is stronger, which may be related to the biases in the location and intensity of the simulated SJ (Lu et al 2002). Overall, the CAS FGOALS-f3-L shows reasonable performance in simulating the observed climatological features of the SJ and EISO-SJ, making it a reliable tool to investigate the thermal and mechanical effects of the TIP on EISO-SJ.

Distinguishing the orographic mechanical and surface thermal effects of TIP on EISO-SJ
Figures 2(a)-(c) show the features of EISO-SJ in the AMIP, GMMIP-TIPnoTH and GMMIP-TIPnoSH, respectively. When the TIP-MF is removed, the main body of EISO-SJ persists and its location remains almost unchanged compared with the AMIP-simulated one, with the pathway axes of EISO-SJ both lying around the latitude of 40° N. However, the relative intensities of the individual anomalous centers change (figure 2(a) vs. (b)): the anomalous amplitudes to the west of 70° E are weakened, while the anomalous centers to the east of 70° E are enhanced. In detail, the anomalous centers over the south of Great Britain, the central Mediterranean, and the Black Sea-Caspian Sea weaken, and the anomalous center over the central North Atlantic even disappears. Meanwhile, the anomalous centers over the Caspian Sea-Lake Balkhash, the northeast of the Tibetan Plateau, and East Asia strengthen, and a new significant anomaly is generated over the Northwest Pacific (figure 2(b)). Here, we simply divide the upstream and downstream regions of the TIP using 70° E as a boundary, i.e. the domain of 25-55° N, 40° W-70° E is defined as the TIP upstream, while the domain of 25-55° N, 70-180° E is defined as the TIP downstream. In order to quantify the differences in EISO-SJ over the TIP upstream and downstream between the results with and without the TIP-MF, we calculated the strength of EISO-SJ over the TIP upstream/downstream as the TIP-upstream-averaged/TIP-downstream-averaged variance of quasi-biweekly V250, named EISO-SJ-U/EISO-SJ-D. As a result, EISO-SJ-U decreases by 18.6% (|20.1 − 24.7|/24.7 ≈ 18.6%) while EISO-SJ-D increases by 28.7% (|34.1 − 26.5|/26.5 ≈ 28.7%) with the removal of the TIP-MF (figure 2(d)).
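A minimal R sketch of this intensity metric, under the assumption that EISO-SJ-U/EISO-SJ-D is the box-averaged temporal variance of quasi-biweekly V250; the array and quoted values are illustrative:

```r
# v250_qbw: lon x lat x day array already subset to the upstream (or downstream) box
eiso_strength <- function(v250_qbw) mean(apply(v250_qbw, c(1, 2), var))

# Relative change between a sensitivity run and the control, as quoted above
rel_change <- function(sens, ctrl) abs(sens - ctrl) / ctrl
rel_change(20.1, 24.7)   # ~0.186, i.e. the 18.6% decrease in EISO-SJ-U
```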
In contrast, EISO-SJ exhibits more significant changes with the removal of the TIP-TF, as shown in figure 2(c). Compared with the AMIP-simulated EISO-SJ, the pathway axis of EISO-SJ simulated in the GMMIP-TIPnoSH experiment is located around the latitudes of 30-35° N, i.e. it shifts southward by about 10° of latitude over the Eurasian continent (figure 2(a) vs. (c)). Meanwhile, the anomalous centers over the TIP upstream strengthen, while the anomalous centers over the TIP downstream weaken: EISO-SJ-U is increased by 28.3% (|31.7 − 24.7|/24.7 ≈ 28.3%) while EISO-SJ-D is decreased by 9.4% (|24.0 − 26.5|/26.5 ≈ 9.4%) when the TIP-TF is removed (figure 2(e)).

The comparison between the AMIP and the GMMIPs demonstrates that both the TIP-MF and the TIP-TF can significantly modulate EISO-SJ, but with different effects. The TIP-MF strengthens the EISO-SJ amplitudes over the TIP upstream but weakens its TIP-downstream intensities, whereas the TIP-TF exhibits the opposite impacts on the amplitudes over the TIP upstream and downstream along EISO-SJ. Meanwhile, the TIP-TF forces the pathway axis of EISO-SJ to migrate significantly northward over the Eurasian continent.

Change of SJ and causes
The SJ is the dominant atmospheric waveguide whose location traps the propagation track of transient waves (Branstator 2002, Wirth et al 2018). Therefore, we first explore the changes in the SJ's location. According to the numerical experiments, the SJ's location remains almost unchanged with the removal of the TIP-MF (figure 1(b) vs. 3(c)). In contrast, the SJ's location evidently shifts southward after removing the TIP-TF (figure 1(b) vs. 3(d)). To quantitatively depict the changes in the SJ's location, the westerly jet axis index is calculated, defined as the latitude of maximum U250 in the meridional direction between 20° N and 55° N (Xiao and Zhang 2013). We restrict the latitudes to between 20° N and 55° N to exclude potential identification of the polar front jet at higher latitudes. According to this definition, the zonally averaged (15° W-130° E) westerly jet axis is 40.2° N in the AMIP experiment, 40.9° N in the GMMIP-TIPnoTH experiment, and 32.8° N in the GMMIP-TIPnoSH experiment. The change in the SJ's location is highly consistent with the change in the path of EISO-SJ.
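A minimal R sketch of the westerly jet axis index defined above (a toy U250 field with a purely illustrative Gaussian jet profile):

```r
lat <- seq(20, 55, by = 1.5)
lon <- seq(-15, 130, by = 1.5)
u250 <- outer(lon, lat, function(x, y) 30 * exp(-(y - 40)^2 / 50))  # toy jet near 40 N

axis_lat <- lat[apply(u250, 1, which.max)]  # jet latitude at each longitude
jet_axis_index <- mean(axis_lat)            # zonally averaged (15 W-130 E) axis
```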
To further explore the reasons for the changes in the SJ's location, figure 3 shows the changes in the zonally averaged (30-100° E) temperature with pressure and in the upper-tropospheric (250 hPa) geopotential height (GHT250) between the GMMIPs and the AMIP. On the one hand, when the TIP-MF is removed but the TIP-TF remains, the temperature around the TIP clearly increases (figure 3(a)), indicating that the meridional thermal gradient is evidently decreased in the subtropics. The reduced meridional temperature gradient causes a northward shift of the SJ (Sha et al 2020). On the other hand, the lack of the TIP's mechanical blocking means the disappearance of both the topographic bifurcation of the SJ (Ding 1994) and the increased pressure on the windward slope of the TIP and over the TP (Wu and Liu 2016), corresponding to the anomalous low pressure over the TIP upstream and the TIP area (figure 3(c)). These two aspects work in opposite ways, so the SJ's location remains at similar latitudes. In contrast, when the TIP-TF is removed but the TIP-MF remains, the temperature around the TIP clearly decreases (figure 3(b)), causing an increased meridional thermal gradient from the tropics to the subtropics, which forces the SJ to move southward (Nan et al 2021). Meanwhile, the removal of the TIP-TF means the disappearance of the sensible-heat-driven air pump, resulting in anomalous low pressure over the TIP area and its downstream (figure 3(d)). The combined effects of the two lead to a significant southward shift in the SJ's location.

Change of eddy energy
Figures 4(a) and (b) display the 250 hPa wave activity fluxes in the GMMIP-TIPnoTH and GMMIP-TIPnoSH experiments, respectively. The wave activity flux is a common method for describing the energy propagation and dispersion of Rossby waves. When the TIP-MF is removed but the TIP-TF is kept, more wave activity flux propagates eastward toward the TIP downstream without the TIP's blocking effects (Rhines 2007, White et al 2018), which strengthens the TIP-downstream EISO-SJ (figure 4(a)). However, when only the TIP-TF is removed, the eastward-propagating wave activity fluxes are significantly reduced over the TIP downstream due to the disappearance of the TIP's heating (Liu et al 2007), so that the intraseasonal waves are evidently weakened over the TIP downstream (figure 4(b)). The eddy kinetic energy (EKE) further quantifies this process (figure 4(c)). The EKE increases by 30.0% (|33.4 − 25.7|/25.7 ≈ 30.0%) over the TIP downstream with the removal of the TIP-MF, while it decreases by 15.2% (|21.8 − 25.7|/25.7 ≈ 15.2%) over the TIP downstream without the TIP-TF.

In addition, previous works have suggested that extratropical intraseasonal Rossby waves can develop efficiently by harvesting perturbation energy from the basic flow via baroclinic (Kosaka et al 2009, Chen et al 2013, Xu et al 2020) and barotropic processes (Wang et al 2013, Zhu and Yang 2021). Therefore, we calculate the baroclinic energy conversion (CP) and barotropic energy conversion (CK) over the TIP upstream and downstream in the GMMIP-TIPnoTH and GMMIP-TIPnoSH experiments, respectively, as shown in figures 4(d) and (e). With the removal of the TIP-MF (figure 4(d)), CP is decreased by 22.8% (|1.42 − 1.84|/1.84 ≈ 22.8%) over the TIP upstream, while it is increased by 131.9% (|3.85 − 1.66|/1.66 ≈ 131.9%) over the TIP downstream. The decreased/increased positive CP indicates that less/more time-mean available potential energy is converted to perturbation available potential energy over the TIP upstream/downstream to develop the intraseasonal perturbations. Though CK is about an order of magnitude smaller than CP in both GMMIPs, it behaves similarly to CP: with the removal of the TIP-MF, CK is decreased by 56.5% (|0.10 − 0.23|/0.23 ≈ 56.5%) over the TIP upstream, while it is increased by 50.0% (|0.18 − 0.12|/0.12 ≈ 50.0%) over the TIP downstream. Conversely, with the removal of the TIP-TF, CK is increased by 60.9% (|0.37 − 0.23|/0.23 ≈ 60.9%) over the TIP upstream, while it is decreased by 25.0% (|0.09 − 0.12|/0.12 ≈ 25.0%) over the TIP downstream. The increased/decreased positive CK indicates that more/less time-mean kinetic energy is converted to EISO-SJ.

Conclusions and outlooks
Using the state-of-the-art CAS FGOALS-f3-L model, this study distinguished the orographic mechanical and surface thermal effects of the TIP on EISO-SJ and discussed the underlying physical mechanisms. When the TIP-MF is removed, EISO-SJ weakens over the TIP upstream but strengthens over the TIP downstream.
In contrast, EISO-SJ exhibits more significant changes when the TIP-TF is absent: the intensity of EISO-SJ is enhanced over the TIP upstream, while the amplitude is reduced over the TIP downstream. Meanwhile, the core pathway of EISO-SJ shifts southward with the removal of the TIP-TF. Further analysis indicated that the change in the SJ's location migrates the track of EISO-SJ, while the change in the eddy energy modulates the intensity of EISO-SJ. When the TIP-MF is removed but the TIP-TF is kept, more wave activity flux propagates eastward toward the TIP downstream without the TIP's blocking effects, accompanied by weaker/stronger positive energy conversion over the TIP upstream/downstream, which favors the enhancement of EISO-SJ over the TIP downstream and its weakening over the TIP upstream. In contrast, when the TIP-TF is removed, a southward shift in the SJ's location forces the track of EISO-SJ to move southward. Meanwhile, the intraseasonal perturbations are weakened downstream due to the disappearance of the TIP's heating, accompanied by stronger/weaker positive energy conversion over the TIP upstream/downstream, which favors the enhancement of the TIP-upstream EISO-SJ and the attenuation of the TIP-downstream EISO-SJ.

This study has identified that the TIP-TF more significantly affects the intensity and location of EISO-SJ. Evident changes in the TIP-TF often occur on seasonal (Yanai et al 1992), interannual (Hsu and Liu 2003), and decadal (Wang and Li 2019) time scales. In particular, a weak decreasing trend (Wu et al 2015, Wang and Zhao 2020) has been reported in recent decades. According to this study, the changes in the TIP-TF may modulate the intensity and location of EISO-SJ, and subsequently change the distribution and probability of EISO-SJ-related extremes. This study also indicates that variations in TIP surface sensible heating on interannual and global-change time scales, as well as the terrain uplift of the TIP in the paleoclimate, can modulate the mid-latitude atmospheric subseasonal waves.
Development of a Synthetic Population Model for Assessing Excess Risk for Cardiovascular Disease Death

This decision analytical model describes the use of a semisynthetic population to identify the distribution of excess cardiovascular death risk and its correlation with social and biological risk factors.

eMethods. Description of Synthetic Population Creation
A synthetic population with demographics and disease characteristics was built from the synthetic population used by the FRED modeling and simulation platform (Figure 1). 1 The FRED population was generated by a census-based iterative proportional fitting methodology that results in a geospatially realistic population that accurately represents the demographics and household structure of the US population. 2 Counts of the insured population with claims evidence of Type II diabetes, hyperlipidemia, hypertension, and combinations of these conditions were provided by three major insurance organizations in the Allegheny County area. The covered population included private insurance, Medicare, and Medicaid. Eligible enrollees were Allegheny County residents enrolled for at least 90 continuous days under one of the three health plans in the year 2015 (Jan 1-Dec 31). Medical claims data were provided for 30-86% of the population in 97.2% of census tracts. Eligible enrollees varied by census tract but overall accounted for approximately 60 percent of the Allegheny County population. The percent of persons in a given census tract who were insured by each insurer, or by the three insurers overall, was calculated, but causes of lack of insurance are not available. At least one census tract with a low reported percentage of insurance coverage is the location of 2 major universities and is therefore home to a large number of students from other locations, who may have insurance through other carriers; it is not possible to quantify that from the available data. Since the insurers providing data included the largest Medicaid insurance provider in the area, Medicaid enrollees are presumed to be represented adequately in the claims data. Other possible reasons for low coverage in a census tract include cost of insurance, perceived lack of need for insurance, distrust of the system, and lack of access to or knowledge about the availability of insurance programs, but the effect of these or other factors on insurance uptake at the census tract level is beyond the scope of this study. Further, some proportion of the population likely had insurance with other insurers who cover small portions of the county population, and this may not be distributed evenly across the population. This could affect the results and is a limitation of the study.

Census tract levels for each disease were stratified by gender and age range category (1-17, 18-44, 45-64, 65-84, and ages 85+). Disease claim counts were used to assign levels of diabetes, hypertension, and hyperlipidemia, and combinations of these conditions, to the FRED population on a census tract basis. It is not possible to connect a diagnostic variable (such as diabetes) directly to an individual in the claims data. When the number of agents per census tract exceeded the population covered by insurer data, agent diabetes, hypertension, and hyperlipidemia status were assigned by randomly drawing an individual with matching demographics from the National Health and Nutrition Examination Survey (NHANES). 3
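A minimal R sketch of this assignment step, with hypothetical column names (the actual matching variables and NHANES extract are described in the text):

```r
# Copy disease status to an agent from a randomly drawn, demographically
# matching NHANES respondent; assumes the matching pool is non-empty.
assign_from_nhanes <- function(agent, nhanes) {
  pool  <- nhanes[nhanes$sex == agent$sex & nhanes$age_cat == agent$age_cat, ]
  donor <- pool[sample(nrow(pool), 1), ]
  agent[c("diabetes", "hypertension", "hyperlipidemia")] <-
    donor[c("diabetes", "hypertension", "hyperlipidemia")]
  agent
}
```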
Plausible values for individual height, cholesterol, high density lipoproteins, blood pressure, and history of stroke and prior myocardial infarction were obtained from NHANES by matching to the synthetic population's agent-level demographics and disease status. When agent diabetes, hypertension, and hyperlipidemia status were assigned from NHANES, the NHANES values for individual height, cholesterol, high density lipoproteins, blood pressure, and history of stroke and prior myocardial infarction from that NHANES individual were used to assign those variables to that agent. Data from the National Health Interview Survey were used to assign smoking status to agents based on demographics. 4 Rates were obtained by summing counts per tract and dividing by population. Each agent was assigned a five-year risk of death due to CVD using a published risk equation (see eAppendix 1).

eAppendix 1. Description and Evaluation of Algorithm Used for Prediction of Cardiovascular Disease Death Rate
To predict risk of death from cardiovascular disease, this study used a risk score that was developed using data from eight randomized clinical trials for treatment of hypertension. 5 Development of the risk score related individual characteristics to risk of death from cardiovascular disease using a multivariate Cox model. The risk score was developed from 11 factors: age, sex, systolic blood pressure, serum total cholesterol concentration, height, serum creatinine concentration, cigarette smoking, diabetes, left ventricular hypertrophy, history of stroke, and history of myocardial infarction. The risk score is an integer, with points added for each factor according to its association with risk. This risk score algorithm was chosen in part because the majority of the individual-level characteristics needed for the prediction were available in the FRED synthetic population, could be added from the insurer claims data available for this project, or could be distributed in the population in a realistic way by choosing random similar individuals from NHANES or NHIS. Creatinine values were not available, so 2 points were added to the risk score for all agents, as suggested by the risk calculator developers. Left ventricular hypertrophy was also not available, so it was not used in the calculation. Risk was scaled to four years to match the data and was summed over each census tract. Agents 18 years old or younger were assigned zero risk.

The difference between expected and observed CVD death risk was approximately normally distributed by the Shapiro-Wilk normality test (W = 0.99219, p-value = 0.06335) after removal of 2 outliers (eFigure 1). The average difference between expected and actual CVD death risk was close to 0 (-40, SD 524) and approximately evenly distributed around 0, but with 2 notable outliers (eFigure 2). Linear regression was used to evaluate the reliability of the algorithm used for prediction of cardiovascular disease (CVD) death risk. 6 Regressing the observed CVD death risk on the expected risk gave an intercept not significantly different from 0 (0.0013, CI [-0.0014, 0.0041], p=0.384) and a slope close to 1 (0.94, CI [0.75, 1.12], p<0.001), with an adjusted R-squared of 0.214 and an F-statistic of 95.87 on 1 and 348 DF (p-value < 0.001). A plot of residuals versus fitted values did not show any pattern (eFigure 3A), and the normal Q-Q plot showed that the residuals were normally distributed, with the exception of 2 outliers (eFigure 3B). The scale-location plot indicated limited unequal variance (eFigure 3C). All points were within the curved lines in the plot of residuals vs. leverage, although the outliers were close to the 0.5 line (eFigure 3D). Based on these metrics, this method was considered to provide an acceptable estimate of population risk.
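A minimal R sketch of this validation step on toy data; an intercept near 0 and a slope near 1 indicate acceptable calibration of expected against observed risk:

```r
set.seed(1)
expected <- runif(350, 0.005, 0.03)            # stand-in tract-level expected risk
observed <- expected + rnorm(350, 0, 0.005)    # stand-in observed risk

fit <- lm(observed ~ expected)
summary(fit)$coefficients   # check intercept ~ 0 and slope ~ 1
confint(fit)                # confidence intervals for both coefficients
```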
eFigure 1. Histogram of Difference Between Expected and Observed CVD Death Rate. The difference between expected and observed CVD death risk per 100,000 was approximately normally distributed after removal of 2 outliers. Normal curve plotted in red. Census tracts for which a majority of determinants were missing were omitted from the study (n=5).

Data Description and Limitations
Data were collected in 2016-2017 but in some cases covered prior periods, as noted. Data were provided as percents or counts per census tract and were not available at higher granularity. Two data variables had a large proportion of missing data (eTable 1), and it was not possible either to obtain those missing values or to determine what caused the data to be missing. The level of missing data was otherwise low (of 20 determinants, 17 had 5 or fewer tracts with missing data). Different variables were not generally missing for the same tracts.

Percent vacant property estimates were produced by the US Postal Service. Vacant property data is routinely collected by mail carriers on addresses no longer receiving mail due to vacancy and is reported quarterly at census tract geographies in the United States, along with counts of total mailing addresses. Data used in this study were aggregated to Allegheny County census tracts.

Location information for all supermarkets and convenience stores in Allegheny County was produced using the Allegheny County Fee and Permit Data for 2016. Fee and permit data were used to generate the number of fast food restaurants (restaurants with more than one location in the county but without an alcohol permit) and the number of restaurants per census tract (ACHD Fee and Permit data, 2016). Census tract level counts of Allegheny County fast food establishments were obtained by exporting all chain restaurants without an alcohol permit from the County's Fee and Permit System. Chain restaurants capture both local and national chains (including locally owned national chains) as long as one or more establishments are in operation within the County. While access to supermarkets and an excess of fast food establishments are believed to impact nutrition and therefore health, in this dataset that effect was not apparent. Most tracts had 0 or 1 supermarket, and this study did not include analysis of access to transportation to supermarkets. Fast food establishments were concentrated in the downtown area, where a large number of individuals work but few reside, and in the university areas, where again there are many workers who reside elsewhere and, additionally, many students, who are less at risk for cardiovascular disease.

Poor Housing Conditions is an estimate of the percent of distressed housing units in each census tract and was prepared using data from the American Community Survey and the Allegheny County Property Assessment database (https://data.wprdc.org/dataset/property-assessments). The estimate was produced by the Allegheny County Reinvestment Fund with the Allegheny County Department of Economic Development. Obesity rates for each census tract were obtained from a published study. 7 Obesity rates for each census tract in Allegheny County were produced by estimates using statistical modeling techniques.
In this approach, the obesity rate of a demographically similar census tract elsewhere was applied to comparable tracts in Allegheny County to compute an obesity rate. 7 Census tract walk scores measure the walkability of any address using a patented system developed by the Walk Score company. Walk scores were produced by Walk Score (https://www.walkscore.com). For each 2010 census tract centroid, Walk Score analyzed walking routes to nearby amenities. Points were awarded based on the distance to amenities in each category. Amenities within a 5-minute walk (0.25 miles) are given maximum points. A decay function is used to give points to more distant amenities, with no points given after a 30-minute walk. Walk Score also measures pedestrian friendliness by analyzing population density and road metrics such as block length and intersection density. Data sources include Google, Education.com, Open Street Map, the U.S. Census, Localeze, and places added by the Walk Score user community. While walking scores indicate the ability of residents to walk to amenities, the probability of individuals attaining increased fitness levels by walking to them is highly variable.

Homicide counts were obtained from the Department of Human Services. Homicide counts were often georeferenced to the hospital where the affected individual died, so this variable was considered noninformative. Census tract level demographic and socioeconomic data were obtained from the American Community Survey, US Census.

We performed an analysis of Global Moran's I to assess spatial autocorrelation of the difference between expected and observed CVD death risk at the census tract level. 8 Randomization with 999 permutations gave a pseudo p-value of 0.001, rejecting the null hypothesis that the distribution of the difference was random in the county (eFigure 4, A and B). We further performed Local Indicators of Spatial Association (LISA) analysis to identify regions of clustering. Some areas of high-high and low-low clusters were identified, supporting the hypothesis that there was a degree of clustering of high- and low-risk census tracts within the county. Further analysis of spatial autocorrelation was beyond the scope of this study.

Determinants include (in order from top to bottom): percent high school graduates; percent households below poverty level (LowIncome); median age; percent unemployed; percent uninsured; percent vacant housing; walk score; food desert (based on number of supermarkets); percent households with no access to vehicle. Regression line in red.
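A minimal R sketch of the Global Moran's I permutation test described above, using the 'spdep' package; a toy grid stands in for the Allegheny County census-tract neighbor structure:

```r
library(spdep)

nb <- cell2nb(10, 10)                 # rook neighbors on a 10 x 10 toy grid
lw <- nb2listw(nb, style = "W")
diff_risk <- rnorm(100)               # stand-in expected-minus-observed differences

moran.mc(diff_risk, lw, nsim = 999)   # pseudo p-value from 999 permutations
```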
NOX4-mediated astrocyte ferroptosis in Alzheimer's disease

This study investigates NADPH oxidase 4 (NOX4) involvement in iron-mediated astrocyte cell death in Alzheimer's disease (AD) using single-cell sequencing data and transcriptomes. We analyzed AD single-cell RNA sequencing data, identified astrocyte marker genes, and explored biological processes in astrocytes. We integrated AD-related chip data with ferroptosis-related genes, highlighting NOX4. We validated NOX4's role in ferroptosis and AD in vitro and in vivo. Astrocyte marker genes were enriched in AD, emphasizing their role. NOX4 emerged as a crucial player in astrocytic ferroptosis in AD. Silencing NOX4 mitigated ferroptosis, improved cognition, reduced Aβ and p-Tau levels, and alleviated mitochondrial abnormalities. NOX4 promotes astrocytic ferroptosis, underscoring its significance in AD progression.

Supplementary Information
The online version contains supplementary material available at 10.1186/s13578-024-01266-w.

Introduction
Alzheimer's disease (AD) is a prevalent neurodegenerative condition characterized by memory loss, cognitive decline, and impaired behavioral abilities [1-3]. Despite some advances in understanding AD's pathological mechanisms, the specific molecular underpinnings remain elusive, and effective treatments targeting its root causes are lacking [1, 4-6]. A comprehensive investigation into AD's pathogenesis therefore holds paramount importance for its prevention and treatment. In recent years, ferroptosis, an emerging form of iron-dependent cell death, has garnered significant attention in the realm of neurodegenerative diseases [7-10]. Astrocytes are one of the most abundant subtypes of glial cells in the central nervous system. Elevated concentrations of iron have been reported in the brains of AD patients and transgenic mouse models, where excess iron can exacerbate oxidative damage and cause cognitive impairment. Disruption of iron homeostasis is considered to be associated with Alzheimer's disease [26, 27]. Increasing evidence indicates that ferroptosis can lead to AD-mediated neuronal cell death [28]. Nevertheless, iron's precise roles and regulatory mechanisms in Alzheimer's, particularly its interplay with astrocytes, remain enigmatic [29]. Star-shaped glial cells play pivotal roles in the central nervous system's physiological and pathological processes, and certain key regulatory genes or pathways may offer valuable insights into Alzheimer's pathogenesis [30-32].

Single-cell sequencing technology, noted for its remarkable sensitivity and high resolution, has gained widespread adoption in biomedical research [33-35]. This innovative approach facilitates the analysis of gene expression and regulatory networks at the single-cell level, revealing cellular heterogeneity and dynamic changes under physiological and pathological conditions [36]. Single-cell sequencing affords the capacity to dissect distinct neural cell types, including neurons and glial cells, and their roles in disease progression, offering invaluable insights into neurodegenerative conditions such as Alzheimer's [37].
NADPH oxidase 4 (NOX4), an enzyme with protein catalytic activity that generates reactive oxygen species (ROS), plays pivotal roles in various physiological and pathological processes encompassing cell proliferation, migration, and cell death [38-40]. Current research highlights NOX4's potential significance in neurodegenerative diseases, including Parkinson's and Alzheimer's diseases. Nonetheless, the precise mechanisms by which NOX4 influences Alzheimer's, particularly its association with astrocytic iron-mediated cell death, remain to be fully elucidated [41, 42]. This study's primary objective is to elucidate the critical role of NADPH oxidase 4 (NOX4) in iron-triggered astrocytic cell death and its implications for Alzheimer's disease (AD) pathogenesis. Employing single-cell sequencing technology, the GEO database, and transcriptome sequencing data, this study delves deeply into the cellular populations and associated genes of AD patients. The various cell types are labeled and analyzed, identifying distinctive marker genes for astrocytes. Furthermore, this study uncovers NOX4's pivotal role in iron-induced astrocyte demise. These findings substantially augment our comprehension of Alzheimer's disease etiology, particularly elucidating the nexus between NOX4, astrocytic iron-mediated death, and AD. Importantly, these insights hold clinical relevance for the diagnosis and treatment of Alzheimer's disease.

Transcriptome sequencing data acquisition
The single-cell transcriptome sequencing data of AD-related samples in the GSE164089 dataset was analyzed using the Seurat package in R. To ensure data quality, quality control criteria were applied, including nFeature_RNA > 500, 1000 < nCount_RNA < 20,000, and percent.mt < 10%. Additionally, the top 1000 highly variable genes were selected based on their variance. Furthermore, the AD-related microarray dataset GSE48350 was obtained from the GEO database (https://www.ncbi.nlm.nih.gov/geo/). This dataset consists of 173 normal brain tissue samples and 80 AD brain tissue samples [43].

TSNE clustering analysis
To reduce the dimensionality of the scRNA-seq dataset, we employed principal component analysis (PCA) based on the top 1000 genes with the highest variance in expression. We used the ElbowPlot function of the Seurat package and selected the top 15 principal components for downstream analysis. Using the FindClusters function provided by Seurat, we identified different subpopulations of cells at the default resolution (res = 0.5). Next, we used the t-SNE algorithm for nonlinear dimensionality reduction of the scRNA-seq data. We also used the Seurat package to identify marker genes of individual cell subpopulations and combined these with the online database CellMarker (http://xteam.xbio.top/CellMarker) for cell type annotation analysis [44, 45].

GO and KEGG enrichment analysis
The differentially expressed genes (DEGs) were subjected to Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analysis using the "clusterProfiler", "org.Hs.eg.db", "enrichplot", and "ggplot2" packages in R. Bubble plots and circular plots were generated to visualize the enrichment results for the three GO categories, namely Biological Process (BP), Cellular Component (CC), and Molecular Function (MF). Additionally, a bubble plot was generated to display the enrichment results of the KEGG pathway analysis [46].
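A minimal R sketch of the Seurat workflow described above, with the stated thresholds; 'counts' is a stand-in gene-by-cell matrix, and the exact function arguments are assumptions rather than the authors' script:

```r
library(Seurat)

obj <- CreateSeuratObject(counts)
obj[["percent.mt"]] <- PercentageFeatureSet(obj, pattern = "^MT-")
obj <- subset(obj, nFeature_RNA > 500 & nCount_RNA > 1000 &
                   nCount_RNA < 20000 & percent.mt < 10)

obj <- NormalizeData(obj)
obj <- FindVariableFeatures(obj, nfeatures = 1000)  # top 1000 variable genes
obj <- ScaleData(obj)
obj <- RunPCA(obj)
obj <- FindNeighbors(obj, dims = 1:15)              # top 15 principal components
obj <- FindClusters(obj, resolution = 0.5)
obj <- RunTSNE(obj, dims = 1:15)
markers <- FindAllMarkers(obj, only.pos = TRUE)     # subpopulation marker genes
```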
Differential gene expression screening
The "limma" package in R was used for the selection of differentially expressed genes. Differentially expressed genes between normal samples and AD samples were filtered based on the criteria |logFC| > 0 and adjusted P < 0.05 [47].
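A minimal R sketch of this screen; 'expr' (a genes-by-samples expression matrix) and 'group' (a normal vs. AD factor) are stand-ins for the GSE48350 data:

```r
library(limma)

design <- model.matrix(~ group)            # group: factor with levels normal, AD
fit <- eBayes(lmFit(expr, design))
tab <- topTable(fit, coef = 2, number = Inf, adjust.method = "BH")

degs <- tab[abs(tab$logFC) > 0 & tab$adj.P.Val < 0.05, ]  # stated criteria
```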
Lentivirus infection
To construct a lentivirus-mediated NOX4 silencing vector, the pSIH1-H1-copGFP (sh-) interference vector (catalog number SI501A-1, System Biosciences, USA) was purchased. The silencing sequence can be found in Table S1. The lentiviral particles carrying the vector were packaged in HEK-293T cells (CRL-3216, ATCC, USA) using a lentivirus packaging kit (catalog number A35684CN, Invitrogen, USA). After 48 h, the supernatant was collected to obtain lentivirus with a titer of 1 × 10⁸ TU/ml. Researchers interested in rapidly and efficiently constructing a lentiviral vector that mediates NOX4 silencing may consider adopting the methodology used in our laboratory [48, 49].

Cell culture and screening
Human normal astrocytes were purchased from ATCC (ATCC, USA) and cultured in human astrocyte medium (catalog number 1801, ScienCell, USA). The medium consisted of basal medium (catalog number 1801), 2% (v/v) fetal bovine serum (FBS, catalog number 0010), 1% (v/v) astrocyte growth supplement (AGS, catalog number 1852), and 1% (v/v) penicillin/streptomycin solution (P/S, catalog number 0503). The cells were cultured at 37 °C in a 5% CO2 incubator. To simulate Aβ-induced neuronal injury, the cells were treated with Aβ25-35 peptide (catalog number A107853-25 mg, Aladdin, Shanghai, China) at a concentration of 20 μM for 24 h. The cells were divided into the following groups: control, AD, AD + sh-NC (infected with negative control lentivirus expressing sh-NC), and AD + sh-NOX4 (infected with lentivirus expressing sh-NOX4). After adding 1 × 10⁵ TU of lentivirus to the astrocytes, the cells were incubated for 48 h; all groups except the control were then incubated for an additional 24 h in medium containing 20 μM Aβ25-35 peptide [50, 51].

Alzheimer's disease APP/PS1 mouse model
Male transgenic mice overexpressing human amyloid precursor protein (APP) and mutant presenilin 1 (PS1) were purchased from Jackson Laboratory (Bar Harbor, ME, USA, Stock #034829). Wild-type C57 male mice were purchased from Weitonlihua Experimental Animal Technology Co., Ltd. in Beijing, China for the Alzheimer's disease model experiments. They were maintained under non-pathogenic conditions at a temperature of 26-28 °C and humidity of 50-65%, with free access to food and water. All mice were acclimated for one week prior to the experiments. The experimental procedures were conducted in accordance with ethical standards and were approved by our institution's Animal Ethics Committee.

For the experiments, the mice were randomly divided into five groups: WT, APP/PS1, APP/PS1 + sh-NC, APP/PS1 + sh-NOX4, and APP/PS1 + sh-NOX4 + erastin, with six mice in each group. To silence NOX4 in neurons in vivo, sh-NOX4 (4 × 10⁵ TU) or sh-NC (4 × 10⁵ TU) was slowly injected into the bilateral hippocampi of APP/PS1 mice. In the APP/PS1 + sh-NOX4 + erastin group, erastin (10 μM; HY-15763, MedChem Express, New Jersey, USA) was dissolved in a 37 °C water bath with gentle shaking, and 5% dimethyl sulfoxide with corn oil (C8267, Sigma-Aldrich, USA) was then added. The mice were treated for 20 days. After completion of the behavioral tests, all mice were euthanized with an overdose of anesthetic (pentobarbital sodium). The tissues were processed by perfusing the ascending aorta with 0.9% sodium chloride solution, followed by fixation of the brain tissues in 4% paraformaldehyde solution and embedding in paraffin [38, 52-54].

Flow cytometric cell sorting
The mouse brain tissue samples were cut into small pieces and digested at 37 °C in PBS solution containing 0.8 mg/mL Collagenase IV (Merck, C4-BIOC, USA). After a wash with PBS buffer, the cell suspension was filtered through a 50 μm sieve to remove residual solid components such as debris and neurons. The cell suspension was then centrifuged at 1000 rpm for 5 min. The supernatant was discarded, and the cell pellet was retained. The pellet was washed with cell culture medium, and the remaining tissue clusters were dissociated using a homogenizer. After repeated washes, the cell suspension was resuspended in approximately 1 mL of washing buffer for selection. Subsequently, the cell suspension was applied to a column containing magnetic beads labeled with S100β antibody (#9550S, Cell Signaling Technology, Danvers, MA, USA). Within the column, cells were bound to the S100β antibody. Through negative selection, non-astrocytic cells were removed while astrocytes were retained. Subsequent washing steps were performed to eliminate cells and impurities not bound to the magnetic beads. The cell suspension was then transferred to a sterile culture dish, and the integrity and activity of the astrocytes were inspected under a microscope. After identifying and collecting the target cell samples, the cells were fixed using PBS solution containing 3.7% formaldehyde. Permeabilization of the fixed cells was done using 0.1% Triton X-100, followed by labeling of the astrocytes using GFAP antibody (ab207165, Abcam, Cambridge, UK). The labeled cell samples were injected into a flow cytometer, and cell fluorescence intensity was measured by laser excitation, yielding a purity of 90% for astrocytes. Finally, the collected astrocytes were subjected to Western blot, lipid peroxidation detection, and measurement of iron ion content under the same culture conditions [55].

Western blot
Tissue total protein was extracted using RIPA lysis buffer (P0013C, Beyotime, Shanghai, China) containing PMSF. The extraction involved incubation on ice for 30 min, followed by centrifugation at 4 °C and 8000 g for 10 min to collect the supernatant. The total protein concentration was measured using a BCA assay kit (catalog number 23227, ThermoFisher, USA). 50 μg of protein was dissolved in 2x SDS loading buffer and boiled for 5 min at 100 °C prior to SDS-PAGE gel electrophoresis. The proteins were then transferred to a PVDF membrane.
ECL fluorescence detection reagents from the ECL assay kit (catalog number abs920, Aibikeshin (Shanghai) Biotechnology Co., Ltd., Shanghai, China) were mixed as equal amounts of solutions A and B. The mixture was then added onto the membrane, which was imaged using the Bio-Rad imaging system (Bio-Rad, USA) in a darkroom. Finally, Quantity One v4.6.2 software was used for analysis, with the grayscale value of each protein band normalized to the grayscale value of the GAPDH band to represent the relative protein content [56]. Each experiment was repeated three times and the average value was taken.

Determination of GSH content
The GSH content in human and murine astrocytes was determined using the GSH assay kit (A006-2-1, Nanjing Institute of Biotechnology, China) according to the manufacturer's instructions. Initially, the collected cells were washed 1-2 times with PBS and pelleted by low-speed centrifugation. The pellet was then resuspended in PBS buffer. Subsequently, the cells were disrupted and manually homogenized for detection [57].

MDA determination
Ferroptosis is a form of cell death caused by the accumulation of lipid peroxidation products on the cell membrane. Therefore, the level of malondialdehyde (MDA), a lipid peroxidation product, can serve as an indicator of ferroptosis. In this study, we evaluated the MDA content in astrocytes using the MDA assay kit (A003-4-1, China) produced by the Nanjing Institute of Biotechnology. The assay was conducted following the manufacturer's instructions, and absorbance at 530 nm was measured using a microplate reader [58, 59].

Iron content determination
The iron ion content in astrocytes was measured using an iron assay kit (E1042, Beijing Pulei Gene Technology Co., Ltd., China) as instructed by the manufacturer. The procedure is as follows: first, collect the samples, wash them with PBS, and lyse the cells. Next, add Solution A to the collected lysate, mix, and incubate at 60 °C for 1 h. Finally, add the iron ion detection reagent, mix, and incubate for 30 min. Transfer 200 μl of the solution into a 96-well plate, and measure the absorbance at a wavelength of 550 nm [60].

To examine ferrous ions in human and murine astrocytes with the orange fluorescent iron probe FerroOrange, we first co-incubated the cells with Hoechst 33342 (#4082, Cell Signaling Technology, Danvers, MA, USA) for 15 min, followed by three washes with PBS. Then, after a 30-minute incubation with 1 μM FerroOrange (#36104, Cell Signaling Technology, Danvers, MA, USA), we washed the cells three times with PBS and used a laser confocal microscope (Olympus, Tokyo, Japan) to capture cell images. Fluorescence signals were observed and analyzed by exciting at 561 nm and detecting emission between 570 and 620 nm, represented as orange [61].
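Returning to the Western blot quantification above, a minimal R sketch with purely illustrative grayscale values:

```r
nox4  <- c(1520, 1475, 1600)    # target band grayscale, three replicates
gapdh <- c(2980, 3015, 3050)    # GAPDH band grayscale, same lanes

relative_nox4 <- nox4 / gapdh   # per-replicate relative protein content
mean(relative_nox4)             # averaged over the three repeats
```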
GPX4 activity assay
According to the manufacturer's instructions, GPX4 ELISA kits were used to measure the activity of GPX4. For the cell sample (ml060706, mlbio, China), centrifugation at 1000×g for 10 min was performed to remove particles and aggregates, thereby obtaining the cell supernatant. Subsequently, the sample of interest and biotin-labeled antibody were co-incubated, followed by washing and addition of the avidin-labeled HRP. Afterwards, unbound enzyme complex was removed through incubation and washing, and substrates A and B were added together with the enzyme complex to generate a color reaction. The intensity of color corresponds to the concentration of the target substance in the sample. In the case of the mouse serum sample (ml057982, mlbio, Shanghai), blood was collected in a tube free of pyrogens and endotoxins. Centrifugation at 1000×g for 10 min was used to carefully and rapidly separate the serum from the red blood cells. The serum was then processed exactly as described for the cell sample: co-incubation with biotin-labeled antibody, washing, addition of avidin-labeled HRP, removal of unbound enzyme complex, and color development with substrates A and B, the color intensity again corresponding to the concentration of the target substance [60].

Lipid peroxidation detection
For the detection of lipid peroxidation during ferroptosis, mouse astrocytes were first collected by centrifugation. Subsequently, they were washed twice with PBS for 5 min each, and then 1 mL of BODIPY 581/591 C11 working solution was added, followed by incubation at room temperature for 15 min. The mixture was then centrifuged at 400 g for 3-4 min at 4 ℃, and the supernatant was discarded. The cells were washed again with PBS for 5 min, repeated twice. After resuspending the cells in 1 mL of serum-free culture medium, they were analyzed using a flow cytometer. During flow cytometry analysis, signals corresponding to 505-550 nm were measured in the FL1 channel, and signals above 580 nm were measured in the FL2 channel. When BODIPY 581/591 C11 is oxidized by intracellular ROS, its maximum fluorescence emission shifts from around 590 nm to approximately 510 nm, and the magnitude of this shift is proportional to the generation of lipid reactive oxygen species (ROS) [62, 63].

CCK-8
This study employed the CCK-8 assay kit (catalog number: CA1210, Beijing Solaibao Technology Co., Ltd., Beijing, China) for cell proliferation experiments. Cells in the logarithmic growth phase were seeded at 1 × 10⁴ cells per well and pre-cultured in a 96-well plate for 24 h. Afterward, the cells were transfected according to their grouping. At 48 h after transfection, 10 μL of CCK-8 reagent was added to each well. The plates were incubated for 3 h at 37 °C, and the absorbance of each well was then measured at 450 nm on a spectrophotometer. The absorbance values reflect the number of viable, proliferating cells in the culture medium. Bar charts were created for each group to display cell viability and present the experimental results [64].
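CCK-8 read-outs are usually reported as viability relative to an untreated control after blank subtraction. A minimal sketch of that arithmetic with made-up OD450 values (the blank/control well layout is a common convention assumed here, not quoted from the kit protocol):

```python
import numpy as np

# Hypothetical triplicate OD450 readings.
blank = np.array([0.08, 0.09, 0.08])     # medium + CCK-8, no cells
control = np.array([1.21, 1.18, 1.25])   # untreated cells
treated = np.array([0.74, 0.70, 0.77])   # transfected/treated cells

def viability_percent(sample: np.ndarray) -> float:
    """Blank-corrected viability relative to the untreated control."""
    return 100.0 * (sample.mean() - blank.mean()) / (control.mean() - blank.mean())

print(f"viability ~ {viability_percent(treated):.1f}%")
```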
Morris water maze experiment
The Morris water maze test consisted of four platform trials and one probe trial conducted over five consecutive days. The movement trajectories of mice were recorded on video and analyzed using image analysis software (ANYMaze, Stoelting). At a temperature of 22-24 ℃, a circular pool filled with water containing titanium dioxide was used, with a platform positioned approximately 1 centimeter below the water surface in the first quadrant. In the platform trials, mice were placed in the water in one of the four quadrants. The time taken for a mouse to find and remain on the platform for 5 s after entering the water was recorded as the escape latency, along with the swimming path. If a mouse failed to find the platform within 60 s, the escape latency was recorded as 60 s. In the probe trial, the platform was removed, and the mice were allowed to swim freely in the pool for 60 s. We recorded the swimming paths, time spent in the target quadrant, swimming distance, and time spent in each quadrant [59].

Immunohistochemistry staining
The brain tissue of each group of mice was embedded and sliced, followed by 20 min of baking at 60 °C. Subsequently, the slices were soaked in xylene for 15 min, the xylene was changed, and the slices were soaked for another 15 min. The slices were then immersed in absolute alcohol for 5 min, the alcohol was changed, and they were immersed for another 5 min. Then, the slices were hydrated in 95% and 70% ethanol for 10 min each. 3% H2O2 was added dropwise onto each slice to block endogenous peroxidase activity and incubated at room temperature for 10 min. Citrate buffer was then added and the slices were microwaved for 3 min, followed by a 10-minute room-temperature incubation in antigen retrieval solution and three washes with PBS.

Normal goat serum blocking solution (E510009, Shanghai Bioengineering Co., Ltd., China) was added and incubated at room temperature for 20 min, followed by overnight incubation at 4 °C with the following primary antibodies: mouse Aβ (ab230297, 1:200, Abcam, Cambridge, UK) and rabbit p-Tau (S396) (ab32057, 1:1000, Abcam, Cambridge, UK). After three washes with PBS, the slices were incubated with goat anti-rabbit IgG (ab6721, 1:1000, Abcam, UK) and goat anti-mouse IgG (ab150113, 1:500, Abcam, UK) secondary antibodies for 30 min and washed with PBS again. A DAB chromogenic reagent kit (P0203, Beyotime, Shanghai, China) was used by adding a drop each of reagents A, B, and C onto the samples for 6 min of color development, followed by staining with hematoxylin for 30 s. Subsequently, the slices were dehydrated in 70%, 80%, 90%, 95% ethanol, and absolute ethanol for 2 min each. Finally, they were treated with xylene for 5 min, soaked twice, and then sealed with neutral resin. The slices were observed under an upright microscope (BX63, Olympus, Japan) [59].

Congo red staining
Because Aβ plaques are deposited in the brain, Congo red staining was performed on brain tissue samples. The brain tissue was sliced from paraffin-embedded tissue blocks into sections with a thickness of 4 μm. Following the manufacturer's instructions, the samples were stained with Congo red. Subsequently, the sections were observed and photographed at 400x and 100x magnification using the ECLIPSE Ci-L plus optical microscope from Nikon Co., Ltd., Japan, facilitating research and analysis [59].
Transmission electron microscope (TEM)
To observe the microstructure of brain tissue using transmission electron microscopy, the following procedures were employed: Initially, brain tissue was perfused with 2.5% glutaraldehyde in 0.1 M sodium phosphate buffer (pH 7.4), followed by extraction and washing with phosphate buffer. Subsequently, small cubic brain tissue samples with a volume not exceeding 1 mm³ were fixed overnight at 4 °C. Afterward, the samples were cut into sections (with a thickness of 50 nm) using an ultramicrotome and stained with uranyl acetate and lead citrate. Finally, the sections were observed using the JEM-1011 transmission electron microscope (JEOL Ltd., Tokyo, Japan) [65, 66].

Statistical analysis
We used SPSS 22.0 statistical software (SPSS, Inc., Chicago, IL, USA) and GraphPad Prism 9.5 to analyze all the data. Data are presented as mean ± standard deviation (SD). A paired t-test was used for comparing two groups, and one-way analysis of variance (ANOVA) for comparing multiple groups. Homogeneity of variance was tested with Levene's method. Dunnett's t-test and the LSD t-test were used for pairwise comparisons when variances were homogeneous; Dunnett's T3 test was used when they were not. P < 0.05 indicates a statistically significant difference between groups.

Single-cell transcriptomic sequencing analysis revealed that astrocytes play a critical role in the pathogenesis and progression of Alzheimer's disease
We analyzed a single-cell transcriptome sequencing dataset related to Alzheimer's disease obtained from the GEO database, which included two AD samples (GSM4996461, GSM4996463) from GSE164089. Upon data integration using the Seurat package, the results revealed that the majority of cells had nFeature_RNA > 500, 1000 < nCount_RNA < 20,000, and percent.mt < 10% (Figure S1A). Following these criteria, we removed low-quality cells to obtain the expression matrix. Computational analysis of sequencing depth correlations indicated that the filtered cell data exhibited good quality (Figure S1B), permitting their utilization for subsequent analyses.

We further analyzed the filtered cells and identified highly variable genes based on gene expression variance, selecting the top 1000 most variable genes for downstream analysis (Fig. 1A). Afterward, we used principal component analysis (PCA) to reduce the dimensionality of the data linearly and presented the heatmap of the major correlated gene expression profiles of PC_1 to PC_6 (Figure S1C), as well as the distribution of cells in PC_1 and PC_2 (Fig. 1B). The results show that there is no noticeable batch effect among the samples.
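The quality-control and dimensionality-reduction steps above were performed with Seurat in R. Purely for illustration, an analogous workflow in Python with Scanpy (an assumption, not the authors' pipeline; the input file name is hypothetical) could apply the same thresholds as follows:

```python
import scanpy as sc

# Hypothetical input: a combined AnnData object for the two AD samples.
adata = sc.read_h5ad("GSE164089_AD_combined.h5ad")

# Flag mitochondrial genes and compute per-cell QC metrics.
adata.var["mt"] = adata.var_names.str.startswith("MT-")
sc.pp.calculate_qc_metrics(adata, qc_vars=["mt"], percent_top=None,
                           log1p=False, inplace=True)

# Apply the thresholds quoted in the text:
# nFeature_RNA > 500, 1000 < nCount_RNA < 20,000, percent.mt < 10%.
keep = ((adata.obs.n_genes_by_counts > 500)
        & (adata.obs.total_counts > 1000)
        & (adata.obs.total_counts < 20000)
        & (adata.obs.pct_counts_mt < 10))
adata = adata[keep].copy()

# Normalize, select the top 1000 highly variable genes, and run PCA.
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=1000)
sc.tl.pca(adata, n_comps=30)
```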
We then used ElbowPlot to rank the principal components (PCs) by their standard deviation (Fig. 1C). The results indicate that PC_1 to PC_15 could fully reflect the information in the selected highly variable genes and have good analytical significance.

In addition, we applied the t-SNE algorithm for nonlinear dimensionality reduction on the first 15 principal components. By clustering, we obtained 12 clusters (Fig. 1D) and extracted each cluster's marker gene expression profiles (Figure S1D). Cell type annotation analysis was performed using the SingleR package and the online website CellMarker (Fig. 1E) [44]. We identified three types of cells, namely neurons, neural stem cells, and astrocytes. Clusters 3, 8, and 10 are annotated as neural stem cells; clusters 0, 5, 9, and 11 as neurons; and clusters 1, 2, 4, 6, and 7 as astrocytes.

Therefore, we performed KEGG and GO analysis on the marker genes of astrocytes. The KEGG analysis results showed (Fig. 1F) that the marker genes of astrocytes were mainly enriched in entries related to Alzheimer's disease, Huntington's disease, Parkinson's disease, and other diseases.

The results of the GO functional analysis (Fig. 1G) show that the astrocyte marker genes are mainly enriched in biological processes (BP) such as axon development, axonogenesis, and regulation of neuronal projection development. In the cellular component (CC) category, they are mainly enriched in the neuronal cell body, synaptic junction, and postsynaptic specialization of neurons. In molecular function (MF), they are mainly enriched in actin binding, microtubule binding, and calcium binding, among others.

The above results indicate that astrocytes play a crucial role in the pathogenesis of Alzheimer's disease.

Transcriptomic sequencing analysis revealed that the NOX4 gene plays a critical role in the occurrence and progression of Alzheimer's disease
To investigate the potential molecular mechanisms underlying the occurrence of Alzheimer's disease, we screened the Alzheimer's disease-related chip GSE48350 from the GEO database and obtained differentially expressed genes (DEGs) (Fig. 2A). We took the intersection of these DEGs with the top 2000 genes ranked by AD correlation score and the top 1000 genes ranked by astrocyte correlation score in the GeneCards database (Fig. 2B) to obtain differentially expressed genes related to AD astrocytes. The GO enrichment analysis revealed (Fig. 2C) that these intersecting genes are enriched in pathways such as oxidative stress, neuronal death, and response to hypoxia.
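The gene screening in Fig. 2B and Fig. 2D is simple set intersection. A minimal sketch with placeholder gene lists (the real lists come from the GSE48350 DEG analysis and the GeneCards rankings; only the three genes printed below are taken from the text):

```python
# Placeholder gene sets; in the study these come from the GSE48350 DEG
# analysis and from GeneCards relevance-score rankings.
degs = {"NOX4", "NFE2L2", "YAP1", "GFAP", "APOE", "CLU"}
ad_top2000 = {"NOX4", "NFE2L2", "YAP1", "APOE", "APP", "PSEN1"}
astrocyte_top1000 = {"NOX4", "NFE2L2", "YAP1", "GFAP", "AQP4"}
ferroptosis_top50 = {"NOX4", "NFE2L2", "YAP1", "GPX4", "SLC7A11", "ACSL4"}

# AD-astrocyte-related DEGs (Fig. 2B), then the ferroptosis overlap (Fig. 2D).
ad_astro_degs = degs & ad_top2000 & astrocyte_top1000
candidates = ad_astro_degs & ferroptosis_top50
print(sorted(candidates))  # with these placeholders: ['NFE2L2', 'NOX4', 'YAP1']
```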
Therefore, we intersected the differentially expressed genes related to AD astrocytes mentioned above with the top 50 ranking ferroptosis-related proteins in the GeneCards database (Fig. 2D), resulting in three essential genes: NOX4, NFE2L2, and YAP1. NOX is considered to be the primary source of reactive oxygen species (ROS) production [67]. NOX4 is one of the main subtypes expressed in the central nervous system [68]. It is a crucial participant in the progression of multiple neurological disorders [28, 69, 70]. The expression level of NOX4 is increased in patients with Alzheimer's disease and in animal models. NOX4 promotes astrocyte ferroptosis through oxidative stress-induced lipid peroxidation [38, 71].

Further research into the role of NOX4 may help uncover the pathogenesis of Alzheimer's disease and provide potential therapeutic strategies. We therefore chose NOX4 as the target gene for subsequent experiments. We constructed an in vitro AD model by inducing normal human astrocytes with Aβ25-35 peptide. The WB results showed that, compared to the control group, the expression of NOX4 was significantly increased in the AD group (Fig. 2E). These results indicate that the NOX4 gene may play an essential role in the occurrence and development of Alzheimer's disease (AD).

Elevated levels of NOX4 occur in astrocytes of APP/PS1 mice, leading to ferroptosis
The relationship between NOX4 and astrocyte ferroptosis in Alzheimer's disease (AD) is currently unclear. To investigate the role of NOX4 in Alzheimer's disease, we conducted immunofluorescence staining to measure the levels of NOX4 protein in GFAP-positive astrocytes in the cortical region of APP/PS1 transgenic mice and wild-type (WT) mice. The results revealed an increase in the intensity of NOX4-positive staining in GFAP-positive astrocytes in the cortical region of APP/PS1 transgenic mice, as shown in Fig. 3A. Next, we analyzed whether the level of lipid peroxidation is elevated in the astrocytes of the cortical regions of APP/PS1 mice. Immunofluorescent staining results showed (Fig. 3B-C) that compared to wild-type mice, the intensity of 4-HNE and MDA-positive staining increased in GFAP-positive astrocytes in the cortical area of APP/PS1 mice. Subsequently, we used flow cytometry to isolate astrocytes from the brain cortex of mice, assessing the expression of NOX4, 4-HNE, and MDA. Additionally, we examined the ferroptosis status of the cells by measuring the expression of FTH1, SAT1, GPX4, SLC7A11, and ACSL4.

Western blotting results showed (Fig. 3D) that compared to wild-type mice, the expression levels of SAT1, GPX4 and SLC7A11 were decreased in the astrocytes of APP/PS1 mice, while the expression levels of NOX4, 4-HNE, MDA, FTH1, and ACSL4 were increased. The results of lipid peroxidation detection showed (Fig. 3E) that, compared with the wild-type group, the degree of lipid peroxidation in astrocytes of the APP/PS1 group was enhanced. The GSH determination results showed (Fig. 3F) that the GSH content in APP/PS1 mouse astrocytes decreased compared to wild-type mice. Iron ion content determination showed (Fig. 3G) that, compared with the wild-type group, the orange fluorescence in astrocytes of the APP/PS1 group was enhanced and the iron ion content was increased.

These results indicate that the levels of NOX4 in astrocytes of APP/PS1 mice are elevated, leading to ferroptosis.

Silencing NOX4 could attenuate ferroptosis in astrocytes
In the in vitro AD model experiments involving NOX4 silencing, the Western blot analysis results (Fig. 4A) indicated a significant reduction in NOX4 expression, with the first shRNA construct showing the strongest silencing effect. The CCK-8 assay (Fig.
4B) showed a marked decrease in astrocyte viability in the AD model, and subsequent NOX4 silencing led to a significant recovery of viability. Additionally, we conducted Western blot detection of ferroptosis-related proteins in this model. As shown in the results (Fig. 4C), the levels of SAT1, GPX4, and SLC7A11 in astrocytes of the AD model decreased, while the levels of NOX4, 4-HNE, MDA, FTH1, and ACSL4 increased. NOX4 silencing resulted in an increase in SAT1, GPX4, and SLC7A11 levels and a decrease in NOX4, 4-HNE, MDA, FTH1, and ACSL4 levels. GSH assay results (Fig. 4D) demonstrated a decrease in GSH content in astrocytes of the AD model, which significantly increased upon NOX4 silencing.

Analysis of MDA content (Fig. 4E) showed an increase in astrocytic MDA levels in the AD model, which significantly decreased upon NOX4 silencing. Iron ion content determination (Fig. 4F) revealed an increase in iron ion levels in astrocytes of the AD model, which significantly decreased following NOX4 silencing. Moreover, the GPX4 enzyme activity assay results (Fig. 4G) indicated a decrease in GPX4 enzyme activity in astrocytes of the AD model, which significantly increased upon NOX4 silencing. Finally, FerroOrange assay results (Fig. 4H) displayed enhanced orange fluorescence and increased iron ion levels in astrocytes of the AD model, both of which decreased notably with NOX4 silencing.

These findings suggest that NOX4 silencing can inhibit ferroptosis in astrocytes.

Silencing NOX4 could attenuate ferroptosis in APP/PS1 mice
To further investigate the impact of NOX4 on ferroptosis in APP/PS1 mice, we conducted a NOX4 knockdown experiment in these mice.

WB results revealed a significant decrease in NOX4 expression after silencing NOX4 (Fig. 5A). Among the constructs, the first exhibited the best silencing effect, so we selected it for subsequent experiments. Immunofluorescent staining showed that compared to the WT group, the intensity of NOX4, 4-HNE, and MDA-positive staining in GFAP-positive astrocytes in the cortical area increased in the APP/PS1 group (Fig. 5B-D). After silencing NOX4, the intensity of NOX4, 4-HNE, and MDA-positive staining decreased.

Later, we sorted mouse cortical astrocytes using flow cytometry and examined their ferroptosis status. The WB results (Fig. 5E) showed that compared to the WT group, the expression levels of SAT1, GPX4 and SLC7A11 were decreased, while the expression levels of NOX4, 4-HNE, MDA, FTH1 and ACSL4 were increased in astrocytes of the APP/PS1 group. After silencing of NOX4, the expression levels of SAT1, GPX4 and SLC7A11 increased, while the expression levels of NOX4, 4-HNE, MDA, FTH1 and ACSL4 decreased. The results of lipid peroxidation detection showed that compared with the WT group (Fig. 5F), the degree of lipid peroxidation in astrocytes of the APP/PS1 group was enhanced. After silencing NOX4, the degree of lipid peroxidation decreased.

The GSH measurement results (Fig. 5G) showed that compared to the WT group, the GSH content in the astrocytes of the APP/PS1 group decreased; after silencing NOX4, the GSH content increased. The iron ion measurement results (Fig.
5H) showed that compared to the WT group, the orange fluorescence in the astrocytes of the APP/PS1 group increased, indicating an increase in iron ion content. After silencing NOX4, the fluorescence decreased along with the iron ion content. These results suggest that silencing NOX4 could inhibit ferroptosis in APP/PS1 mice.

Silencing NOX4 improves the APP/PS1 mouse model of Alzheimer's disease
To investigate the potential of silencing NOX4 in mitigating ferroptosis and alleviating Alzheimer's disease, we conducted experiments on APP/PS1 transgenic mice by employing NOX4 knockdown and erastin treatment. Initially, the Morris water maze test was carried out to assess spatial learning and memory in the mice. The findings revealed a prolonged escape latency in APP/PS1 transgenic mice compared to controls, whereas silencing NOX4 significantly reduced the escape latency in these mice. Escape latency was prolonged again in the APP/PS1 + sh-NOX4 + erastin group. In addition, the distances traveled by APP/PS1 mice in the target quadrant, the time spent within the target quadrant during the exploration test, and the number of passages through the platform were significantly reduced. These indicators were significantly improved after silencing NOX4 and decreased again in the APP/PS1 + sh-NOX4 + erastin group. There was no significant difference in swimming speed among the groups of mice in the Morris water maze (Fig. 6A). Immunohistochemical analysis of Aβ protein and p-Tau levels revealed (Fig. 6B) significantly increased levels of Aβ and p-Tau proteins in the brains of APP/PS1 mice. Silencing NOX4 significantly reduced the levels of Aβ and p-Tau proteins, while they were elevated in the APP/PS1 + sh-NOX4 + erastin group.

Congo red staining revealed greater aggregation of amyloid protein plaques in the brains of APP/PS1 mice than in WT mice (Fig. 6C). The aggregation of β-amyloid protein was reduced when NOX4 was silenced but increased in the APP/PS1 + sh-NOX4 + erastin group.

TEM results showed (Fig. 6D) that, compared to WT mice, the brains of APP/PS1 mice exhibited mitochondrial swelling and loss of mitochondrial cristae. The swelling of mitochondria and disappearance of mitochondrial cristae were reduced after silencing NOX4, but were more severe in the APP/PS1 + sh-NOX4 + erastin group.

The above results indicate that silencing NOX4 could improve Alzheimer's disease in APP/PS1 mice.

Discussion
The results of this study offer novel insights into the pathogenesis of Alzheimer's disease (AD). Leveraging single-cell sequencing technology, our research successfully unveils the pivotal role of NOX4 in ferroptotic astrocyte death. This discovery is of considerable significance since, to our knowledge, although astrocytes have been recognized as key players in AD's progression, their precise functions and the specific interplay between astrocytes and AD development remain largely obscure [72, 73]. Our findings unveil a potential relationship between NOX4 and ferroptosis in Alzheimer's disease, offering a novel avenue for unraveling its intricate pathogenesis.
Prior investigations have extensively demonstrated the critical role of iron metabolism dysregulation and oxidative stress in AD. However, these studies have predominantly explored global phenotypes and overarching mechanisms, with relatively limited scrutiny of specific cell types and cellular states [23, 74, 75]. In our study, single-cell sequencing technology played a vital role [35, 76]. This approach enabled us to investigate transcriptomic expression at the single-cell level, providing more detailed information than conventional whole-sample-based research methods [77]. This significantly enhanced the depth and breadth of our study. Our research unveiled, for the first time, the high expression of NOX4 in astrocytes in AD. Subsequent in vitro and in vivo experiments confirmed that elevated NOX4 expression in astrocytes led to ferroptosis.

Historically, the concept of ferroptosis has been applied predominantly in cancer research, with relatively limited exploration in neurodegenerative diseases, particularly Alzheimer's disease [25, 78, 79]. Our study uncovered the significant role of ferroptosis in Alzheimer's disease, establishing a connection between astrocytes and ferroptosis and thus forging a new path for investigating the relationship between ferroptosis and neurodegenerative diseases. Based on the aforementioned results, we tentatively conclude that NOX4 mediates astrocytic ferroptosis and fosters Alzheimer's disease progression (Fig. 7). This study has revealed that NOX4 (NADPH oxidase 4) plays a crucial role in astrocyte ferroptosis. Silencing NOX4 can effectively inhibit ferroptosis, consequently enhancing spatial learning and memory functions in AD mice while reducing levels of Aβ and p-Tau proteins. These findings offer a novel potential for the treatment of Alzheimer's disease. From a clinical perspective, our discovery presents a promising therapeutic target.
One strength of this study lies in the precise identification of the differential gene NOX4 in Alzheimer's disease patients using single-cell sequencing technology, linking astrocytes with ferroptosis and offering a new perspective on the complex pathogenesis of Alzheimer's disease. While the study has yielded some positive results, there are still some limitations to address. Firstly, our reliance on publicly available databases for gene expression analysis and screening may introduce inherent biases [80]. Secondly, while we have elucidated NOX4's role in astrocyte ferroptosis in AD, further research is essential to uncover its specific regulatory mechanisms. Moreover, our study predominantly relies on a mouse model for experimental validation, necessitating further confirmation in human studies. Future investigations should delve into the precise biological mechanisms through which NOX4 governs ferroptosis in astrocytes. Previous studies have indicated that TNFα acts as an upstream factor of NOX4, regulating its expression [81-84]. Therefore, further research is needed to determine whether NOX4 is the most direct target. Subsequent experiments could investigate the impact of TNFα regulation on NOX4 in astrocytes, exploring its influence on ferroptosis and its role in Alzheimer's disease. Additionally, comprehensive and extensive clinical studies are warranted to corroborate our findings and ascertain whether NOX4 represents a viable therapeutic target. Furthermore, the potential of single-cell sequencing technology to unravel the microscopic underpinnings of Alzheimer's disease progression beckons further exploration. In conclusion, our research paves new pathways and offers hope for a deeper comprehension of Alzheimer's disease pathogenesis and the development of innovative AD treatments.

Fig. 1 scRNA-seq Cell Clustering and Annotation. Note: (A) Differential gene expression analysis identified highly variable genes, with red representing the top 1000 highly variable genes, black representing genes with low variability, and the names of the top 10 highly variable genes labeled. (B) Distribution of cells on PC_1 and PC_2, with each point representing a cell. (C) Distribution of the standard deviation of PCs, with important PCs having larger standard deviations. (D) Visual representation of tSNE clustering results showing the aggregation and distribution of cells from different sources, with each color representing a cluster. (E) Visualization of cell annotation results based on tSNE clustering, with each color representing a cell subpopulation. (F) KEGG pathway enrichment analysis of marker genes in astrocytes, with GeneRatio on the x-axis, KEGG functional terms on the y-axis, circle size indicating the number of enriched genes in the term, and color representing enrichment p-value. (G) GO functional analysis of marker genes in astrocytes at the biological process, cellular component, and molecular function levels, with GeneRatio on the x-axis, GO functional terms on the y-axis, circle size indicating the number of enriched genes in the term, and color representing enrichment p-value.
Fig. 2 Identification of Target Genes in Alzheimer's Disease. Note: (A) Volcano plot of differentially expressed genes in transcriptome sequencing data (x-axis represents -log10 p-value, y-axis represents log FC; green dots represent downregulated genes, red dots represent upregulated genes, black dots represent no significant difference; Control group, n = 173; Model group, n = 80). (B) Venn diagram showing the intersection of DEGs and GeneCards database-related genes in Alzheimer's Disease (AD) and astrocytes. (C) GO enrichment analysis of AD-related differentially expressed genes in astrocytes (circle size indicates the number of enriched genes in the term, color represents enrichment p-value). (D) Venn diagram showing the intersection of DEGs and GeneCards database-related genes in AD, astrocytes, and ferroptosis-related proteins. (E) WB detection of NOX4 expression levels in astrocytes of control and AD groups. **P < 0.01; all cellular experiments were repeated 3 times.

Fig. 3 NOX4 and Ferroptosis in Astrocytes of APP/PS1 Mice. Note: (A) Representative immunofluorescence images and statistical analysis of NOX4 expression in the cortical region of mice (astrocytes stained with GFAP in red, NOX4-positive staining in green, cell nuclei stained with DAPI in blue; scale bar: 20 μm; white arrows indicate NOX4- and GFAP-positive cells). (B-C) Representative immunofluorescence images and statistical analysis of 4-HNE and MDA in the cortical region of mice (astrocytes stained with GFAP in red, 4-HNE and MDA-positive staining in green, cell nuclei stained with DAPI in blue; scale bar: 20 μm; white arrows indicate 4-HNE-, MDA-, and GFAP-positive cells). (D) WB detection of NOX4, 4-HNE, MDA, FTH1, SAT1, GPX4, SLC7A11, and ACSL4 expression levels in astrocytes of different groups of mice. (E) Detection of lipid peroxidation using the C11-BODIPY 581/591 oxidative probe in astrocytes of different groups. (F) Measurement of GSH content in astrocytes of different groups. (G) Intracellular Fe²⁺ detection using FerroOrange (scale bar: 25 μm). Each group contains 6 mice. ***P < 0.001

Fig. 6 Effects of Silencing NOX4 on Alzheimer's Disease in APP/PS1 Mice. Note: (A) Spatial learning and memory function of each group of mice were evaluated using the Morris water maze paradigm. The figure presents the trajectories of mice in the Morris water maze, the escape latency in the platform test on the fifth day, the percentage of distance traveled in the target quadrant, the time spent in the target quadrant, the number of platform crossings, and the swimming speed. (B) Aβ protein and p-Tau levels were detected using immunohistochemical methods; scale bar: 100 μm. (C) Aggregation of amyloid plaques detected using Congo red staining; scale bars: 500 μm and 100 μm. (D) Mitochondrial damage in the hippocampal region observed using transmission electron microscopy (TEM); arrows indicate mitochondrial damage; magnifications: ×8000 and ×25,000. Significant differences (P < 0.001) were found between the treatment groups and the control group in the mouse model. Each group consisted of 6 mice.
2024-07-03T13:08:25.204Z
2024-07-02T00:00:00.000
{ "year": 2024, "sha1": "4bf2ff6ae81e9d5635f9b9a9184cd7b33daee1d9", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "19022ecf2f2dfa1420c118ad49abc124b538fc38", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
55856934
pes2o/s2orc
v3-fos-license
Appendicular skeleton and dermal armour of the Late Cretaceous titanosaur Lirainosaurus astibiae (Dinosauria: Sauropoda) from Spain

Lirainosaurus astibiae is the best-known titanosaurian sauropod species from the Iberian Peninsula. It was described by Sanz and collaborators in 1999 on the basis of several cranial and postcranial remains from the Late Cretaceous of Laño (northern Spain); new remains from this and other Iberian fossil-sites have recently been referred to this species. This paper focuses on the description of the appendicular skeleton and dermal armour of Lirainosaurus. Comparison with other European titanosaurs confirms that Lirainosaurus astibiae clearly differs from them, and highlights two diagnostic appendicular features: the presence of a dorsal prominence together with a ventral ridge on the medial surface of the scapular blade, and the combination of an anterolateral process and an anteroventral ridge on the sternal plate. Equations for predicting body mass and size in sauropods suggest a body size up to 6 meters and a body mass of at least 2-4 tonnes for the largest individuals of Lirainosaurus astibiae, it being one of the most slender titanosaurs found to date. The study of the non-axial postcranial skeleton supports the hypothesis that Lirainosaurus astibiae is a derived lithostrotian close to Saltasauridae.

Verónica Díez Díaz. Universidad del País Vasco/Euskal Herriko Unibertsitatea, Facultad de Ciencia y Tecnología, Apdo. 644, 48080 Bilbao, Spain. diezdiaz.veronica@gmail.com
Xabier Pereda Suberbiola. Universidad del País Vasco/Euskal Herriko Unibertsitatea, Facultad de Ciencia y Tecnología, Apdo. 644, 48080 Bilbao, Spain. xabier.pereda@ehu.es
José Luis Sanz. Universidad Autónoma de Madrid, Facultad de Ciencias, Dpto. Biología, Ud. Paleontología, 28049 Cantoblanco, Madrid, Spain. dinoproyecto@gmail.com

INTRODUCTION
Lirainosaurus astibiae is a titanosaurian sauropod from the Late Cretaceous of Laño (northern Spain), first described by Sanz et al. (1999). In fact, it is the best-known titanosaur from the Iberian Peninsula. In recent years, several papers have been produced with a revised description of the published material, as well as new remains referred to this titanosaur, i.e., new cranial specimens such as a braincase (Díez Díaz et al., 2011) and a large sample of teeth that show ontogenetic variation (Díez Díaz et al., 2012). Postcranial remains from other Spanish localities have also been referred to Lirainosaurus (Company et al., 2009; Ortega and Pérez-García, 2009).

Knowledge of the axial skeleton of Lirainosaurus astibiae has also been increased, thanks to a detailed study of the vertebral laminae and fossae in the axial series as well as the inclusion of new vertebral remains in its description (Díez Díaz et al., 2013). In this paper, we proceed to describe the appendicular skeleton and dermal armour of this Iberian titanosaur, including all the known girdle elements, limb bones and osteoderms. In addition, comparisons are made between Lirainosaurus and other titanosaurians, mainly the European forms.

All this previous work and the detailed description of the material (Sanz et al., 1999; Company et al., 2009; Company, 2011; Díez Díaz et al., 2011, 2012, 2013, this paper), as well as its inclusion in numerous phylogenetic analyses, have made Lirainosaurus astibiae one of the world's best-known titanosaurian species, and a main reference for the study of the sauropod faunas of the Late Cretaceous of Europe.
GEOLOGICAL SETTING
The Laño quarry is located between the villages of Laño and Albaina in the Condado de Treviño, an enclave of the province of Burgos that lies within Alava in the Basque Country, in the north of the Iberian Peninsula (Figure 1). From a geological point of view, Laño and the adjacent region lie within the Sub-Cantabrian Synclinorium in the southeastern part of the Basque-Cantabrian Region (Baceta et al., 1999). Laño has yielded a diverse continental vertebrate assemblage of Late Cretaceous (probably late Campanian to early Maastrichtian) age, including fossil remains of bony fish, amphibians, lizards, snakes, turtles, crocodilians, pterosaurs, dinosaurs and mammals (Astibia et al., 1990, 1999; Pereda Suberbiola et al., 2000). The continental fossiliferous beds (L1A, L1B and L2) of the Laño quarry were deposited in an alluvial system composed primarily of fluvial sands and silts. The sedimentary structures are consistent with channel areas within an extensive braided river (Astibia et al., 1990, 1999).

MATERIAL AND METHODS
For the anatomical structures we use "Romerian" terms (Wilson, 2006) for their orientation (e.g., "anterior", not "cranial"). Osteological descriptions are organized as follows: pectoral girdle, forelimb, pelvic girdle, hind limb and osteoderms. The eccentricity index (ECC) has been calculated for both humeri and femora as the mid-shaft mediolateral width divided by the anteroposterior width (Wilson and Carrano, 1999). The robustness index (RI) has been calculated for all the appendicular remains as the average of the greatest widths of the proximal and distal ends and the mid-shaft, divided by the length of the element in question (Wilson and Upchurch, 2003). For body mass and size we have used the equations proposed by Packard et al. (2009), Seebacher (2001) and Campione and Evans (2012), in which M(g) = 3.352 × Per_H+F^2.125, M(kg) = 214.44 × L(m)^1.46, and log M(g) = 2.754 × log Per_H+F − 1.097, where M is body mass, Per_H+F is the sum of the perimeters of the humerus and femur in mm, and L is body length in m.

Remarks. Lirainosaurus, a small-sized sauropod, is the largest vertebrate in the Laño association (Pereda Suberbiola et al., 2000). Sanz et al. (1999) erected L. astibiae on the basis of a fragment of skull, several isolated teeth and a set of postcranial elements. A braincase, a number of additional teeth and several axial remains from Laño have been referred to Lirainosaurus (Díez Díaz, 2013; Díez Díaz et al., 2011, 2012, 2013). The material of Lirainosaurus astibiae comes from three fossiliferous strata and corresponds to several different individuals. Elements with the same morphology but also with shared autapomorphies, such as caudal vertebrae, iliac fragments and fibulae, have been found in two or in all three beds, strengthening the hypothesis of the presence of a single titanosaurian taxon in Laño. As a whole, the remains of L. astibiae found in Laño are quite homogenous and show little osteological variation, indicating that they probably belong to the same species (Sanz et al., 1999; Díez Díaz, 2013; Díez Díaz et al., 2011, 2012, 2013).
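As a worked illustration of these formulas and indices, the following minimal Python sketch encodes them directly; the input values at the bottom are placeholders, not measurements from Tables 2-5:

```python
import math

def mass_packard(per_hf_mm: float) -> float:
    """Packard et al. (2009): body mass in g from summed humeral and
    femoral perimeters (mm)."""
    return 3.352 * per_hf_mm ** 2.125

def mass_seebacher(length_m: float) -> float:
    """Seebacher (2001): body mass in kg from body length (m)."""
    return 214.44 * length_m ** 1.46

def mass_campione_evans(per_hf_mm: float) -> float:
    """Campione and Evans (2012): body mass in g, log-linear form."""
    return 10 ** (2.754 * math.log10(per_hf_mm) - 1.097)

def ecc(ml_width: float, ap_width: float) -> float:
    """Eccentricity index: mid-shaft mediolateral / anteroposterior width."""
    return ml_width / ap_width

def ri(prox_w: float, mid_w: float, dist_w: float, length: float) -> float:
    """Robustness index: mean of proximal, mid-shaft and distal widths
    divided by element length."""
    return ((prox_w + mid_w + dist_w) / 3.0) / length

# Illustrative inputs only:
per_hf = 400.0  # summed humerus + femur perimeters, mm
print(f"Packard:         {mass_packard(per_hf) / 1e6:.2f} t")
print(f"Campione-Evans:  {mass_campione_evans(per_hf) / 1e6:.2f} t")
print(f"Seebacher (6 m): {mass_seebacher(6.0):.0f} kg")
```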
DESCRIPTION AND COMPARISONS
The appendicular skeleton of Lirainosaurus astibiae is represented by pectoral girdle (scapula, coracoid, sternal plate), forelimb (humerus, ulna), pelvic girdle (ilium, pubis) and hindlimb (femur, tibia, fibula, metatarsal) bones. In addition, a few elements from the dermal armour are also known. All elements were recovered as isolated specimens, but some of them were found in close proximity to each other and could belong to the same individual. The measurements of these appendicular bones are detailed in Tables 2-5.

Pectoral Girdle (Figure 2) (Table 2)
Four scapulae are known: two left (MCNA 7459, 13855), one right (MCNA 14461), and a poorly preserved fragment (MCNA 14462). Three coracoids have been recovered: two right (MCNA 1846, 7460) and one left (MCNA 3158). The scapula MCNA 14461 and the coracoid MCNA 3158 probably belong to the same individual, as they were found close together. Only one right sternal plate is known (MCNA 7461).

Scapula (Figure 2.1-3). The scapulae will be described with the blade oriented horizontally. The best-preserved and largest specimen is MCNA 14461. The acromion is medially curved and concave on its medial surface. The dorsal margin of the acromion is damaged, so it is not possible to infer its total extension to the coracoid articulation. None of the specimens shows an acromial ridge. The glenoid is shorter than the coracoid articulation and faces anteroventrally. Both surfaces are medially deflected. Noteworthy is the absence of a subtriangular process at the posteroventral corner of the acromial plate. The scapular blade is straight, flat medially and slightly convex laterally, with a D-shaped cross-section, especially in the proximal half. In MCNA 14461 and MCNA 14462 a dorsal prominence is present on the medial surface, close to the junction of the acromion with the scapular blade. On the medial surface, MCNA 7459 shows both a dorsal prominence close to the dorsal edge of the scapular blade and the acromion, and a ventral ridge; it is the only specimen that presents both structures. The dorsal prominence is more developed and sharper in MCNA 14461 than in MCNA 7459, where it is more rounded and less pointed. The distal end of the scapular blade is not preserved in the Laño specimens, but it seems to have been only slightly dorsoventrally expanded in relation to the rest of the blade.

Coracoid (Figure 2.4-7). The coracoids will be described with the glenoid surface oriented posteroventrally. The coracoid presents a subquadrangular outline. The lateral surface is convex, while the medial one is concave. The anterodorsal edge is thinner than the posteroventral one, where the surface of the glenoid is located. The coracoid foramen, which pierces the coracoid lateromedially, is situated dorsal to the glenoid articulation. It is open in MCNA 3158 and 7460 (in the former the borders are broken, but the position of the foramen and comparison with MCNA 7460 lead us to believe that it was open). The glenoid surface is larger than the scapular articulation. In lateral view, a rough surface near the anteroventral edge is present, probably for the insertion of the coracobrachialis brevis (Borsuk-Bialynicka, 1977). There is no infraglenoid lip.
Sternal Plate (Figure 2.8-9). The only sternal plate is an incomplete right specimen. It preserves its lateral and anterior borders, the lateral one being strongly concave. This element probably had a semilunar shape, as in most titanosaurs (González Riga et al., 2009). It also has an anteroventral ridge and an anterolateral prominence. The surface of the sternal plate becomes flatter towards its medial margin.

Forelimb (Figure 3) (Table 3)
Five humeri are known: two right (MCNA 7462, 7464), two left (MCNA 7463, 7465), and a poorly-preserved specimen (MCNA 14463). In addition, a right ulna is preserved (MCNA 3157).

Humerus (Figure 3.1-4). The humerus presents a long shaft, which is flattened anteroposteriorly, as in most neosauropods (Curry Rogers, 2009). The shaft is straight in anterior and posterior views. The cross-section of the diaphysis is elliptical at its narrowest point. The ends are greatly expanded lateromedially, especially the proximal one, which has a square-shaped edge. The anterior surface is concave in its proximal half. In anterior view, near the proximolateral edge, a thick, scarcely expanded and medially directed deltopectoral crest appears, which almost reaches the mid-height of the diaphysis. In MCNA 7462 and 7463 a small rounded bulge can be seen just above mid-height on the posterior surface, closer to the lateral than the medial margin. This feature cannot be seen in the other humeri due to the iron oxide covering. The posterodistal surface includes a concave area delimited by two ridges. The distal condyles cannot be studied in detail in any of the specimens because they are highly eroded or are not preserved. The average humeral eccentricity reaches a value of 2.

Ulna (Figure 3.5-8). The right ulna MCNA 3157 is a slender bone with a broad proximal end, whereas the distal end is only slightly anteroposteriorly expanded. The proximal end is triradiate, slightly lateromedially compressed, with the longest process slightly posteromedially directed. The olecranon process is not prominent. The anterolateral process is shorter than the anterior one. These processes form a deep fossa for the reception of the radius. The upper medial surface is shallowly concave, and the shaft has a triangular cross-section: a prominent ridge runs from the proximal end towards the middle of the medial surface of the shaft. The posterior edge is more curved than the anterior one, which is almost straight.

Pelvic Girdle (Figure 4) (Table 4)
Four iliac remains are known: two right (MCNA 7466, 8609) and two left (MCNA 13861, 14464). Also, a left pubis (MCNA 7467) has been recovered.

Ilium (Figure 4.1-3). The left ilium MCNA 14464 only preserves the iliac blade and the pubic peduncle. The other three specimens are fragments of the dorsal margin of the acetabulum, i.e., the junction of the iliac blade and the proximal part of the pubic peduncle (MCNA 7466, 8609, 13861). The pubic peduncle seems to have been elongate and anteroventrally oriented. In lateral view, just at the base of the pubic peduncle, a triangular hollow appears. The internal structure is highly pneumatized.

Pubis (Figure 4.4). A left pubis, broken at its mid-length, is known. The proximal part is well preserved, and the distal part is eroded. The obturator foramen is large and oval. It is situated close to the dorsal end of the ischial articulation surface, which is the wider edge of the proximal plate of the pubis. The acetabulum and the iliac articulation are not well preserved. The distal blade is lateromedially compressed.
The pubis of L. astibiae seems to have been a slender and gracile bone.

Femur (Figure 5.1-5). The shaft is straight, anteroposteriorly compressed and with an elliptical cross-section. The posterior surface of the diaphysis is slightly more concave than the anterior one, and presents a flange, the trochanteric shelf, running from below the greater trochanter to the upper third of the shaft. The femoral head is convex, and it is directed dorsomedially. These specimens show a medial deflection that occupies the proximolateral third of the femur. The lateral bulge lies distal to the greater trochanter. The fourth trochanter is a scarcely developed ridge on the posteromedial surface of the diaphysis, just above the mid-shaft, which is not visible in anterior view. The distal articular surface is expanded onto the anterior and posterior surfaces of the femur, and it is beveled relative to the long axis of the shaft, especially on the posterior surface. The fibular condyle, which is unequally divided into a small posterodorsal portion and a larger ventral surface, is anteroposteriorly more compressed than the tibial condyle, and they are separated by an intercondylar groove. Both distal condyles are dorsomedially directed. The average femoral eccentricity is higher than 2.

Tibia (Figure 5.6-9). The tibiae are slender, have a mediolaterally compressed diaphysis, and the proximal articular surface is anteroposteriorly compressed and oval. The cnemial crest is an expanded, flat, rounded ridge. With the flat subtriangular surface of the distal end treated as the anterior surface, the cnemial crest projects anterolaterally. This cnemial crest delimits a shallow surface for the reception of the proximal end of the fibula. In MCNA 2203, a prominent anteromedial ridge close to the distal end delimits two concave surfaces, one anteriorly located and the other medially directed. In the other specimens these surfaces and the ridge are not as conspicuous as in MCNA 2203. The distal end of the tibia is more or less subquadrangular in MCNA 2203 and 7471, but longer anteroposteriorly in MCNA 13860. The distal posteroventral process is short. The articular surfaces for the ascending process of the astragalus and for the posteroventral process are not well pronounced in the tibiae of L. astibiae.

Fibula (Figure 5.10-13). The fibula is more slender than the tibia. The cross-section of the diaphysis at its midpoint is more or less subcircular. The extremities are more expanded than the diaphysis, especially the proximal one. The proximal end is lateromedially compressed, the lateral surface being convex and the medial one concave.
Although the proximal end of most of the fibulae is not perfectly preserved, in some specimens, e.g., MCNA 14471, the absence of an anteromedial crest, like the one present in the fibula of the basal somphospondylan sauropod Tastavinsaurus sanzi (Canudo et al., 2008: fig. 15E), can be confirmed. In lateral view, the shaft of the fibula is slightly sigmoidal to concave on its anterior surface. The oval lateral trochanter appears in the middle part of the lateral surface of the diaphysis. The distal end has a triangular profile. In medial view, a concavity can be seen; this is the articulation of the fibula with the astragalus.

Metatarsal (Figure 6). The left metatarsal III (MCNA 14474) is slender; the proximal end is expanded and has a rectangular profile. The distal end is less expanded and has two condyles, the medial being larger than the lateral one. Proximally, the medial and lateral surfaces have well-defined triangular areas of articulation for metatarsals II and IV.

TABLE 5. Measurements of the best-preserved specimens (in cm) of the hindlimb of the titanosaurian sauropod Lirainosaurus astibiae from the Late Cretaceous of Laño (northern Spain). The measures with a ≈ symbol are approximate as iron oxides cover the surface of the bones. Abbreviations: **: eroded proximal and/or distal ends; ECC = femoral mid-shaft width/femoral anteroposterior width (Wilson and Carrano, 1999); Max.: maximum diameter in the narrowest part of the diaphysis; Max. MLW D/Prx: maximum mediolateral width of the distal/proximal end; MDW: maximum width of the distal end; Min.: minimum diameter in the narrowest part of the diaphysis; MLW: mediolateral width; MPW: maximum width of the proximal end; Per: perimeter in the narrowest part of the diaphysis; RI = average of the greatest widths of the proximal end, mid-shaft and distal end of the element/length of the element (Wilson and Upchurch, 2003).

Dermal Armour (Figure 7)
Two incomplete osteoderms (MCNA 14473, 14474) have been recovered. They seem to have a flat base and two lateral surfaces that join dorsally, perhaps producing a triangular profile, one of them being more perpendicular to the ventral surface than the other. Although most of the osteoderm bodies are missing, they could have belonged to the ellipsoid morphotype described by D'Emic et al. (2009), being part of the "root" first described in the osteoderms of Ampelosaurus atacis (Le Loeuff et al., 1994; Le Loeuff, 1995, 2005).

COMPARISONS
Pectoral Girdle
Elements from the scapular girdle are known for several European titanosaurs: Lirainosaurus astibiae, Ampelosaurus atacis, Atsinganosaurus velauciensis and Magyarosaurus dacus (but the preservation of the remains of the latter taxon complicates its comparison with the other taxa). As observed in the material from the type locality of Laño, the scapula of L. cf. astibiae is laterally convex and has a ventral ridge and a dorsal prominence on the medial surface. A. atacis also has a ventral crest on its medial surface, but not the dorsal one (it shows a medio-dorsal protuberance).
However, the ventral crest of Ampelosaurus is not as prominent as the one present in the scapula of Lirainosaurus. Opisthocoelicaudia presents a ventral rugosity on the medial surface of the scapular blade for the insertion of the serratus superficialis muscle (Borsuk-Bialynicka, 1977). The ventral ridges shown by the scapulae of Lirainosaurus and Ampelosaurus probably had the same function as that of Opisthocoelicaudia skarzynskii, although in these European titanosaurs they are much more developed. In Atsinganosaurus and Magyarosaurus (NHMUK R. 3816) there are no medial ridges. Saltasaurus loricatus and Neuquensaurus australis do present a dorsal prominence on the medial surface of the scapular blade similar to that of Lirainosaurus, though more developed and more like a ridge, but these Argentinean titanosaurs do not show a medial ventral ridge (Powell, 1992). Therefore, the combination of a dorsal prominence and a ventral ridge on the medial surface of the scapular blade is only known in the scapulae of Lirainosaurus.

The coracoid has a subquadrangular profile in all the European titanosaurs, but this is a condition usually found in Titanosauria, except in some basal forms such as the African titanosaur Malawisaurus (Gomani, 2005). Also, all of the European titanosaurs present the coracoid foramen close to the suture of the scapula with the coracoid, but only in L. astibiae and most of the specimens of A. atacis does this foramen open dorsally. Le Loeuff (2005) described some coracoids of A. atacis in which this foramen is closed dorsally by a thin wall of bone that can be easily damaged. The dorsal surface of the coracoid foramen is broken in MCNA 3158, so it could have been closed as in A. atacis. In Rapetosaurus krausei the coracoid foramen is open in a juvenile specimen (Curry Rogers, 2009), a condition that could also have applied to the specimens of L. astibiae. Other titanosaurs also present the coracoid foramen close to the articulation with the scapula, such as the Argentinean Rinconsaurus caudamirus (Calvo and González Riga, 2003; V.D.D., personal observation). The coracoids of L. astibiae do not present an infraglenoid lip, as seen in some saltasaurids such as Saltasaurus loricatus (Powell, 1992; V.D.D., personal observation) or Opisthocoelicaudia skarzynskii (Borsuk-Bialynicka, 1977).

The lateral surface of the sternal plate of Lirainosaurus astibiae is strongly concave (as observed in the Laño and Chera specimens), as in all titanosaurs (e.g., Rapetosaurus krausei). Also, the surface of the sternal plates of these titanosaurs becomes flatter towards its medial edge. Maxakalisaurus topai (Kellner et al., 2006) presents an anteroventral ridge on the sternal plate, and the same elements of Neuquensaurus australis (Huene, 1929) show an anterolateral ridge. However, the combination of this ridge with the anterolateral process and the concave lateral edge of the sternal plate has only been seen in Lirainosaurus (Sanz et al., 1999; Company et al., 2009).
Forelimb
Forelimb bones are known from all the titanosaurs of the Ibero-Armorican Island and Magyarosaurus. Lirainosaurus, Atsinganosaurus and Magyarosaurus present slender humeri, with greatly expanded ends, especially the proximal one. The humeri of Ampelosaurus are more robust and have the proximal end greatly expanded, with proportions similar to the humeri of saltasaurids, the breadth of the proximal end being more than 50% of the humeral length. The dorsal edge of the proximal end is flat in all of these titanosaurs, as is typical in most titanosaurs except Saltasaurus loricatus (Powell, 1992; V.D.D., personal observation), where it is sigmoidal. Also, the main axes of the diaphyses of all the European titanosaurs are straight with an elliptical cross-section, and the deltopectoral crest is thick and medially directed in all of them. NHMUK R. 3849 presents a diaphysis that is narrower at mid-shaft than that of other European titanosaurs. The medial margin of the humeri of all the European titanosaurs is concave, especially at the mid-shaft. Nevertheless, the lateral margin is straight in the French taxa, while Lirainosaurus and Magyarosaurus have slightly concave lateral margins. The ulnae of A. atacis are more robust than those of L. astibiae, L. cf. astibiae and M. dacus (NHMUK R. 3849), and the lateral ridge is more prominent and the medial upper surface flatter in the French specimens (MDE C3-1296, 1490). L. astibiae has a more lateromedially compressed proximal end, and the fossa between the anterior and anterolateral processes is deeper and narrower than in the other European titanosaurs. The olecranon process is more prominent in L. cf. astibiae than in L. astibiae (Company et al., 2009).

Pelvic Girdle
Besides Lirainosaurus, only Ampelosaurus and Paludititan preserve pelvic elements. The ilia of L. astibiae are anteroposteriorly more concave above the acetabulum than in the specimen of P. nalatzensis; the pubic peduncles are robust and anteroventrally oriented in both taxa. Only L. astibiae presents a triangular hollow at the base of the pubic peduncle of the ilium, as in the Argentinean saltasaurine Rocasaurus muniozi (Salgado and Azpilicueta, 2000; V.D.D., personal observation), but it is also present, presumably as a convergence, in the basal eusauropod Cetiosaurus (Upchurch and Martin, 2003, fig. 11A, B).

Damage to the pubis of L. astibiae makes comparison with A. atacis and P. nalatzensis difficult. However, the obturator foramen seems to be larger and more dorsally located in the pubis of L. astibiae.

Hindlimb
Hindlimb bones are known for all the European titanosaurian taxa. All of them except A. velauciensis preserve femora, which are anteroposteriorly compressed and have a straight diaphysis. They also show a lateral bulge and, just above it, the proximolateral margin is medial to the lateral margin of the distal half of the shaft, as in most Titanosauriformes (Wilson, 2002; Upchurch et al., 2004; Mannion et al., 2013), but in the Romanian titanosaurs this feature cannot be observed due to the preservation of the specimen (NHMUK R. 3849). The femora of L. astibiae and L. cf. astibiae are more slender than those of A. atacis. The femoral head is dorsomedially directed, and the fourth trochanter is reduced and posteromedially placed on the diaphysis in all of them.
astibiae, the greater trochanter extends distally like a flange. This flange, or trochanteric shelf, is also present in the titanosaurs Saltasaurus loricatus, Neuquensaurus australis, Rocasaurus muniozi, Rapetosaurus krausei and Jainosaurus cf. septentrionalis, but also in more basal somphospondylans (Curry Rogers, 2009; Wilson et al., 2011; Mannion et al., 2013; V.D.D., personal observation). The distal condyles are more medially directed in the Spanish specimens than in A. atacis. Vila et al. (2012) studied several femora assigned to titanosaurs from upper Campanian to uppermost Maastrichtian fossil-sites of Spain and Southern France, referring them to different types according to a set of anatomical features. One of the types comprises two specimens from Fox-Amphoux and probably three more femora from Bellevue (Campagne-sur-Aude). This material has been referred to cf. Lirainosaurus astibiae because of its ECC of about 2 (200%), the scarcely developed, ridge-shaped fourth trochanter, and the anteroposteriorly compressed distal end with medially directed distal condyles. However, these French femora do not share with Lirainosaurus astibiae the following features: Lirainosaurus presents an RI of about 0.2, whereas the French femora exhibit an RI of almost 0.16; in the French femora the lateral bulge is anteriorly projected in anterior and lateral views, whereas in Lirainosaurus it is straight; Lirainosaurus presents a trochanteric shelf in posterior view; and the intercondylar sulcus is much more developed in Lirainosaurus than in the French specimens. Also, Vila et al. (2012) noted that the femora assigned to cf. Lirainosaurus astibiae are morphologically similar to those they had assigned to Ampelosaurus atacis, differing only in the proximodistal development of the lateral bulge and the position of its distalmost edge. This highlights the need for particular care in attributing isolated femora of this type to Lirainosaurus.

All the Ibero-Armorican taxa are represented by tibiae (M. dacus preserves a distal fragment of a tibial diaphysis, NHMUK R. 3850), and these are very similar. The tibiae of Ampelosaurus and Atsinganosaurus seem to be more robust than those of Lirainosaurus (except for the right tibia MDE C3-1303 of A. atacis, which is more gracile). A. velauciensis and NHMUK R. 3850 also have a prominent anteromedial ridge close to the distal extremity of the tibia delimiting two concave surfaces. All of them show a distal end whose transverse diameter is longer than the anteroposterior one, except Lirainosaurus astibiae, whose distal end is more subquadrangular. This feature is probably a diagnostic character of the Iberian taxon.

Only L. astibiae and A. atacis preserve complete fibulae (M. dacus preserves some fragments of the diaphysis). The main difference between the two taxa is that the proximal and distal extremities of the fibulae of A. atacis are more expanded than those of L. astibiae. Also, L. astibiae does not present a sigmoidal ridge associated with the lateral trochanter, as present in the fibulae of A. atacis (Le Loeuff, 1992).

Dermal Armour The presence of osteoderms is a diagnostic feature of lithostrotian titanosaurs (Upchurch et al., 2004). Osteoderms are known in Lirainosaurus and Ampelosaurus, but the poor preservation of those referred to Lirainosaurus makes detailed comparison between these two taxa difficult: A. atacis has the diagnostic large spines (Le Loeuff et al., 1994), which are included in the ellipsoid morphotype of D'Emic et al.
(2009), a morphotype that the osteoderms of Lirainosaurus might also have displayed. Recently, several osteoderms have been described from the "Lo Hueco" site (Cuenca, Spain) (Ortega et al., 2012), and these also differ from those of L. astibiae. They are elongated with a concave base and a convex external surface, and their morphology varies between the "spines" described by Le Loeuff et al. (1994) and the "bulb and root" ellipsoid morphotype proposed by D'Emic et al. (2009). The bulb-shaped end is formed by a circular surface and is delimited by a cingulum. They also present a shallow sagittal crest. The available evidence shows that Lirainosaurus did not possess dermal armour like that of Ampelosaurus. Lirainosaurus astibiae is represented by abundant remains in Laño, but only two fragmentary osteoderms have been found. This suggests that its dermal armour could have been lighter than that present in other lithostrotians, such as Ampelosaurus or the titanosaurs from Lo Hueco.

DISCUSSION Lirainosaurus astibiae shares one synapomorphy with Titanosauriformes: the proximal third of the femoral shaft, just above the lateral bulge, is deflected medially (Salgado et al., 1997; Wilson and Sereno, 1998; Wilson, 2002; Mannion et al., 2013). Mannion and Calvo (2011) suggest that the presence of an oval (rather than subcircular) obturator foramen in the pubis (like the one that Lirainosaurus astibiae presents), with the long axis oriented in the same plane as the long axis of the pubic shaft, might be a titanosauriform characteristic. Like somphospondylans, L. astibiae has a scapular glenoid surface that is strongly bevelled medially, and the proximal end of the humerus is flat in anterior view (Wilson and Sereno, 1998; Wilson, 2002). Moreover, the femoral fourth trochanter is reduced to a subtle bulge (D'Emic, 2012). Titanosaurs have a femoral eccentricity greater than 1.85 (Wilson and Carrano, 1999), and the femora of L. astibiae easily reach a value of 2. In addition, the sternal plate has a strongly concave lateral edge, as in all other titanosaurs (Curry Rogers, 2009). It also shares with Lithostrotia the presence of osteoderms (Upchurch et al., 2004; D'Emic, 2012), and the anterior crest of the humerus extends medially across its anterior face (Mannion et al., 2013). Lirainosaurus shares with Saltasauridae the rectangular profile of the coracoid, the distal expansion of the humeral deltopectoral crest, a femoral midshaft lateromedial diameter that is more than 185% of the anteroposterior diameter, distal condyles of the femur bevelled dorsomedially (Wilson, 2002), and the presence of a posterolateral bulge around the level of the deltopectoral crest of the humerus (D'Emic, 2012). However, the coracoid does not have an infraglenoid lip, which is a feature shown by several saltasaurids, such as Opisthocoelicaudia and Saltasaurus (Wilson, 2002).

The trochanteric shelf seems to be a diagnostic feature of the femora of derived lithostrotians (D'Emic (2012) established it as a synapomorphy of the group Alamosaurus + "Saltasaurini"), as it is present in Saltasaurus loricatus, Neuquensaurus australis, Rocasaurus muniozi, Rapetosaurus krausei, Jainosaurus cf. septentrionalis, and also in Lirainosaurus astibiae. In addition, a subtriangular process at the posteroventral corner of the acromial plate of the scapula is generally absent in more derived titanosaurs, although it is present in some of them, e.g., Alamosaurus (D'Emic et al., 2011) and Elaltitan (Mannion and Otero, 2012).
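The femoral eccentricity (ECC) used in these comparisons is a simple ratio of the lateromedial to the anteroposterior midshaft diameters. As a minimal illustration only (the diameter values below are hypothetical, not measurements of the Laño material):

```python
# Illustrative sketch: femoral eccentricity (ECC) as the ratio of lateromedial to
# anteroposterior midshaft diameters, checked against the >1.85 titanosaur threshold
# of Wilson and Carrano (1999). The diameters below are hypothetical placeholders.
def femoral_eccentricity(ml_diameter_mm, ap_diameter_mm):
    return ml_diameter_mm / ap_diameter_mm

ecc = femoral_eccentricity(ml_diameter_mm=96.0, ap_diameter_mm=48.0)
print(f"ECC = {ecc:.2f} ({ecc * 100:.0f}%); titanosaur-grade: {ecc > 1.85}")
```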
One interesting character observed in Lirainosaurus is the presence of a deep triangular hollow on the lateral surface of the ilia, just at the base of the pubic peduncle. This hollow is also present in the ilia of the saltasaurine titanosaur Rocasaurus muniozi (Salgado and Azpilicueta, 2000; V.D.D., personal observation), but until now it has not been known in any other titanosaur.

The rough anteroventral surface of the coracoid, visible in lateral view, seems to be a genuine feature shared by Lirainosaurus astibiae and Opisthocoelicaudia skarzynskii. This character is an unusual feature among Sauropoda and, although it has not been described in Alamosaurus, it could be a synapomorphy of the opisthocoelicaudine titanosaurs.

As previously hypothesized by Sanz et al. (1999) and Díez Díaz et al. (2011, 2012, 2013), and after the study of the appendicular material in this paper, Lirainosaurus astibiae is considered to be a derived member of Lithostrotia, which also shares several synapomorphies with Saltasauridae. However, Lirainosaurus is not a saltasaurid, as it lacks some of the most diagnostic features of this clade, such as the presence of an infraglenoid lip in the coracoid, a rounded process at the junction of the proximal and lateral surfaces of the humerus, and humeral distal condyles that are divided (Wilson, 2002; Upchurch et al., 2004). Sanz et al. (1999) considered there to be two appendicular autapomorphies of Lirainosaurus astibiae, both related to the scapular girdle: the presence of a ridge on the ventral margin of the medial side of the scapular blade, and an anterolateral process on the sternal plate. Upchurch et al. (2004) expressed their doubts about the validity of these autapomorphies, as they also appear in other titanosaurs, e.g., Opisthocoelicaudia (Borsuk-Bialynicka, 1977). As noted above, the ventral ridge on the medial surface of the scapular blade also appears in Ampelosaurus, but it is not as prominent as that of Lirainosaurus, and Opisthocoelicaudia presents a rugose ventral surface. The dorsal prominence is present in Saltasaurus loricatus and Neuquensaurus australis as well (Powell, 1992), but the combination of a dorsal prominence and a ventral ridge on the medial surface of the scapular blade is only known in the scapulae of L. astibiae (including the material from Laño and the referred remains from Valencia), although some specimens from Laño do not show this ventral ridge. This variation could be due to ontogenetic changes or sexual dimorphism. Nevertheless, the small sample of scapular remains referred to Lirainosaurus from the fossil-sites of Laño and Chera makes it difficult to confirm this hypothesis. As there is no current evidence to support either ontogenetic changes or sexual dimorphism, this difference is regarded as being due to individual variation. Some titanosaurs, such as Maxakalisaurus topai (Kellner et al., 2006), present an anteroventral ridge on the sternal plate, so this cannot be considered a diagnostic feature of L.
astibiae. Nevertheless, the presence of this ridge together with the anterolateral process and the strongly concave lateral edge of the sternal plate is considered to be diagnostic of Lirainosaurus astibiae. Also, the subquadrangular profile of the distal end of the tibia of Lirainosaurus astibiae could be regarded as an autapomorphy within Titanosauria, as the other titanosauriforms show tibial distal ends that are more expanded transversely than anteroposteriorly. However, Mannion and Otero (2012) also observed a subcircular distal end for the fibula of Antarctosaurus wichmannianus, and noted it as a possible autapomorphy of that taxon.

Estimation of Body Size and Mass for Lirainosaurus astibiae We have followed the equations proposed by Seebacher (2001), Packard et al. (2009) and Campione and Evans (2012) in estimating the body size and mass of Lirainosaurus astibiae. With the smallest humerus and femur recovered from Laño we obtain a size of 3.86 meters and a mass of 1.54 tonnes (Packard et al., 2009) or 1.74 tonnes (Campione and Evans, 2012), whereas with the largest of these elements the results are 5.98 meters and 2.92 tonnes (Packard et al., 2009) or 3.98 tonnes (Campione and Evans, 2012). However, these results should be taken cautiously, as pneumaticity means that sauropods could increase their size while reducing their weight (Gascó, 2009). Taking these results into account, we tentatively propose a body length of 4 meters, or as much as 6 meters for the largest individuals, and a body mass of ca. 2-4 tonnes for adult individuals of Lirainosaurus astibiae. The measurements and the RI of the appendicular skeleton, in conjunction with the body size and mass, confirm that this Iberian taxon was a small-sized, slender titanosaur, as previously suggested by Company (2011) on the basis of histological analyses of fore and hindlimb bones referred to Lirainosaurus cf. astibiae from Chera (Spain).

CONCLUSIONS The description of the appendicular skeleton and dermal armour of Lirainosaurus astibiae increases what is known about this Iberian sauropod, which is now one of the best-known titanosaurs. Our study supports the idea that Lirainosaurus is clearly different from other European titanosaurs (see previous works by Sanz et al., 1999; Díez Díaz et al., 2011, 2012, 2013). These differences are centred mainly on the structures observed in the scapula and sternal plate, the triangular hollow at the base of the pubic peduncle, and the slenderness of the forelimb and hindlimb bones. The presence in L. astibiae of a dorsal prominence and a ventral ridge on the medial surface of the scapular blade, the combination of an anterolateral process and an anteroventral ridge in the sternal plate, and the subquadrangular profile of the distal end of the tibia are considered to be diagnostic features. Lirainosaurus is a derived lithostrotian that could be closely related to Saltasauridae according to the appendicular characters that it shares with this group of derived titanosaurs. Equations for predicting body mass and size in sauropods suggest a body length of 4-6 meters and a body mass of ca. 2-4 tonnes for the largest individuals of Lirainosaurus astibiae, making it one of the most slender titanosaurs known to date.

FIGURE 1. Map showing the geographic location of the Laño quarry.
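As a worked illustration of the scaling approach used in the body-size estimation above, a minimal sketch follows. This is not the authors' calculation: the regression coefficients are the commonly cited values for the Campione and Evans (2012) quadruped equation and should be verified against the original paper, and the circumference inputs are hypothetical, not measurements of the Laño material.

```python
# Minimal sketch of stylopodial-circumference mass estimation in the spirit of
# Campione and Evans (2012). Coefficients below are the commonly cited quadruped
# values (assumption; verify against the source); circumferences are hypothetical.
import math

def body_mass_kg(c_humerus_mm, c_femur_mm, slope=2.749, intercept=-1.104):
    """Body mass (kg) from combined humeral + femoral midshaft circumference (mm)."""
    c_total = c_humerus_mm + c_femur_mm
    log_mass_g = slope * math.log10(c_total) + intercept
    return 10 ** log_mass_g / 1000.0

# Hypothetical circumferences bracketing a small, slender titanosaur:
print(f"estimated mass: {body_mass_kg(180.0, 240.0):.0f} kg")  # ~1.3 tonnes
```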
2018-12-15T10:36:12.350Z
2013-08-19T00:00:00.000
{ "year": 2013, "sha1": "3c66119719d7cb2159e010e43b8d8143288b0d3d", "oa_license": "CCBYNCSA", "oa_url": "http://palaeo-electronica.org/content/pdfs/350.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3c66119719d7cb2159e010e43b8d8143288b0d3d", "s2fieldsofstudy": [ "Biology", "Geology" ], "extfieldsofstudy": [ "Geology" ] }
105557452
pes2o/s2orc
v3-fos-license
Research analysis of mechanical impurities of wells The article is devoted to the investigation of the mineralogical and granulometric composition of the mechanical impurities that destroy the pipeline network and the pumping hydraulic system in the fields of Western Siberia. Premature failure of the equipment is caused by the aggressiveness of the groundwater pumped into the reservoir. A physicochemical analysis of the composition of the solid deposits on damaged parts of oilfield equipment was performed. Measures to prevent salt deposition were proposed. Introduction Over sixty years have passed since the development of Western Siberia began. The pipeline infrastructure of the oil and gas industry is badly dilapidated. In injection wells, reservoir pressure is maintained by pumping sewage water into the reservoir through the pipeline network. The fields of the Far North have changed considerably: the share of reservoir water in the fluid extracted with oil now exceeds 95%. This fluid forms an aggressive environment, leading to the destruction of the pipeline network and the hydraulic pumping system. The cause of the aggressiveness that destroys oilfield pipes and equipment is the mineralization and granulometric composition of the pumped liquid. Research Determination of the mineralogical and granulometric composition is carried out in the laboratory "Physics of Oil and Gas Reservoirs and Systems" of the Nizhnevartovsk Research and Design Institute of the Oil Industry (NIPIneft). Under the microscope, a complete analysis is made of the mechanical impurities of the disintegrated metal arriving for examination from the fields. Microscopic analysis makes it possible to determine the shape, structure and character of the damaged surface, as well as the optical properties, which allows a decision on the expediency of a method of inhibitor protection against deposits and destruction of the pipes under investigation. Less than 1 mg of substance is required to prepare a sample for examination under the microscope. Measurement of spherical particles under the microscope gives reliable and accurate values. Grain sizes are measured using a "Polam" petrographic microscope. A scale with divisions, or a grid mounted in the eyepiece, is projected onto the sample to measure the granulometric composition of the cuttings. To determine the magnification of the microscope, an object micrometer is placed under the lens. To count the number of particles automatically using a photocell, the particles in the field of view are measured and recorded and, based on the results, assigned to size classes covering selected ranges. From these data, the frequency distribution of the number of particles is constructed. The granulometric composition is shown in the histograms of the report. The mineral composition of the mechanical impurities is represented by microcrystalline carbonate, barite and iron hydroxides. The micro-grains of carbonate are smaller than 0.02 mm. The carbonate forms compressed lamellae and irregular to isometric micro-grains with jagged edges. Sphericity is 0.9; roundness is 0.5. A barite admixture is noted in the form of thin lamellar, acicular and scaly aggregates no larger than 0.15 mm. Iron hydroxides, colored rusty brown, are found as colloidal and amorphous accumulations up to 0.03 mm in size.
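The size-class counting described above reduces to a simple binning step from which the report histograms are built. A minimal sketch follows; the measured diameters and the bin edges are hypothetical placeholders, not the laboratory's data or software.

```python
# Minimal sketch of the granulometric binning step: measured particle diameters are
# assigned to size classes and a frequency distribution is built, as for the report
# histograms. Diameters and size-class edges below are hypothetical assumptions.
import numpy as np

diameters_mm = np.array([0.012, 0.03, 0.055, 0.08, 0.095, 0.11, 0.02, 0.07])
# Assumed bin edges spanning the pelitic, siltstone and psammitic fractions:
edges_mm = [0.0, 0.01, 0.05, 0.1, 0.25, 0.5]
counts, _ = np.histogram(diameters_mm, bins=edges_mm)
for lo, hi, n in zip(edges_mm[:-1], edges_mm[1:], counts):
    frac = 100.0 * n / diameters_mm.size
    print(f"{lo:.3f}-{hi:.3f} mm: {n} particles ({frac:.0f}%)")
```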
In mineralogical terms, the studied mechanical impurities of the wells are represented by carbonate salts and a small amount of iron hydroxides with an admixture of carbonaceous matter. Among all the examined samples, clastic grains of quartz and, rarely, plagioclase were noted in wells of the Priobskoye, Ombinskoye and Malobalykskoe West Siberian fields, where the percentage ratio was 61% to 24%, respectively. Fragments of plagioclase made up no more than 5%. The admixture of iron hydroxides did not exceed 3-32%, and the carbonaceous matter did not exceed 7-33%. In granulometric terms, the studied mechanical impurities were represented by the psammo-aleurite fraction, with a predominance of the coarse-grained siltstone fraction with grain sizes from 0.05 to 0.1 mm. The coarse- to fine-grained psammite fraction ranged from 15 to 69%, and the pelitic fraction from 21 to 100%. Some samples had a mixed composition with the psammitic and siltstone fractions predominating in equal proportion. Deposits of calcium and magnesium carbonates, calcium sulfate, barium and strontium salts, chlorides and other salts form in wells and equipment during the development and exploitation of the fields. Calcium carbonate CaCO3 predominates in the deposits and, in rare cases, material carried up from the reservoir is noted. Deposition of inorganic salts occurs with all methods of well operation (natural flow, pumping and gas lift), but most often with pumping. Of the total number of wells with salt deposits in the fields of Western Siberia, about 60% are equipped with submersible ESP pumps. The aggressiveness of groundwater is due to its particular chemical composition: dissolved components enhance the dissolution of carbonate-clay cement, with subsequent removal of rock. Aggressive waters change the chemical composition and concentration of certain chemical components; the temperature and filtration rate of the reservoir solution also exert a great influence. Conclusion Thus, the physicochemical analysis of the compositions of the salt deposits allows the following conclusions to be drawn: firstly, the presence of samples with a clear predominance of a particular type of sediment indicates the corresponding process of hardness-salt deposition; secondly, carbonate salts are present in all types of deposits, which indicates that carbonate deposition takes place at all sampling points, although with different reaction rates; thirdly, to combat carbonate-type deposits it is necessary to treat the production wells to prevent the deposition process, namely by carrying out salt-deposition prevention measures with scale inhibitors such as LEYSAN 3003 grade 3 and IPRODENT.
2019-04-10T13:11:39.152Z
2018-08-16T00:00:00.000
{ "year": 2018, "sha1": "85152bceb3717d0d76e46072c1245814c8e61c45", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/181/1/012026", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "b5ea2b3b799559bdee627500533aa273f9178fb5", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
661455
pes2o/s2orc
v3-fos-license
Cardiac tamponade and paroxysmal third-degree atrioventricular block revealing a primary cardiac non-Hodgkin large B-cell lymphoma of the right ventricle: a case report Introduction Primary cardiac lymphoma is rare. Case Presentation We report the case of a 64-year-old non-immunodeficient Caucasian man with cardiac tamponade and paroxysmal third-degree atrioventricular block. Echocardiography revealed the presence of a large pericardial effusion with signs of tamponade, and a right ventricular mass was suspected. Scanner investigations clarified the sites, extension and anatomic details of the myocardial and pericardial infiltration. Surgical resection was performed due to the rapid impairment of his cardiac function. Analysis of the pericardial fluid and histology confirmed the diagnosis of non-Hodgkin large B-cell lymphoma. He was treated with chemotherapy. Conclusion The prognosis remains poor for this type of tumor due to delays in diagnosis and the importance of the site of disease. Introduction Primary cardiac tumors are rare. Cardiac lymphoma is the rarest primary cardiac tumor and it is usually fatal. The prognosis is poor because of diagnostic delay and the importance of the site of disease. It often begins with a pericardial effusion. Its treatment is based on chemotherapy. Case presentation A 64-year-old immunocompetent Caucasian man with no history of cardiac disease presented with chest pain, dyspnea and edema of his lower limbs, associated with a deterioration of his general condition. On physical examination he had a temperature of 37°C, blood pressure of 100/74 mmHg, and a heart rate of 30 bpm. His jugular venous pressure was high. The first and second heart sounds were normal, without any audible murmurs, rubs or gallops. His chest was clear to auscultation. His hemogram, hepatic enzymes and inflammation markers were all normal. The patient was HIV-negative. His chest X-ray revealed cardiomegaly as well as bilateral pleural effusion. The standard 12-lead ECG indicated third-degree atrioventricular block, which resolved spontaneously one hour later. Transthoracic echocardiography (TTE) (Figures 1 and 2) demonstrated not only a pericardial effusion of 23 mm by 35 mm with signs of tamponade but also the presence of a large mass at the level of the right ventricle. The mass had a wide base and was heterogeneous. It appeared lobulated, with a tissular echo texture, and measured 5.5 cm by 5 cm. It was also attached to the tricuspid valve, creating a right ventricular inflow obstruction. The tumor spread over the right atrium. He underwent urgent pericardial drainage, which returned 600 cm3 of hemorrhagic fluid. Bacteriological and cytological analyses revealed large cells suggestive of a lymphoproliferative disorder. A computed tomography scan showed the presence of a right heart tumor on both sides of the tricuspid valve as well as peritoneal effusion. No other organ involvement was observed (Figure 3). Coronary angiography showed an increased myocardial blush, consistent with the highly vascular nature of the tumor (Figure 4). This examination was performed because the patient was more than 40 years old and it was thought that emergency surgery might become necessary at any time because of the size of his tumor. Due to the rapid impairment of his cardiac function and the life-threatening hemodynamic instability, an echocardiography was performed, which showed an obstruction of the right ventricular inflow. He underwent an emergency thoracotomy.
The purpose of this surgery was not to remove the entire tumor; it was limited to freeing the tricuspid valve and relieving the intra-right-ventricular obstruction. Surgical resection of the mass was difficult and incomplete. The tumor had infiltrated his right atrium, the atrioventricular septum and the proximal side of the right ventricle. Surgical removal was laborious but without complications. After the first course of chemotherapy, TTE demonstrated a reduction in the size of the mass (Figure 7). Discussion Primary cardiac tumors are extremely rare in immunocompetent persons. They are more frequent in patients with acquired immunodeficiency syndrome (AIDS) or in transplant recipients. This was not the case in our patient. Approximately 25% of primary cardiac tumors are malignant. Cardiac tumors are classified according to their location and the degree of intra-cavitary obstruction. It is interesting to separate primary cardiac lymphoma, in which cardiac events are the first indications, from secondary locations, in which general events are predominant and the discovery of the cardiac involvement is often fortuitous [1]. Primary cardiac lymphoma (PCL) is an extranodal non-Hodgkin lymphoma exclusively located in the heart and/or pericardium [2]. It represents 1.3% of primary cardiac tumors and less than 1% of all lymphomas [2-4]. The right atrium and right ventricle are the two most frequently involved sites, with two-thirds of cases involving the right atrium [2-5]. Clinical presentations associated with primary cardiac lymphoma are heterogeneous. They are generally related to the site of involvement in the heart, which makes early diagnosis difficult. In their series, Fuzellier et al. reported right-sided heart failure, dyspnea, tamponade and arrhythmias as the most frequent manifestations [2]. Cardiac tamponade is a frequent mode of presentation. The association of tamponade with an alteration of the general condition or with constitutional signs points directly to a neoplastic disease [2-6]. Congestive heart failure is explained by myocardial involvement. Conduction disorders are the consequence of invasion of the interatrial septum, likely with extension to the nodal tissue. The mechanism of cardiac arrhythmia could be infiltration of the roof of the side wall of the right atrium by tumor tissue [7]. TTE visualizes pericardial effusions easily. It also allows an estimation of their tolerance and reveals the presence of any intracardiac mass. TEE is considered an initial imaging method when an intracardiac mass is in doubt [8,9]. 'It is better for identifying tumoral masses, allowing suspicion for an infiltrated cardiac tumoral mass to be a primary cardiac lymphoma' [6]. The sensitivity of TEE for the detection of primary cardiac lymphoma approaches 100% in some series from specialized units that have experience with this kind of investigation [5]. It is also a good follow-up examination, allowing verification of the regression of the tumor after chemotherapy in a few centers [5]. TEE is excellent at visualizing tumors in the atria, but much less so for anterior masses (for example, near the right ventricular apex), where TTE is superior. In our department, we are experienced in diagnosing cardiac tumors and monitoring their regression by TTE. Computed tomography allows delineation of the cardiac mass and specification of its connections with the cardiac structures, as well as of the extent of the disease.
MRI has become the reference examination for the diagnosis of cardiac tumors. It offers superior anatomic detail of myocardial and pericardial infiltration. This examination can also serve as a reference for the follow-up of patients undergoing chemotherapy [2]. However, fast-moving tumors (such as some myxomas) will adversely affect the quality of the MRI image. In our patient, who had third-degree atrioventricular block on the electrocardiogram, echocardiography followed by computed tomography helped in arriving at a hypothesis to explain the origin of the conduction disorder. It is probable that the tumor in this case invaded the interatrial septum and the atrioventricular node. We have no definite explanation for why the block was paroxysmal; it is probable that the inflammatory process around the tumor was the cause. Cytological analysis of the pericardial fluid does not always permit a diagnosis, because the effusion can be reactive [2]. Cytology results in the pericardial fluid are often nonspecific, demonstrating atypical lymphoid cells [5]. Most cases require biopsy or surgical excision for diagnosis [2]. In the presence of a right-sided cardiac mass, an aggressive approach to obtain a rapid histological diagnosis is important. Less invasive procedures, such as TEE-guided biopsy, endomyocardial transvenous biopsy, mediastinoscopy and thoracoscopic pericardial window, have been performed with success [5]. The treatment of primary cardiac lymphoma is not clearly codified and differs according to the clinical team. Surgical treatment is discouraging, because surgical resection of primary cardiac lymphoma is often difficult and incomplete. It is reserved for patients with life-threatening hemodynamic compromise caused by mechanical complications (as was the case with our patient) or tamponade [7]. Early systemic treatment appears to be the only chance for cure. Chemotherapy remains the preferred initial treatment. It should be guided by the immunohistological characteristics of the lymphoma and its extension to other organs. At the end of the treatment, we can expect a reduction of any rhythm disorders due to regression of the tumor mass [6]. Conclusion Primary cardiac lymphoma is rare. The presence of a right cardiac mass raises the possibility of primary cardiac lymphoma. Echocardiography is the preferred procedure for diagnosis and follow-up. In addition, it allows an estimation of the hemodynamic state. Rapid histological diagnosis is important, because systemic therapy can influence the prognosis in the presence of a primary cardiac lymphoma [2]. Consent Written informed consent was obtained from the patient for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal.
2017-06-25T09:40:06.426Z
2011-09-05T00:00:00.000
{ "year": 2011, "sha1": "beace51ee543c788f2aa0e4b0d019e93bd1d95fc", "oa_license": "CCBY", "oa_url": "https://jmedicalcasereports.biomedcentral.com/track/pdf/10.1186/1752-1947-5-433", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "336e15bc07f041d2a8947de3d481e51c300aa87b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
253014398
pes2o/s2orc
v3-fos-license
Magic-state resource theory for the ground state of the transverse-field Ising model Ground states of quantum many-body systems are both entangled and possess a kind of quantum complexity, as their preparation requires universal resources that go beyond the Clifford group and stabilizer states. These resources (sometimes described as magic) are also the crucial ingredient for quantum advantage. We study the behavior of the stabilizer Rényi entropy in the integrable transverse-field Ising spin chain. We show that the locality of interactions results in a localized stabilizer Rényi entropy in the gapped phase, thus making this quantity computable in terms of local quantities in the gapped phase, while measurements involving $L$ spins are necessary at the critical point to obtain an error scaling as $O(L^{-1})$. In this paper, we set out to show the role that magic-state resource theory plays in the ground state of local integrable quantum many-body systems. The model studied here is the transverse-field Ising model for a spin one-half chain with N sites. We show how to compute the stabilizer Rényi entropy in terms of the ground-state correlation functions. In this way, we see how the decay of correlation functions influences the many-body non-stabilizerness. Away from the critical point, where the ground state is weakly entangled and two-point correlation functions decay exponentially, it is possible to estimate the stabilizer Rényi entropy reliably by single-spin measurements. At the critical point, on the other hand, one needs to measure an entire block of spins to obtain a reliable estimate, with an error scaling with a characteristic power law $O(L^{-1})$. This result is of notable importance for experimental measurements of non-stabilizerness in a quantum many-body system, as in a gapped phase the measurement can be performed on a few spins (even just a single spin). As a last comment, our findings can be relevant for the investigation of the emergence of quantum spacetime in the context of the AdS/CFT correspondence: in a recent paper [61], the authors speculate on the role of non-stabilizerness in AdS/CFT, and argue that it is a key ingredient to fill the complex structure of the AdS black hole interior, dual to a CFT state. Magic-state resource theory indeed reveals itself as an important piece of information that cannot be detected by looking only at the entanglement. In this context, it is well known that a quantum many-body system at criticality is described by a CFT [20,66].
Our results thus give insights regarding the role played by non-stabilizerness in the AdS/CFT correspondence: this resource is delocalized in spatial degrees of freedom since, at criticality only, it can be extracted by a system containing L spins with an error decaying only polynomially in L. From this result, it can be reasonably argued that delocalization of non-stabilizerness is a universal property of CFT quantum states (the correlation functions decaying polynomially), thus revealing fascinating perspectives in the AdS/CFT correspondence. Setup and model.-Let us start by briefly reviewing the stabilizer Rényi entropy [62]. Consider an N-qubit system and the decomposition of a state ρ in the Pauli basis, given by $\rho = 2^{-N}\sum_{P\in\mathcal{P}(N)} \operatorname{tr}(P\rho)\, P$, with $\mathcal{P}(N)$ being the Pauli group. The 2-stabilizer Rényi entropy $M_2(\rho)$ is then defined as
$$M_2(\rho) = -\log_2 \sum_{P\in\mathcal{P}(N)} \frac{\operatorname{tr}^2(P\rho)}{2^{N}\operatorname{tr}(\rho^2)}\,\operatorname{tr}^2(P\rho), \quad (1)$$
i.e., as the average of $\operatorname{tr}^2(P\rho)$ on a state-dependent probability distribution defined as $\mathcal{P}(\rho) := \{2^{-N}\operatorname{tr}^2(P\rho)\operatorname{tr}^{-1}(\rho^2)\}$. It is interesting to note that for ρ pure, $M_2(\rho)$ reduces to the two-Rényi entropy of the classical probability distribution $\mathcal{P}(\rho)$ (modulo an offset of −N). We study the behavior of $M_2$ in the ground state of the transverse field Ising model for a spin one-half N-site chain with Hamiltonian
$$H(\lambda) = -\sum_{i} \left(\sigma^x_i \sigma^x_{i+1} + \lambda\, \sigma^z_i\right), \quad (2)$$
where $\sigma^\alpha_i$, for α = x, y, z, are Pauli matrices defined on the i-th site. The model displays a quantum phase transition at λ = 1 between a disordered and a symmetry-breaking phase. The critical point corresponds to a conformal field theory with c = 1/2 [67]. For λ → ∞ and λ = 0 the Hamiltonian reduces to a stabilizer Hamiltonian [68] with stabilizer groups $Z_P$ and $X_P$, respectively. The model H(λ) is integrable through standard techniques [69,70]: first a Jordan-Wigner transformation introducing fermionic modes $c_l, c_l^\dagger$, and subsequently a Fourier and a Bogoliubov transformation [71]. Following these techniques, let us introduce the Majorana operators
$$A_l := c_l^\dagger + c_l, \qquad B_l := c_l^\dagger - c_l. \quad (3)$$
These operators obey the anticommutation relations $\{A_l, A_m\} = 2\delta_{lm}$, $\{B_l, B_m\} = -2\delta_{lm}$ and $\{A_l, B_m\} = 0$. The computation of $M_2$ for the ground state $|G(\lambda)\rangle$ of such a class of Hamiltonians relies on the fact that the ground state can be fully characterized by just the two-point correlation functions, by virtue of the Wick theorem:
$$\langle A_l A_m\rangle = \delta_{lm}, \qquad \langle B_l B_m\rangle = -\delta_{lm}, \qquad \langle B_l A_m\rangle = G_{m-l}(\lambda). \quad (4)$$
One can compute all the correlation functions of an arbitrary product of Majorana fermions by just knowing these two-point functions. Let $C(\{i\}_k, \{j\}_l) := \langle G(\lambda)|\, A_{i_1}\cdots A_{i_k} B_{j_1}\cdots B_{j_l}\, |G(\lambda)\rangle$, where $\{i\}_k$ (and similarly $\{j\}_l$) is a set of ordered indexes ranging over all the sites. The computation of $C(\{i\}_k, \{j\}_l)$ can be done through the Pfaffian technique [72], which leads to $C(\{i\}_k, \{j\}_l) = 0$ unless k = l, and
$$C(\{i\}_k, \{j\}_k) = \det\left[\langle A_{i_m} B_{j_n}\rangle\right]_{m,n=1}^{k}. \quad (5)$$
Thus, to compute the generic 2k-point correlators of Majorana fermions, it is sufficient to compute the determinant of a k × k matrix, which can be efficiently done numerically by a poly(k) algorithm. All the 2k-point correlation functions can also be obtained by considering the maximum-rank 2N-point correlation function of Majorana fermions: it is easy to see that one can obtain any correlation function of order 2k by considering the corresponding k × k minor of the full N × N matrix of two-point functions. Ground state non-stabilizerness.-In this section, we compute $M_2$ in the ground state $|G(\lambda)\rangle$ and discuss some of its properties. To this end, we need the knowledge of all the $4^N$ expectation values of Pauli strings $P \in \mathcal{P}(N)$ on the ground state $|G(\lambda)\rangle$. Except for λ = 0, λ → ∞, all the other points feature a non-trivial value for the stabilizer Rényi entropy, because the state cannot be factorized.
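The definitions in Eqs. (1) and (2) can be checked directly for small chains. The following is a minimal numerical sketch, not the authors' code: it builds the transverse-field Ising Hamiltonian as reconstructed in Eq. (2) (periodic boundaries assumed), diagonalizes it exactly, and evaluates $M_2$ of the pure ground state by brute-force enumeration of all $4^N$ Pauli strings, which is feasible only for small N.

```python
# Minimal sketch: exact diagonalization of the TFIM chain of Eq. (2) (periodic
# boundaries assumed) and brute-force M_2 = -log2(2^{-N} sum_P <P>^4) for the
# pure ground state, enumerating all 4^N Pauli strings. Small N only.
import numpy as np
from functools import reduce
from itertools import product

I2 = np.eye(2, dtype=complex)
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I2, SX, SY, SZ]

def kron_all(mats):
    return reduce(np.kron, mats)

def tfim_hamiltonian(N, lam):
    # H(lambda) = -sum_i sx_i sx_{i+1} - lam * sum_i sz_i
    H = np.zeros((2**N, 2**N), dtype=complex)
    for i in range(N):
        xx = [I2] * N
        xx[i], xx[(i + 1) % N] = SX, SX
        H -= kron_all(xx)
        z = [I2] * N
        z[i] = SZ
        H -= lam * kron_all(z)
    return H

def stabilizer_renyi_2(psi, N):
    total = 0.0
    for labels in product(range(4), repeat=N):  # all 4^N Pauli strings
        P = kron_all([PAULIS[k] for k in labels])
        total += np.real(np.vdot(psi, P @ psi)) ** 4
    return -np.log2(total / 2**N)

if __name__ == "__main__":
    N = 4
    for lam in (0.5, 1.0, 2.0, 20.0):  # M_2 should approach 0 as lam -> infinity
        gs = np.linalg.eigh(tfim_hamiltonian(N, lam))[1][:, 0]
        print(f"lambda={lam:5.1f}: M_2 = {stabilizer_renyi_2(gs, N):.4f}")
```

The large-λ limit provides a sanity check: the ground state approaches a stabilizer product state in the z basis, so $M_2$ should vanish there.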
It is easy to see that any $P \in \mathcal{P}(N)$ can be written (up to a global phase) as an ordered product of Majorana fermions, $P \propto A_{i_1}\cdots A_{i_k} B_{j_1}\cdots B_{j_l}$ for some $\{i\}_k, \{j\}_l$, which means that we can write the two-stabilizer Rényi entropy for $|G(\lambda)\rangle$ as
$$M_2(\lambda) = -\log_2\Big(2^{-N} \sum_{\{i\}_k,\{j\}_l} C(\{i\}_k,\{j\}_l)^4\Big). \quad (6)$$
As the above formula shows, the computation of the non-stabilizerness requires ~$4^N$ determinants, which makes the computation exponentially hard in N. Let us provide an upper bound to the two-stabilizer entropy given by the zero-stabilizer entropy $M_0(\lambda) \ge M_2(\lambda)$ [62], which essentially counts the number of nonzero entries $\operatorname{card}(|\psi\rangle)$ in the probability distribution $\mathcal{P}(|\psi\rangle\langle\psi|)$ as $M_0(|\psi\rangle) := \log_2 \operatorname{card}(|\psi\rangle) - N$. As explained above, there are $2^N$ nonzero entries for λ = 0, ∞, so that $M_0$ vanishes at the stabilizer points. For any λ ≠ 0, ∞, we find numerically a linear scaling
$$M_2(\lambda) \simeq \alpha(\lambda)\, N + \beta(\lambda), \quad (7)$$
with both slope α(λ) and intercept β(λ) depending on the intensity λ of the external magnetic field. In particular, we observe an increasing slope α(λ) from λ = 0 towards the criticality at λ = 1, where α(λ) approaches its maximum α(1) ≈ 0.44, and then it starts decreasing again in the disordered phase, λ > 1. We thus find agreement with the result in Ref. [13]: the ground state at the critical point, and the corresponding c = 1/2 CFT, achieves the highest value of non-stabilizerness among the λs. However, this result does not tell us the full story, as the behavior of non-stabilizerness with λ is quite smooth and is O(N) for every value of λ. As we show in the following section, the locality of the interactions together with a gap implies that non-stabilizerness is localized, whereas at the critical point non-stabilizerness cannot be resolved by local measurements. Access non-stabilizerness by local measurements.-Although more amenable than a minimization procedure [73], computing the stabilizer entropy is an exponentially difficult task. However, the locality of the interactions in the Hamiltonian and the presence of a gap result in a fast decay of correlation functions in the ground state, while a power law characterizes the critical point. One thus wonders whether one can exploit this locality to access the stabilizer Rényi entropy through local quantities. This would result both in the possibility of a realistic experimental measurement of non-stabilizerness in the ground state of quantum many-body systems and in a computational advantage. Let us focus on the asymptotic behavior in N, so that $M_2(\lambda) \approx \alpha(\lambda) N$. We refer to α(λ) as the density of non-stabilizerness. In the above, ≈ stands for 'up to an order $N^{-1}$'. Now, it is clear that if one is able to measure the density α(λ), then one accesses the non-stabilizerness of the ground state. Can we measure the density of non-stabilizerness α(λ) by just looking at the local properties of the reduced density matrix of L spins? To answer the question, we first divide the chain of N sites into N/L sub-chains of L first-neighbor sites. Consider the following quantity:
$$\alpha_L(\lambda) := \frac{M_2(\rho_L)}{L}, \quad (8)$$
where $M_2(\rho_L)$ is the stabilizer Rényi entropy of the mixed state $\rho_L$ (see Eq. (1)), which in terms of Majorana correlation functions reads
$$M_2(\rho_L) = -\log_2\left(\frac{\sum_{\{i\}_k,\{j\}_l \le L} C(\{i\}_k,\{j\}_l)^4}{\sum_{\{i\}_k,\{j\}_l \le L} C(\{i\}_k,\{j\}_l)^2}\right). \quad (9)$$
The latter equation, unlike Eq. (6), contains only correlation functions on at most L sites; thus it does not involve global measurements, but rather only measurements of local observables via the reduced density matrix $\rho_L$, which makes it analytically computable for a reasonable L. First note that for L → N, one has $\alpha_L(\lambda) \to \alpha(\lambda)$. Then, how good is the approximation for a finite L, and how does it depend on λ?
Let us look at the accuracy of the measurement of the L-density of non-stabilizerness through the percent error $\epsilon_\lambda(L) := |\alpha_L(\lambda) - \alpha(\lambda)|/\alpha(\lambda)$ that we make by measuring the density of non-stabilizerness via local measurements. We find that, away from criticality, i.e. in the regions λ ≪ 1 and λ ≫ 1, $\epsilon_\lambda(L) < 0.001$ for any L. We thus conclude that, away from the critical point, one can access the non-stabilizerness of the ground state by just measuring the non-stabilizerness of the density matrix of an O(1) number of spins, in fact even a single-qubit density matrix $\rho_1$. We show the agreement between the 1-density of non-stabilizerness $\alpha_1(\lambda)$ and the density of non-stabilizerness α(λ) in Fig. 2 for λ > 1. The region λ < 1 features the same behavior, indicating that the non-stabilizerness does not reveal the symmetry of the ground state. Thus, away from the critical point, the approximation works well even for L = 1, which can be computed by hand: the single-site density matrix reads [23] $\rho_1(\lambda) = \frac{1}{2}\left(I + \langle\sigma^z\rangle\,\sigma^z\right)$, whose stabilizer Rényi entropy is
$$M_2(\rho_1) = \log_2\left(\frac{1 + \langle\sigma^z\rangle^2}{1 + \langle\sigma^z\rangle^4}\right), \quad (10)$$
where $|\langle\sigma^z\rangle| = G_0(\lambda)$, cfr. Eq. (4); see the inset in Fig. 2 for a plot. In the following, we lay down a theoretical argument supporting the fact that measuring the single-spin density of non-stabilizerness is already sufficient away from the critical point λ = 1. It is well known [71] that, away from criticality (w.l.o.g. let us say λ ≫ 1), the two-point correlation functions in Eq. (4) decay faster than exponentially with r. By making the first-order expansion $G_r(\lambda) \simeq G_0(\lambda)\,\delta_{r,0}$, one gets a fair approximation of $G_r(\lambda)$ as long as the higher terms at r ≠ 0 are exponentially suppressed. By using the above form of the two-point correlation functions to compute higher-order functions as in Eq. (5), one finds that the only nonzero correlation functions correspond to Pauli operators belonging to the subgroup $Z \le \mathcal{P}(N)$ containing all the $\sigma^z$ Pauli strings. The fact that the Pauli strings that count are those belonging to Z can also be understood by looking at the Hamiltonian in Eq. (2): for λ ≫ 1 the dominant term is $\lambda \sum_i \sigma^z_i$, whose eigenstates are stabilizer states belonging to the stabilizer group Z. In other words, we are estimating the average in Eq. (6) by (importance) sampling the probability distribution with Pauli strings P ∈ Z. Thus, the estimated density of non-stabilizerness can be computed as
$$\tilde{\alpha}(\lambda) = -\frac{1}{N}\log_2\left(\frac{\sum_{\{i\}_k,\{j\}_k \le N} C(\{i\}_k,\{j\}_k)^4}{\sum_{\{i\}_k,\{j\}_k \le N} C(\{i\}_k,\{j\}_k)^2}\right), \quad (11)$$
where we introduced a normalization over the sampling given by $\sum_{\{i\}_k,\{j\}_k \le N} C(\{i\}_k,\{j\}_k)^2$, cfr. Eqs. (1) and (6). The straightforward computation of Eq. (11), together with the fact that $G_0(\lambda)^2 = \langle\sigma^z\rangle^2$, leads to Eq. (10). Thus, the density of non-stabilizerness estimated by importance sampling coincides with the L-density of non-stabilizerness with L = 1. The fact that one can access non-stabilizerness from local measurements is nontrivial and, in general, not true. We can show this by considering a simpler example: suppose we have a bipartite system AB and a random pure state $|\Psi_{AB}\rangle$, and consider the percent difference in non-stabilizerness $\epsilon_{AB} := (M_{AB} - M_A - M_B)/M_{AB}$; here $M_{AB}$, $M_A$, $M_B$ are the stabilizer Rényi entropies of $|\Psi_{AB}\rangle$, $\rho_A = \operatorname{tr}_B(|\Psi_{AB}\rangle\langle\Psi_{AB}|)$ and $\rho_B$, respectively. Thanks to the typicality of the stabilizer Rényi entropy [62] and the two-Rényi entropy [36] over the set of Haar-random states, one gets $\epsilon_{AB} \approx 1$ (up to an exponentially small error in dim(AB)), which means that the non-stabilizerness cannot be accessed locally for the majority of states in the Hilbert space. The above argument can be straightforwardly generalized to the case of the multipartite system $A_1 A_2 \cdots A_h$.
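As a quick numerical cross-check of the single-site result, the value of Eq. (10), as reconstructed above, can be evaluated directly from the mixed-state definition in Eq. (1); the magnetization value used below is an arbitrary placeholder rather than the actual $G_0(\lambda)$.

```python
# Sketch: single-site stabilizer Renyi entropy of rho_1 = (I + m*sigma^z)/2 with
# m = <sigma^z>, evaluated from the mixed-state definition reconstructed in Eq. (1):
# M_2(rho) = -log2( sum_P tr^4(P rho) / (2^N tr(rho^2)) ). Here m is a placeholder.
import numpy as np

def m2_single_site(m):
    # Pauli expectations on rho_1: tr(I rho)=1, tr(sz rho)=m, tr(sx rho)=tr(sy rho)=0
    num = 1.0 + m**4             # sum_P tr^4(P rho)
    purity = (1.0 + m**2) / 2.0  # tr(rho^2)
    return -np.log2(num / (2 * purity))  # equals log2((1+m^2)/(1+m^4)), cf. Eq. (10)

print(m2_single_site(0.8))  # example with |<sigma^z>| = 0.8
```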
Conclusions and Outlook.-The complex pattern of the ground-state wave function of a quantum many-body system depends on the interplay between its entanglement and the non-Clifford resources, or non-stabilizerness, that it contains. Although both in the gapped phase and at the critical point the ground state of the transverse field Ising model contains an extensive amount of non-stabilizerness, away from criticality this resource is localized. At the critical point, on the other hand, its non-stabilizerness is delocalized and described by a power law. These results raise a number of questions. First, one could extend these methods to models featuring localization through disorder or frustration. One expects that any form of localization would result in being able to evaluate non-stabilizerness by few-site quantities. Second, the same methods can be used to study the dynamics of a quantum many-body system after a quench. It would be interesting to see whether non-stabilizerness delocalizes as the system evolves in time and whether equilibration ensues. Moreover, it is very intriguing to study the behavior of non-stabilizerness in such systems when integrability is broken. The role of the quantum complexity implied by the conjunction of non-stabilizerness and entanglement in the onset of thermalization and non-integrable behavior has recently been studied in the context of doped quantum circuits [74][75][76] and Hamiltonians [77,78], but a local quantum many-body system is its most natural setting. The main result of this paper opens the way to the experimental measurement of non-stabilizerness by local measurements, for instance in ultra-cold atom gases realizing the Bose-Hubbard model. Finally, although further investigation is necessary, we can argue that the delocalization of non-stabilizerness at the critical point suggests that the CFT underlying critical many-body systems enjoys delocalization of non-stabilizerness as well.
2022-10-20T15:33:19.549Z
2022-05-04T00:00:00.000
{ "year": 2022, "sha1": "0556cabd857cdf371a42037c60275972dd8ea9a0", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "a7fee3e574588d4825dce710007a520285857470", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
236653597
pes2o/s2orc
v3-fos-license
Effects of bio-organic fertilizer on soil fertility, microbial community composition, and potato growth The excessive and irrational use of chemical fertilizers poses a series of environmental problems. A growing number of research studies have focused on the application of beneficial microorganisms to reduce the use of chemical fertilizers. Here, potato field experiments were conducted to investigate whether partial replacement of chemical fertilizers with bio-organic fertilizers containing Bacillus velezensis BA-26 had an effect on plant growth, soil fertility, and soil microbial community composition. Three treatments were used in this study: organic fertilizer (OF), bio-organic fertilizer (BOF), and chemical fertilizer (CF). The results showed that the biomass and soluble sugar content of potato were significantly increased with BOF treatment. The soil electrical conductivity, available phosphorus (AP), available potassium (AK), urease activity, and alkaline phosphatase activity also improved with BOF treatment. Further analysis revealed that BOF treatment increases bacterial diversity and reduces fungal diversity. Potentially pathogenic microorganisms, such as Fusarium, Verticillium, and Botryotrichum, were significantly decreased with BOF treatment compared with CF treatment. Redundancy analysis showed that soil conductivity and AP had significant effects on bacterial and fungal community composition. Thus, the results suggest that the application of bio-organic fertilizer could reduce the use of chemical fertilizers by promoting potato growth, improving soil fertility, and affecting the microbial community composition. INTRODUCTION Potato is an important global food resource and one of the major economically significant crops in China. Chemical fertilizers are widely used in potato cultivation; however, their excessive use has resulted in lower yield, deteriorating quality, and weakened resistance to pathogens. Long-term, large-scale application of chemical fertilizers not only results in soil consolidation [1] but also leads to increasing pollution of soil, atmospheric, and aquatic systems, which severely affects further agricultural development [2-4]. Research studies have recently focused on fertilizer reduction and the use of alternative fertilizers. Straw, animal dung, and other organic fertilizers are applied to replace a portion of the chemical fertilizer. In addition to using organic fertilizers, plant growth-promoting microorganisms (PGPM) can be applied to reduce the amount of fertilizer used [5,6]. Studies have found that a variety of microorganisms can promote plant growth and increase plant stress resistance, such as Pseudomonas [7], Bacillus [8], Azospirillum [9], and endophytic actinobacteria [10]. When PGPMs are added directly to the soil without organic material, they are not effective because of the lack of nutrients. They can be combined with organic fertilizers, thus benefiting from the anti-stress and pro-growth effects of both microorganisms and organic fertilizers, to achieve the sustained and stable release of fertilizer nutrients [11]. A bio-organic fertilizer (BOF) combines functional microorganisms with suitable substrates and is more effective than microorganisms added directly to the soil. It is widely regarded as a promising way to inhibit soil-borne pathogens and promote plant growth [12,13].
The composition and diversity of the soil microbial community are very important to soil health, and soil enzyme activity is an index of soil biological activity [14,15]. Increasing years of continuous cropping result in a decrease in soil nutrients and enzyme activity [16]. Previous studies have reported that bio-organic fertilizer improves soil quality and soil enzyme activity [17] and increases the activity of functional microorganisms [18], as well as effectively inhibiting soil-borne diseases and promoting plant growth [19]. This experiment selected a representative potato rotation field in Chengde City, Hebei Province, China. The purpose of this study was to explore whether partial replacement of fertilizer with BOF containing B. velezensis BA-26 had effects on plant growth, soil fertility, and the soil microbial community. Preparation of BOF B. velezensis BA-26 was isolated from the rhizosphere soil of healthy potatoes. It was cultured in Nutrient Broth (NB) (Shanghai Bowei Co., Ltd., China) at 28°C and stored at 4°C on slants. Preserved B. velezensis BA-26 was inoculated into NB and cultured in a shaker at 32°C and 180 rpm for 48 h. The number of spores in the prepared fermentation broth was calculated to be 4 × 10^8/ml. The prepared fermentation broth was mixed with an organic fertilizer at a fermentation broth to organic fertilizer ratio of 1:10. The organic manure was prepared from mature pig manure compost, which contained 40.5% organic matter, 29.4% H2O, 3.7% N, 2.4% P2O5, and 1.1% K2O. Field experiments The experimental field is located in Zhangjiawan, Weichang County, Chengde City, Hebei Province, China (117.9205 E, 42.3476 N). The soil type in this area is sandy loam, with deep soil, good aggregate structure, abundant soil moisture, and good permeability. The basic physical and chemical properties of the experimental field are as follows: available nitrogen, 41.45 mg/kg; available phosphorus, 26.12 mg/kg; available potassium, 115.00 mg/kg; organic matter, 7.10 g/kg; pH 6.11 (soil-water ratio 2.5:1); and conductivity, 101.42 us/cm. The potato variety Favorita was planted on April 5, 2018. The field experiment used a randomized block design: each block was 6 × 6 = 36 m2, planted in single rows with a row spacing of 70 cm and a plant spacing of 25 cm. The field treatments were designed as follows: (1) CF: 100% chemical fertilizer (N:P:K = 12:19:16); (2) OF: 75% chemical fertilizer + organic fertilizer; and (3) BOF: 75% chemical fertilizer + BOF. The application rate of organic fertilizer or bio-organic fertilizer was 1800 kg/ha, and the field was watered regularly. Sample collection Sample collection was conducted on July 17, 2018. Each plot was sampled using a five-point sampling method. Three potato plants were randomly selected at each point. Plant height, stem diameter, and chlorophyll content were measured, and the potatoes were then dug out for biomass measurement. Soil samples were collected from the root area, and five soil samples were evenly mixed together into one combined sample, kept in a Ziplock® bag, and stored at −80°C until DNA extraction. Determination of potato growth index and tuber quality Potato plant height was determined using a folding ruler, measuring the distance between the highest growing point and the ridge surface. Stem diameter was determined using a Vernier caliper. Chlorophyll content was determined using an ECA-051 portable chlorophyll meter. Biomass was measured by drying the potatoes and then weighing them.
Protein and soluble sugar contents were determined using a Pierce™ Rapid Gold BCA protein assay kit (Thermo Fisher Scientific Co., Ltd., USA) and a Plant Soluble Sugar assay kit (Comin Biotech Co., Ltd., China), respectively, following the manufacturers' instructions. Vitamin C (VC) content was determined using the 2,6-dichloroindophenol titration method [20]: fresh potato was ground with 5 ml of 2% oxalic acid, transferred to a 50-ml volumetric flask, brought to volume, and fully dissolved. The filtrate was clarified with filter paper and then titrated with 2,6-dichlorophenol indophenol solution. Soil physical and chemical property analysis Soil bulk density was measured using the cutting ring method after drying the soil cores at 105°C for 48 h. Soil samples were air-dried and passed through a 2-mm aperture sieve. To measure soil pH and conductivity, 10 g of dry soil was weighed into a beaker containing 30 ml of distilled water; the mixture was thoroughly mixed and, after standing for 30 min, the pH and conductivity of the soil were measured with a pH and conductivity meter (Mp521 Lab pH/conductivity meter, Japan) [21]. The soil organic matter was determined using the oil bath heating-potassium dichromate (K2Cr2O7) volumetric method [22]: briefly, the oil bath temperature was 180°C with boiling for 5 min; a 0.4 mol/l K2Cr2O7-H2SO4 solution was used to oxidize the soil organic matter, and the remaining K2Cr2O7 was titrated with FeSO4. Soil available phosphorus (AP) and available potassium (AK) were determined following Shen et al [23], and soil available nitrogen (AN) was determined using the alkaline hydrolysis diffusion method [24]. Soil enzyme activity Soil urease, catalase, sucrase, and phosphatase activities were determined using, respectively, a soil urease activity detection kit, a soil catalase activity detection kit, a soil sucrase activity detection kit, and a soil acid phosphatase activity detection kit (Solarbio Technology Co., Ltd., China), following the manufacturers' instructions. DNA extraction Total microbial genomic DNA was extracted using a DNeasy PowerSoil Kit (QIAGEN, Inc., Netherlands), following the manufacturer's instructions, and stored at −20°C until analysis. The quantity and quality of the extracted DNA were measured using a NanoDrop ND-1000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA) and agarose gel electrophoresis, respectively. PCR amplification and Illumina sequencing PCR amplification of the bacterial 16S rRNA gene V3-V4 region was performed using the forward primer 338F (5'-ACTCCTACGGGAGGCAGCA-3') and the reverse primer 806R (5'-GGACTACHVGGGTWTCTAAT-3'). For amplification of fungal ITS sequences, the forward primer ITS5F (5'-GGAAGTAAAAGTCGTAACAAGG-3') and the reverse primer ITS1R (5'-GCTGCGTTCTTCATCGATGC-3') were used. Sample-specific 7-bp barcodes were incorporated into the primers for multiplex sequencing. The PCR amplification system consisted of 5 µl of 5× reaction buffer, 5 µl of 5× GC buffer, 2 µl of 2.5 mM dNTPs, 1 µl of 10 µM forward primer, 1 µl of 10 µM reverse primer, 2 µl of DNA template (20 ng/µl), 8.75 µl of ddH2O, and 0.25 µl of Q5 DNA polymerase (New England Biolabs, Inc., USA).
The thermal cycling conditions comprised an initial denaturation at 98°C for 2 min, followed by 25 cycles of denaturation at 98°C for 15 s, annealing at 55°C for 30 s, and extension at 72°C for 30 s, with a final extension at 72°C for 5 min and a hold at 10°C. PCR amplicons were purified with Agencourt AMPure Beads (Beckman Coulter, Indianapolis, IN, USA) and quantified using the PicoGreen dsDNA assay kit (Invitrogen, Carlsbad, CA, USA). After the individual quantification step, amplicons were pooled in equal amounts, and paired-end 2 × 250 bp sequencing was performed on the Illumina NovaSeq platform with a NovaSeq 6000 SP reagent kit (500 cycles) (Shanghai Personal Biotechnology Co., Ltd., Shanghai, China).

Statistical analysis

Most result parameters were analyzed with one-way ANOVA. IBM SPSS 25.0 software was used for the ANOVA and Duncan's multiple range test. Statistical significance was considered at p < 0.05. Sequence data analyses were mainly performed using QIIME and R (v3.2.0). OTU-level alpha diversity indices were calculated from the OTU table in QIIME. A redundancy analysis (RDA) was conducted using CANOCO5. Microsoft Excel 2016, GraphPad Prism version 8, and OriginPro 2018 were used for statistical analysis and mapping.

Accession number

The sequence data generated in this study were deposited in the NCBI database under accession numbers PRJNA646630 (bacterial sequences) and PRJNA646645 (fungal sequences).

Potato growth indicators and tuber quality parameters

Different treatment methods strongly influenced potato growth and tuber quality (Table 1). Growth indicators showed that the plant height, stem diameter, and biomass of potato plants treated with BOF significantly (p < 0.05) increased by 7.77%, 8.42%, and 11.07%, respectively, compared with those of plants under chemical fertilizer (CF) treatment. The height and biomass of potato plants treated with organic fertilizer (OF) were also significantly (p < 0.05) higher than those of the CF treatment. This study thus showed that replacing part of the CF with BOF promotes the growth of potatoes; BOFs have been shown to have great potential in promoting plant growth [25]. The soluble sugar content of the BOF-treated potatoes significantly (p < 0.05) increased by 45.37% and 53.92% compared with the OF and CF treatments, respectively. Compared with the CF treatment, vitamin C content in the BOF and OF treatments significantly increased by 6.25% [26]. BOF treatment therefore increased the content of soluble sugar and vitamin C in potatoes (Table 1), which is consistent with the results of Ye et al [9], who found that BOFs can increase soluble sugar and vitamin C content in tomatoes.

Table 2 shows the physical properties and nutrient content of the soil. BOF treatment significantly (p < 0.05) increased soil pH compared with the CF and OF treatments, whereas the OF treatment showed no significant difference compared with CF. Electrical conductivity of the BOF and OF treatments significantly (p < 0.05) increased by 30.65% and 20.70%, respectively. Soil pH and electrical conductivity are important soil properties that play key roles in soil formation and in the growth of plants and animals in the soil. This study found that BOF treatment significantly improved soil pH and conductivity. Compared with the CF and OF treatments, soil organic matter (OM), available phosphorus (AP), and available potassium (AK) levels significantly (p < 0.05) increased with BOF treatment.
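Per-treatment comparisons like these (here and in Fig. 1 below) rest on the one-way ANOVA described in the statistical analysis section. A minimal sketch of that step with SciPy follows; the replicate values are invented, and Duncan's multiple range test, which the paper runs in SPSS after a significant ANOVA, has no SciPy equivalent and is omitted.

from scipy import stats

# Hypothetical replicate measurements for one response variable.
cf  = [21.4, 22.1, 20.8]
of  = [24.0, 24.9, 23.6]
bof = [28.2, 28.9, 27.7]

f_stat, p_value = stats.f_oneway(cf, of, bof)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    # Followed in the paper by Duncan's multiple range test (in SPSS)
    # to separate the treatment means.
    print("treatment means differ at alpha = 0.05")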
The increase in soil fertility was related to OM and to a variety of beneficial microorganisms. Soil fertility and plant health improve with increasing organic matter content [27]. Moreover, the beneficial microorganisms contained in the BOF promoted the conversion of soil nutrients, induced the accumulation of available nutrients, and increased the levels of effective nutrients in the soil [13]. The application of BOF thus significantly increased soil nutrient levels and improved soil fertility (Table 2).

The degree of enzyme activity in the soil is an important indicator of soil health. Soil urease is the driving force of soil metabolism and reflects soil fertility to some extent. Soil phosphatase can accelerate the dephosphorization rate of organophosphorus compounds and affects their decomposition and transformation in soil [15]. Soil sucrase is an important catalytic enzyme that affects the soil carbon cycle. Soil catalase catalyzes the decomposition of hydrogen peroxide in the soil, reducing its toxic effect on crops [28]. Fig. 1 shows that the urease activity in soil under BOF treatment significantly (p < 0.05) increased by 33.37% and 17.41%, respectively, compared with the CF and OF treatments. The alkaline phosphatase activity also increased by 29.90% and 7.5% (p < 0.05), respectively, compared with the CF and OF treatments. Sucrase activity was higher under BOF treatment than under the CF and OF treatments, but the difference between BOF and CF was not significant (p > 0.05). No significant (p > 0.05) difference in catalase activity was observed among the three treatments. This study found that the application of BOF significantly increased rhizosphere soil alkaline phosphatase and urease activities (Fig. 1), which is consistent with the findings of Marcote et al [29].

Table 3 shows the alpha diversity indices of bacteria and fungi. The Chao1 and ACE indices were used to represent richness, Shannon to represent diversity, Pielou's evenness to represent evenness, and Good's coverage for coverage. For bacteria, compared with the CF treatment, the Chao1, Shannon, and ACE indices significantly increased with BOF treatment (p < 0.05). Compared with the OF treatment, the Chao1 and ACE indices increased with BOF treatment. No obvious difference in evenness or Good's coverage was observed among the three treatments. The alpha diversity of fungi showed the opposite pattern: BOF treatment significantly reduced the Chao1, ACE, and Shannon indices. Soil microbial diversity is a key factor affecting soil health and quality, and agricultural treatments can affect soil microbial diversity [30]. Soil microflora plays a central role in promoting the decomposition of loaded OM and nutrient cycling, particularly the most abundant bacterial groups, which are indispensable for soil ecological services [30,31]. This study found that BOF treatment significantly increased bacterial diversity and decreased fungal diversity (Table 3).

Bacterial and fungal community composition

The microbial community can be used as an important factor for evaluating soil fertility, and beneficial microorganisms in the soil can prevent soil-borne diseases [32]. Understanding the species and distribution of microorganisms is essential to the control of plant diseases [33]. Fig. 2 shows the relative abundance of bacteria and fungi at the phylum level.
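Profiles of this kind are obtained by collapsing the OTU table by taxonomy and normalizing each sample to 100%. A pandas sketch is below; the OTU IDs, sample names, counts, and phylum assignments are all invented for illustration.

import pandas as pd

# otu_table: rows = OTUs, columns = samples, values = read counts.
otu_table = pd.DataFrame(
    {"CF_1": [120, 30, 50], "OF_1": [90, 55, 45], "BOF_1": [70, 80, 60]},
    index=["OTU_1", "OTU_2", "OTU_3"],
)
taxonomy = pd.Series(
    ["Proteobacteria", "Acidobacteria", "Actinobacteria"],
    index=otu_table.index, name="phylum",
)

# Collapse counts to the phylum level, then convert to relative abundance (%).
phylum_counts = otu_table.groupby(taxonomy).sum()
rel_abundance = 100 * phylum_counts / phylum_counts.sum(axis=0)
print(rel_abundance.round(1))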
Proteobacteria, Acidobacteria, Actinobacteria, Chloroflexi, Gemmatimonadetes, Bacteroidetes, Rokubacteria, and Nitrospirae (relative abundance > 1%) were the predominant bacterial phyla in all treatments (Fig. 2a). No significant (p > 0.05) difference in the relative abundance of the predominant phyla was observed among the three treatments. For fungi, Ascomycota, Basidiomycota, Mortierellomycota, and Olpidiomycota (relative abundance > 1%) were the predominant phyla in all treatments (Fig. 2b). The results showed that the main components of the soil fungal community were Ascomycota and Basidiomycota, similar to those observed in the soil of peas [34] and peanuts [35]. Ascomycota contains many plant pathogens, and ascomycetes are often inhibited in soils where diseases are controlled [36]. The relative abundance of Ascomycota in the BOF treatment was significantly (p < 0.05) decreased compared with the CF treatment. The abundance of Ascomycota was also decreased in the OF treatment, which may be because the application of OF increased beneficial bacteria in the soil, thereby inhibiting some fungi [37].

The relative abundance of bacteria and fungi at the genus level is shown in Fig. 3. The top 10 bacterial genera were Sphingomonas, RB41, MND1, Nitrospira, Gaiella, Lysobacter, Haliangium, Ochrobactrum, Ellin6067, and Subgroup 10 (Fig. 3a). The relative abundance of MND1 in the BOF treatment was higher than in the OF and CF treatments and was significantly (p < 0.01) different from the CF treatment [39].

Mortierella has not yet been used as a biological control agent, but some strains have been shown to produce antifungal and antibacterial metabolites [40].

Effects of environmental factors on bacterial and fungal communities

To determine which environmental factors affect the composition of the soil bacterial and fungal communities, an RDA analysis was conducted (Fig. 4). For bacteria, the first two components of the RDA accounted for 45.91% and 19.43% of the total variation (Fig. 4a). For fungi, the first two components explained 54.11% and 18.63% of the total variation (Fig. 4b). Among bacteria, electrical conductivity and AP were positively correlated with Sphingomonas, MND1, Subgroup 10, and Lysobacter. Among fungi, electrical conductivity and AP were negatively correlated with Verticillium, Botryotrichum, and Fusarium and positively correlated with Mortierella and Sollicocozyma. The RDA revealed that soil electrical conductivity and AP significantly influence microbial community composition (Table S1), which is consistent with previous findings that AP [39] and soil electrical conductivity [41] play important roles in bacterial community formation. A previous study showed that a higher soil P content was associated with a lower incidence of wheat Rhizoctonia root rot [42], and AP content is negatively correlated with banana Fusarium wilt [39]. Based on the above results, BOF treatment may affect the soil microbial community by increasing soil electrical conductivity and AP content.

CONCLUSION

Field experiments showed that partially replacing chemical fertilizer with bio-organic fertilizer promoted potato growth and improved tuber quality. The application of bio-organic fertilizer also improved soil fertility, increased bacterial diversity, and reduced fungal diversity. The relative abundance of harmful fungi, such as Fusarium, Verticillium, and Botryotrichum, was reduced by bio-organic fertilizer. Soil bacterial and fungal composition was primarily driven by electrical conductivity and AP.
This work provided a preliminary theoretical basis for reducing the use of chemical fertilizers.

Appendix A. Supplementary data

Supplementary data associated with this article can be found at http://dx.doi.org/10.2306/scienceasia1513-1874.2021.039.
Minimal Polynomials of Some Matrices Via Quaternions

This work provides explicit characterizations and formulae for the minimal polynomials of a wide variety of structured $4\times 4$ matrices. These include symmetric, Hamiltonian and orthogonal matrices. Applications such as the complete determination of the Jordan structure of skew-Hamiltonian matrices and the computation of the Cayley transform are given. Some new classes of matrices are uncovered, whose behaviour, insofar as minimal polynomials are concerned, is remarkably similar to that of skew-Hamiltonian and Hamiltonian matrices. The main technique is the invocation of the associative algebra isomorphism between the tensor product of the quaternions with themselves and the algebra of real $4\times 4$ matrices.

Introduction

The minimal polynomial of a matrix is the unique monic polynomial of minimal degree which annihilates the matrix. It has several theoretical and practical uses. It provides information about the Jordan structure of the matrix, and in some situations can nearly determine it. Its principal utility, arguably, is in computing functions of a matrix such as the matrix exponential and the Cayley transform. While any annihilating polynomial can be used for this purpose, the complexity of the resultant expression is naturally minimal when the minimal polynomial is used. If one knows the Jordan structure of the matrix then its minimal polynomial is easily computed. However, since the former is difficult to arrive at, this is rarely advisable. Essentially any mechanism which explicitly detects linear dependence at the earliest stage in the sequence I, A, A^2, ..., A^n (in that order) will yield the minimal polynomial [7,8]. In this work we use quaternions to achieve the same for specific structured 4 × 4 matrices. Whilst 4 × 4 matrices are amenable to the techniques of [7,8], the corresponding calculations can be quite difficult, and would not produce the closed-form expressions for minimal polynomials presented herein. More discussion on this issue is presented in Section 5. In the method proposed here one replaces matrix calculations (specifically, computing A^k) with quaternion calculations. Not only does this simplify such calculations, but it also yields elegant geometric interpretations of situations wherein the minimal polynomial is a particular polynomial. Of course, this methodology does not extend to higher dimensions immediately (see, however, the discussion in Section 5), but 4 × 4 matrices already cover enough important applications to warrant the investigation of such a technique. In quantum computation, quantum optics, computer graphics, robotics, etc., much of the analysis is reducible to the study of 4 × 4 matrices (see, for instance, [3,4,16]). The isomorphism of H ⊗ H with M(4, R) is a central point in the theory of Clifford algebras, so a natural question is whether Clifford algebra isomorphisms can be used for similar purposes. In fact, the interesting work of [1] uses symbolic and numerical computation of the (real) minimal polynomial of matrices, via their Clifford algebra representatives, for the exponentiation of matrices. The difference between the work of [1] and the results here, insofar as the computation of minimal polynomials of matrices in M(4, R) is concerned, is that the structural (i.e., "geometrical") conditions given here on the entries of a matrix for it to possess a given minimal polynomial are missing in [1]. There are of course other differences.
Section 5 discusses this issue briefly. It is appropriate at this point to record some history of the linear algebraic applications of the isomorphism between H ⊗ H and M(4, R). This isomorphism is central to the theory of Clifford algebras [10]. However, it has only relatively recently been put to use for linear algebraic purposes. To the best of our knowledge, the first instance seems to be the work of [9], where it was used in the study of linear maps preserving the Ky-Fan norm. Then in [6], this connection was used to obtain the Schur canonical form explicitly for real 4 × 4 skew-symmetric matrices. Next is the work of [5,11,12], wherein this connection was put to innovative use for solving eigenproblems of several classes of structured 4 × 4 matrices. In [14,15], this isomorphism was used to explicitly calculate the exponentials of a wide variety of 4 × 4 matrices. Finally, in [2] it was used to obtain, among other things, the polar decomposition of 4 × 4 symplectic matrices via the solution of 2 × 2 linear systems of equations.

The balance of this paper is organized as follows. In the next section basic notation and preliminary facts are reviewed. The section after that contains all the main results on minimal polynomials obtained by our method; since many of the proofs are similar, we provide proofs for only a part of the announced results. The fourth section contains three applications. The first is the complete determination of the Jordan structure of 4 × 4 skew-Hamiltonian matrices. The second illustrates the usage of the results on minimal polynomials to calculate the Cayley transform in closed form. The final application is the determination of the singular values of 3 × 3 real matrices. The next section discusses extensions of the results of Section 3 via the use of Clifford algebras. In particular, classes of matrices are uncovered which behave very similarly to skew-Hamiltonian matrices insofar as minimal polynomials are concerned, even though their block structures do not suggest this similarity. This section also provides a brief comparison of our technique with those of [1,8]. The final section offers some conclusions.

The classes of real matrices discussed in this work are as follows:
• Skew-symmetric matrices, i.e., X satisfying X^T = −X.
• Hamiltonian matrices, i.e., matrices H ∈ M(2n, R) satisfying H^T J_2n + J_2n H = 0, where J_2n is the 2n × 2n matrix with block form (0, I_n; −I_n, 0).
• Perskewsymmetric matrices, i.e., matrices X satisfying X^T R_n = −R_n X, where R_n is the n × n matrix containing 1s on its main anti-diagonal and 0s elsewhere. R_n is sometimes denoted F and is called the flip matrix.
• For a matrix X ∈ M(n, R), we denote by X^F its adjoint with respect to the nondegenerate bilinear form defined by R_n, i.e., the matrix R_n X^T R_n.
• For a matrix X ∈ M(2n, R), we denote by X^H its adjoint with respect to the nondegenerate bilinear form defined by J_2n, i.e., the matrix −J_2n X^T J_2n.
• Special orthogonal matrices, i.e., X with X^T X = XX^T = I_n and det(X) = 1.

These classes were picked because i) they are ubiquitous in applications; and ii) in most cases, as will be seen subsequently, elegant geometric conditions can be given on their quaternionic representations which ensure their possessing a certain minimal polynomial. Matrices such as persymmetric and symplectic matrices do not seem as amenable to the latter consideration, and are therefore not considered here. We note, however, that in the final section we discuss how quaternion techniques can be used to compute the minimal polynomial of a general matrix in M(4, R).
Definition 2.1 H stands for the real division algebra of the quaternions. P stands for the purely imaginary quaternions. We will tacitly identify an element of P with the corresponding vector in R^3. With this understood, the following two identities will be frequently used:
• Let p, q, r ∈ R^3. Then p × (q × r) = (p.r)q − (p.q)r. (2.1)
• For p, q ∈ P, the quaternion product is pq = −p.q + p × q.

H ⊗ H and gl(4, R): The algebra isomorphism between H ⊗ H and M(4, R) (also denoted by gl(4, R)), which is central to this work, may be summarized as follows:
• Associate to each product tensor p ⊗ q ∈ H ⊗ H the matrix M_{p⊗q} of the map which sends x ∈ H, identified with R^4, to p x q̄.
• Extend this to the full tensor product by linearity.

This yields an associative algebra isomorphism between H ⊗ H and M(4, R). Furthermore, a basis for gl(4, R) is provided by the sixteen matrices M_{e_x⊗e_y} as e_x, e_y run through 1, i, j, k. In particular, R_4, the matrix intervening in the definition of perskewsymmetric matrices, and J_4, the matrix used in the definition of Hamiltonian and skew-Hamiltonian matrices, represented respectively by M_{j⊗i} and M_{1⊗j}, belong to this basis.

Quaternion Representations of Special Classes of Matrices: Throughout this work, the following list of H ⊗ H representations of the above classes of matrices will be used:
• Skew-Symmetric Matrices: s ⊗ 1 + 1 ⊗ t with s, t ∈ P.
These representations can be easily obtained from the entries of the 4 × 4 matrix in question (see [11,5,12] for some instances). The key to this consists of the following two observations:
• Conjugation in H ⊗ H corresponds to matrix transposition in gl(4, R), i.e., M_{p̄⊗q̄} = (M_{p⊗q})^T. This is why, for instance, symmetric matrices correspond to c(1 ⊗ 1) + p ⊗ i + q ⊗ j + r ⊗ k with p, q, r purely imaginary, and skew-symmetric matrices correspond to s ⊗ 1 + 1 ⊗ t, s, t ∈ P. If such a matrix is simultaneously symmetric, then α = β = 0, etc. See [11,5,12] for these expressions.
For special orthogonal matrices, the entries of the matrix are quadratic in u and v. See [4] for an algorithmic determination of the unit quaternions u and v from the entries of a special orthogonal matrix. By way of illustration, the requisite expression for the quaternionic representation of a skew-Hamiltonian matrix is provided below, with p a purely imaginary quaternion; the formulae relating these to X's entries are as follows:

We close this section with the notion of the reverse of a polynomial.

Definition 2.2 If p(x) = Σ_{i=0}^n a_i x^i is a polynomial of degree n, then its reverse is the polynomial p_rev(x) = Σ_{i=0}^n a_{n−i} x^i.

We begin with a simple proposition, applicable in arbitrary dimensions, which reduces the list of possible minimal polynomials for some of the matrices to be considered here.

Proposition 3.1
• I) Let A be similar to −A. If the degree of the minimal polynomial of A is even, then its minimal polynomial is an even polynomial. If the degree of the minimal polynomial is odd, then it is an odd polynomial.
• II) Let A^{-1} be similar to A^T. Then the constant term in its minimal polynomial is either +1 or −1. If it is the former, then its minimal polynomial equals its reverse. If it is the latter, it is minus its reverse.

Sketch of the proof: For I), let q_A(x) = x^k + Σ_{i<k} a_i x^i be the minimal polynomial of A (and thus of A^T). Then clearly the polynomial p(x) = (−1)^k q_A(−x), which is monic, has to be the minimal polynomial of −A: if a monic polynomial x^l + ... with l < k was the minimal polynomial of −A, then the corresponding sign-reversed polynomial would annihilate A, contradicting the minimality of q_A(x). Thus, since −A and A^T are similar, q_A(x) = p(x), and the result follows. Part I) applies to Hamiltonian and perskewsymmetric matrices, since in each of these cases A^T, and hence A, is similar to −A; part II) applies to special orthogonal matrices, for which A^{-1} = A^T. When A^T is similar to A, there are no such general results.
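Before stating the main results, the isomorphism itself can be checked numerically. The sketch below assumes the convention given above, that p ⊗ q acts on x ∈ H, identified with R^4 in the basis order 1, i, j, k, by x ↦ p x q̄; it verifies multiplicativity, the conjugation/transposition rule, and the skew-symmetry of M_{s⊗1} for purely imaginary s.

import numpy as np

def qmul(a, b):
    # Hamilton product of quaternions stored as (1, i, j, k) coefficients.
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def conj(a):
    return np.array([a[0], -a[1], -a[2], -a[3]])

def M(p, q):
    # 4x4 matrix of x -> p x conj(q); columns are the images of 1, i, j, k.
    return np.column_stack([qmul(qmul(p, e), conj(q)) for e in np.eye(4)])

p, q = np.random.randn(4), np.random.randn(4)
assert np.allclose(M(p, q) @ M(p, q), M(qmul(p, p), qmul(q, q)))
assert np.allclose(M(p, q).T, M(conj(p), conj(q)))

# A purely imaginary s gives a skew-symmetric S = M_{s (x) 1} with
# S^2 = -(s.s) I, anticipating the quadratic minimal polynomial below.
s = np.array([0.0, 0.7, -1.2, 0.4])
S = M(s, np.eye(4)[0])
assert np.allclose(S.T, -S)
assert np.allclose(S @ S, -np.dot(s, s) * np.eye(4))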
Next follow our main results about minimal polynomials. As mentioned in Section 1, we detail only those cases where one has an "elegant" condition on the H ⊗ H representations of the matrix in question which is equivalent to the matrix having the said polynomial as its minimal polynomial. This already contains an extensive collection of useful matrices. Furthermore, since the proofs are similar, we present details only for some cases. • S has a quadratic minimal polynomial, which equals x 2 + λ 2 , iff precisely one of s or t is equal to zero. Furthermore, in this case, λ 2 is either s.s or t.t. • H has a cubic minimal polynomial, which equals p(x) = x 3 − (ω + 2k)x, with ω as in the quadratic minimal polynomial case and k as specified below, iff one of the following five mutually exclusive conditions hold [See Remark (3.2), below, for special cases of these conditions]. 1. b = 0 and the matrix G = X T X, with X = [p | q | r], has the the matrix In this case the coefficient k in the given cubic minimal polynomial 2. b = 0, r × q = 0, p = 0 r.q = 0 and q.q = r.r. In this case k = r.r + q.q In this case k = r.r. In this case, k = q.q. • If none of the above conditions hold, the minimal polynomial is the characteristic polynomial, which equals Theorem 3.3 Minimal Polynomials of Perskewsymmetric Matrices: Let P be a perskewsym- • P has a quadratic minimal polynomial iff one of the following three mutually exclusive sets of conditions hold. These are: i)α = 0, β = 0, s = 0; ii)β = 0, α = 0, r = 0; iii) α = β = 0 and either r × j = 0 or s × i = 0. In each of these cases the minimal • P has a cubic minimal polynomial iff α 2 −β 2 = s.s−r.r (without any of the conditions in the quadratic minimal polynomial case occurring). In this case the minimal polynomial • If none of the above conditions hold the minimal polynomial is the characteristic polynomial which equals Sketch of the Proof: We will illustrate the calculations involved by proving the conditions for quadratic and cubic minimal polynomials for a Hamiltonian matrix H. Quadratic Case: H 2 's quaternionic representation is According to Proposition (3.1) if at all H 2 is linearly dependent on a lower power of H, that power has to be 1 ⊗ 1. A necessary and sufficient condition for that to happen is evidently Cubic Case: By a direct calculation, which makes copious use of the vector triple identity [Equation (2.1)], one finds that In view of Proposition (3.1), for H to have a cubic minimal polynomial, therefore there has to be a real k such that and further that When this happens, in absence of the conditions for a quadratic minimal polynomial, the minimal polynomial of H is There are now two possibilities. In the former case, we find Next, noting that G is the Gram matrix of X = [p | q | r], one finds that taking the inner product on both sides of Equation (3.3) successively with p, q, r yields . Hence H has the stated minimal polynomial. and Now the analysis of the conditions equivalent to H having the stated minimal polynomial may be divided into two further cases: Suppose first that p is zero. Then Equation (3.4) and the first Equation in the system (3.5) are trivially satisfied, while the remaining two equations of Equation (3.5) yield These two equations contradict r × q = 0, unless r.r = q.q = k and q.r = 0. Conversely these two conditions are trivially sufficient to ensure that H has the said cubic minimal polynomial Next, suppose p = 0. 
Then certainly the linear independence of q and r, and the linear dependence of p on them is required. Now at least one of p.q or p.r is not zero, for otherwise p becomes zero, contradicting the starting assumption for this case. Now the analysis may be divided into three cases: • p.q = 0, but p.r = 0. Then taking the inner product of the first equation in (3.5) with q forces r.q = 0. Next, by taking the inner product of the same equation with r yields k = r.r. The second equation also forces k = r.r. The third equation is trivially satisfied upon taking inner product with q, while taking inner product with . Hence, we necessarily require (r.r) 2 + (p.r) 2 = (q.q)(r.r). Conversely, if these conditions hold, then the vectors formed by the left hand sides of Equation (3.5), which are in the span of q and r, are by construction orthogonal to q and r. Hence they must be zero. Thus, Equations (3.4) and (3.5) are satisfied and hence H has the stated cubic minimal polynomial. • Neither p.q nor p.r is zero. Then, first by taking inner product with q of the last equation of the system of (3.5), for instance, one sees that r.q = 0. Next taking the inner product with respect to q, first and then with respect to r of all equations in the system (3.5), one arrives at six possible expressions for k. Of these two are already equal to − q.p q.r (p.r). The remaining four are Hence necessarily these four quantities are equal to each other and to − q.p q.r (p.r). Conversely, these conditions are sufficient to ensure that H has the said cubic minimal polynomial with k = − q.p q.r (p.r). ♦ Remark 3.2 There are some special cases of the above result for the stated cubic minimal polynomial for a Hamiltonian matrix H, which deserve mention. • First if b = 0, p = 0, and p, q, r are collinear, then H has the given cubic minimal In this case k = −p.p. Indeed, in this case both the matrices G and Y are rank one matrices, and the condition GY = 0 then is equivalent to b 2 = p.p + q.q + r.r. Note this contains the special case that q = r = 0. In this case, H is also skewsymmetric, and we find that a necessary and sufficient condition for H to have the given polynomial as its minimal polynomial is p.p = b 2 . This, as is easily seen, is in keeping with the conditions for a skew-symmetric matrix to have a cubic minimal polynomial. • A second special case, diametrically opposed to the previous one, occurs when the vectors p, q, r are all non-zero, and satisfy q × r = αp, r × p = βq, p × q = γr, for some non-zero real numbers α, β, γ. One then finds that α = b (for otherwise, we would have a quadratic minimal polynomial). In this case k = (α − b)b and G and Y are both diagonal. Then the condition GY = bp.(q × r)I 3 is equivalent to β = γ (equivalently q.q = r.r) and These conditions are satisfied if, for instance, b = β = γ, α = b and p.p = b 2 . • Note when b = 0 = p, H is a symmetric, Hamiltonian matrix. The conditions stated above for a cubic minimal polynomial for H also follow from Theorem (3.5) below. Remark 3.3 Note that there is an asymmetry in the role of p (vis a vis q, r) in the matrix Y intervening in the conditions for a cubic minimal polynomial for a Hamiltonian matrix H. This is not surprising since p stems from the anti-symmetric part of H, while q, r stem from the symmetric part of H. Next we study minimal polynomials for skew-Hamiltonian and symmetric matrices. Now Proposition (3.1) does not apply. 
Nevertheless we will find that the former always have quadratic minimal polynomials, and this is an illustration of the utility of quaternions. For the latter, in order to minimize bookkeeping, we suppose they are traceless. Once the minimal polynomial of these are found, those of symmetric matrices with non-zero trace are easily found. symmetric matrix with representation p ⊗ i + q ⊗ j + r ⊗ k. Then • i) S has the quadratic minimal polynomial p(x) = x 2 − λ 2 iff the rank of X = [p, q, r] is one. In this case λ 2 = (p.p + q.q + r.r). • S has the cubic minimal polynomial p(x) = x 3 − (λ 2 + 2α)x iff the rank of X is two and one of the three following mutually exclusive conditions hold: -Upto cyclic permutations of p, q, r, p.q = 0, r × p = 0 = q × r, p.p = q.q. In this case α = p.p. -None of p × q, q × r or r × p is zero and the following set of equalities holds q.r . • When the degree of the minimal polynomial is four, the minimal (and characteristic) Note: In the case of symmetric matrices, there are other cubic minimal polynomials. Expressions and conditions for them can be found, but they do not have elegant geometric interpretations, and so we omit them. Sketch of the proof: Once again we illustrate the quadratic and cubic minimal polynomial case for traceless, symmetric matrices. One first finds that S 2 is given by rank one. When this holds λ 2 = p.p + q.q + r.r. Next a calculation shows that It follows that for S to have the desired minimal polynomial one needs p.(q × r) = 0 and that the following condition,and all cyclic permutations of it, have to hold for the same non-zero α. The condition p.(q × r) = 0 forces the rank of X = [p, q, r] to be atmost two. It has to be two, since the rank one case corresponds to a quadratic minimal polynomial. Hence rank of X T X = 2. Since X T X is positive semidefinite, at least one principal minor of order two has to be non-zero. Hence further analysis can be divided into three mutually exclusive cases: • Precisely one 2 × 2 principal minor of X T X is non-zero -say the one corresponding to the pair (p, q). Thus r × p = 0 = q × p, but p × q = 0. So the system (3.6) reduces to Hence, the linear independence of p, q first forces p.q = 0 and q.q = p.p = α. This implies α = 0 and hence r = 0. Conversely these conditions are sufficient for S to have the stated minimum polynomial. • Precisely two of the 2 × 2 principal minors of X T X are zero, say those corresponding to the pairs (r, p) and (q, r). In particular, p × q = 0. Writing out the system (3.6) under these assumptions, we find that r.p = 0, α = r.r, r.q = 0, α = p.p + q.q So the stated conditions are necessary and it is easy to see their sufficiency as well. • None of the 2 × 2 principal minors of X T X are zero. Thus each of the pairs (p, q), (q, r) and (r, p) are linearly independent, but each of the three vectors is linearly dependent on the remaining two. Then the system (3.6) is equivalent to (q.q + r.r − α)p = (q.p)q + (r.p)r (3.7) (r.r + p.p − α)q = (q.r)r + (q.p)p for α, which have therefore got to coincide, i.e., it is necessary that Conversely if the above equalities hold, then the vectors represented by the left hand sides of the system (3.7) are equal to the corresponding right hand sides. Hence these conditions are necessary and sufficient for S to have the stated cubic minimal polynomial. Remark 3.4 It is not enough for X = [p, q, r] to have rank 2 for a symmetric S to have the stated cubic minimal polynomial. The remaining conditions are needed. 
In fact, it turns out that all other cases when X has rank two correspond to fourth degree minimal polynomials. • If none of the above conditions hold, G's minimal polynomial is its characteristic polynomial which equals Sketch of the proof: First, since G is neither I nor −I, Im(u) and Im(v) cannot be simultaneously zero. Next, using u 2 = (2u 2 0 − 1) + 2u 0 Im(u) (and a similar expression for v 2 ), we see G 2 is represented by In view of Proposition (3.1) the only possible candidates for a quadratic minimal polynomial are p(x) = x 2 − 1 and p(x) = x 2 + ax + 1. Suppose, Im(u) = 0. Then u 2 0 = 1 and Im(v) = 0. So the second condition above forces v 0 = 0. But then the first condition above is not satisfied. Similarly the condition Im(v) = 0 is untenable. Hence, Im(u) = 0 = Im(v). Now the fourth and the first conditions together force u 0 = 0 = v 0 . Conversely, when u 0 = 0 = v 0 we see, from conditions above, that G 2 = I. Suppose Im(u) = 0. Then u 0 is +1 or −1. By absorbing the negative coefficient in v, we may suppose u 0 = 1. In this case a further necessary condition is a = −2v 0 . Similarly, if Im(v) = 0, a = −2u 0 is required. Conversely, these conditions are easily seen to be sufficient for p(x) = x 2 + ax + 1 to be the minimal polynomial of G. Next we study the necessary and sufficient conditions for G to have cubic minimal polynomials. First, we find that By Proposition (3.1) it suffices to consider when G 3 +aG 2 +aG+I = 0 or G 3 −aG 2 +aG−I = 0 for suitable constants a. Writing out the former we get for some polynomials f i , i = 1, . . . , 4, whose explicit form we omit for brevity. Necessarily Im(u) = 0 = Im(v) (for otherwise we would be in the case of lower degree minimal polynomials). Hence a necessary and sufficient condition for G 3 + aG 2 + aG + I = 0 is In [2], a quaternionic representation for Sp(4, R) was obtained. In particular, this was used to find a closed form formula for the characteristic polynomial of such matrices. Extending this to find expressions for the minimal polynomial remains to be investigated. Illustrative Applications In this section we work out a few sample applications of the foregoing results. The first appli- Here Proof: First since W is non-scalar, the quantity θ 2 =|| p || 2 +c 2 + d 2 is non-zero. Per This polynomial has roots (b + µ, b − µ), which are distinct iff µ = 0, whence the first conclusion. The algebraic multiplicity of both the roots, b + µ and b − µ, as roots of the characteristic polynomial has to be two each, for any other configuration of algebraic multiplicities would not yield Tr(W ) = 4b. This yields the stated characteristic polynomial. Note that when µ = 0, the sole eigenvalue is b with algebraic multiplicity four. Next, when µ = 0, W is diagonalizable and, in view of the algebraic multiplicities mentioned above, the corresponding Jordan form is diag This can be seen in a variety of ways. For instance, rk(Y ) = rk(Y T Y ) and the latter has rank 2 precisely when at least one of its 2 × 2 principal minors is non-zero. We will now show that at least one 2 × 2 principal minor M ij of Y T Y has to be non-zero. Now the above facts regarding the (1, 2) and (3,4) minors are equivalent to dp 1 = 0 (since This leads to the following system of equations for c 0 , c 1 : This yields c 0 = 2b+1−κ 2b+1+κ and c 1 = −2 2b+1+κ . 
Hence, from this expression and the fact that X's minimal polynomial has to have distinct roots, one can infer the following relation between X's minimal polynomial (and therefore the corresponding geometric conditions on p, q, r stated in Theorem (3.5)) and Y's singular values:
• σ_2 = 0 = σ_3, σ_1 ≠ 0 iff X has minimal polynomial x^2 − c^2.
• σ_1 = σ_2 = σ_3 ≠ 0 iff X has minimal polynomial x^2 − 2lx − λ^2.
These can be used to augment the list of minimal polynomials of X. The corresponding conditions on p, q, r are too cumbersome to state. Partly because of this, and partly since that would have been contrary to the spirit of the paper, these minimal polynomials were not presented in Theorem (3.5) (cf. the note immediately following the statement of Theorem (3.5), and Remark (3.4)).
• If Y has rank 2 and σ_1 = σ_2, then X has a quartic minimal polynomial.
• Y has a cubic minimal polynomial other than x^3 + cx iff τ = 0 and either i) ..., in which case no eigenvalue of X is zero; or ii) σ_2 = σ_3 = σ_1, in which case X has a zero eigenvalue iff σ_1 = 2σ_2.

Extensions

There are a few potential extensions of this work, which we discuss in this section. One trivial way to extend the above results is to consider block diagonal matrices, with each block 4 × 4. The minimal polynomial of such a matrix is the least common multiple of the minimal polynomials of the individual blocks. Thus, when each of these blocks belongs to any of the classes of matrices considered here, one can find their minimal polynomials in closed form. A second extension is to apply the theory of Clifford algebras to calculate minimal polynomials, since each Clifford algebra arises as a suitable matrix algebra. In this regard we mention the interesting work of [1], where a symbolic calculation of the so-called real minimal polynomial is used to calculate exponentials of matrices. This, however, does not take into account the involutions of Clifford algebras, and thus the structure of the matrix is not used in finding minimal polynomials. In particular, there are no analogues of the geometric conditions on quaternions in the previous sections. To understand the crux of the differences between our work and that in [1], it is useful to note the three features of H ⊗ H which enable our approach:
• i) H ⊗ H has a basis in which every element squares to plus or minus 1. Furthermore, any two elements in this basis commute or anti-commute.
• ii) The matrix analogue of the natural conjugation on H ⊗ H is matrix transposition.
• iii) The multiplication in H ⊗ H is intimately related to the geometry of vectors in R^3.
For Clifford algebras the first feature goes through verbatim. The second feature's effect is somewhat diluted, inasmuch as the natural involutions of the theory of Clifford algebras (Clifford conjugation and reversion) [10,13] have easy matrix-theoretic interpretations only in certain cases. Finally, the third feature is completely lost. In the work of [1], only the first feature is used. Hence the structural (i.e., geometric) conditions in this work on a matrix's H ⊗ H representation, for it to have a specific minimal polynomial, have no analogues in [1]. As mentioned in the previous paragraph, the three enabling features for the H ⊗ H isomorphism of M(4, R) are diluted for Clifford algebra isomorphisms of matrix algebras. Nevertheless, there are two ways in which the theory of Clifford algebras can be used for the purpose at hand.
First, one can uncover more classes of 4 × 4 matrices whose minimal polynomials can be calculated, and whose Jordan structure is akin to that of skew-Hamiltonian matrices. Let us now explore the first extension. To that end, note that there are two standard involutions in the theory of Clifford algebras: reversion and Clifford conjugation [10,13]. These are both anti-automorphisms. The matrix versions of these two involutions are easy for two classes of Clifford algebras. For Cl(n, 0), reversion is Hermitian conjugation, while for Cl(0, n) Clifford conjugation is Hermitian conjugation. Representing Cl(p + 1, q + 1) as M(2, Cl(p, q)) (the algebra of 2 × 2 matrices with entries in Cl(p, q)), it is known that Clifford conjugation is represented as follows. Here, and in the balance of this section, Z^cc (respectively, Z^rev) stands for the Clifford conjugation (respectively, reversion) of a matrix (or its Clifford representation) Z. Let us illustrate how this can be used to find minimal polynomials for matrices stemming from Cl(2, 2). On Cl(1, 1), reversion sends X to R_2 X^T R_2 (which, in the notation introduced in Section 2, is X^F), while Clifford conjugation sends X to −J_2 X^T J_2, which is X^H. Equivalently, since X is 2 × 2, X^cc is adj(X), where, as usual, adj(X) is the classical adjugate of X. Thus, on Cl(2, 2), writing X in 2 × 2 blocks as X = (A, B; C, D), we get the corresponding blockwise formulae. Thus, if X ∈ Cl(2, 2) equals its own reversion, then A and D are each other's adjugates, while B and C are 2 × 2 skew-Hamiltonian. A basis of 1-vectors for Cl(2, 2) consists of the following four matrices (written in H ⊗ H form): This yields expressions for 2-vectors etc., which we omit. Now, since X equals its own reverse, it must be a linear combination of the identity, 1-vectors and 4-vectors. The last equation, for a basis of one-vectors for Cl(2, 2), thus yields the following H ⊗ H representation of the most general X ∈ Cl(2, 2) satisfying X^rev = X, with p, s pure quaternions, the latter having no k-component. Thus such an X is remarkably similar to skew-Hamiltonian matrices, with the difference that the roles of j and k have been interchanged. Thus, we find, for instance, that X^2 = (p.p − s.s − a^2) 1 ⊗ 1 + 2aX, and hence such an X's minimal polynomial is quadratic. We omit the similar statements about the Jordan structure of such matrices that this minimal polynomial yields. If X ∈ Cl(2, 2) satisfies X^rev = −X, then it has an H ⊗ H representation akin to that for a Hamiltonian matrix, and therefore an analogue of Theorem (3.2) applies to it. Similarly, if X ∈ Cl(2, 2) is minus its own Clifford conjugation, then i) A = D^F and B and C are both perskewsymmetric; and ii) the H ⊗ H representation of X is given by the corresponding expression with a ∈ R, p, q ∈ P and q.k = 0. Once again, this yields a quadratic minimal polynomial for X.
• By passing to Cl(3, 1) and performing an analysis akin to the one above for Cl(2, 2), one can show that those X ∈ Cl(3, 1) satisfying X^rev = X are again given by Equation (5.11), while those satisfying X^cc = X are skew-Hamiltonian matrices.
• It is worth emphasizing that the block structures of the matrices considered in the previous paragraphs do not themselves reveal the simplicity of their minimal polynomials. It is only by passing to their H ⊗ H representations that we are led to these results.
For higher-dimensional matrix algebras arising from Clifford algebras, we do not (yet) have an exhaustive set of results. Nevertheless, some conclusions can be drawn, which would have been difficult to arrive at without passing to Clifford algebras.
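As a small numerical check of the involution formulas used above, for a 2 × 2 real matrix X the map X ↦ −J_2 X^T J_2 (that is, X^H) can be verified to coincide with the classical adjugate, so that X X^H = det(X) I:

import numpy as np

def adjugate2(X):
    # Classical adjugate of a 2x2 matrix: adj([[a, b], [c, d]]) = [[d, -b], [-c, a]].
    (a, b), (c, d) = X
    return np.array([[d, -b], [-c, a]])

J2 = np.array([[0.0, 1.0], [-1.0, 0.0]])
X = np.random.randn(2, 2)

assert np.allclose(-J2 @ X.T @ J2, adjugate2(X))
assert np.allclose(X @ adjugate2(X), np.linalg.det(X) * np.eye(2))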
Let us illustrate this via Cl(0, 6). This is M(8, R). Furthermore, Clifford conjugation is precisely matrix transposition in this case, and thus a matrix is anti-symmetric iff it is minus its Clifford conjugation. Since the Clifford conjugation of a p-vector in Cl(0, 6) is minus itself iff p = 1, 2, 5, 6, an 8 × 8 matrix is anti-symmetric iff it is a linear combination of these p-vectors. We use the following basis of 1-vectors: Here the σ's are the usual Pauli matrices. Using this one can write down a basis of p-vectors for p = 2, 5, 6, which we omit for brevity. The typical 8 × 8 anti-symmetric matrix is thus a real linear combination of the form

X = Σ_i p_i e_i + Σ_{i<j} p_ij e_ij + Σ_{i<j<k<l<m} p_ijklm e_ijklm + p_123456 e_123456. (5.14)

One can now list a set of mutually exclusive conditions on these coefficients which are necessary and sufficient for X to have a quadratic minimal polynomial. From Proposition (3.1) we know that the minimal polynomial has to have the form p(x) = x^2 + λ^2. Due to the more complicated structure of Clifford multiplication on Cl(0, 6), this list of conditions, even in the quadratic case, is far too long to state in full. Therefore, we will just give sample instances of these conditions. To that end, it is first noted that this set contains conditions of two types. The first consists of conditions which merely equate some of the coefficients p_J, J ⊆ {1, 2, 3, 4, 5, 6}, in Equation (5.14) to zero. The latter consist of more complicated algebraic relations between the p_J. To understand the difference between the two, it is first noted that a p-vector and a q-vector either commute or anti-commute. Conditions of the first type arise precisely when all the summands in Equation (5.14) anti-commute; under these circumstances the minimal polynomial of X is clearly quadratic. The latter set of conditions arises when there are some commuting summands in Equation (5.14); in this case the corresponding coefficients have to satisfy certain relations to ensure that p(x) = x^2 + λ^2 is the minimal polynomial of X. By carefully considering the commutation relations between the 1-, 2-, 5- and 6-vectors in Cl(0, 6) one can arrive at the aforementioned conditions. Listed below are instances, first of the first type of conditions and then of the second type. Examples of the second type of conditions are
• X = p_1 e_1 + p_2 e_2 + p_13 e_13 + p_23 e_23 with p_1 p_23 = p_2 p_13.
Finally, in all the cases above λ^2 is the Euclidean length squared of the vector of coefficients describing X.

Octonions and Quadratic Minimal Polynomials: One special class of 8 × 8 matrices which always have quadratic minimal polynomials can be obtained via octonions. Whilst the octonions are not associative, one can attach two 8 × 8 matrices, ω(a) and θ(a), to an octonion a [17].

We end this section with a discussion of how the method of [8] can be combined with those of this work to compute minimal polynomials of 4 × 4 matrices not covered above. The same discussion will also reveal why the method of [8] requires more computation than that proposed here. We first briefly recall the method of [8] for computing the minimal polynomial of a matrix X of size n × n. One first associates to the sequence {I, X, X^2, ...} the matrices G_i, i = 1, ..., n, where G_i is the Gram matrix of the set of matrices {I, X, X^2, ..., X^i} with respect to the inner product <Y, Z> = Tr(Y^T Z) (here, for brevity, all matrices are assumed to be real).
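Read as an algorithm, this construction is short to implement. The sketch below is a straightforward numerical reading of the method of [8] (not the authors' implementation): it builds each Gram matrix from traces, stops at the first rank drop, and reads the minimal polynomial off a kernel vector, exactly as spelled out next.

import numpy as np

def minimal_polynomial(X, tol=1e-9):
    # Returns (a_0, ..., a_{r-1}, 1): coefficients of the minimal polynomial.
    # G_i is the Gram matrix of {I, X, ..., X^i} under <Y, Z> = Tr(Y^T Z).
    n = X.shape[0]
    powers = [np.eye(n)]
    for i in range(1, n + 1):
        powers.append(powers[-1] @ X)
        P = np.stack([W.ravel() for W in powers])
        G = P @ P.T                  # G[a, b] = Tr((X^a)^T X^b)
        if np.linalg.matrix_rank(G, tol=tol) < i + 1:
            # One-dimensional kernel: take the singular vector for the
            # smallest singular value and normalize its last entry to 1.
            v = np.linalg.svd(G)[2][-1]
            return v / v[-1]
    raise RuntimeError("unreachable for an n x n matrix")

X = np.diag([2.0, 2.0, 3.0, 3.0])    # minimal polynomial (x-2)(x-3)
print(minimal_polynomial(X))          # approximately [6, -5, 1]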
For instance,

G_2 = [ Tr(I)          Tr(X)          Tr(X^2)
        Tr(X^T)        Tr(X^T X)      Tr(X^T X^2)
        Tr((X^T)^2)    Tr((X^T)^2 X)  Tr((X^T)^2 X^2) ].

The method then, in essence, consists of two steps:
• One computes the ranks of the G_i. The degree of the minimal polynomial of X is r iff the first i for which the rank of G_i is lower than i + 1 is r.
• In this case it is also known that the kernel of G_r is of dimension one. Furthermore, it is guaranteed that there is a vector in the kernel of G_r whose last coefficient is non-zero. Normalizing this coefficient to one yields a vector (a_0, a_1, ..., a_{r−1}, 1) in the kernel of G_r. This vector yields the minimal polynomial of X to be p(x) = x^r + Σ_{i=0}^{r−1} a_i x^i.

Thus, this method requires two steps: i) calculating the G_i and their ranks successively till one detects a drop in rank, which requires a requisite number of trace calculations plus one's favourite method to compute ranks; and ii) computing a non-zero vector in the kernel of G_r. The first step is amenable to the methods used in this work, since to find the trace of a matrix represented in quaternion (or Clifford algebra) form, one has only to find the coefficient of the 1 ⊗ 1 term in the matrix. This rarely requires the full quaternionic expansion of the matrix. However, even for the classes of structured matrices considered here, these calculations involve more than those required by our methods. We illustrate this issue via the case of 4 × 4 real symmetric matrices. To detect a quadratic minimal polynomial, our method requires finding only X^2. However, to find G_3 and check if its rank is two, one needs terms such as Tr(X^3). While this does not require the full calculation of X^3, it requires more than a calculation of X^2, because one has to find the 1 ⊗ 1 term in X^3. Even when the ranks of the G_i have been computed and the degree of the minimal polynomial found, one still has to find a non-zero element of the kernel of G_r. This is typically difficult to do in closed form, whereas the methods used here do produce the minimal polynomials (for the classes of matrices considered here) in closed form.

Conclusions

In this work a complete characterization of the minimal polynomials of several important classes of 4 × 4 real matrices, including those of interest in applications, was provided. These were illustrated by relevant applications such as the determination of the Jordan structure of 4 × 4 skew-Hamiltonian matrices. Extensions of these results via the usage of Clifford algebras were indicated. In particular, classes of matrices were found whose block structures belie their close similarity, vis-à-vis minimal polynomials, to skew-Hamiltonian and Hamiltonian matrices. Extensions of the preliminary results announced here for M(8, R) will be the subject of future investigations.
The relationship between body system-based chronic conditions and dental utilization for Medicaid-enrolled children: a retrospective cohort study

Background
Dental care is the most common unmet health care need for children with chronic conditions. However, anecdotal evidence suggests that not all children with chronic conditions encounter difficulties accessing dental care. The goals of this study are to evaluate dental care use for Medicaid-enrolled children with chronic conditions and to identify the subgroups of children with chronic conditions that are the least likely to use dental care services.

Methods
This study focused on children with chronic conditions ages 3-14 enrolled in the Iowa Medicaid Program in 2005 and 2006. The independent variables were whether a child had each of the following 10 body system-based chronic conditions (no/yes): hematologic; cardiovascular; craniofacial; diabetes; endocrine; digestive; ear/nose/throat; respiratory; catastrophic neurological; or musculoskeletal. The primary outcome measure was use of any dental care in 2006. Secondary outcomes, also measured in 2006, were use of diagnostic dental care, preventive dental care, routine restorative dental care, and complex restorative dental care. We used Poisson regression models to estimate the relative risk (RR) associated with each of the five outcome measures across the 10 chronic conditions.

Results
Across the 10 chronic condition subgroups, unadjusted dental utilization rates ranged from 44.3% (children with catastrophic neurological conditions) to 60.2% (children with musculoskeletal conditions). After adjusting for model covariates, children with catastrophic neurological conditions were significantly less likely to use most types of dental care (RR: 0.48 to 0.73). When there were differences, children with endocrine or craniofacial conditions were less likely to use dental care, whereas children with hematologic or digestive conditions were more likely to use dental care. Children with respiratory, musculoskeletal, or ear/nose/throat conditions were more likely to use most types of dental care compared to other children with chronic conditions but without these specific conditions (RR: 1.03 to 1.13, 1.00 to 1.08, and 1.02 to 1.12, respectively). There was no difference in use across all types of dental care for children with diabetes or cardiovascular conditions compared to other children with chronic conditions who did not have these particular conditions.

Conclusions
Dental utilization is not homogeneous across chronic condition subgroups. Nearly 42% of children in our study did not use any dental care in 2006. These findings support the development of multilevel clinical interventions that target the subgroups of Medicaid-enrolled children with chronic conditions that are most likely to have problems accessing dental care.

Background
The 2011 Institute of Medicine report Improving Access to Oral Health Care for Vulnerable and Underserved Populations highlights the problems children with chronic conditions have in accessing dental care [1]. Over 20% of children in the U.S. have chronic conditions [2,3]. Based on the definition of children with special health care needs developed by the Maternal and Child Health Bureau, chronic conditions are behavioral, intellectual, developmental, or physical ailments expected to last ≥12 months in ≥75% of patients identified with the condition [4].
Examples of common chronic conditions include uncontrolled asthma, attention deficit and hyperactivity disorder, and cerebral palsy. Dental caries is the most common disease among all children, including those with chronic conditions [2,5]. As a group, children with chronic conditions are believed to be at increased risk for caries for the following reasons: (1) use of sugar-containing, acidic, or xerostomic medications; (2) frequent exposure to carbohydrates because of dietary needs or oromuscular problems; (3) behavioral comorbidities that make it difficult for caregivers to brush the child's teeth regularly with fluoridated toothpaste; and (4) dentists' unwillingness to treat children with chronic conditions. A comprehensive strategy to ensure optimal oral health for children with chronic conditions includes regular visits to a dentist for preventive care (e.g., examinations, cleanings, topical fluoride, sealants) as well as restorative care (e.g., fillings, stainless steel crowns, extractions) when needed. However, dental care is the most common unmet health care need among children with chronic conditions [2], which has renewed interest in developing strategies aimed at improving dental utilization for medically vulnerable children.

Medicaid is the largest public source of dental care funding for children with chronic conditions in the U.S. [6]. State Medicaid programs are required by the federal Early and Periodic Screening, Diagnosis, and Treatment (EPSDT) Program to provide all child enrollees with comprehensive dental care [7]. While Medicaid-enrolled children are more likely to visit a dentist than uninsured children [8,9], studies have documented disparities in dental care use among subgroups of Medicaid-enrolled children [10,11]. A recent publication reported that Medicaid-enrolled children with chronic conditions are slightly more likely to use dental care than Medicaid-enrolled children without chronic conditions [12]. Compared to Medicaid-enrolled children with less complex chronic conditions, those with more complex chronic conditions were less likely to use any dental care [12]. In another study, Medicaid-enrolled children with an intellectual or developmental disability (defined as children with a non-acquired cognitive impairment) were equally as likely to use preventive dental care as Medicaid-enrolled children without an intellectual or developmental disability [13]. Collectively, these studies suggest heterogeneity in dental care use across subgroups of Medicaid-enrolled children with chronic conditions. While this is consistent with anecdotal evidence, there are no empirical studies to support this statement. The lack of data demonstrating heterogeneity in dental use may be one reason why current interventions fail to target the children with chronic conditions at greatest risk for disparities in dental use. Population-based interventions that target all children with chronic conditions are inefficient and may misallocate scarce resources, which can lead to suboptimal outcomes.

In this study, we used 3M Clinical Risk Grouping (CRG) Software, a validated risk adjustment tool [14], to identify Medicaid-enrolled children with chronic conditions. Our goal was to assess dental use across 10 body system-based chronic condition subgroups. This approach is consistent with the specialty-focused medical care system into which most children with chronic conditions are integrated.
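Conceptually, the claims-based grouping step maps each child's diagnosis codes to condition flags before severity is assigned. The toy sketch below shows only that shape; the ICD-9 prefixes and group labels are invented examples, not 3M's proprietary CRG logic.

# Toy illustration of claims-based classification (NOT the 3M CRG algorithm).
CHRONIC_ICD9_PREFIXES = {
    "493": "respiratory",                 # asthma (illustrative)
    "343": "catastrophic neurological",   # cerebral palsy (illustrative)
    "250": "diabetes",
}

def classify_child(icd9_codes):
    # Return the set of body system-based condition flags suggested by
    # a child's diagnosis codes.
    groups = set()
    for code in icd9_codes:
        for prefix, group in CHRONIC_ICD9_PREFIXES.items():
            if code.startswith(prefix):
                groups.add(group)
    return groups

print(classify_child(["49390", "25000"]))  # {'respiratory', 'diabetes'}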
Based on previous findings that children with chronic conditions have higher levels of unmet dental needs than those without [2], we compared dental care use for children with chronic conditions across each of the following chronic condition subgroups: hematologic; cardiovascular; craniofacial; diabetes; endocrine; digestive; ear/nose/throat; respiratory; catastrophic neurological; and musculoskeletal. The knowledge generated from this study will help us to identify the subgroups of children with chronic conditions who are at greatest risk for disparities in dental care use and to develop future interventions aimed at ensuring that these children have optimal access to dental care.

Study design
This was a retrospective study based on enrollment and claims data from the Iowa Medicaid Program (2003-2006). We received approvals from the University of Washington and the University of Iowa Institutional Review Boards.

Conceptual model
The study was based on a sociocultural oral health disparities model presented by Patrick and colleagues [15]. Model covariates were organized into five domains: Ascribed factors (immutable individual-level determinants); Proximal factors (modifiable individual-level health behaviors); Immediate factors (household-level mediators between proximal and intermediate factors); Intermediate factors (community-level factors); and Distal factors (system-level factors).

Study subjects
We focused on children with chronic conditions ages 3-14 years who were enrolled in the Iowa Medicaid Program for ≥11 months in 2005 and in 2006. Children under age 3 were excluded because chronic conditions are typically not diagnosed until the child's third birthday [16]. We also excluded children ≥15 years of age because the determinants of dental use for older adolescents are different from those for younger children [17]. Children with chronic conditions were identified by applying the 3M Clinical Risk Grouping (CRG) Software (Wallingford, CT) to each child's medical claims data from 2003-2005 [18]. The CRG algorithm uses diagnostic codes (International Classification of Diseases, Ninth Revision, Clinical Modification [ICD-9-CM]) and health service utilization codes (Current Procedural Terminology [CPT]) to classify each child into one of nine mutually exclusive Core Health Status Groups (CHSGs) [4]. We excluded children in CHSGs 1 (healthy children) or 2 (children with an acute condition) and focused on children in CHSGs 3 (minor chronic condition) through 9 (catastrophic chronic condition). The final study population consisted of 25,993 Iowa Medicaid-enrolled children with chronic conditions ages 3-14 years (Figure 1).

Model covariates
Based on Patrick's model [15], we considered the following 10 variables (organized into five domains) for inclusion in our models, measured in 2005. Ascribed factors: age (three categories based on the child's dentition: 3-7 [primary and early mixed dentition], 8-12 [mixed dentition], and 13-14 [early permanent dentition] years); sex (male/female); race/ethnicity (White, Black, other, missing/unknown); and chronic condition severity (using previously validated methods [14], the seven CHSGs were reorganized into a four-category hierarchical, mutually exclusive variable referred to as modified CHSGs: episodic chronic condition; life-long chronic condition; malignancy; or catastrophic chronic condition). Proximal factors: use of preventive medical care in 2005 (no/yes); previous use of any dental care in 2005 (no/yes).
Model covariates

Based on Patrick's model [15], we considered the following 10 variables (organized into five domains) for inclusion in our models, all measured in 2005. Ascribed factors: age (three categories based on the child's dentition: 3-7 years [primary and early mixed dentition]; 8-12 years [mixed dentition]; 13-14 years [early permanent dentition]); sex (male/female); race/ethnicity (White, Black, other, missing/unknown); and chronic condition severity (using previously validated methods [14], the seven CHSGs were reorganized into a four-category hierarchical, mutually exclusive variable referred to as modified CHSGs: episodic chronic condition; life-long chronic condition; malignancy; or catastrophic chronic condition). Proximal factors: use of preventive medical care in 2005 (no/yes); previous use of any dental care in 2005 (no/yes). Immediate factors: whether the child had any Medicaid-enrolled siblings (no/yes) or adults in the household (no/yes). Intermediate factor: rurality (a four-category variable [13] based on the 2003 USDA Rural-Urban Continuum Codes and the child's county of residence: metropolitan; urban adjacent to metropolitan; urban non-adjacent to metropolitan; rural). Distal factor: whether the child lived in a dental Health Professional Shortage Area (HPSA) based on the child's residential zip code (no/yes).

Statistical analyses

After generating descriptive statistics, we used the Pearson chi-square test (α = 0.05) to test the bivariate relationships between model covariates and (1) the 10 chronic condition subgroups and (2) the five outcome measures. Next, we assessed collinearity between the rurality and dental HPSA variables. There was no evidence of collinearity, and both variables were included in the models. We then constructed five Poisson regression models (use of any dental care, diagnostic care, preventive care, routine restorative care, and complex restorative care) for each of the 10 chronic condition subgroups. We reported covariate-adjusted relative risk ratios and estimated 95% confidence intervals using robust generalized estimating equation estimators of variance [21]. We tested for a statistical interaction between the two immediate factors (whether the child had a Medicaid-enrolled sibling or adult in the household) and included the interaction term in the regression model only if it was statistically significant. To address the problem of high correlation between use of any dental care in 2005 and the outcome measures, we dropped this variable from the final regression models. We analyzed the data using SPSS Version 19.0 for Windows.
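For readers who want to reproduce this kind of model outside SPSS, the sketch below fits a modified Poisson regression with a robust (sandwich) variance estimator, a standard way to obtain adjusted relative risks for a binary outcome. It uses Python's statsmodels rather than the procedure the authors used, and all variable names and the synthetic data are hypothetical stand-ins for the analytic file.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in for the analytic file: one row per child with a 0/1
# outcome and a few of the study's covariates (names are illustrative).
rng = np.random.default_rng(42)
n = 5000
df = pd.DataFrame({
    "any_dental_2006": rng.integers(0, 2, n),
    "respiratory": rng.integers(0, 2, n),
    "age_group": rng.choice(["3-7", "8-12", "13-14"], n),
    "sex": rng.choice(["F", "M"], n),
    "dental_hpsa": rng.integers(0, 2, n),
})

# Modified Poisson regression: a log-link Poisson GLM on a binary outcome,
# with a robust sandwich variance so confidence intervals remain valid.
fit = smf.glm(
    "any_dental_2006 ~ respiratory + C(age_group) + C(sex) + dental_hpsa",
    data=df,
    family=sm.families.Poisson(),
).fit(cov_type="HC1")

rr = np.exp(fit.params)       # covariate-adjusted relative risks
ci = np.exp(fit.conf_int())   # 95% confidence intervals
print(rr["respiratory"], ci.loc["respiratory"].tolist())
```

The full study model would add the remaining covariates (race/ethnicity, modified CHSG, preventive medical care use, household enrollment, rurality) to the formula in the same way.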
Descriptive statistics

The mean age for children in the study was 8.9 ± 3.4 years (data not shown). About 40% of children were female (Table 1). Over 70% were White; 9.1% were Black; 7.5% were another race or ethnicity; and 13.2% had missing/unknown race or ethnicity data. In regard to chronic condition severity, 69.3% had episodic chronic conditions; 28.2% had life-long chronic conditions; 0.3% had a malignancy; and 2.2% had catastrophic chronic conditions. Nearly 90% of children utilized preventive medical care in 2005. About 67.1% had a Medicaid-enrolled sibling and 55.5% had an adult in their household enrolled in Medicaid. Most children lived in a metropolitan area (55.2%) and 65.8% lived in a dental HPSA.

Bivariate statistics

The bivariate relationships between model covariates and exposure variables (each of the 10 chronic condition subgroups) are summarized in Tables 1 and 2. Even though most children were male, when there were statistically significant differences, larger proportions of children across the chronic condition subgroups were female. Across every subgroup, significantly larger proportions of children with the chronic condition utilized preventive medical care in 2005 than children without the specific chronic condition. There were no other consistent findings.

The bivariate relationships between model covariates and the primary outcome variable (any dental care use in 2006) as well as the secondary outcome measures (use of diagnostic, preventive, routine restorative, or complex restorative dental care) are summarized in Table 3. Significantly larger proportions of children who utilized each type of dental care were White. In addition, significantly larger proportions of children who utilized preventive medical care in 2005 subsequently utilized all types of dental care in 2006.

Unadjusted dental utilization in 2006

About 58.3% of Medicaid-enrolled children with chronic conditions used any dental care in 2006; 54.7% used diagnostic care; 49.5% used preventive care; 18.8% used routine restorative care; and 9.1% used complex restorative care (Table 4). Significantly lower proportions of children with catastrophic neurological conditions used all types of dental care, except for complex restorative care, for which there was no difference. Larger proportions of children with respiratory, ear/nose/throat, digestive, or musculoskeletal conditions used most types of dental care than did children with chronic conditions without these specific conditions. There was no difference in use across all types of dental care for children with and without diabetes or cardiovascular conditions. Utilization was inconsistent across the different types of dental care between children with hematologic, endocrine, or craniofacial conditions and those children with chronic conditions but without these specific conditions.

In regard to other variables, significantly larger proportions of children ages 3-7 and 8-12 years utilized all types of dental care, except routine restorative care, than did children ages 13-14 years. Compared to children with the least severe chronic conditions (episodic), larger proportions of children with a malignancy (the second highest severity group) utilized all types of dental care except for preventive dental care, whereas children with a catastrophic condition (the most severe chronic condition group) utilized all types of dental care at the lowest rates. Larger proportions of children who utilized preventive medical care in 2005 subsequently utilized dental care in 2006. Significantly larger proportions of children with a Medicaid-enrolled sibling utilized all types of dental care, whereas this relationship was statistically significant only for routine and complex restorative dental care for children with a Medicaid-enrolled adult in the household. Significantly larger proportions of children in metropolitan areas utilized any, preventive, or diagnostic dental care; there were no significant differences by rurality for routine and complex restorative dental care. Finally, children who lived in a dental HPSA were less likely to utilize all types of dental care, though these differences were significant only for any dental care and for preventive dental care.

Poisson regression models

The statistical interaction between the two immediate factors, having a Medicaid-enrolled sibling or adult in the household, was statistically significant for routine restorative dental care use across all 10 chronic condition subgroups and for preventive dental care use for some of the chronic condition subgroups. The interaction term was included only in the models in which it was statistically significant. Covariate-adjusted relative risks (RR) corresponding to the 10 chronic condition subgroups are summarized in Table 5. Relative risks for other model covariates are available upon request.

Our findings are organized into four groupings. First, children with catastrophic neurological conditions were significantly less likely (RR: 0.48 to 0.73) to use most types of dental care than other children with chronic conditions but without a catastrophic neurological condition.
There was no difference in complex restorative dental care use (p = .56). Children with an endocrine condition were slightly less likely to use preventive care and routine restorative dental care than children with chronic conditions but without an endocrine condition (p = .049 and p < .0001, respectively). Children with craniofacial conditions were also less likely to use routine restorative care, and children with hematologic conditions were less likely to utilize complex restorative dental care, than children with chronic conditions who did not have these particular conditions (p = .02 and p = .03, respectively). In other words, when there were differences, children with catastrophic neurological, endocrine, craniofacial, or hematologic conditions were less likely to utilize dental care than children with chronic conditions but without these specific conditions.

Second, children with respiratory or musculoskeletal conditions were significantly more likely to use most types of dental care than other children with chronic conditions but without these specific conditions (RR: 1.06 to 1.13 and 1.06 to 1.08, respectively). Among children with chronic conditions, there was no difference in complex restorative dental care use for children with and without musculoskeletal conditions. Children with ear/nose/throat conditions were significantly more likely to use diagnostic, preventive, and complex restorative dental care, and there was no difference in use of any or routine restorative dental care. Children with digestive conditions were significantly more likely to use any dental care or diagnostic dental care than other children with chronic conditions without these specific conditions (RR: 1.03 for both types of dental care). There was no difference in use of the other three types of dental care. In other words, when there were differences, children with respiratory, musculoskeletal, ear/nose/throat, or digestive conditions were significantly more likely to utilize dental care than other children with chronic conditions who did not have these particular chronic conditions.

Third, there was no significant difference across all five outcome measures between children with diabetes or cardiovascular conditions and children with other types of chronic conditions but without these specific conditions.

Fourth, in regard to other model covariates, there are three sets of findings (data not shown). In the any, diagnostic, and preventive dental care use models (Models A), children in the following subgroups were significantly less likely to use dental care: children ages 13-14 (referent = ages 3-7); males; Blacks (referent = Whites); children with the most severe chronic health conditions; children who did not use preventive medical care in 2005; children without a Medicaid-enrolled sibling; those living in urban areas (referent = metropolitan); and those living in a dental HPSA. In the routine restorative dental care use models (Model B), findings were similar to those from Models A except that children ages 13-14 were more likely to use routine restorative care. There were no significant differences in the risk ratios of routine restorative dental care use across sex and dental HPSA status.
For the complex restorative dental care use models (Model C), findings were similar to those from Models A except that there were no significant differences across sex, whether the child used preventive medical care in 2005, whether the child had a Medicaid-enrolled sibling, or whether the child lived in a dental HPSA.

Discussion

To our knowledge, this is the first study to examine dental care use for Medicaid-enrolled children with chronic conditions with an emphasis on body system-based subgroups. We compared dental care use for Medicaid-enrolled children across 10 chronic condition subgroups. Collectively, our data support two findings that are new to the dental health services literature: (1) dental care use is heterogeneous across chronic condition subgroups; and (2) the determinants of dental care use vary across different types of dental care.

There were three main findings in regard to specific chronic conditions. The first is that, when there were differences, children in certain subgroups (e.g., catastrophic neurologic, endocrine, craniofacial, hematologic conditions) were significantly less likely to use dental care than other children with chronic conditions who did not have these particular conditions. Children with these chronic conditions may be at the greatest risk for disparities in dental care use. There are two possible explanations. Many of these children have developmental or acquired cognitive deficits and may have difficulty cooperating during dental visits. Dentists could be less willing to treat these children because of inadequate training [22]. Another explanation is that caregivers may have high levels of stress associated with managing the child's other systemic health care needs [23], which pushes oral health down the priority list. It is particularly worrisome that children with catastrophic neurologic conditions were significantly less likely to use preventive dental care. This finding has oral health-related implications, especially if the child has a poor diet or behavioral comorbidities that make it difficult for caregivers to brush the child's teeth regularly with fluoridated toothpaste. These findings appear to conflict with previous work suggesting that Medicaid-enrolled children with intellectual or developmental disabilities are equally as likely to use preventive dental care as those without [13]. A possible explanation for this inconsistency is that children with intellectual or developmental disabilities present with varying degrees of disability. The previous study did not control for this factor while the current study did.

The second finding is that children with respiratory, musculoskeletal, ear/nose/throat, or digestive conditions were more likely to use most types of dental care compared to children with other types of chronic conditions but without these specific conditions. Children with respiratory conditions (e.g., asthma, cystic fibrosis) may require medications or have enamel defects, factors that increase their risk for dental caries [24-26]. Children with musculoskeletal conditions (e.g., arthritis) are also at risk for oral health problems [27]. Children with ear/nose/throat conditions undergo procedures involving the mouth and oral structures, making it plausible that these children receive team-based medical care. These factors may increase caregiver awareness of the importance of dental visits or the likelihood of dental referrals by physicians, though there are no published data to support these hypotheses.
Studies from the medical literature report low adherence to inhaler medication for Medicaid-enrolled children with asthma because of caregiver misunderstanding of medications, which makes the former explanation unlikely [28]. We recognize that the risk ratios from our models are small (ranging from 1.02 to 1.13). However, at a population level, small risk ratios are meaningful, especially when the prevalence of a particular chronic condition is high [29]. The prevalence of respiratory conditions was over 80%, and over 40% of children in our study had a musculoskeletal or ear/nose/throat condition. Identifying the mechanisms underlying higher rates of dental use for children with specific types of chronic conditions in future studies may provide insight into how to improve utilization rates for children in other chronic condition subgroups that are not as likely to use dental care.

The third finding is that there was no difference in dental use for children with diabetes or cardiovascular conditions compared to children with other chronic conditions but without these conditions. Non-significant differences in dental care use may not be a clinically significant problem as long as children are receiving appropriate dental care. However, this is unlikely, especially because these chronic conditions have oral health-related sequelae that make dental visits important. For instance, the link between pediatric diabetes and periodontal disease [30,31] underscores the importance of regular maintenance and monitoring therapy that might require additional dental visits for children with diabetes. Future studies should investigate whether no differences in dental care use across subgroups actually means that children in these subgroups are receiving appropriate dental care.

In addition to the findings related to specific chronic conditions, we found that children who used preventive medical care were significantly more likely to use all types of dental care, except for complex restorative care. While there is potential for selection bias [32], this finding reinforces the importance of strengthening the clinical ties between pediatric medicine and dentistry [33]. The mechanisms between use of medical and dental care have not yet been elucidated and require further investigation.

In terms of the research significance of our study, any dental care use, a standard measure of access to dental care services, may be a more appropriate proxy for use of diagnostic or preventive dental care services than for routine or complex restorative dental care. When developing oral health interventions and policies, it may be most effective for planners to specify the particular types of dental care whose use the program seeks to improve, taking into consideration the differential determinants of dental care. This maximizes the likelihood that children have appropriate access to preventive as well as restorative dental care when needed [34].

As with all studies, our investigation has strengths and limitations. The primary strength is that we used validated methods, 3M Clinical Risk Groups, to identify children with chronic conditions and to adjust for the severity of those chronic conditions in the models. In addition, we adopted an a priori conceptual model that helped to guide model covariate selection. Finally, we examined use of different types of dental care to obtain a more complete view of dental utilization for children with chronic conditions.
The major limitation is the lack of clinical oral health data, which precluded us from determining whether the observed utilization rates were appropriate. This limitation can be addressed in future studies by collecting clinical data and linking these data with dental claims data. Another limitation is that we measured dental use during a single calendar year, which provides a snapshot rather than a longitudinal perspective on dental use. Future studies might examine utilization trends over time across the different chronic condition subgroups. Finally, because this was an observational study, there is potential for residual confounding, which we attempted to minimize by adopting a conceptual model that we used to develop our empirical model. In the future, there is the potential to link claims data with survey data that might be used to collect social and behavioral measures that potentially confound the relationship between chronic conditions and dental use.

Conclusion

The goal of pediatric dentistry is to ensure optimal oral health for all children, including children with chronic conditions. An important component of optimal oral health is regular visits to the dentist for preventive dental care and restorative care when needed. Our findings suggest heterogeneous dental utilization patterns for children across different chronic condition subgroups. It is important to note that nearly 42% of children in our study did not utilize any dental care in 2006, which highlights the barriers to dental care that many Medicaid-enrolled children with chronic conditions encounter. Some of these barriers may be system-level (e.g., low reimbursement to dentists for treatment) whereas others are environmental/social (e.g., lack of dental offices in areas where Medicaid enrollees live) or behavioral (e.g., dentists' unwillingness to see Medicaid patients or symptom-driven dental utilization patterns by patients). The next step for researchers is to identify the social and behavioral determinants of particular types of dental care use that exist at these various levels (e.g., ascribed, proximal, immediate, intermediate, distal). This information can then be used to develop and test multilevel clinical interventions aimed at improving dental utilization for specific subgroups of children with chronic conditions who exhibit the greatest disparities in dental care use.
Couple Stress Hybrid Nanofluid Flow through a Converging-Diverging Channel

This research work is aimed at scrutinizing the mathematical model for the hybrid nanofluid flow in a converging and diverging channel. Titanium dioxide and silver are considered as solid nanoparticles, while blood is considered as the base solvent. The couple stress fluid model is used to describe the blood flow. The radiation terminology is also included in the energy equation for the sustainability of drug delivery. The aim is to link the recent study with the applications of drug delivery. It is well known from the available literature that the combination of TiO2 with any other metal can destroy more cancer cells than TiO2 alone. The governing equations are transformed into a system of nonlinear coupled equations using similarity variables. The Homotopy Analysis Method (HAM) analytical approach is applied to obtain the preferred solution. The influence of the modeled parameters has been calculated and displayed. The resistance to the hybrid nanofluid flow and the wall shear stress grow as the couple stress parameter rises, which improves the stability of the base fluid (blood). The percentage (%) increase in the heat transfer rate with the variation of nanoparticle volume fraction is also calculated numerically and discussed.

Introduction

The flow of fluids in converging/diverging channels has particularly significant applications in science and technology, such as flows in cavities and channels. Converging/diverging channels also relate to the blood flow in arteries and capillaries. Stretching converging and diverging channels are also very significant to the blood flow due to the occurrence of stress effects. Researchers have applied the same model to other industrial applications. Sheikholeslami et al. [1] demonstrated the effect of nanoparticles considering Jeffery fluid. Turkyilmazoglu [2], Dogonchi and Ganji [3], Xia et al. [4], and Mishra et al. [5] have considered the same model for the fluid flow using the concept of shrinking/stretching in converging/diverging channels.

Nanotechnology has refined and expanded the horizons of today's scientific world owing to its unexpected results in the fields of energy, biotechnology, drugs, and therapeutics. It has also been demonstrated that stenosis is a damaging and potentially fatal disease, so researchers have attempted to eliminate the problem using nanotechnology. Researchers believe that nanotechnology can deliver innovation in treating these kinds of problems, since nanoparticles can pass through tissues and cells. Following that, there is a noticeable increase in research related to the advanced progress of nanoparticles in drugs [6-9]. Shahzadi and Bilal [10] pioneered work on nanoparticles by revealing their dynamic and abnormal properties. Nadeem and Ijaz [11] described the use of nanoparticles to transport blood through a stenosed artery with a permeable wall. Ellahi et al. [12] reported blood flow in arteries consisting of the composite when nanoparticles were used. Nadeem and Ijaz [13] studied the effect of nanoparticles on stenotic artery hemodynamics and found them to be very helpful in reducing wall pressure with a shear rate. Hybrid nanofluids, in which two or more kinds of nanoparticles with different thermophysical properties are dispersed, have attracted researchers because they are widely used in the fields of energy and medicine [14].
Bionanotechnology, an open and innovative horizon in medicine, is one of the most auspicious applications of hybrid fluids. Numerous studies have demonstrated the effectiveness of nanoparticles in tumor targeting, diagnosis, and therapy. It should be noted that nanoparticles have eliminated some of the shortcomings of traditional chemotherapy [15]. Liu et al. [16] investigated the use of Pt/TiO2 and Au/TiO2 nanocomposites, which are useful for cancer cell treatment. It was observed that the combination of TiO2 with any other metal can destroy more cancer cells than TiO2 alone. Silver has a wide range of biomedical uses due to its exclusive properties. Products containing silver are usually used for antimicrobial activity against a broad spectrum of microorganisms. Moreover, experimental data suggest that Ag nanoparticles are a more ecological and biocompatible substitute for standard anticancer medicines [17].

Blood, the most important biological fluid, is a liquid composed of various cell types suspended in a matrix of aqueous fluid (the plasma). It should be noted that red blood cells in plasma contribute to rotary motion in the occurrence of a velocity gradient. Body tissues have an angular gyration moment as well as an angular orbital moment. As a result, blood may be assumed to be a non-Newtonian fluid with a constant density. Stokes' theory is one of several polar fluid theories that take this into consideration [18]. Couple stress fluid applications in biological problems are gaining popularity, and they are critical from both a theoretical and a practical standpoint. Blood flow can be controlled with adequate couple stress. The theory of couple stress was first introduced to blood flow by Stokes [19], who claimed that blood flows very reasonably in the vessels due to the occurrence of couple stresses. Devakar and Iyengar [20] suggested using a couple stress term to regulate blood flow through the human system. The idea was further extended by Devakar and Iyengar using isothermal conditions, and they found exact solutions. Recently, Saeed et al. [21], Ahmad et al. [22], and Gul et al. [23,24] have used the couple stress fluid terminology in hybrid nanofluids for drug transport and medication. They have also studied the heat transfer enhancement effect on the blood flow in various geometries.

In the light of the above discussion, the novelty of this study is highlighted as follows:

(i) According to the best of the authors' knowledge, no one has tried to investigate the flow through a converging/diverging stretchable/shrinkable channel with blood as the base fluid and TiO2-Ag as nanoparticles
(ii) This article examines a suitable background of couple stress hybrid nanofluid flow through converging/diverging stretchable/shrinkable channels
(iii) Heat absorption/omission and thermal radiation terminologies also strengthen the novelty of the work
(iv) The system of equations is then analytically solved by HAM
(v) The statistical analysis is also performed and presented through bar charts

Formulation

Assume the steady, laminar, incompressible, couple stress (TiO2-Ag) hybrid nanofluid, where the fluid motion is influenced by thermal radiation and a heat source or sink, between two contracting/expanding plane walls, such that 2α is the angle between them.
The walls of the channel are also assumed to be stretchable along the radial direction. Here, u = u(r, θ) and s stand for the velocity of the hybrid nanofluid and the extending/contracting phenomenon, respectively. The conditions (α > 0, α < 0) are used to show that the channel is divergent and convergent, correspondingly. The velocity of the fluid motion is a function of both (r, θ). The couple stress terminology is imposed on the flow field, whereas the other assumptions of [3-5] are used; the basic constituent dimensional equations of the hybrid nanofluid are taken into account. The pressure of the fluid, the electromagnetic field, and the radiative heat flux are presented by P, B0, q_{r,rad} and q_{θ,rad}. Under the Rosseland approximation, the radiation terms are further written as

q_{r,rad} = -(4σ*/3k_nf*) ∂T⁴/∂r,   q_{θ,rad} = -(4σ*/(3k_nf* r)) ∂T⁴/∂θ,   (7)

where k_nf* and σ* are the mean absorption coefficient and the Stefan-Boltzmann constant. Putting the values of equation (7) into equation (5), we obtain the reduced energy equation. In the above equations, η0 is the couple stress term; also, ρ_hnf, μ_hnf, (ρCp)_hnf, and k_hnf represent the density, viscosity, heat capacity, and thermal conductivity of the hybrid nanofluid, where the subscript hnf stands for hybrid nanofluid.

2.1. Properties of the Materials. Initially, titanium dioxide nanoparticles are dispersed in the bloodstream (base fluid) to produce a mono nanofluid. Silver is then distributed as an additional nanoparticle to form the hybrid nanofluid. Here, TiO2 represents the titanium dioxide nanomaterial, Ag the silver nanoparticles, and the subscript f describes blood (the base fluid). In Tables 1 and 2 (Table 1: properties of the TiO2 and blood nanofluid [18]), ϕ1 and ϕ2 state the volume fractions of TiO2 and Ag nanoparticles, where ϕ1 = ϕ2 = 0 refers to the base fluid.

2.2. Initial and Boundary Conditions. The auxiliary conditions at the boundaries are given in equation (9).

2.3. Introduction of Nondimensional Variables. In the case of radial flow, equation (1) reduces to equation (10), and the nondimensional transformation is defined in equation (11). The use of (10) and (11) and the thermophysical properties alters equations (3)-(5) into their simplified forms. The simplified form of the physical conditions is stated in equation (14) [18]. The same transformation is applied to equation (16), and its simplified form is attained.
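Although the authors solve the full couple stress hybrid nanofluid system with HAM, the similarity reduction just described can be illustrated with the classical Newtonian Jeffery-Hamel problem, F''' + 2αRe·F·F' + 4α²F' = 0 with F(0) = 1, F'(0) = 0, F(1) = 0, where η = θ/α. The sketch below solves this reduced equation numerically with scipy's solve_bvp as a sanity check; it is a simplified stand-in that omits the couple stress, nanoparticle, and radiation terms of the paper's model, and the parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_bvp

alpha = np.deg2rad(5.0)   # channel half-angle (diverging: alpha > 0)
Re = 50.0                 # Reynolds number

def rhs(eta, y):
    # y[0] = F, y[1] = F', y[2] = F''  (classical Jeffery-Hamel ODE)
    F, dF, d2F = y
    d3F = -2.0 * alpha * Re * F * dF - 4.0 * alpha**2 * dF
    return np.vstack([dF, d2F, d3F])

def bc(ya, yb):
    # F(0) = 1 (centerline), F'(0) = 0 (symmetry), F(1) = 0 (no slip at wall)
    return np.array([ya[0] - 1.0, ya[1], yb[0]])

eta = np.linspace(0.0, 1.0, 101)
y0 = np.vstack([1.0 - eta**2, -2.0 * eta, -2.0 * np.ones_like(eta)])  # parabolic guess
sol = solve_bvp(rhs, bc, eta, y0)
print("converged:", sol.status == 0, "| F'(1) (wall shear proxy):", sol.y[1, -1])
```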
Solution Methodology

The series solution is one of the valued methods for handling nonlinear problems, which usually arise in the fields of science and engineering. HAM is one of the latest fast-convergence techniques and is frequently used in the solution of nonlinear and coupled equations. BVPh 1.0 and BVPh 2.0 are the latest packages of HAM that enhance the convergence of the proposed problems. These packages are very helpful for rapid convergence, and one can easily use the BVPh 2.0 package up to the 100th iteration. The idea of HAM was first introduced by Liao [25] and was further improved by the same author through the new packages [26]. These packages are frequently used, as in [27-32]. The problem (12)-(18) was solved by the HAM-BVPh 2.0 technique, with iterations carried out up to the 30th order. A trial (initial) solution is required for the HAM solution. The zeroth-order solution is obtained first; equations (12)-(14) are then set up under the planned packages, the sum of the two components in the form of square residual errors is evaluated, and the numerical results for the convergence-control parameters are obtained. The range of the convergence-control parameters is used to find the physical and numerical results.

Results and Discussion

The flow of the blood-based hybrid nanofluid consisting of TiO2 and Ag has been considered in the converging and diverging channel. The heat transfer mechanism and medication are the main purposes of the proposed model. The main findings of the obtained results are shown physically and numerically. The geometry of the problem and the convergence-control sketches are demonstrated in Figures 1(a) and 1(b) and Tables 1-3. Table 4 shows the assessment of the current work against the available literature; the close agreement authenticates the validity of the problem.

The drag force on the upper and lower walls is calculated for the embedded parameters and demonstrated in Table 5. The accumulative growth in the values of the constraints is used to keep within the convergent range of the proposed problem. The drag force rises with the increment in these parameters (ϕ1, ϕ2, Re, and k*) for both nanofluids and hybrid nanofluids. The calculated increase shows that the resistive force is more effective when using the hybrid nanofluid TiO2 + Ag at both the lower and upper walls of the channel. Furthermore, the friction force works more efficiently in the converging channel than in the diverging one.

The heat transfer rate is calculated numerically using the embedded parameters, and the results are exhibited in Table 6. The augmentation in the values of the parameters Rd, ϕ1, and ϕ2 ultimately improves the heat transfer rate. The attained results show that the heat transfer rate is faster when using the TiO2 + Ag hybrid nanofluid. The heat transfer rate stimulates fluid motion by controlling the viscous effect. The TiO2 material works as the treatment material in cancer therapy, while the stability in the blood is controlled through silver. The percentage (%) increase in the heat transfer rate versus the nanoparticle volume fraction has been calculated and displayed in Table 7. The hybrid nanofluid improves the heat transfer analysis as compared to other traditional fluids. In each case, the accumulative growth provides an increasing effect, and this improvement is more effective using the hybrid nanofluids.

The obtained results are compared with the available literature [3-5] and displayed in Figures 7(a) and 7(b), considering the diverging and converging cases of the channel. Close agreement has been achieved for the common parameter Re. The influence of the nanoparticle volume fraction on the skin friction is shown in Figures 7(c) and 7(d) for both cases. The augmentation in the values of ϕ1, ϕ2 improves the resistive force, raising the drag force at the upper and lower walls. The influence is relatively strong using the hybrid nanofluids.
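The sensitivity of the results to ϕ1 and ϕ2 can be reproduced from the mixture correlations commonly used for hybrid nanofluids (Brinkman viscosity and a two-step Maxwell conductivity model). The sketch below is one standard implementation of those correlations, not necessarily the exact relations used by the authors, and the property values are representative figures for blood, TiO2 and Ag rather than the exact entries of the paper's Tables 1 and 2.

```python
def brinkman_viscosity(mu_f, phi1, phi2):
    # Effective viscosity of the hybrid nanofluid (Brinkman model, applied twice).
    return mu_f / ((1.0 - phi1) ** 2.5 * (1.0 - phi2) ** 2.5)

def maxwell_k(k_f, k_s, phi):
    # Maxwell model for the thermal conductivity of a single-particle suspension.
    return k_f * (k_s + 2 * k_f - 2 * phi * (k_f - k_s)) / (k_s + 2 * k_f + phi * (k_f - k_s))

def hybrid_k(k_f, k_s1, k_s2, phi1, phi2):
    # Two-step Maxwell: disperse TiO2 in blood first, then Ag in the nanofluid.
    k_nf = maxwell_k(k_f, k_s1, phi1)
    return maxwell_k(k_nf, k_s2, phi2)

# Representative (not the paper's tabulated) properties, SI units.
k_blood, k_tio2, k_ag = 0.492, 8.954, 429.0   # W/(m K)
mu_blood = 3.45e-3                            # Pa s

for phi in (0.0, 0.01, 0.02, 0.03):
    k = hybrid_k(k_blood, k_tio2, k_ag, phi, phi)
    mu = brinkman_viscosity(mu_blood, phi, phi)
    print(f"phi1=phi2={phi:.2f}: k_hnf/k_f = {k / k_blood:.3f}, mu_hnf/mu_f = {mu / mu_blood:.3f}")
```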
The percentage increase in the heat transfer rate is revealed in Figures 8(a)-8(d). The values of the nanoparticle volume fraction are used up to 3%, as (ϕ1, ϕ2 = 0.0, 0.01, 0.02, 0.03). The comparative analysis of the nanofluid and hybrid nanofluid is shown in Figures 8(a) and 8(c) for the diverging and converging cases of the channel, while the % analysis has been performed in Figures 8(b) and 8(d) for the same cases, respectively. The % increase is more pronounced using the hybrid nanofluids in both α > 0 and α < 0.

Conclusions

The current article explores the blood flow across a converging/diverging channel with stretchable/shrinkable walls with couple stress for the application of drug delivery. The consequences of the converging/diverging parameter, couple stress parameter, and solid nanoparticles are incorporated. To the best of our knowledge and belief, the converging/diverging channel including blood as a base fluid does not exist in the existing literature. Furthermore, the work is also extended using the Ag and TiO2 hybrid nanofluid. Couple stress terminologies are also used as a novelty in the current problem. The key conclusions of the existing study are as follows:

(i) The rising values of solid nanoparticles ϕ1, ϕ2 enhance the energy transmission rate, and the impact is relatively larger in the case of the hybrid nanofluid
(ii) The velocity field declines with the accumulative values of the parameters ϕ1, ϕ2, and Re
(iii) The couple stress parameter k* has a significant role in blood flow analysis and declines the hybrid nanofluid motion
(iv) TiO2 + Ag hybrid nanofluids have an important role in the Escherichia coli culture to evaluate their antibacterial strength
(v) The % analysis shows that hybrid nanofluids are more efficient for heat transfer analysis
(vi) The pH values improve with the increment in heat transfer, which is why the purpose of the recent study is to use the TiO2 + Ag hybrid nanofluids for medication

Data Availability

All the relevant data exist in the manuscript.

Conflicts of Interest

The authors declare that they have no conflict of interest.
Boron isotopic signatures of melt inclusions from North Iceland reveal recycled material in the Icelandic mantle source

Trace element and volatile heterogeneity in the Earth's mantle is influenced by the recycling of oceanic lithosphere through subduction. Oceanic island basalts commonly have high concentrations of volatiles compared to mid-ocean ridge basalts, but the extent to which this enrichment is linked to recycled mantle domains remains unclear. Boron is an ideal tracer of recycled subducted material, since only a small percentage of a recycled component is required to modify the bulk δ11B of the source mantle. Boron isotopic compositions of primary melts thus have potential to trace the fate of recycled subducted material in the deep mantle, and to constrain the lengthscales of lithologic and compositional heterogeneities in diverse tectonic settings. We present new measurements of volatiles, light elements and boron isotopic ratios in basaltic glasses and melt inclusions that sample the mantle at two endmember spatial scales. Submarine glasses from the Reykjanes Ridge sample long-wavelength mantle heterogeneity on the broad scale of the Iceland plume. Crystal-hosted melt inclusions from the Askja and Bárðarbunga volcanic systems in North Iceland sample short-wavelength mantle heterogeneity close to the plume centre. The Reykjanes Ridge glasses record only very weak along-ridge enrichment in B content approaching Iceland, and there is no systematic variability in δ11B along the entire ridge segment. These observations constrain ambient Reykjanes Ridge mantle to have a δ11B of −6.1‰ (2SD = 1.5‰, 2SE

INTRODUCTION

The chemical flux of volatile elements from the Earth's interior to its surface environments is governed by partial melting of the mantle followed by magma ascent and eruption. Volatiles are returned to the deep mantle through the tectonic recycling of oceanic lithosphere at subduction zones. Over billion-year residence times in the mantle, this subducted material is stretched and thinned so that its geochemical signature is attenuated, creating lithologic, isotopic and volatile element heterogeneities on a range of lengthscales (Allègre and Turcotte, 1986; Kellogg and Turcotte, 1987). Melting of recycled oceanic lithosphere has long been recognised as generating the compositional variability observed in ocean island and mid-ocean ridge basalts (OIB and MORB), through the lithophile radiogenic isotopic compositions (Sr, Nd, Pb; e.g. White and Hofmann, 1982; White, 1985; Zindler and Hart, 1986; Jackson et al., 2012; Stracke, 2012; White, 2010, 2015) and major and trace element systematics (e.g. Langmuir and Hanson, 1980; Weaver, 1991; Hirschmann and Stolper, 1996; Prytulak and Elliott, 2007; Sobolev et al., 2007; Shorttle and Maclennan, 2011) of erupted basalts. Ocean island basalts commonly have high concentrations of volatiles in comparison to MORB (e.g. Schilling et al., 1980); however, the extent to which this volatile enrichment is linked to recycled mantle domains remains unclear (e.g. Kendrick et al., 2015, and references therein).
Boron is incompatible in silicate minerals (Brenan et al., 1998), and its bulk partitioning behaviour between peridotite minerals and basaltic melt is comparable to that of Pr (Marschall et al., 2017). The depleted mantle has a very low B concentration, and its near-uniform boron isotopic signature of δ11B = −7.1 ± 0.9‰ (Marschall et al., 2017) is not significantly fractionated during melting or crystallization. In contrast, boron is concentrated in surface reservoirs such as seawater, and is both enriched and isotopically fractionated in sediments and hydrothermally altered oceanic crust and lithospheric mantle (Marschall, 2018, and references therein). Enriched and fractionated lithologies returned to the mantle in subducting slabs include: marine sediments ([B] = 1 to >100 µg/g, δ11B +2 to +26‰); continental sediments ([B] = 50-150 µg/g, δ11B −13 to −8‰); altered oceanic crust ([B] = 10-90 µg/g, δ11B 0 to +18‰); and oceanic serpentinites ([B] 10-90 µg/g, δ11B +7 to +40‰) (Vils et al., 2009; De Hoog and Savov, 2018). Boron concentrations and boron isotopic compositions are gradually lowered during progressive dehydration of the subducting slab (e.g. Konrad-Schmolke and Halama, 2014). Nevertheless, the isotopic contrast between potential recycled lithologies and depleted mantle means that only a small percentage of a recycled crustal component may be required to modify significantly the boron isotopic signature of the mantle, making boron isotopes potentially a sensitive tracer of recycled subducted material (De Hoog and Savov, 2018). Boron isotopic compositions of primary melts thus have potential both to trace the fate of recycled subducted material in the deep mantle, and to constrain the lengthscales of lithologic and compositional heterogeneities in diverse tectonic settings.

A number of studies have used B contents and δ11B ratios of primitive basalt whole-rocks and glasses to characterize recycled components in OIB mantle sources (Ryan et al., 1996; Chaussidon and Jambon, 1994; Chaussidon and Marty, 1995; Tanaka and Nakamura, 2005). However, a major challenge in determining the B isotopic compositions of diverse mantle reservoirs is the propensity of ascending melts to assimilate altered crustal material en route to the surface. The isotopic contrast between primary mantle melts, geothermal fluids and hydrothermally altered crustal rocks means that bulk δ11B in basalts is highly sensitive to even small degrees (<3%) of crustal assimilation.

Crystal-hosted melt inclusions offer the possibility of accessing unmodified melt compositions. Primitive inclusions trapped during the earliest stages of fractional crystallization have the highest likelihood of preserving primary mantle-derived elemental concentrations and isotopic signatures (Gurenko and Chaussidon, 1997; Kobayashi et al., 2004; Maclennan, 2008; Walowski et al., 2019). Furthermore, the host mineral shields the melt inclusion from any further effects of crustal processing, such that the inclusion records the B abundance and δ11B of the surrounding melt at the time of trapping. Melt inclusion suites trapped over a long crystallization interval therefore offer the potential to track chemical signatures of crustal contamination as crystallization proceeds.
Iceland is an ideal natural laboratory for investigating mantle heterogeneity. Previous workers have inferred the presence of recycled oceanic crust in the Icelandic mantle on the basis of major element, incompatible trace element and radiogenic and stable isotope compositions of erupted basalts (e.g. Fitton et al., 1997; Chauvel and Hémond, 2000; Skovgaard et al., 2001; Kokfelt et al., 2006; Shorttle and Maclennan, 2011; Koornneef et al., 2012). Furthermore, mantle heterogeneity is present both on the 100 km lengthscale of Iceland's active neovolcanic zones (e.g. Wood et al., 1979; Zindler et al., 1979; Hanan and Schilling, 1997; Stracke et al., 2003; Thirlwall et al., 2004; Koornneef et al., 2012; Shorttle et al., 2013) and on the lengthscale of melt supply to a single eruption (e.g. Gurenko and Chaussidon, 1995; Maclennan et al., 2007; Maclennan, 2008; Halldorsson et al., 2008; Winpenny and Maclennan, 2011; Neave et al., 2013). The boron isotopic signature of the Icelandic mantle has previously been estimated at −11.3 ± 3.8‰ (2SD; Gurenko and Chaussidon, 1997), based on measurements of primitive olivine-hosted melt inclusions from Miðfell and Maelifell in the Western Volcanic Zone (WVZ), and from the Reykjanes Peninsula. The WVZ inclusions are typified by low ratios of incompatible trace elements such as La/Yb, which could suggest an association between isotopically light boron and a relatively depleted mantle source. However, it is not known whether low δ11B signatures are typical of Icelandic basalts, nor whether low δ11B represents a recycled lithology that is heterogeneously distributed in the Icelandic mantle. Brounce et al. (2012) assumed a δ11B of −7.8‰ for the Icelandic mantle, i.e. within the −7.1 ± 0.9‰ proposed for depleted MORB mantle (DMM; Marschall et al., 2017), based on the least negative δ11B obtained in their study of plagioclase-hosted melt inclusions from the AD 1783 Laki fissure eruption. However, this value was obtained from a melt inclusion containing just 6.09 wt.% MgO, sufficiently evolved that the melt δ11B may already have been modified through crustal assimilation. The nature and lengthscale of boron isotopic heterogeneity in the Icelandic mantle therefore remain poorly characterized.

A further consideration is that comparisons of δ11B values obtained at different laboratories and using various analytical techniques must take into account analytical limitations. The past decade has seen significant developments in the characterization of silicate reference materials for boron isotope analysis, as well as improvements to analytical protocols (Marschall, 2018, and references therein). It has been suggested that measured differences in δ11B from the 1980s and 1990s are not likely to be significant below the 5‰ level (Marschall, 2018), particularly when compared with more recently acquired data. Therefore, a fundamental outstanding question is whether the existence of an isotopically light component in the Icelandic mantle with δ11B around −11‰ (Gurenko and Chaussidon, 1997) can be verified with new high-precision analyses.
In this work we present new measurements of volatiles (H2O, CO2, S, F, Cl), light elements (B, Li), and boron isotopic ratios in two sample suites that sample the mantle at two endmember spatial scales. First, we have studied a suite of olivine- and plagioclase-hosted melt inclusions and glasses from North Iceland. These samples have previously been analysed for their major, trace element and oxygen isotopic compositions (Hartley et al., 2012; Hartley and Thordarson, 2013; Hartley et al., 2013). Importantly, the oxygen isotopic signatures of the most primitive melt inclusions reflect a primitive mantle-like component with δ18O of +5.2 ± 0.2‰, whereas more evolved melt inclusions and matrix glasses have lower δ18O values that reflect progressive assimilation of low-δ18O altered basaltic crust. This sample suite is therefore ideal for identifying and characterizing both mantle-derived and assimilation-modified δ11B signatures on a single-eruption and rift zone scale. We also present new B, Li and boron isotope data for a suite of basalt glasses from the Reykjanes Ridge south of Iceland (Murton, 1995). Previous studies of elemental, isotopic and redox geochemistry in these samples (Murton et al., 2002; Nichols et al., 2002; Shorttle et al., 2015) reveal systematic long-wavelength mantle heterogeneity on the broad scale of the Iceland plume (Schilling, 1973): glasses recovered north of 61°N at radial distances <600 km from the putative plume centre record increasingly enriched and oxidised geochemical signatures, whereas samples collected ~1200-600 km from the plume provide a reference point for the boron isotopic composition of ambient Reykjanes Ridge mantle. With these datasets we examine the volatile, Li, B and boron isotopic heterogeneity in Icelandic primary melts, and determine the extent to which boron isotopic compositions in Icelandic basalts are controlled by crustal contamination. Our results offer insights into the contribution of deep recycled mantle material to melt production, and hence the lengthscales of volatile element heterogeneity across an ocean island.

SAMPLES AND ANALYTICAL METHODS

The samples from North Iceland selected for this study comprise basaltic tephra collected from a suite of eruptions located between the northern edge of Vatnajökull glacier and the central part of the Askja volcanic system (Fig. 1). Sample locations and eruption ages are summarized in Table S2.1. The Holuhraun samples discussed in this study were probably erupted between the 1860s and 1890s, and are geochemically similar to melts from the Bárðarbunga volcanic system (Hartley and Thordarson, 2013). These older lavas are now partly covered by the 2014-2015 Holuhraun lava flow field (Pedersen et al., 2017). All references to Holuhraun in this study refer to the older Holuhraun eruptions, unless otherwise stated. The samples from Askja central volcano comprise two basaltic tuff sequences located on the northeast and southwest shores of Öskjuvatn lake, erupted between 3.6 and ~3.0 ka BP; basaltic tephra erupted in January 1875 (denoted 1875-J) and March 1875 (denoted 1875-M); and three small eruptions from the early 20th century (c. 1910, 1921, and 1922-23). The most northerly samples were collected from the 1875 AD Nýjahraun fissure eruption, located 45-60 km north of Askja central volcano. Before performing the measurements described below, all samples were thoroughly cleaned to remove old gold and carbon coatings, and re-polished to remove analysis pits from previous ion probe measurements.

The Reykjanes Ridge samples comprise quenched basaltic glass from pillows or sheet flows, collected at radial distances of ~1100 to ~400 km from the Iceland plume centre (Murton, 1995; Murton et al., 2002).

Volatiles and light elements

Melt inclusion sulfur contents were measured alongside the major elements reported by Hartley and Thordarson (2013), using the Cameca SX-100 electron microprobe instrument at the University of Edinburgh. Precision and accuracy were monitored by repeat analyses of standards with known S concentrations, and are estimated to be better than ±3% and ±5% respectively.

Following electron microprobe analyses, a total of 165 unbreached inclusion-hosted vapour bubbles and 3 fluid inclusions from 23 different crystals were analysed by micro-Raman spectroscopy using a Horiba LabRam instrument at the University of Cambridge, following the method outlined by Hartley et al. (2014). Olivines typically host only 1-4 melt inclusions, whereas some plagioclases contain melt inclusion assemblages of 30 or more inclusions. The majority of analysed bubbles were hosted in inclusions that were not opened for geochemical analysis. Of those inclusions exposed at the sample surface, 21 had unbreached bubbles that were analysed by Raman spectroscopy, and two had breached bubbles that could not be analysed. It is possible that some of the exposed inclusions hosted bubbles that were completely removed during sample preparation prior to visual inspection.
The Reykjanes Ridge samples comprise quenched basaltic glass from pillows or sheet flows, collected at radial distances of $1100 to $400 km from the Iceland plume centre (Murton, 1995;Murton et al., 2002). Volatiles and light elements Melt inclusion sulfur contents were measured alongside the major elements reported by Hartley and Thordarson (2013), using the Cameca SX-100 electron microprobe instrument at the University of Edinburgh.Precision and accuracy were monitored by repeat analyses of standards with known S concentrations, and are estimated to be better than AE3% and AE5% respectively. Following electron microprobe analyses, a total of 165 unbreached inclusion-hosted vapour bubbles and 3 fluid inclusions from 23 different crystals were analysed by micro-Raman spectroscopy using a Horiba LabRam instrument at the University of Cambridge, following the method outlined by Hartley et al. (2014).Olivines typically host only 1-4 melt inclusions, whereas some plagioclases contain melt inclusion assemblages of 30 or more inclusions.The majority of analyzed bubbles were hosted in inclusions that were not opened for geochemical analysis.Of those inclusions exposed at the sample surface, 21 had unbreached bubbles that were analysed by Raman spectroscopy, and two had breached bubbles that could not be analysed.It is possible that some of the exposed inclusions hosted bubbles that were completely removed during sample preparation prior to visual inspection. Melt inclusion and bubble lengths and widths were measured from high-resolution photomicrographs taken using Zeiss AxioVision software.Inclusion and bubble volumes were then calculated assuming a regular ellipsoidal shape and that depth was equal to the shorter of the measured dimensions.The melt inclusions range from 5 to 300 lm in their longest dimension (average 54 lm).Bubbles had diameters of 1-60 lm (average 9 lm).In all but 13 of the bubble-bearing melt inclusions, the bubble occupied <5% of the inclusion volume (average 1.2 vol.%).Of the remaining inclusions, 12 had bubbles occupying between 5 and 13 vol.% of the inclusion, and one bubble comprised 42 vol.% of its host inclusion. The presence of CO 2 in fluid bubbles is verified by the presence of Fermi diad peaks at $1285 cm À1 and $1380 cm À1 in the Raman spectum.The Fermi diad spacing, D, was converted to fluid density using the equation of Kawakami et al. (2003).The fluid is assumed to be pure CO 2 since we did not detect any characteristic bands corresponding to other volatile species such as H 2 O, SO 2 or SO 2À 4 in any of the Raman spectra. Following Raman analyses, volatile (CO 2 , H 2 O, F, Cl) and light element (B, Li) concentrations in 74 melt inclusions and 31 matrix glasses were determined by secondary ion mass spectrometry (SIMS) using the Cameca ims-4f instrument at the Edinburgh Ion Microprobe Facility.CO 2 was measured first, with the instrument configured to a high mass resolution to resolve any interference by 24 Mg 2þ on the 12 C þ peak.The remaining elements were then measured in the same spots during a second round of analyses with the instrument configured to lower mass resolution.Precision and accuracy for CO 2 and H 2 O were monitored by repeat analyses of standards with known compositions (Shishkina et al., 2010) and were AE10.8% and AE10% for CO 2 , and AE8% and AE8% for H 2 O. 
The presence of CO2 in fluid bubbles is verified by the presence of Fermi diad peaks at ~1285 cm−1 and ~1380 cm−1 in the Raman spectrum. The Fermi diad spacing, Δ, was converted to fluid density using the equation of Kawakami et al. (2003). The fluid is assumed to be pure CO2, since we did not detect any characteristic bands corresponding to other volatile species such as H2O, SO2 or SO4²− in any of the Raman spectra.

Following Raman analyses, volatile (CO2, H2O, F, Cl) and light element (B, Li) concentrations in 74 melt inclusions and 31 matrix glasses were determined by secondary ion mass spectrometry (SIMS) using the Cameca ims-4f instrument at the Edinburgh Ion Microprobe Facility. CO2 was measured first, with the instrument configured to a high mass resolution to resolve any interference by ²⁴Mg²⁺ on the ¹²C⁺ peak. The remaining elements were then measured in the same spots during a second round of analyses with the instrument configured to lower mass resolution. Precision and accuracy for CO2 and H2O were monitored by repeat analyses of standards with known compositions (Shishkina et al., 2010) and were ±10.8% and ±10% for CO2, and ±8% and ±8% for H2O.

Precision and accuracy for light elements were monitored by repeat analyses of glass standards NIST-SRM610, GSA-1G, GSD-1G, and BCR2-G. Accuracy was typically better than ±5% for Li, ±6% for B, and ±20% for F and Cl. Average precision was as follows: Li, ±3%; B, ±4%; F, ±11%; Cl, ±22%. All errors are 2σ.

Boron and lithium concentrations were measured in 65 basalt glasses from the Reykjanes Ridge in a separate session under similar operating conditions. Each sample was measured twice or three times, and the results averaged.

Boron isotopes

Boron isotopic ratios in North Iceland samples were measured for 63 melt inclusions and 37 matrix glasses using the Cameca ims-1270 instrument at the Edinburgh Ion Microprobe Facility. Samples were re-polished to remove old analysis pits. Boron isotope analyses were then made in the same locations as the volatile and light elements, with the exception of 6 glasses that were not previously analysed. Five silicate glass standards with known boron isotopic ratios (GSD-1G, StHs6/80-G, GOR128-G, GOR132-G and BCR2-G) were measured at regular intervals during the session to assess precision and accuracy. Standard glass GSA-1G was mounted alongside the unknowns and analysed at regular intervals to monitor and correct for instrumental drift. The mean internal precision was 0.95 ± 0.47‰ (2SD) across the five glass standards. The external precision, or reproducibility, is estimated at 1.49‰ based on n = 19 measurements on GSD-1G. The propagated uncertainty on the correction for instrumental mass fractionation is equivalent to ±0.28‰. The reported total analytical uncertainties on the North Iceland unknowns take into account the internal precision (uncertainty on an individual analysis), the external precision, and the propagated uncertainty on the instrumental mass fractionation correction, and range from 1.7 to 6.1‰ (2SD; average 3.1‰). Full details of the error propagation calculation are provided as supplementary information.

Boron isotope ratios in 50 Reykjanes Ridge glasses were measured in a separate analytical session using similar operating conditions, divided into three sub-sessions based on minor differences in beam conditions. Precision and accuracy were monitored using the same set of standard glasses, with the exception of GOR132-G. Standard glass BCR-2G was mounted alongside the unknowns and measured at regular intervals to monitor instrumental drift, which was negligible. The mean internal precision was 0.95 ± 0.85‰ (2SD) across the four glass standards and three sessions. The external precision was assessed through repeat measurements on GSD-1G in each sub-session, and is 1.1‰ or better. Propagated uncertainty on the correction for instrumental mass fractionation is equivalent to 0.5, 0.4 and 0.2‰ for sub-sessions 1, 2, and 3. Each of the Reykjanes Ridge unknowns was measured at least five times. The total analytical uncertainty on individual measurements ranged from 2.1 to 7.9‰ (2SD; average 3.2‰). We have reported the δ11B values for the Reykjanes Ridge glasses as the average of n measurements, and uncertainties are reported as 2 standard errors of the mean, which range from 0.7 to 3.4‰ (average 1.5‰). Further details about the analytical methods and data processing are provided as supplementary material.
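The total analytical uncertainty quoted for each δ11B measurement combines the three independent terms named above. A minimal sketch of that combination, assuming simple addition in quadrature (the authors' full calculation is given in their supplement):

```python
import math

def total_uncertainty(internal_2sd, external_2sd, imf_2sd):
    """Combine internal precision, external reproducibility and the propagated
    instrumental mass fractionation (IMF) correction uncertainty, all as 2SD
    values in per mil, assuming the three terms are independent."""
    return math.sqrt(internal_2sd**2 + external_2sd**2 + imf_2sd**2)

# Session-typical values from the text (North Iceland session):
print(total_uncertainty(0.95, 1.49, 0.28))  # ~1.8 permil, near the quoted minimum of 1.7
```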
Summary of major elements and post-entrapment crystallization corrections

The major element compositions measured in glasses and melt inclusions from the Askja NE and SW tuff sequences, Nýjahraun and Holuhraun, are described in detail by Hartley and Thordarson (2013). The melt inclusion compositions were previously argued to be close to equilibrium with their host minerals, and minimally modified by post-entrapment crystallization (PEC). However, applying an empirical PEC correction similar to that of Neave et al. (2017) reveals that some plagioclase-hosted melt inclusions did experience substantial PEC prior to quenching. We added equilibrium plagioclase incrementally back into the inclusion until its Al2O3, FeO and MgO contents matched those of Icelandic tholeiitic glasses. Following this procedure, 58 out of 91 inclusions had a plagioclase-melt exchange coefficient Kd(Ab-An) within the range 0.27 ± 0.11 appropriate for plagioclase-melt equilibrium at ≥1050 °C (Putirka, 2008). Thirty inclusions had Kd(Ab-An) lower than the equilibrium range, and these were all hosted in An > 86 plagioclase. Applying any further PEC correction to these inclusions in order to satisfy the equilibrium criterion results in unrealistically high Al2O3 compared with Icelandic tholeiitic glasses (Fig. 2); similar results have been obtained for melt inclusions in high-anorthite plagioclases from the nearby Grímsvötn volcanic system (Neave et al., 2017) and the 2014-15 Holuhraun eruption (Hartley et al., 2018). Four inclusions required subtraction of equilibrium plagioclase to satisfy the equilibrium criterion. The average PEC correction for inclusions from the Askja NE and SW tuff cones was 4% (range 0-18%), and for Holuhraun the average PEC correction was 8% (range 4-15%). Olivine-hosted melt inclusions were corrected for PEC by adding equilibrium olivine back into the inclusion until an olivine-melt Kd(Fe-Mg) of 0.30 (Roeder and Emslie, 1970) was reached. The mean PEC correction for inclusions from Nýjahraun was 1.7% (range 0.0-2.5%), and for Holuhraun the average correction was 2.1% (range 0.0-6.0%).

Corrected major element compositions of melt inclusions are summarized in Fig. 2. Following PEC correction, the most primitive melt inclusions in the sample suite contain up to 9.3 wt.% MgO (Fig. 2) and are hosted in plagioclases from Holuhraun. The most evolved melt inclusions from each sample have compositions that are comparable with their carrier liquids, represented by the matrix glass.

Volatile and light element contents of melt inclusions were corrected for PEC assuming that they are perfectly incompatible in olivine and plagioclase. Melt inclusion trace element contents were corrected for PEC using partition coefficients from O'Neill and Jenner (2012). Boron and oxygen isotopes are not significantly fractionated during basalt crystallization (Eiler, 2001; Marschall et al., 2017), so no PEC correction is required.
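The olivine PEC correction described above is an iterative mass balance. The sketch below is a simplified molar version for the FeO-MgO exchange only, adding equilibrium olivine in small steps until Kd(Fe-Mg) between host and melt reaches 0.30 (Roeder and Emslie, 1970); it ignores minor olivine components and Fe3+, the olivine stoichiometry is approximated, and the starting composition is illustrative rather than a real inclusion from this study.

```python
MW_FEO, MW_MGO = 71.844, 40.304   # molar masses, g/mol

def equilibrium_olivine(feo_melt, mgo_melt, kd=0.30):
    """(FeO, MgO) wt% of olivine in Fe-Mg exchange equilibrium with the melt,
    treating olivine as 66.7 wt% (Mg,Fe)O + 33.3 wt% SiO2 (a sketch-level
    simplification of olivine stoichiometry)."""
    r_melt = (feo_melt / MW_FEO) / (mgo_melt / MW_MGO)   # molar Fe/Mg in melt
    r_ol = kd * r_melt                                   # molar Fe/Mg in olivine
    w_fe, w_mg = r_ol * MW_FEO, 1.0 * MW_MGO             # per mole of Mg
    scale = 66.7 / (w_fe + w_mg)
    return w_fe * scale, w_mg * scale

def correct_pec(feo, mgo, fo_host, kd=0.30, step=0.1):
    """Add equilibrium olivine back into 100 g of melt, `step` g at a time,
    until the host-melt Kd(Fe-Mg) reaches `kd`. Returns the corrected FeO and
    MgO wt% and the wt% olivine added."""
    mass, g_fe, g_mg = 100.0, feo, mgo
    r_host = (1.0 - fo_host) / fo_host                   # molar Fe/Mg of host olivine
    while r_host / ((g_fe / MW_FEO) / (g_mg / MW_MGO)) < kd and mass < 130.0:
        ol_fe, ol_mg = equilibrium_olivine(100 * g_fe / mass, 100 * g_mg / mass, kd)
        g_fe += ol_fe / 100.0 * step
        g_mg += ol_mg / 100.0 * step
        mass += step
    return 100 * g_fe / mass, 100 * g_mg / mass, mass - 100.0

# Illustrative inclusion: 10.0 wt% FeO, 7.0 wt% MgO in a Fo85 olivine host.
print(correct_pec(10.0, 7.0, 0.85))
```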
Volatiles and light elements

New volatile and light element analyses for our North Iceland and Reykjanes Ridge samples are summarized in Fig. 3, and plotted against similarly incompatible trace elements in Fig. 4.

Carbon dioxide

The CO2 contents of matrix glasses and the evolved Nýjahraun melt inclusions are low (50-284 μg/g), and in some cases are lower than the detection limit of ∼24 μg/g (Fig. 3a). The more primitive melt inclusions (>6 wt.% MgO) have a wide range of CO2 contents (365-2670 μg/g), but most inclusions have CO2 contents close to the mean values of 806 μg/g for the Askja NE tuff, 1036 μg/g for the SW tuff, and 867 μg/g for Holuhraun.

We detected no Fermi diad peaks in the Raman spectra of 143 out of 165 inclusion-hosted bubbles, suggesting that they contain ≤0.04 g/cm³ CO2 (Hartley et al., 2014) and make no significant contribution to the total melt inclusion CO2 content (e.g. Steele-MacInnis et al., 2011). These apparently empty bubbles typically occupy <3 vol.% of their host inclusion (Fig. S2.1). They are most likely true shrinkage bubbles, formed by differential thermal contraction of the host olivine and silicate melt upon quenching, with negligible diffusive transfer of CO2 from the silicate melt into the vapour phase. CO2 fluid was detected in 22 inclusion-hosted bubbles and one fluid inclusion. These were hosted in one olivine crystal from Holuhraun (one bubble, one fluid inclusion); two plagioclases from the NE tuff (three bubbles); and four plagioclases from the SW tuff (17 bubbles). Fluid densities for the 22 CO2-bearing inclusion-hosted bubbles were converted to CO2 contents in μg/g on a per-inclusion basis, after estimating the volumes of the bubble and the glass, following the mass-balance approach of Steele-MacInnis et al. (2011). We assumed a melt density of 2750 kg/m³ for the mass-balance calculations. The calculated bubble CO2 contents range from 86 to >11,000 μg/g (average 1860 μg/g). For CO2-bearing bubbles, there is a strong positive correlation between the bubble CO2 content and the bubble volume fraction of the melt inclusion.

We find no differences in the glass CO2 contents of bubble-bearing versus bubble-free inclusions. Given that 143 out of 165 inclusion-hosted bubbles contain no detectable CO2 and are most probably true shrinkage bubbles, we can assume that melt inclusion glasses typically record the total melt inclusion CO2 content at quenching. For three of our melt inclusions (two olivine-hosted inclusions from Holuhraun and one plagioclase-hosted inclusion from the SW tuff) it is necessary to add the glass and bubble CO2 contents to yield the total inclusion CO2 content. These inclusions have total CO2 contents of 880-1980 μg/g, within the ranges of measured glass CO2 contents in the same samples. The percentages of CO2 sequestered into the fluid phase were 5% and 13% for the two Holuhraun inclusions, and 73% for the plagioclase-hosted inclusion from the SW tuff.
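The per-inclusion mass balance can be illustrated with a short sketch. The function below follows the general logic of the Steele-MacInnis et al. (2011) approach, using the 2750 kg/m³ melt density quoted above; the example bubble fraction, fluid density and glass CO2 content are illustrative.

```python
def total_co2_ppm(glass_co2_ppm, bubble_vol_frac, bubble_rho_g_cm3,
                  melt_rho_g_cm3=2.750):
    """Total inclusion CO2 (ug/g) from glass CO2 plus CO2 fluid in the bubble.

    bubble_vol_frac  : bubble volume / total inclusion volume
    bubble_rho_g_cm3 : CO2 fluid density from Raman Fermi diad splitting
    """
    v_bubble = bubble_vol_frac
    v_glass = 1.0 - bubble_vol_frac
    m_glass = v_glass * melt_rho_g_cm3         # mass per unit inclusion volume
    m_bubble_co2 = v_bubble * bubble_rho_g_cm3 # mass of CO2 fluid in the bubble
    total_mass = m_glass + m_bubble_co2
    co2_mass = m_glass * glass_co2_ppm * 1e-6 + m_bubble_co2
    return 1e6 * co2_mass / total_mass

# e.g. a 3 vol.% bubble with 0.15 g/cm3 CO2 fluid and 800 ug/g CO2 in the glass:
print(round(total_co2_ppm(800, 0.03, 0.15)))  # -> ~2480 ug/g
```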
Water

Melt inclusion H2O contents for Holuhraun and the Askja tuff sequences cluster around 0.39 ± 0.08 wt.%, with no statistically significant differences between eruptions and no correlation of H2O with MgO (Fig. 3b). Most Nýjahraun melt inclusions lie between the most H2O-rich and H2O-poor matrix glasses, and the positive correlation of H2O with MgO suggests that these inclusions were trapped while H2O was degassing. A single melt inclusion containing 0.98 wt.% H2O may record the undegassed pre-eruptive melt H2O content.

The North Iceland matrix glasses contain an average of 400 ± 180 (2SD) μg/g fluorine (Fig. 3d). They have similar F contents to the evolved melt inclusions, suggesting that there was minimal F degassing before the matrix glasses were quenched. More primitive melt inclusions contain 60-620 μg/g F. In olivine-hosted melt inclusions, F is negatively correlated with MgO (Fig. S2.2), but this correlation is absent for plagioclase-hosted melt inclusions (Fig. S2.3).

Chlorine in matrix glasses ranges from 75 to 380 μg/g (Fig. 3e). Holuhraun melt inclusions contain 60-185 μg/g Cl, and Cl is negatively correlated with MgO. Melt inclusions from the Askja tuff cones have slightly higher Cl contents of 110-395 μg/g, and are more Cl-rich at any given MgO content than inclusions from Holuhraun.

Boron, lithium

Both B and Li are broadly negatively correlated with MgO (Fig. 3f, g), consistent with a dominant fractional crystallization control on the concentrations of these incompatible trace elements. All the glasses and melt inclusions measured in this study contain between 0.3 and 2.9 μg/g B, similar to the B contents measured in global MORB datasets (Marschall et al., 2017). Several Reykjanes Ridge glasses, and a small number of melt inclusions, have slightly higher [B] than the main population of North Iceland melt inclusions. Two Reykjanes Ridge glasses located <500 km from the putative Iceland plume centre have slightly higher [B] than most Reykjanes Ridge samples; however, the apparent increase in the mean [B] of Reykjanes Ridge samples approaching Iceland is not significant on the lengthscale of the whole dataset (Fig. 5b).

Askja and Holuhraun matrix glasses contain 0.1-9.2 μg/g Li (Fig. 3g). Reykjanes Ridge glasses contain 3.7-6.7 μg/g Li, and there is no systematic along-ridge variability in Li content (Fig. 5a). They are compositionally indistinguishable from the main population of North Iceland melt inclusions. A small number of melt inclusions have low Li contents down to 0.1 μg/g, and two inclusions have anomalously high Li contents up to 16 μg/g.

Boron and oxygen isotopes

Boron isotopic compositions of the North Iceland melt inclusions range from −20.7 to +0.6‰. Across the whole dataset the modal δ11B = −5.9‰, where 'modal' refers to the peak in the probability distribution, here and throughout the text. The modal δ11B values are −6.1 to −6.4‰ for Holuhraun and the Askja tuff sequences, and −4.9‰ for Nýjahraun (Fig. S2.7). The North Iceland glasses have δ11B between −10.6 and −4.0‰, and the modal δ11B value is −5.6‰ (Fig. S2.7). Reykjanes Ridge glasses have δ11B between −7.9 and −3.6‰. Their modal δ11B is −6.1‰ (2SD = 2.0‰, 2SE = 0.5‰, n = 50). There is no along-ridge variability in δ11B (Fig. 5c), and there is no correlation between [B] and δ11B. For both the North Iceland and Reykjanes Ridge samples, the modal δ11B values are higher than the −7.1 ± 0.9‰ (mean of six ridge segments, 2SD) proposed for uncontaminated MORB (Marschall et al., 2017). However, all the North Iceland samples contain melt inclusions that are isotopically lighter than the proposed MORB range. Some inclusions from Holuhraun and the Askja tuff sequences have δ11B within the range −11.3 ± 3.8‰ measured in primitive olivine-hosted melt inclusions from the Western Volcanic Zone and Reykjanes Peninsula (Gurenko and Chaussidon, 1997).

The North Iceland melt inclusions show no statistically significant correlations between δ11B and indices of melt evolution. However, if literature and Reykjanes Ridge data are included, the lowest δ11B values appear to be associated with the most primitive melt and host mineral compositions (Fig. S2.9). For olivine-hosted melt inclusions, δ11B and MgO are negatively correlated with R² = 0.60 (Fig. 6a). Plagioclase-hosted inclusions have widely variable δ11B at near-constant MgO or B content (Fig. S2.9): for example, melt inclusions from the Askja SW tuff containing 6.0-9.0 wt.% MgO have δ11B ranging from −20.7 to −2.6‰.
Oxygen isotopic compositions of the North Iceland melt inclusions and glasses are summarized in Fig. 6b. The strong positive correlation between δ18O and MgO (wt.%) has been interpreted as evidence of assimilation of a low-δ18O basaltic crustal component (Hartley et al., 2013). We find no statistically significant correlation between δ11B and δ18O in the North Iceland dataset (Fig. S2.8).

Melt inclusion trapping pressures

The boron contents and δ11B signatures of basaltic magmas are potentially highly sensitive to small degrees of assimilation of hydrothermally altered crustal material. The few published measurements of [B] and δ11B in Icelandic upper crustal materials suggest that they have high B contents of ∼3-12 μg/g and heterogeneous boron isotopic compositions between −18.3 and −4.4‰ (Raffone et al., 2008; Raffone et al., 2010). Thus, even small degrees of upper crustal assimilation could exert a strong influence on the boron contents and δ11B of ascending basaltic magmas, particularly when the concentration and isotopic contrasts between melt and assimilant are high. In contrast, the Icelandic lower crust is constructed through repeated melt injections (e.g. Greenfield and White, 2015, and references therein), so there will be little compositional or isotopic difference between melts intruded into the lower crust and their surrounding material. Melt inclusions trapped during crystallization in the lower crust are therefore expected to record the least modified boron isotopic compositions, since their carrier melts will have had minimal opportunity to assimilate B-rich, isotopically distinctive altered upper crustal material.

Melt inclusion and glass equilibration pressures can be estimated using the position of the olivine-plagioclase-augite-melt (OPAM) thermal minimum, provided that the melt is saturated in all three phases. All the North Iceland samples contain olivine, clinopyroxene and plagioclase crystals, although it is not possible to assess visually whether glassy melt inclusions were trapped from a three-phase-saturated melt. We calculated OPAM equilibration pressures for the North Iceland melt inclusions and glasses using the Yang et al. (1996) parameterization of the OPAM barometer, following the method of Hartley et al. (2018). The calculation is performed in two steps. First, Eqs. (1)-(3) of Yang et al. (1996) are solved iteratively at 1 MPa intervals between −0.5 and 1.5 GPa. The predicted cation mole fractions of Mg, Ca and Al are then compared with the input melt composition, and the best-fitting model equilibration pressure is chosen to minimise the χ² misfit between measured and predicted melt compositions. Second, the quality of fit between the predicted and measured melt compositions is assessed by using the χ² vs. pressure distribution to define a significance criterion P_F, whereby only samples that pass the filter P_F ≥ 0.8 are considered to be three-phase saturated (Hartley et al., 2018). The high threshold of the P_F filter ensures that one- or two-phase-saturated melts, which could yield erroneously high OPAM pressures, are effectively screened out and not considered further.
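A sketch of the first step of this procedure is given below. The function standing in for Eqs. (1)-(3) of Yang et al. (1996) is synthetic (the real expressions, polynomial functions of melt composition and pressure, are not reproduced here); the grid search and χ² misfit follow the description above.

```python
import numpy as np

def predicted_cation_fractions(p_gpa):
    """Synthetic stand-in for Eqs. (1)-(3) of Yang et al. (1996): returns
    model Mg, Ca, Al cation fractions of a three-phase-saturated melt at P."""
    base = np.array([0.140, 0.160, 0.190])    # illustrative values only
    slope = np.array([-0.010, 0.020, 0.015])  # per GPa, illustrative
    return base + slope * p_gpa

def opam_pressure(measured, sigma):
    """Grid search (1 MPa steps, -0.5 to 1.5 GPa) minimising chi-squared."""
    pressures = np.arange(-0.5, 1.5001, 0.001)
    chisq = np.array([
        np.sum(((predicted_cation_fractions(p) - measured) / sigma) ** 2)
        for p in pressures
    ])
    i = int(np.argmin(chisq))
    return pressures[i], chisq[i]

# Demo: recover the pressure of a melt "trapped" at 0.35 GPa
measured = predicted_cation_fractions(0.35)
p_best, chi2 = opam_pressure(measured, sigma=np.full(3, 0.002))
print(f"best-fit P = {p_best:.2f} GPa")  # -> 0.35 GPa

# Step 2 (not shown): the chi-squared vs pressure distribution defines the
# probability-of-fit statistic P_F; only P_F >= 0.8 is accepted.
```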
We calculated OPAM equilibration pressures for the North Iceland melt inclusions using both measured and PEC-corrected compositions. Only 20 out of 121 measured melt inclusion compositions met the P_F ≥ 0.8 criterion, rising to 53 inclusions when PEC-corrected compositions are considered. The returned equilibration pressures are summarized in Fig. 7. The highest pressure of 0.57 GPa was returned for a plagioclase-hosted melt inclusion from the SW tuff. Assuming a mean crustal density of 2860 kg/m³, this corresponds to a depth of 20.5 km. The crustal thickness is 30-35 km in the Askja region (Darbyshire et al., 2000), so our deepest-trapped melt inclusion records crystallization in the lower crust (e.g. Winpenny and Maclennan, 2011). The modal equilibration pressures are 0.36 GPa (12.9 km) for Holuhraun, 0.31 GPa (11.1 km) for the NE tuff, and 0.47 GPa (16.6 km) for the SW tuff. The lowest equilibration pressures of 0.10-0.15 GPa (3.6-5.3 km) were returned for olivine-hosted inclusions from Holuhraun. No melt inclusions from Nýjahraun passed the P_F ≥ 0.8 criterion.

There appears to be a broad correlation between melt inclusion composition and trapping pressure, with more primitive inclusions returning deeper trapping pressures. However, it is difficult to assess the significance of this relationship given the ±0.13 GPa uncertainty of the OPAM barometer. The median δ11B decreases with decreasing pressure (Fig. 7b), although the magnitude of this decrease is smaller than the analytical uncertainty on individual measurements. The δ11B values are widely scattered across the entire crystallization interval. To characterise the dispersion of δ11B we use the median absolute deviation, σ*, which is not sensitive to outliers:

σ* = k · med(|x_i − med(x_i)|),

where k ≈ 1.48 for normally distributed data, and med(x_i) refers to the median of the ordered dataset x_i. We assume that, at any given pressure, the dispersion in δ11B is normally distributed. The median absolute deviation suggests that the variability in δ11B generally increases with decreasing pressure, and is greatest in the uppermost 0.1 GPa (3.5 km) of the crust (Fig. 7b).
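The robust dispersion statistic is simple to compute; a short sketch with illustrative δ11B values follows.

```python
import numpy as np

def robust_sigma(x, k=1.4826):
    """Median absolute deviation scaled to sigma for normal data:
    sigma* = k * median(|x - median(x)|), with k ~ 1.48."""
    x = np.asarray(x, dtype=float)
    return k * np.median(np.abs(x - np.median(x)))

d11b = [-5.9, -7.2, -4.8, -10.6, -6.1, -3.9, -8.4]  # illustrative values, permil
print(round(robust_sigma(d11b), 2))  # -> 1.93
```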
Volatile-trace element systematics: primary versus modified signatures

Fig. 4 shows melt inclusion volatile concentrations each plotted against a similarly incompatible trace element. These element pairs are expected to exhibit similar geochemical behaviour during the crystallization of volatile-undersaturated melts (e.g. Michael, 1995; Dixon and Clague, 2001; Saal et al., 2002; Michael and Graham, 2013; Rosenthal et al., 2015).

The average CO2/Ba recorded in melt inclusion suites is controlled by mixing between variably degassed melts, such that the maximum CO2/Ba in a melt inclusion dataset may not reflect the mantle source CO2/Ba (Matthews et al., 2017). Published estimates of CO2/Ba in nominally undegassed Icelandic melt inclusions range from 48 (Hauri et al., 2018) and 80-90 (Hartley et al., 2014; Neave et al., 2014) up to 396 (Miller et al., 2019). Only four of our North Iceland melt inclusions could reflect undegassed or minimally degassed melts: three inclusions from Holuhraun with CO2/Ba of 77-79 and one inclusion from the Askja NE tuff with CO2/Ba = 62. All the remaining melt inclusions have CO2/Ba < 44, and the negative correlation between CO2 and Ba (Fig. 4a) is consistent with crystallization occurring concurrently with CO2 degassing. We used the major element compositions and total CO2 contents of melt inclusions to calculate volatile saturation pressures following the method of Shishkina et al. (2014). Calculated volatile saturation pressures fall between 0.37 and 0.02 GPa, and are on average 0.18 GPa lower than the equivalent OPAM pressure (Fig. S2.11). This discrepancy, combined with the number of inclusions with apparently CO2-free bubbles in our sample set, suggests that many of the inclusions have been affected by decrepitation. This occurs when the internal pressure of the inclusion exceeds the tensile strength of the host mineral, resulting in loss of CO2 vapour (Maclennan, 2017). The calculated volatile saturation pressures thus provide a minimum estimate of the pressure at which each inclusion was trapped.
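A minimal version of this consistency check might look as follows; the 0.1 GPa tolerance is an illustrative choice, not a published threshold.

```python
def flag_decrepitated(p_sat_gpa, p_opam_gpa, tolerance_gpa=0.1):
    """True if the volatile saturation pressure falls materially short of the
    OPAM trapping pressure, suggesting CO2 loss by decrepitation."""
    return (p_opam_gpa - p_sat_gpa) > tolerance_gpa

# e.g. an inclusion trapped at 0.36 GPa (OPAM) but saturated at only 0.15 GPa:
print(flag_decrepitated(0.15, 0.36))  # -> True
```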
The North Iceland melt inclusions have H2O/Ce, S/Dy, F/Nd, Cl/K and Li/Yb values that fall broadly within the expected ranges for undegassed and unmodified primary melts of MORB or OIB affinity (Fig. 4). Reykjanes Ridge glasses have Li/Yb values that are indistinguishable from the North Iceland samples. The mean H2O/Ce across our melt inclusion dataset is 219, and most of the North Iceland inclusions fall within the expected range for Icelandic melts (Hartley et al., 2015; Bali et al., 2018), suggesting that there has been minimal post-entrapment modification of melt inclusion H2O contents through diffusive H+ exchange with their carrier melts. A number of plagioclase-hosted melt inclusions have significantly higher F/Nd than the expected MORB range, which is consistent with a dissolution-crystallization process resulting in the trapping of an Al- and F-rich boundary layer (Neave et al., 2017). The high-Li, high-Li/Yb signatures in two melt inclusions from the SW tuff are not associated with enrichment in any other incompatible or volatile element, and could reflect trapping of Li-oversaturated melt pockets during crystallization (e.g. Hartley et al., 2018). A full discussion of the H2O/Ce, S/Dy, F/Nd, Cl/K and Li/Yb systematics is provided as supplementary material.

The North Iceland melt inclusions have B/Pr between 0.17 and 0.58, and mostly lie within the range 0.34 ± 0.06 (Fig. 4f). All but six plagioclase-hosted inclusions have lower B/Pr than the global MORB average of 0.57 ± 0.09 (Marschall et al., 2017). Reykjanes Ridge glasses have widely variable B/Pr (0.4-1.9; Pr data from Novella et al. (2020)), and most have higher B/Pr than the published global MORB range (Fig. 4f).

Boron content and isotopic composition of Reykjanes Ridge mantle

Reykjanes Ridge basalts show well-documented along-ridge shifts in incompatible trace element concentrations, lithophile and noble gas isotopic ratios, and fO2 north of 61°N (Hart et al., 1983; Schilling et al., 1983; Murton et al., 2002; Shorttle et al., 2015), indicating that an enriched, plume-influenced mantle component is likely present beneath the northern ridge segment. Glasses collected at radial distances >620 km from the putative plume centre do not show this distinctive enrichment, but instead sample ambient Reykjanes Ridge mantle. To elucidate the boron isotopic composition of this mantle component, we have filtered the Reykjanes Ridge sample set to consider only those samples collected at radial distances >620 km. We also exclude two samples from enriched seamount 14D, located 1100 km from the plume centre (Murton et al., 2002). None of the filtered samples have B/Pr within the expected MORB range: of the seven samples with B/Pr of 0.57 ± 0.09, six are proximal to Iceland and the seventh is from the enriched seamount (Fig. S2.12).
To maximise the likelihood that we are considering only samples that have not gained boron through assimilation of seawater, brines or altered oceanic crust, we apply further stringent filtering to exclude any samples with [B] > 1.25 μg/g or B/Pr > 1.4. The remaining samples contain on average 0.92 ± 0.29 (2SD) μg/g B and 1.0 ± 0.2 (2SD) μg/g Pr; their average B/Pr is 0.92 (2SD = 0.29, 2SE = 0.06, n = 27), and the modal δ11B is −6.1‰ (2SD = 1.5‰, 2SE = 0.3‰, n = 21). We are confident that this δ11B value is representative of ambient depleted Reykjanes Ridge mantle. It is slightly higher than the proposed MORB range of Marschall et al. (2017), although the two ranges overlap within uncertainty. Reykjanes Ridge melts also have higher B/Pr than has been proposed for global MORB, but are similar to basalts from the Kolbeinsey Ridge, north of Iceland, which have a mean B/Pr of 0.86 (Marschall et al., 2017). Both the high B/Pr and the high δ11B signatures of Reykjanes Ridge basalts therefore appear to be intrinsic to the Reykjanes Ridge mantle source.

We used a simple non-modal batch melting equation and the relationships between Zr/Y and La/Y (Fig. S2.16) to show that the Reykjanes Ridge basalts likely represent 10-12% partial melts of a depleted spinel peridotite mantle. We use a bulk partition coefficient D_Pr = 0.015 for spinel peridotite (details provided as supplementary material) and a Pr content of 0.107 μg/g for depleted MORB mantle (DMM; Workman and Hart, 2005) to calculate the average boron content of the Reykjanes Ridge mantle. The average [Pr] of the Reykjanes Ridge glasses, 0.87 μg/g, is reached after 11.5% partial melting of DMM. To reach an average B/Pr of 0.92 ± 0.29 after 11.5% partial melting, the mantle source should contain ∼0.10 ± 0.03 μg/g B. This suggests that the Reykjanes Ridge mantle contains slightly more boron than the 0.077 ± 0.010 μg/g typical of DMM (Marschall et al., 2017).
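The inversion can be illustrated with the batch melting equation, treating the bulk partition coefficient as constant during melting, which is a simplification of the non-modal calculation in the text. With D = 0.015 and ∼11% melting, the quoted melt Pr content and source [B] are recovered to within rounding; B is assumed to share the same bulk D, as implied by its similar incompatibility.

```python
def melt_conc(c_source, d, f):
    """Batch melting: element concentration in the melt at melt fraction f."""
    return c_source / (d + f * (1.0 - d))

def source_conc(c_melt, d, f):
    """Invert batch melting for the source concentration."""
    return c_melt * (d + f * (1.0 - d))

D, F = 0.015, 0.11                 # bulk D and ~11% melting (text: 11.5%)
pr_melt = melt_conc(0.107, D, F)   # Pr in the melt from a DMM source
b_melt = 0.92 * pr_melt            # from the average B/Pr of 0.92
print(round(pr_melt, 2), round(source_conc(b_melt, D, F), 3))  # -> 0.87, ~0.10
```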
We recover a small degree of along-ridge enrichment in [B], but δ11B remains constant along the ridge segment (Fig. 5). The very limited [B] enrichment along the Reykjanes Ridge approaching Iceland suggests that the plume-derived mantle component exerts only very weak leverage on the along-ridge [B], and does not contribute substantially to the boron budget of these melts. A counterintuitive inference from this observation is that the enriched mantle component may be boron-poor in comparison to Reykjanes Ridge depleted mantle. This is consistent with the observation that our North Iceland melt inclusions have similar boron contents to the Reykjanes Ridge glasses (Fig. 4f), but have higher Pr contents and lower B/Pr. Given the similarly incompatible behaviour of B and Pr during mantle melting, this indicates that the North Iceland melt inclusions originate from a mantle component with lower [B] than depleted Reykjanes Ridge mantle. The absence of along-ridge variation in δ11B could either reflect the low boron contribution from the enriched mantle component, or else indicate that there is only limited boron isotopic contrast between the enriched component and ambient depleted mantle. In the next section, we explore the boron isotopic composition of the mantle beneath Iceland.

Boron content and isotopic composition of the Icelandic mantle

The modal δ11B value across the North Iceland melt inclusion dataset is −5.9‰, somewhat higher than the −7.1 ± 0.9‰ range proposed for uncontaminated MORB (Marschall et al., 2017). However, the modal δ11B values for melt inclusions from Holuhraun and the Askja tuff sequences lie between −6.4 and −6.1‰, within the expected MORB range. A number of inclusions from Holuhraun and the Askja tuff sequences have much lower δ11B and fall within the −11.3 ± 3.8‰ range that has previously been suggested to be representative of the Iceland mantle source (Gurenko and Chaussidon, 1997). A key question is therefore whether the boron isotopic signature of the Icelandic mantle is similar to, or distinct from, that of MORB.

To use the North Iceland melt inclusion data to assess the boron isotopic signature of the Icelandic mantle, it is first necessary to verify that their δ11B values have not been affected by pre- or post-entrapment modification. We are not aware of any published studies of boron diffusion in olivine or plagioclase, but we expect that post-entrapment modification via B diffusion through these host minerals will be negligible. Both [B] and δ11B in an ascending magma could be modified prior to inclusion trapping via assimilation of altered crustal material (e.g. Chaussidon and Jambon, 1994; Chaussidon and Marty, 1995; Gurenko and Chaussidon, 1997; Rose-Koga and Sigmarsson, 2008; Brounce et al., 2012). Low δ18O values measured in more evolved inclusions and glasses from North Iceland (Fig. 6b) most likely reflect progressive assimilation of hydrated low-δ18O basaltic hyaloclastite in the mid- to upper crust (Hartley et al., 2013). Altered Icelandic upper crustal material has high B contents of ∼3-12 μg/g and boron isotopic compositions between −18.3 and −4.4‰ (Raffone et al., 2008; Raffone et al., 2010) (Fig. 7); the boron contents and isotopic signatures of these more evolved melt inclusions and glasses are therefore also expected to have been modified by assimilation.

A small number of our North Iceland melt inclusions have higher [B] than can be consistent with simple fractional crystallization (Fig. 3f), which indicates assimilation of a B-rich component prior to melt inclusion trapping. To exclude potentially contaminated melt inclusions from further consideration, we have filtered the melt inclusion dataset for compositions with ≥8 wt.% MgO in order to assess the δ11B of Icelandic primary melts. The MgO ≥ 8 wt.% melt inclusions are hosted in the most primitive olivines and plagioclases, indicating that they were trapped during the earliest stages of crystallization from melts that experienced no or minimal modification by assimilation of crustal contaminants. The effects of crustal assimilation on melt inclusion [B], δ11B and δ18O signatures are explored further in Section 5.

The boron isotopic compositions of primitive melt inclusions from across Iceland are shown in Fig. 8. Eleven of the North Iceland melt inclusions with available δ11B measurements have ≥8 wt.% MgO. These inclusions are hosted in some of the most primitive olivines and plagioclases, and also have high CO2 and low S, Cl, B and Li concentrations (Fig. 3), suggesting that they are minimally degassed. These inclusions are therefore most likely to provide robust estimates of δ11B for the Icelandic mantle source.
Primitive melt inclusions from Holuhraun and the Askja SW tuff have different boron isotopic signatures. Those from Holuhraun display a broad peak in the δ11B probability distribution at −10.6‰ (Fig. 8), which is not distinguishable, within the uncertainty of the measurements, from the δ11B of around −11‰ for primitive WVZ melt inclusions (Gurenko and Chaussidon, 1997). One inclusion from the SW tuff has δ11B of −10.1‰, similar to the Holuhraun and WVZ inclusions. However, the most probable δ11B for inclusions from the SW tuff is −5.7‰ (Fig. 8), indistinguishable from the −6.1‰ of the Reykjanes Ridge glasses within analytical uncertainty.

Our data suggest that Holuhraun and one inclusion from the SW tuff have sampled a common, low-δ11B mantle component that is characteristic of the Icelandic mantle and distinct from Reykjanes Ridge MORB. There are then two possible explanations for the inclusions from the SW tuff with δ11B around −5.7‰ and the remaining primitive Holuhraun melt inclusion with δ11B of −6.4‰. First, these inclusions could have trapped melts that had already assimilated a crustal contaminant with δ11B higher than −10.6‰. However, the inclusions return OPAM equilibration pressures between 0.58 and 0.40 GPa (∼20-14 km, assuming a crustal density of 2860 kg/m³). This suggests that their host melts had limited opportunity for interaction with altered upper crustal material prior to inclusion trapping. The second, and more likely, explanation is that most of the primitive melt inclusions from the SW tuff, and one inclusion from Holuhraun, have sampled melts of a mantle component with near-identical δ11B to Reykjanes Ridge MORB.
We modelled the boron contents of the lower- and higher-δ11B components of the Icelandic mantle using a simple non-modal batch melting model. Relationships between Zr/Y and La/Y suggest that primitive melts from the Askja SW tuff derive from 8-10% partial melting of an approximately 1:1 mixture of melts derived from spinel and garnet peridotites (Fig. S2.16). The average Pr content of primitive inclusions from the SW tuff, 2.3 μg/g, is achieved after 8-10% partial melting of an enriched mantle source containing 0.25 μg/g Pr, similar to primitive mantle (PM; Pr = 0.27 μg/g; Palme and O'Neill, 2004). The primitive inclusions from the SW tuff have an average B/Pr of 0.30 (range 0.24-0.38), suggesting that the mantle source should contain ∼0.083 μg/g B (range 0.066-0.105 μg/g B), similar to DMM. The Zr/Y and La/Y systematics of primitive Holuhraun melt inclusions are best modelled by 10-15% partial melts derived predominantly from spinel-facies mantle (Fig. S2.16). To achieve the average Pr content of primitive Holuhraun melt inclusions (1.5 μg/g) after 15% partial melting, the mantle source should contain ∼0.2 μg/g Pr, i.e. intermediate between DMM and PM. The primitive Holuhraun inclusions have an average B/Pr of 0.40 (range 0.38-0.51), suggesting that their mantle source contains ∼0.085 μg/g B (range 0.081-0.108 μg/g B). Despite the inherent trade-off between melt fraction and mantle source composition, these simple calculations suggest that the lower- and higher-δ11B components of the Icelandic mantle have similar B contents of around 0.085 μg/g. This suggests that the average Icelandic mantle is slightly enriched in boron compared to average DMM (0.077 ± 0.010 μg/g B; Marschall et al., 2017) and slightly B-depleted compared to Reykjanes Ridge mantle ([B] ≈ 0.10 μg/g), although our estimated ranges for [B] in the Icelandic mantle source overlap both DMM and Reykjanes Ridge mantle. Crucially, these calculations confirm that the incompatible trace element-enriched, plume-like component sampled along the northernmost Reykjanes Ridge and by on-land Icelandic basalts shows no enrichment in boron compared to ambient Reykjanes Ridge depleted mantle.

Our new melt inclusion data show a positive correlation between δ11B and La/Yb, which is often used as a tracer of primary melt enrichment or depletion (Maclennan, 2008) (Fig. 8). This relationship is strengthened if depleted melt inclusions from Miðfell, with low δ11B and La/Yb < 1, are taken into account. We therefore suggest that higher and lower δ11B signatures in Icelandic primary melts may be associated with incompatible trace element (ITE)-enriched and ITE-depleted mantle components respectively, with both components having similar B contents. Both are intrinsic to the Icelandic mantle source and distinct from ambient depleted Reykjanes Ridge mantle. The presence of an intrinsic depleted Icelandic mantle component distinct from N-MORB is consistent with available trace element and radiogenic isotope data (e.g. Kerr et al., 1995; Fitton et al., 1997; Fitton et al., 2003), while combined Sr-Nd-Pb isotope data suggest the existence of at least four distinct mantle components beneath Iceland that contribute to localised intermediate enriched and depleted components (Thirlwall et al., 2004; Peate et al., 2010). We would expect both depleted and enriched δ11B signatures to be recorded in early-trapped melt inclusions from individual eruptions, although further measurements of δ11B in primitive melt inclusions from across Iceland are required to test this hypothesis.
Recycled boron in the Icelandic mantle source

What is the origin of the low δ11B signatures in primitive Icelandic melt inclusions and the Icelandic mantle source components? Reported δ11B values for OIB mantle are widely variable, with uncontaminated OIB samples having δ11B between −12 and −3‰ (Marschall, 2018, and references therein). Ocean island basalts therefore have a similar average δ11B to MORB, but are much more variable, both within and between different locations.

Fig. 8 compares δ11B in Icelandic melt inclusions with analyses of primitive uncontaminated melt inclusions from La Palma and Réunion (Walowski et al., 2019) and Hawaii (Kobayashi et al., 2004). The modal δ11B values for the Holuhraun and WVZ melt inclusions, −10.6 and −11.3‰ respectively, are not distinguishable from the La Palma melt inclusions within the uncertainty of in situ SIMS measurements. The modal δ11B values for the Hawaii and Réunion melt inclusions are indistinguishable from MORB. The fact that Réunion melt inclusions have δ11B indistinguishable from MORB led Walowski et al. (2019) to suggest that primitive and depleted upper mantle reservoirs have a common δ11B signature, and that the low δ11B values recovered at La Palma and other ocean islands must therefore reflect partial melting of an isotopically distinct mantle component. Low δ11B values in the La Palma melt inclusions are coupled with radiogenic whole-rock Pb and Os isotopic signatures and low δ18O (Day et al., 2010; Day and Hilton, 2011). These signatures have been interpreted as evidence for recycled oceanic crust and lithosphere in the Canary Islands mantle source, suggesting that the low δ11B could derive from a recycled subducted mantle component (Walowski et al., 2019). However, low δ11B is also associated with low B/Zr, indicating that these inclusions are B-depleted relative to melts of typical depleted upper mantle (Walowski et al., 2019).

Simple non-modal batch melting calculations suggest that the low B/Pr in North Iceland melt inclusions compared to Reykjanes Ridge or global MORB glasses (Fig. 4f) is consistent with a mantle source with [B] slightly higher than DMM, but slightly lower than Reykjanes Ridge mantle. Importantly, B is not as enriched as trace elements of similar compatibility during partial melting. This means that neither the isotopically light, incompatible trace element (ITE)-depleted Holuhraun melt inclusions nor the isotopically heavy, ITE-enriched SW tuff melt inclusions can be explained by simple recycling of B-enriched subducted lithologies such as continental sediments or oceanic serpentinites into the mantle, as this would create a source that is both B-enriched and likely isotopically heavy (De Hoog and Savov, 2018).

The best explanation for a B-depleted and isotopically light mantle source component is subducted oceanic lithosphere that has been stripped of its boron through slab dehydration. This is consistent with geochemical and thermodynamic models which predict that subducted oceanic lithosphere will be B-depleted and have δ11B as low as −20 to −40‰, depending on the slab dehydration depth and the thermal profile of the subduction zone (e.g. Peacock and Hervig, 1999; Rosner et al., 2003; Marschall et al., 2007; Konrad-Schmolke and Halama, 2014).

We therefore suggest that the low δ11B sampled by primitive, depleted melt inclusions from Iceland is indicative of dehydrated subducted oceanic lithosphere in an ITE-depleted component intrinsic to the Icelandic mantle (Fig. S2.15).
This is consistent with interpretations of major, trace and lithophile isotope systematics in Icelandic basalts, which have likewise inferred the presence of at least 5% recycled material in the Icelandic mantle (e.g. Chauvel and Hémond, 2000; Stracke et al., 2003; Thirlwall et al., 2004; Kokfelt et al., 2006; Bindeman et al., 2008; Shorttle et al., 2014), and with the suggestion that ancient depleted oceanic lithospheric mantle is a plausible source for the intrinsic depleted Iceland component (e.g. Skovgaard et al., 2001; Fitton et al., 2003, and references therein). Melt inclusions sampling this depleted component show no B enrichment compared to Reykjanes Ridge basalts. This suggests that the recycled lithospheric component is likely boron-poor, and hence recycled boron is difficult to detect other than by its low boron isotopic signature. The enriched mantle component sampled by ITE-enriched melt inclusions is also likely to contain dehydrated lithosphere. However, the recycled lithospheric component in the enriched mantle source likely contains almost no boron, meaning that the melt boron content is diluted by the recycled component rather than enriched. The boron isotopic signatures of melts from the enriched component are therefore dominated by ambient depleted upper mantle, and hence very similar to Reykjanes Ridge basalts and global MORB.

MODIFICATION OF MELT δ11B THROUGH ASSIMILATION OF ALTERED CRUST

The North Iceland melt inclusions have major and trace element contents that are broadly consistent with a dominant fractional crystallization control (Hartley and Thordarson, 2013). However, some melt inclusions with <8 wt.% MgO have higher δ11B than Reykjanes Ridge MORB, while others are isotopically lighter than primitive North Iceland melt inclusions (Fig. 8; Fig. S2.9). Given that a small number of North Iceland melt inclusions show signatures of minor B addition independent of Pr (Fig. 4) that are inconsistent with simple fractional crystallization, we explore whether the North Iceland melt inclusions could have assimilated a high-[B] component with heterogeneous δ11B.

The well-defined correlation between δ18O and indices of melt evolution in our North Iceland samples (Fig. 6b, Fig. S2.10) is not consistent with high-temperature fractional crystallization, since this process is not expected to fractionate oxygen isotopes (e.g. Bindeman et al., 2008). The major and trace element systematics of the North Iceland melt inclusions are not consistent with mixing between basaltic melt and low-δ18O rhyolitic or andesitic magmas. Instead, the low δ18O signatures are best explained through bulk assimilation of altered basaltic hyaloclastite in the upper crust and/or mixing with low-δ18O basaltic melts stored in upper crustal reservoirs (Hartley et al., 2013). Low δ18O in olivine and plagioclase crystals from large-volume Holocene lavas in Iceland's Eastern Volcanic Zone has likewise been interpreted as resulting from bulk digestion of low-δ18O basaltic hyaloclastite, whereby the hyaloclastite inherits its oxygen isotopic signature through interaction with low-δ18O glacial meltwaters (Bindeman et al., 2006; Bindeman et al., 2008). Given that melt inclusion δ18O signatures require assimilation of an altered crustal component, we examine whether the variable δ11B in the North Iceland melt inclusions can also be generated through crustal assimilation.
Published measurements of δ11B in Icelandic upper crustal materials are restricted to a single drill core, RN-17, from the Reykjanes Peninsula (Raffone et al., 2010). Basalts sampled between 0 and 3000 m in this core have high whole-rock B contents of 3.3-12.4 μg/g and heterogeneous δ11B between −18.3 and −4.4‰, and there is no correlation between [B] and δ11B, nor between composition and depth (Raffone et al., 2008; Raffone et al., 2010) (Fig. 7). Boron in the RN-17 samples is primarily concentrated in hydrothermal minerals including epidote (0.3-9.0 μg/g) and amphibole (0.1-2.3 μg/g); however, the abundance of hydrothermal minerals is too low to explain the elevated bulk B contents. Brounce et al. (2012) proposed that the additional boron is concentrated on altered surfaces within porous altered basalt.

We have modelled the generation of B-rich altered basaltic hyaloclastites in the upper crust following the two-stage process described by Brounce et al. (2012). First, B-depleted meteoric fluids circulating through high-temperature geothermal systems in the upper crust scavenge B from the basalts they flow through. Meteoric and glacial waters across Iceland typically contain <0.3 μg/g B and have high δ11B of up to +17‰. In contrast, high-temperature geothermal waters contain up to 5 μg/g B and have δ11B down to −6.7‰ (Aggarwal et al., 2000) (Table 1, Fig. S2.14), suggesting that boron scavenging during high-temperature fluid-rock interaction is associated with isotopic fractionation of several per mil. Second, the fluids cool to temperatures <200 °C, at which point the scavenged B adsorbs onto clay mineral surfaces in palagonitized basaltic hyaloclastite, predominantly smectite and illite, driving the bulk rock towards high [B]. Boron isotopes are further fractionated during adsorption, since 10B is adsorbed preferentially to 11B in clay minerals (Palmer et al., 1987). The boron content and δ11B of the resultant altered basaltic hyaloclastite are controlled by four factors: the composition of the circulating hydrothermal fluid; the adsorption coefficient; the isotopic fractionation factor; and the water-rock ratio. Following Brounce et al. (2012), we assume that the boron adsorption coefficient Kd = 2.6 and fractionation factor α = 0.975 for marine clays at 25 °C and pH = 7.8 (Palmer et al., 1987) are appropriate for boron adsorption onto the smectite-dominated clay mineral assemblages present in palagonitized basaltic hyaloclastite. The isotopic ratio of the altered hyaloclastite is then calculated as a function of the water/rock ratio W/R (Spivack and Edmond, 1987), where the subscript R refers to the rock and the subscript W refers to the hydrothermal fluid; the boron concentration in the altered hyaloclastite is calculated analogously. Table 1 shows a selection of potential compositions of altered basaltic hyaloclastites, including model hyaloclastite compositions calculated using different fluid compositions and water-rock ratios. Water-rock ratios >4 are required to generate materials with high [B] and low δ11B, similar to altered basalts in the RN-17 core (Raffone et al., 2008). Water-rock ratios <3 generate materials with lower [B] and higher δ11B than typical MORB (Fig. S2.14).
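A sketch of this second-stage calculation is given below, written as a simple closed-system mass balance between adsorbed and dissolved boron. This is one self-consistent reading of the model; the published Spivack and Edmond (1987) formulation may differ in detail. It reproduces the qualitative behaviour described above: high W/R drives the rock towards high [B] and low δ11B.

```python
def altered_hyaloclastite(c_fluid0, d11b_fluid0, wr, kd=2.6, alpha=0.975):
    """Boron content (ug/g) and d11B (permil) of the rock after adsorption,
    assuming closed-system equilibrium between adsorbed and dissolved B at
    water/rock ratio wr. kd and alpha follow Palmer et al. (1987)."""
    eps = 1000.0 * (alpha - 1.0)     # equilibrium fractionation, ~ -25 permil
    f_rock = kd / (kd + wr)          # fraction of total B held by the rock
    c_rock = c_fluid0 * kd * wr / (kd + wr)
    d11b_rock = d11b_fluid0 + (1.0 - f_rock) * eps
    return c_rock, d11b_rock

# e.g. a geothermal fluid with 5 ug/g B and d11B = -6.7 permil at W/R = 5:
c, d = altered_hyaloclastite(5.0, -6.7, 5.0)
print(round(c, 1), round(d, 1))  # -> 8.6 ug/g, -23.2 permil (high-[B], low d11B)
```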
We have used a range of possible natural and modelled crustal endmember compositions to calculate parabolic mixing curves that model the likely effects of crustal assimilation on [B], δ11B and δ18O in North Iceland melts (Fig. 9). The primitive melt endmembers in our mixing models are the compositions of primitive melt inclusions from Holuhraun and the Askja SW tuff, with δ11B of −10.6‰ and −5.7‰ respectively (Fig. 8). The oxygen isotopic ratio of altered basaltic hyaloclastite is difficult to constrain and is likely to be heterogeneous in the upper crust. Hyaloclastites from the KG-4 Krafla drill hole have δ18O between −10.3 and −3.4‰ (Hattori and Muehlenbachs, 1982), while rhyolitic tephra and leucocratic xenoliths from the Askja 1875 eruption have δ18O between −7.50 and +1.65‰ (Macdonald et al., 1987). For simplicity, the mixing curves in Fig. 9b assume a δ18O of −4‰ for all crustal endmembers.

Our bulk mixing models suggest that [B], δ11B and δ18O in melt inclusions and glasses from North Iceland can be derived by assimilation of up to ∼20% altered crustal material, with most melt inclusion compositions requiring <10% assimilation (Fig. 9). The choice of primitive melt endmember composition makes little difference to the degree of assimilation required to explain the observed boron and oxygen isotopic variations. Compositions similar to the primitive SW tuff melt inclusions can be generated by <5% contamination of the recycled endmember, which could support an argument that their δ11B of −5.7‰ does not represent a primary mantle component, but is instead generated through very small degrees of crustal assimilation. However, melt inclusions from the SW tuff are trapped at pressures >0.4 GPa in the mid- to lower crust (Fig. 7), and are therefore unlikely to have interacted with high-[B] altered crustal material. The boron isotopic compositions of primitive North Iceland melt inclusions are therefore best explained by differential sampling of a heterogeneous mantle containing a Reykjanes Ridge-like mantle component with δ11B around −6‰ and a recycled component with δ11B around −11‰.

Mixing curves calculated between the recycled mantle endmember and plausible crustal endmembers can account for the full variety of melt inclusion and glass compositions (Fig. 9). In contrast, mixing curves calculated using a Reykjanes Ridge-like primary melt endmember do not satisfactorily reproduce [B], δ11B and δ18O for the most primitive Holuhraun melt inclusions, nor for a subset of low-δ11B inclusions from the Askja NE and SW tuff sequences. Our data therefore strongly support the presence of a recycled component in the Icelandic mantle source with δ11B around −11‰. Evolved melt inclusions with δ11B values higher than typical Reykjanes Ridge MORB or lower than the primitive Holuhraun inclusions are generated through minor assimilation of heterogeneous altered material distributed through the upper crust.
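The mixing curves themselves follow from concentration-weighted isotope mixing, which is what produces curvature in δ11B vs δ18O space; a sketch with illustrative endmember values, drawn from the ranges quoted in the text, follows.

```python
import numpy as np

def mix(x, c_a, d_a, c_b, d_b):
    """Concentration and delta of a mixture with mass fraction x of endmember B.
    Isotope ratios mix weighted by element concentration, so curves in
    delta-delta space are curved when the normalising elements differ."""
    c = (1 - x) * c_a + x * c_b
    d = ((1 - x) * c_a * d_a + x * c_b * d_b) / c
    return c, d

melt_c, melt_d = 0.6, -10.6    # primitive Holuhraun-like melt: ug/g B, d11B
crust_c, crust_d = 8.0, -15.0  # altered hyaloclastite assimilant (illustrative)

for xi in np.linspace(0.0, 0.20, 5):  # 0-20% assimilation in 5% steps
    c, d = mix(xi, melt_c, melt_d, crust_c, crust_d)
    print(f"{100*xi:>4.0f}% assimilation: [B] = {c:.2f} ug/g, d11B = {d:.1f}")
```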
CONCLUSIONS

We have reported new measurements of volatiles, light elements and boron isotopes in a suite of melt inclusions from North Iceland, and in submarine glasses from the Reykjanes Ridge. Reykjanes Ridge glasses sampled at radial distances >620 km from the Iceland plume show no evidence of enrichment by a plume-derived mantle component, and from these samples we derive a new estimate for the δ11B signature of Reykjanes Ridge mantle of −6.1‰ (2SD = 1.5‰, 2SE = 0.3‰, n = 21). We find only a very weak indication of along-ridge enrichment in [B] approaching Iceland, and no systematic variation in δ11B along the entire ridge segment. This suggests that the enriched mantle component sampled by the northern ridge segment close to Iceland does not contribute substantially to the boron budget of these melts.

Olivine- and plagioclase-hosted melt inclusions from North Iceland have major element compositions that are broadly consistent with fractional crystallization. Ratios of volatiles and light elements to similarly incompatible trace elements indicate that melt inclusion volatile contents are broadly consistent with canonical mantle reservoirs and, with the exception of CO2, have experienced minimal pre-, syn- and/or post-entrapment modification. A small number of melt inclusions have higher [B] than is consistent with simple fractional crystallization trends, indicating assimilation of a B-rich component prior to melt inclusion trapping.

The North Iceland melt inclusions are characterized by widely variable δ11B values between −20.7 and +0.6‰. The coupled [B], δ11B and δ18O signatures of the more evolved melt inclusions are consistent with progressive assimilation of hydrothermally altered basaltic hyaloclastite as the melts ascend through the upper crust. Altered basaltic hyaloclastites in the Icelandic upper crust have high [B] and highly heterogeneous δ11B in comparison to pristine Icelandic basalts. Even small degrees of crustal assimilation could thus exert a strong control on the bulk δ11B of ascending magmas, generating wide δ11B variability within a single sample set. To access mantle-derived δ11B signatures, we identify and exclude any melt inclusions that may have been modified by crustal processing. Our observations suggest that only the most primitive melt inclusions reliably record truly primitive δ11B signatures. Our unfiltered North Iceland melt inclusion dataset records the same large range in δ11B as other oceanic islands such as Hawaii (Kobayashi et al., 2004), which highlights the importance of very careful screening of melt inclusion compositions in studies of global crustal recycling in ocean island basalts.

Simple non-modal batch melting calculations suggest that the Icelandic mantle contains ∼0.085 μg/g B, slightly lower than the 0.10-0.11 μg/g calculated for depleted Reykjanes Ridge mantle. The lowest δ11B signatures in Icelandic melt inclusions are typically associated with more primitive (MgO ≥ 8 wt.%) and ITE-depleted melt compositions. Primitive melt inclusions from Holuhraun record a primary melt δ11B of −10.6‰, consistent with melting of a depleted mantle component containing dehydrated recycled oceanic lithosphere.
This low-δ11B depleted mantle component is also recorded in melt inclusions from the WVZ and the Reykjanes Peninsula. The δ11B of −5.7‰ recorded in primitive, ITE-enriched melt inclusions is consistent with an enriched mantle lithology that has a boron isotopic composition similar to Reykjanes Ridge mantle. Our data therefore confirm the presence of boron isotopic heterogeneity in the Icelandic mantle source. We have not recovered boron isotopic heterogeneity on the lengthscale of melt supply to a single eruption, but our data do not exclude this possibility, and this question may be revisited as more measurements of primitive melt inclusions become available and as in situ analytical techniques improve. Our verification of a low-δ11B recycled component in the Icelandic mantle provides further support for the role of recycled subducted oceanic lithosphere in melt generation at ocean islands.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Fig. 1. Map of North Iceland, with fissure swarms shown in light grey. Filled symbols show the locations of samples used in this study: green squares, Nýjahraun; pink triangle, NE tuff; yellow circle, SW tuff; white hexagons, basaltic scoria from January 1875; brown inverted triangles, early 20th century eruptions; blue diamonds, old Holuhraun eruptions. The Nýjahraun and old Holuhraun lava flow fields are shown in red. The 2014 Holuhraun lava flow field is outlined in orange. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 2. Major element compositions of melt inclusions and glasses from North Iceland. Error bars are 2σ. Small grey inverted triangles show matrix glasses from the Askja and Bárðarbunga volcanic systems. Coloured open symbols show raw (uncorrected) melt inclusion compositions. Large filled symbols show melt inclusions corrected for post-entrapment crystallization. Plagioclase-hosted inclusions were corrected by adding equilibrium plagioclase until their MgO-FeO-Al2O3 systematics matched those of Icelandic tholeiitic glasses; olivine-hosted inclusions were corrected by adding equilibrium olivine until their compositions met the equilibrium criterion of olivine-melt Kd(Fe-Mg) = 0.30 ± 0.03. Small pale inverted triangles show plagioclase-hosted melt inclusions corrected to be in equilibrium with their host crystal, which results in unrealistically high Al2O3 contents. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Fig. 4. Volatile-trace element systematics in melt inclusions and matrix glasses from North Iceland. Error bars are 2σ. The B and Li contents of Reykjanes Ridge glasses are shown in panels (f) and (g). Black circles show Reykjanes Ridge glasses collected at radial distances >620 km from the Iceland plume centre, filtered to exclude samples that could have gained B through assimilation of a B-rich contaminant (see Section 4.3 for details); all other Reykjanes Ridge glasses are shown as grey circles. The shaded regions show published volatile-trace element ratios for different mantle reservoirs. MORB: CO2/Ba from Michael et al. (2015), H2O/Ce from Michael (1995), S/Dy from Saal et al. (2002), F/Nd from Workman et al. (2006), Cl/K from Michael and Cornell (1998), B/Pr and Li/Yb from Marschall et al. (2017). OIB: B/Pr range for Hawaii melt inclusions from Edmonds (2015); Li/Yb range from Ryan and Langmuir (1987) and Edmonds (2015). Iceland: CO2/Ba from Hauri et al. (2018), H2O/Ce from Hartley et al. (2015) and Bali et al. (2018); B/Pr from this study.

Fig. 5. Along-ridge variation in [Li], [B] and δ11B of Reykjanes Ridge glasses. Data are plotted as a function of radial distance from the Iceland plume centre and coloured according to (a) MgO content, (b) Zr/Y and (c) 87Sr/86Sr of the sample glasses. Samples with bold outlines have B/Pr within the expected MORB range of 0.57 ± 0.09. Square symbols denote samples from enriched seamount 14D. The increases in Zr/Y and 87Sr/86Sr at radial distances <620 km indicate the influence of an enriched mantle component associated with the Iceland plume. Samples at radial distances >620 km are not influenced by the enriched plume component and sample ambient Reykjanes Ridge mantle. Black lines show the running average composition calculated using a boxcar filter with a bandwidth of 100 km; the grey shaded area shows the error envelope (2SE) of the filtered data. Two samples at radial distances of ∼500 km appear to have high [B], but the increase in mean [B] approaching Iceland is not significant on the lengthscale of the whole dataset. There is no systematic along-ridge variability in [Li] or δ11B. Major element data from Shorttle et al. (2015); trace element data from Novella et al. (2020); isotope data from Murton et al. (2002) and Thirlwall et al. (2004).
Fig. 6. (a) Boron isotopic compositions of olivine-hosted melt inclusions from Iceland. Miðfell data are from Gurenko and Chaussidon (1997). (b) Oxygen isotopic compositions of North Iceland melt inclusions and glasses vs. MgO, an index of melt evolution. Oxygen isotope data are from Hartley et al. (2013). Error bars are 2σ. Shaded grey bars indicate δ11B and δ18O for pristine MORB glasses (Chaussidon and Marty, 1995; Marschall et al., 2017). The modal boron isotopic composition of Reykjanes Ridge glasses is −6.1‰; dashed lines in (a) indicate the 2SE range of ±0.3‰ and dotted lines indicate the 2SD of ±1.7‰. High-temperature crystallization is not expected to fractionate boron or oxygen isotopes, therefore the observed trends can only be generated through assimilation processes.

Fig. 7. Application of the Yang et al. (1996) OPAM barometer to melt inclusions and glasses from North Iceland. Large coloured symbols show PEC-corrected melt inclusion compositions where the returned probability of fit P_F is greater than 0.8. Small grey symbols show melt inclusion compositions where P_F < 0.8. Kernel density estimates to the right of plot (a) show the relative probability of equilibration pressures for melt inclusion compositions with P_F ≥ 0.8, coloured according to the source eruption. Dark red circles in (b) show boron isotopic compositions of whole-rock samples from drill core RN-17, Reykjanes Peninsula (Raffone et al., 2008), where the sampled depth in the core is converted to pressure assuming an upper crustal density of 2860 kg/m³. The black line shows the running median δ11B as a function of depth, calculated using a boxcar filter with a bandwidth of 0.5 GPa; the grey shaded area shows the median absolute deviation. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Fig. 8. Boron isotopic compositions of on-land Iceland melt inclusions and Reykjanes Ridge glasses with ≥8 wt.% MgO plotted against La/Yb, an indicator of primary melt enrichment or depletion. Error bars are 2σ. Small circles show Iceland (light grey) and Reykjanes Ridge (dark grey) samples with <8 wt.% MgO. The Reykjanes Ridge samples with ≥8 wt.% MgO are subdivided into those collected at radial distances <620 km (large grey circles) and >620 km (large black circles) from the Iceland plume centre. Iceland melt inclusions are shown as coloured symbols; those with no available La/Yb data are shown to the right of the plot. Kernel density estimates (KDEs) show δ11B probability distributions for Iceland melt inclusions with MgO ≥ 8 wt.% (data from Gurenko and Chaussidon, 1997, and this study), glasses from the Reykjanes Ridge, melt inclusions from La Palma and Réunion (Walowski et al., 2019), and melt inclusions from Hawaii (Kobayashi et al., 2004). In the Reykjanes Ridge KDE plot, the black line shows all samples; the red line shows filtered samples collected at radial distances >620 km from the Iceland plume centre (see text for details); the blue line shows samples with radial distances >620 km and MgO ≥ 8 wt.%; and the dashed orange line shows samples with B/Pr within the expected MORB range of 0.57 ± 0.09. The shaded grey bar shows the boron isotopic composition of MORB, δ11B = −7.1 ± 0.9‰, from the compilation of Marschall et al. (2017); the black dashed box shows La/Yb = 1.07 ± 0.89 (2SD) for the same samples. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Fig. 9. (a) Boron concentrations and isotopic compositions of melt inclusions and glasses from North Iceland. (b) Boron and oxygen isotopic compositions of melt inclusions and glasses from North Iceland. Error bars are 2σ. The shaded boxes show the expected compositions of unmodified primary melts from MORB (grey; Marschall et al. (2017)) and Iceland (this study; light blue) mantle sources. Parabolic mixing curves are calculated between a primitive endmember and a range of potential crustal assimilants (Table 1). The primitive endmembers are taken to be the mean composition of primitive Holuhraun melt inclusions (coloured curves) or the mean composition of primitive inclusions from the SW tuff (grey curves). The mixing curves in (b) assume that the crustal assimilants have δ18O of −4‰, consistent with basaltic hyaloclastites obtained from the Krafla KG-4 drill hole (Hattori and Muehlenbachs, 1982). Crosses show 5% increments of assimilation. Most melt inclusion and glass compositions can be modelled by up to 15% assimilation of likely crustal components. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
(2019) to suggest that primitive and depleted upper mantle reservoirs have a common δ11B signature, and that the low δ11B values recovered at La Palma and other ocean islands must therefore reflect partial melting of an isotopically distinct mantle component. Low δ11B values in the La Palma melt inclusions are coupled with radiogenic whole-rock Pb and Os isotopic signatures and low δ18O.
Table 1. Mixing model endmembers and reference values.
(...), and are therefore unlikely to have interacted with high-[B] altered crustal material. The boron isotopic compositions of primitive North Iceland melt inclusions are therefore best explained by differential sampling of a heterogeneous mantle containing a Reykjanes Ridge-like mantle component with δ11B around -6‰ and a recycled component with δ11B around -11‰.
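For reference, the parabolic mixing curves mentioned in the Fig. 9 caption follow the standard two-component mixing relations; this is the textbook form written in this study's symbols ([B], δ11B), not an equation reproduced from the source:

```latex
% Standard binary mixing for an element concentration and its isotope ratio;
% f is the mass fraction of assimilant A mixed into primitive melt P.
\begin{align}
[\mathrm{B}]_{\mathrm{mix}} &= f\,[\mathrm{B}]_{A} + (1-f)\,[\mathrm{B}]_{P} \\
\delta^{11}\mathrm{B}_{\mathrm{mix}} &=
  \frac{f\,[\mathrm{B}]_{A}\,\delta^{11}\mathrm{B}_{A}
      + (1-f)\,[\mathrm{B}]_{P}\,\delta^{11}\mathrm{B}_{P}}
       {[\mathrm{B}]_{\mathrm{mix}}}
\end{align}
```

Because the two endmembers have different B concentrations, δ11B_mix is not linear in f, which is what produces the curved trajectories and the unevenly spaced 5% assimilation crosses in Fig. 9.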
Open source tools for management and archiving of digital microscopy data to allow integration with patient pathology and treatment information

Virtual microscopy includes digitisation of histology slides and the use of computer technologies for complex investigation of diseases such as cancer. However, automated image analysis, or website publishing of such digital images, is hampered by their large file sizes. We have developed two Java based open source tools: Snapshot Creator and NDPI-Splitter. Snapshot Creator converts a portion of a large digital slide into a desired quality JPEG image. The image is linked to the patient's clinical and treatment information in a customised open source cancer data management software (Caisis) in use at the Australian Breast Cancer Tissue Bank (ABCTB) and then published on the ABCTB website www.abctb.org.au using Deep Zoom open source technology. Using the ABCTB online search engine, digital images can be searched by defining various criteria such as cancer type or biomarkers expressed. NDPI-Splitter splits a large image file into smaller sections of TIFF images so that they can be easily analysed by image analysis software such as Metamorph or MATLAB. NDPI-Splitter also has the capacity to filter out empty images. Snapshot Creator and NDPI-Splitter are novel open source Java tools. They convert digital slides into files of smaller size for further processing. In conjunction with other open source tools such as Deep Zoom and Caisis, this suite of tools is used for the management and archiving of digital microscopy images, enabling digitised images to be explored and zoomed online. Our online image repository also has the capacity to be used as a teaching resource. These tools also enable large files to be sectioned for image analysis.

Background

Over the past decade there has been a marked increase in the use of virtual microscopy. Digital slides offer many benefits over traditional microscopy, such as ease of access, archiving, annotation and sharing. Automatic identification and percentage calculation of malignant regions across hundreds of archived slides has become possible through the use of data mining analysis tools [1-3]. Multiple digital slide images can be opened and analysed at the same time: for example, Hematoxylin and eosin (H&E) and Periodic acid-Schiff (PAS) stained slides can be compared on the same screen, which is not possible in traditional microscopy. Because an unlimited number of users can examine specimens at the same time, independent of access time, many institutions have started teaching virtual microscopy as part of their regular histology courses, while others are considering moving in this direction [2,4,5]. The advantages of digitizing pathology slides are counterbalanced by the very large file sizes that are generated. A typical slide scanned at 400x magnification (0.25 μm/pixel) can be as large as 5 gigabytes, or even greater at higher resolutions. Such large file sizes hamper the downloading, viewing and analysis of digital slide images. Proprietary image viewers such as Hamamatsu's NDP.view or Aperio's ImageScope only allow the user to take manual snapshots of the image being viewed, thereby limiting the maximum resolution to the resolution of the screen. For example, if the user's display resolution is set at 1680x1050, then the maximum resolution of a snapshot would be about 1.8 megapixels.
This is insufficient for snapshots to be published as zoomable slides on the website, which require snapshots at a resolution of 45 megapixels or more. Similarly, there is no tool able to split the digital slides scanned by the Hamamatsu NanoZoomer into smaller sections. Therefore, this study had two objectives: firstly, to enable the publishing of snapshots of virtual slides on a tissue bank website and the building of a searchable digital microscopy database; and secondly, to produce smaller images from large virtual slides to enable easy analysis and handling by analysis software such as Metamorph (Molecular Devices, USA) or MATLAB (MathWorks, USA). In order to achieve these objectives, we have developed two open source tools: 'Snapshot Creator' and 'NDPI-Splitter'.

Implementation

The tools are designed for digital images obtained on the Hamamatsu NanoZoomer Digital Pathology (NDP) System (Hamamatsu Photonics K.K., Japan), in their proprietary NDPI file format. NDPI-Splitter and Snapshot Creator are developed in Java using the Standard Widget Toolkit (SWT); however, they are Windows dependent because they use the Hamamatsu SDK, available from Hamamatsu under their licensing agreement, for manipulating NDPI files. They also use JAI 1.1.3, available from http://ndpi-splitter.googlecode.com/files/jai-1_1_3-lib-windows-i586-jre.exe, and JAI Image IO 1.1, available from http://ndpi-splitter.googlecode.com/files/jai_imageio-1_1-lib-windows-i586-jre.exe. Apache Ant (http://ant.apache.org/) is used to build the projects.

Software architecture

Both NDPI-Splitter and Snapshot Creator use the same underlying Java classes to interact with the proprietary NDPI file format. These classes provide a Java wrapper around the Hamamatsu Software Development Kit (SDK) using Java Native Access (JNA) (Figure 1). The SDK provides low-level access to the image. The Java wrapper classes provide an easy to use Application Programming Interface (API) which includes operations to find out image dimensions and request portions of the image. NDPI-Splitter is a Java Swing based graphical user interface (GUI) application. It uses the library classes described above to determine the size and dimensions of the image, then uses this information to calculate how to split up the image. It also includes a module to perform the filtering of "empty" images. Snapshot Creator is a Windows batch file with supporting Java classes. The Java classes use the library classes described above to determine the size and dimensions of the image, and to extract the required size of the image. The batch file then uses the Deep Zoom converter tool to prepare the JPEG image for viewing in Deep Zoom.

Deep Zoom

To facilitate panning and zooming of images on our website, we have used the Microsoft Deep Zoom [6] library. The JPEG converted images produced by Snapshot Creator are fed through the Deep Zoom converter tool to create tiles of images at various resolutions. Deep Zoomed images are then published on the website and linked to our customised version of the Caisis database [7]. Deep Zoom is acknowledged to be one of the most effective zooming technologies in current use [6]. A Deep Zoom implementation is available as either a JavaScript library or a Silverlight 3.0 component. We have chosen the JavaScript version to eliminate the need for end users to install Silverlight in order to view the images. Zooming in loads higher-resolution image tiles saved as separate JPEG files, which provides a smooth transition when switching between different levels of resolution (Figure 2).
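To make the tiling concrete, the following is a minimal sketch of the tile-pyramid arithmetic the Deep Zoom image format is built on (each level halves the previous level's dimensions, down to 1x1 pixel); the image dimensions and tile size are illustrative values, not figures taken from the ABCTB pipeline:

```java
// Sketch of the tile pyramid used by Deep Zoom, based on the published
// Deep Zoom image format rather than on Snapshot Creator's own code.
public class DeepZoomPyramid {
    public static void main(String[] args) {
        int width = 100_000, height = 80_000; // a plausible 400x whole-slide scan
        int tileSize = 256;                   // Deep Zoom's default tile edge

        // The highest level is the smallest power of two covering the image.
        int maxLevel = (int) Math.ceil(Math.log(Math.max(width, height)) / Math.log(2));

        for (int level = maxLevel; level >= 0; level--) {
            // Each level halves the previous level's dimensions (rounding up).
            double scale = Math.pow(2, maxLevel - level);
            int w = (int) Math.ceil(width / scale);
            int h = (int) Math.ceil(height / scale);
            int cols = (int) Math.ceil(w / (double) tileSize);
            int rows = (int) Math.ceil(h / (double) tileSize);
            System.out.printf("level %2d: %6d x %6d px -> %d tiles%n",
                    level, w, h, cols * rows);
        }
    }
}
```

Because each level's tiles are written into their own folder, the viewer only ever fetches the handful of small JPEG tiles visible at the current zoom, rather than the full-resolution image.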
Results

Although Snapshot Creator and NDPI-Splitter share common libraries to handle the manipulation of proprietary NDPI files, they are used in different contexts.

Snapshot Creator

Snapshot Creator is a Windows batch file that performs a number of operations in a set order, and the resultant image files are moved into designated folders. The NanoZoomer scanner is programmed to save the images in the 'NDPI-New' folder (Figure 3), from where they are processed overnight and moved to the 'NDPI-Processed' folder. If any errors occur, the images are moved to the 'NDPI-Failed' folder. The scanned images are given the same names as their respective slide identifiers in the ABCTB database. The middle section of the image is captured from the processed NDPI file as a JPEG snapshot, at a magnification level defined in the configuration file ('snapshot-creator.properties') of the software. The ABCTB logo is added as a watermark at the bottom of the JPEG snapshot. Snapshot Creator then links the image to the respective specimen in the Caisis database. Once the JPEG snapshot has been successfully linked to the database, the images are moved into the 'JPEG-Processed' folder. Successfully linked images are run through the Deep Zoom converter tool, which creates tiles of the image at different resolutions. These are published to a directory visible to the website. If linking to the specimen fails, the images are moved to the 'JPEG-Failed' folder and an automatic email is sent to the database administrator. Failed images are reviewed manually; if the failure was due to an incorrect filename, the file is renamed and put back into the 'JPEG Snapshot Processing' folder to be processed the next night. All folder paths, database locations, magnifications and JPEG quality are configurable through the snapshot-creator.properties file. The process is diagrammatically explained in Figure 3.

Integration with Caisis

After the successful creation of JPEG images from the NDPI files, Snapshot Creator links the snapshots to the database using the filename of the image. The filename is an identifier of the slide in the ABCTB customised open source cancer-research database, Caisis [7], wherein histology slides are recorded as specimens. A stored procedure updates the matched field of the Specimens table. Snapshot Creator can be adapted to other SQL databases, as the database location is configurable in the configuration file (snapshot-creator.properties); however, the SQL stored procedures have to be re-written to match the database structure of the prospective system. A fully customised copy of Caisis, in use by the ABCTB, can be requested from the corresponding author. Source code of Caisis and our tools is available under the GNU General Public Licence [8].
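As an illustration of the snapshot step described above, the sketch below crops the middle quarter of a slide region and stamps a watermark along its bottom edge. It assumes the slide region has already been read through the SDK wrapper into a BufferedImage, and the file names are placeholders rather than the tool's actual defaults:

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class SnapshotStep {

    // Crop the middle quarter (half the width x half the height) of the slide,
    // the region most likely to contain the lesion.
    static BufferedImage middleSnapshot(BufferedImage slide) {
        int w = slide.getWidth(), h = slide.getHeight();
        BufferedImage out = new BufferedImage(w / 2, h / 2, BufferedImage.TYPE_INT_RGB);
        Graphics2D g = out.createGraphics();
        g.drawImage(slide, -w / 4, -h / 4, null); // shift so the centre lands at (0,0)
        g.dispose();
        return out;
    }

    // Stamp a logo along the bottom edge of the snapshot as a watermark.
    static void watermark(BufferedImage snap, BufferedImage logo) {
        Graphics2D g = snap.createGraphics();
        g.drawImage(logo, (snap.getWidth() - logo.getWidth()) / 2,
                snap.getHeight() - logo.getHeight(), null);
        g.dispose();
    }

    public static void main(String[] args) throws Exception {
        BufferedImage slide = ImageIO.read(new File("slide-region.png")); // placeholder input
        BufferedImage logo = ImageIO.read(new File("logo.png"));          // placeholder input
        BufferedImage snap = middleSnapshot(slide);
        watermark(snap, logo);
        // The output filename doubles as the specimen identifier used for linking.
        ImageIO.write(snap, "jpg", new File("12345.jpg"));
    }
}
```

In the real pipeline the crop would be requested directly from the NDPI file at the configured magnification, so the full-resolution slide never has to be held in memory.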
NDPI-Splitter

Like Snapshot Creator, NDPI-Splitter is a Windows application, and it is designed to split NDPI image files into smaller TIFF tiles, which can subsequently be imported into image analysis software for automatic processing. In "Step 1" a graphical user interface (GUI) prompts the user to select one or multiple files (Figure 4). Once the files are selected, the GUI displays the size and magnification of the image.

Figure 4. Step-1 of NDPI-Splitter: the graphical user interface (GUI) makes it easy to select file(s) for splitting.

On the "Step 2" screen (Figure 5) the user can choose the desired width and height of the sections in pixels and the magnification level. There is also an option to have empty images filtered out, based on two algorithms: i) intensity based, which is best suited to black/fluorescent images; and ii) compression based, which is best suited to images with white backgrounds. These algorithms are designed specifically for the type of images produced by the NanoZoomer. The intensity algorithm examines the intensity of each pixel in an image section to determine whether each pixel is close to black or not. Based on an average threshold and a whiteness threshold, the algorithm determines whether the section contains sufficient pixels of adequate intensity to be interesting. The compression based algorithm uses JPEG compression to compress the image section; if a high level of compression is achieved, the algorithm extrapolates that most of the image is white and therefore not of interest. The thresholds used by these algorithms can be customised to fine-tune the results. For each image a directory is created containing the TIFF tiles, using a grid-based, position-dependent naming convention for the newly generated TIFF files. For example, the tiles from the first logical row division of the image are named A1, A2, A3 and so on. If an empty tile algorithm is selected, the tiles that are determined by the algorithm to contain no digital pathology information are placed in a sub-directory called "empty_tiles". A log file called "log.txt" is created which records the calculation used to decide whether a tile is empty or not; the log file can be used to fine-tune the thresholds for assigning 'empty' status. The "emptiness" algorithms, which are based on a determination of the number of pixels diverging from a threshold background value, are not perfect, and the accuracy of the results varies for different images. For this reason the 'empty' tiles are retained for review, in case tiles that are not empty have been discarded. The application's default values can be configured through a properties file named NDPIsplitter.properties.

Discussion

Snapshot Creator and NDPI-Splitter are developed in Java and share common libraries to interact with the NDPI files. Technically they are very similar; however, they are used in two very different contexts. Snapshot Creator is used to publish lower resolution JPEG images on a tissue bank web search engine. NDPI-Splitter, on the other hand, produces files that can be imported into sophisticated image analysis packages such as Metamorph. A current limitation of image analysis software is its dependency on computer specifications: typically, large images fail to be processed because of insufficient memory. In addition, the significant time required for extraction of TIFF images from large image files is a major limitation in image analysis. The ability of NDPI-Splitter to split large files into smaller TIFF sections therefore enables their import into, and analysis by, image analysis software. Previous studies have reported different ways of performing automatic image analysis on virtual slides by identifying regions of interest [9-11]. For example, Romo et al. [10] employed colour, intensity, orientation and texture to calculate a relevance score against a manually selected region of interest. By contrast, NDPI-Splitter does not identify regions of interest; rather, it creates files that can be imported into automated image analysis pipelines.
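The two emptiness heuristics and the grid naming scheme described above can be sketched as follows; the thresholds are invented placeholders for illustration, not the tool's shipped defaults:

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.imageio.ImageIO;

public class EmptyTileCheck {

    // Intensity check (suited to dark/fluorescent slides): count pixels brighter
    // than a whiteness threshold and flag the tile as empty if too few qualify.
    static boolean emptyByIntensity(BufferedImage tile, int whiteness, double minFraction) {
        long bright = 0;
        for (int y = 0; y < tile.getHeight(); y++) {
            for (int x = 0; x < tile.getWidth(); x++) {
                int rgb = tile.getRGB(x, y);
                int lum = ((rgb >> 16 & 0xFF) + (rgb >> 8 & 0xFF) + (rgb & 0xFF)) / 3;
                if (lum > whiteness) bright++;
            }
        }
        double fraction = bright / (double) (tile.getWidth() * (long) tile.getHeight());
        return fraction < minFraction; // too few interesting pixels -> empty
    }

    // Compression check (suited to white-background slides): a tile that JPEG
    // compresses extremely well is mostly uniform background.
    static boolean emptyByCompression(BufferedImage tile, double maxBytesPerPixel) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        ImageIO.write(tile, "jpg", buf);
        double bytesPerPixel = buf.size() / (double) (tile.getWidth() * (long) tile.getHeight());
        return bytesPerPixel < maxBytesPerPixel;
    }

    // Grid-based tile naming: row letter plus column number (A1, A2, ... B1, B2, ...).
    static String tileName(int row, int col) {
        return (char) ('A' + row) + Integer.toString(col + 1);
    }
}
```

For example, a tile might be declared empty when fewer than 0.1% of its pixels exceed the intensity threshold; the calculations written to log.txt, as described above, are exactly what would guide tuning of such cut-offs.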
In addition, NDPI-Splitter can use its intensity- and compression-based algorithms to identify 'empty' regions that contain no or few informative pixels, a novel feature that streamlines the process of importing files for image analysis. This strategy reduces the requirement for manual review of tiles prior to image analysis and minimizes the input to downstream analysis, representing a significant time saving.

Figure 5. Step-2 of NDPI-Splitter: the width and height of each image section, and the magnification level, can be defined, and there is an option to filter out empty images.

Snapshot Creator produces a snapshot, representing one quarter of the full slide image, for publishing on the website, allowing researchers to use the online image search engine and image viewer to determine rapidly whether the biobank holds the material they are interested in. If researchers are interested in applying to the bank for full scanned slides and related datasets, they can do so based on their rapid search of the online images, and full applications are assessed by peer review [12]. Snapshot Creator takes the snapshot from the middle of the slide in order to maximise the chance of including the malignant region, and in approximately 95% of our published images the malignant section is indeed present. However, for slides where the cancer region is markedly offset, the cancer region can be missed. To avoid this, all newly published images are manually reviewed: if the malignant region is not in the snapshot, a manual snapshot of the image is taken and placed into the 'JPEG Snapshot Processing' folder (Figure 3) for processing the next night.

We have used Deep Zoom for publishing images on the website. Although there are a number of other options, such as the server-side software Spectrum (Aperio Inc.), SlidePath (Digital Pathology Solutions, Ireland) or NDP.Serve (Hamamatsu, Japan), these server-side proprietary products are very expensive. In addition, keeping full-sized images on a storage location accessible from a webserver may not be desirable, due to the expensive nature of such storage systems. As an alternative to Deep Zoom, Lien et al. [13] developed a web-based solution for viewing large microscopic images derived from the Aperio ScanScope; however, the availability of such code, and maintenance of its currency, could be a limitation to its widespread use. An open source solution such as Deep Zoom from Microsoft is more widely accessible, and it is more likely to be maintained in the future. Deep Zoom provides the option of implementation via either Silverlight or JavaScript. We have used the JavaScript version because all modern browsers have built-in JavaScript support, whereas most web browsers do not have the Silverlight plugin installed. Zoomify EZ is another alternative to Deep Zoom; however, it requires the Flash plugin. Zoomify and Deep Zoom use the same underlying algorithm, in which images are cut into small sections at different resolutions and saved into a logical folder structure.

Caisis is an open source cancer research database, with built-in fields for various cancers such as adrenal, bladder, colon, kidney, penile, prostate, testicular, breast, urological and pancreatic cancer, and the addition of more diseases or new fields is easily achievable. We have customised Caisis to link snapshots and virtual slides derived from the Snapshot Creator and NDPI-Splitter tools.
Using Caisis, images can be searched for based on patient history, treatment or biomarkers, and relevant images can then be easily identified and sent to researchers. Snapshot Creator and NDPI-Splitter, coupled with Deep Zoom and the customised Caisis database, therefore provide complete management of virtual images. Other researchers have also indicated the future development of such tools [14], so our open source tools provide the research community with an alternative to in-house development.

In summary, as virtual microscopy moves into the mainstream of diagnostic pathology, teaching and research [15], the development of open source tools that manage, catalogue and process virtual slides is needed. A web search engine holding digitised images can be used in teaching environments to illustrate normal and abnormal cell structures of different cancer types, such as invasive or in situ cancer, and is broadly available for research and clinical pathology review. NDPI-Splitter, Snapshot Creator, Caisis and Deep Zoom are therefore open source tools that provide the ability to make greater use of digital images and so broaden the range of applications for tissue bank images.

Availability and requirements

Project name: NDPI-Splitter
Project home page: http://code.google.com/p/NDPI-Splitter/ (a fully customised copy of Caisis, in use by the ABCTB, can be requested from the corresponding author)
Operating system(s): Windows
Programming language: Java
Requirements: Java, Hamamatsu SDK, JAI 1.1.3, JAI Image IO 1.1, Ant, Deep Zoom
License: GNU GPL version 3 [8]
Geochemical and Geochronological Constraints on a Granitoid Containing the Largest Indosinian Tungsten (W) Deposit in South China (SC): Petrogenesis and Implications

The Chuankou tungsten (W) ore field, with an estimated WO3 reserve exceeding 300,000 tonnes, is so far the largest Indosinian (Triassic) granite-related W ore field in South China. However, the precise emplacement ages and sources of its granitoids, and their relationship with W mineralization, remain poorly constrained. Trace elements within the zircons and whole-rock geochemistry yielded evidence of the close relationship between W mineralization and the G-1 and G-2 granitoids of the Chuankou ore field. The batholith of the Chuankou ore field formed 20-10 Ma later than the peak age of the collisional orogeny, in a post-collisional setting.

Introduction

South China (SC) is renowned for its extensive magmatism and the giant ore deposit clusters of W, Sn, Mo, Bi, Pb, Zn, Sb, U, Be, Nb, Ta, and REEs of the Yanshanian period [1-5]. These ore deposits host more than 90% of China's W resources and over 56% of global W resources [1-3]. Extensive research has been carried out on Yanshanian W mineralization and related igneous rocks using high-precision geochronological data [2,6-16]. In contrast, the Indosinian igneous rocks and W deposits have received much less attention, since they are small in size and host only minor U, Nb, and Ta deposits [17-19]. Recently, reports on Indosinian W-Sn mineralization (the Miao'ershan W-Mo deposit, Hehuaping Sn deposit, Xiane'tang Sn deposit, Xitian Sn deposit, Nanyangtian W-Mo deposit, and Qingshan W deposit) have come to the forefront [2,12,20-24] (Figure 1b).

Previous research has identified close genetic relationships between W deposits and granitoids in the South China block (SCB). Numerous studies have shown that W-bearing granitoids generally present S- and/or A-type granitoid affinities and are enriched in SiO2 and volatiles (e.g., Li and F) [25,26]. Recently, Zhang et al. [27] and Jiang et al. [28] confirmed that W-bearing granitoids are highly fractionated I-type granites, based on investigations of Yanshanian W deposits from Jiangxi Province and Guangdong Province. However, Huang et al. [29] proposed that the W-bearing granitoids of the Indosinian Yuntoujie W deposits have obvious highly fractionated S-type granite affinities. Therefore, further research is needed to resolve whether the W-bearing granitoids are highly fractionated I-type or S-type.

The Chuankou W ore field is situated in the middle of the SCB and has been identified as the largest W ore field of SC, with a total W metal content of over 300,000 tonnes (Figure 1a). Fourteen important W deposits are distributed in the ore field (Table 1). Bai et al. [30] suggested that the host rocks of the Chuankou W deposit formed at 170 to 160 Ma. Peng et al. [31] reported a zircon U-Pb age of around 220 Ma for the host rocks and a molybdenite Re-Os age of 221 Ma for the Sanjiaotan W deposit. However, up to now, the precise emplacement ages and sources of the granitoids from the Chuankou ore field, and their relationship with W mineralization, have been little studied and are still not well understood.
Ore Deposit Geology

The Proterozoic metamorphic basement exposed in the center of the ore field contains a metamorphic silty slate and an argillaceous slate of the Neoproterozoic Wuqiangxi Formation of the Banxi Group. These are the most important host rocks of the quartz vein-type wolframite. The Paleozoic strata are exposed at the margin of the ore field and unconformably cover the metamorphic basement. They comprise siliceous sedimentary breccia and shale of the Devonian Yanglinao Formation (D2y), shale of the Carboniferous Yanguan Formation (C1y), and the diluvial layer of the Quaternary. Among them, the siliceous sedimentary breccia of the Yanglinao Formation (D2y) has been confirmed as one of the wall rocks of the vein-type scheelite in the Yanglinao deposit (Figure 2).

The Chuankou W ore field is exposed in the core of the Chuankou uplift, which is composed of a series of anticlines. The Chuankou uplift belongs to the eastward extension of the Qiyangshan zigzag-shaped structural ridge axis. Two groups of folds are developed: (1) the early E-W-direction fold belt and (2) the late N-S-direction fold belt. Fault structures in the ore field are oriented mainly in the NNW and NEE directions. The ENE-direction fault clusters are early faults that occur near the internal contact zone between the granitoids and the surrounding rocks. The NNW-direction fault clusters are deep normal faults, which control the occurrence, orientation and enrichment of the ore bodies (Figure 2).

Granitoids of the Chuankou ore field are exposed in the core of the Chuankou uplift over an area of 15 km². According to the fieldwork in this research, four main magmatic stages can be observed (Figure 2). The emplacement sequence is biotite monzogranite (G-1) → two-mica monzogranite (G-2) → fine-grained granite (G-3) → granite porphyry (G-4) (Figure 3a-d).
(3) Fine-grained granite (G-3) is widely exposed in the region and intrudes into G-2 and the metamorphic slate as veins about 30-50 cm in width. G-3 is dark to gray in color and has a fine-grained texture. The mineral assemblage includes quartz, plagioclase, K-feldspar, and muscovite. Generally, the mineral crystals of G-3 are smaller than 0.5 mm. Slight alteration developed in the K-feldspar crystals (Figure 4b,c,l).

(4) Granite porphyry (G-4) is only exposed on the north side of the Chishui Village roads. It occurs as a vein and intrudes into G-2 with a width of 15-20 m. G-4 exhibits a massive structure and porphyritic texture. The phenocrysts (approximately 30 vol.% of the whole rock) are 0.5-2 mm in size and composed of quartz (30 vol.% of total phenocrysts), potassium feldspar (60 vol.% of total phenocrysts), and a small amount of plagioclase and muscovite (less than 10 vol.%). The matrix is microgranular and occupies 70 vol.% of the rock (Figure 4f,k).

Alteration and Mineralization

Field observation shows that hydrothermal alteration occurred in the contact zone between the granitoids and the Neoproterozoic strata and in its adjacent area. The alteration types include silicification, greisenization, potash feldspathization, tourmalinization, carbonatization, and argillization. Greisenization and silicification are the main high-temperature hydrothermal alterations and are widely developed at the top of the contact zones between G-2 and the Neoproterozoic strata. In addition, greisenization occurred intensely along the margins of barren and fertile quartz veins. The interiors of the veins developed potassium feldspar, tourmaline, and calcite.
The mineralization types of the Chuankou ore field include altered granite-type scheelite and molybdenite, quartz vein-type wolframite, and veinlet-disseminated-type scheelite. Among them, the altered granite-type scheelite and molybdenite occur in the top greisenization zone of the two-mica monzogranites (Maowan, Hubeichong, and Baishui deposits) and generally have low ore grades and limited spatial scales. Quartz vein-type wolframite occurs in the fault zone above the granitoids (Nanwan and Huanglong deposits); the ore-bearing veins trend NNE, with angles of inclination of 70° to 80°. Veinlet-disseminated scheelite has economic value only in the Yanglinao deposit, where it occurs in the siliceous breccia belt (D2y) as a mesh vein structure.

(3) The low- to middle-temperature hydrothermal period. The quartz and sulfide stage shows no obvious W mineralization; the mineral assemblage comprises chalcopyrite, sphalerite, pyrite, and arsenopyrite (Figure 6i-l). (4) The low-temperature hydrothermal period. Low-temperature minerals (fluorite and calcite) and a small amount of sulfide (sphalerite and galena) are the dominant minerals of this period.

Geochronology

Zircon grains were separated for U-Pb age dating at the Langfang Regional Geology and Mineral Resources Survey Institute. The bulk samples were crushed to 60-80 mesh size, and zircons were separated using gravity and electromagnetic techniques and hand-picked under a binocular microscope. The samples were then mounted in epoxy resin, smoothed and polished, and finally gold coated. The zircons were examined using transmitted and reflected light and cathodoluminescence (CL) microscopy. Zircon U-Pb dating was performed at the Institute of Mineral Resources, CAGS, Beijing, using a Finnigan Neptune inductively coupled plasma mass spectrometer (MC-ICP-MS) with a New Wave UP213 laser-ablation system. Helium was used as the carrier gas, and the beam diameter was 30 µm with a 10 Hz repetition rate and a laser power of 2.5 J/cm². Eight ion counters were used to simultaneously receive the 238U, 235U, 232Th, 208Pb, 207Pb, 206Pb, 204Pb, and 202Hg signals, whereas data for 208Pb, 232Th, 235U, and 238U were collected on a Faraday cup. Zircon GJ-1 was used as the standard, and Plešovice zircon was used to optimize the mass spectrometer. U, Th, and Pb concentrations were calibrated using 29Si as an internal standard and zircon M127 (U: 923 ppm; Th: 439 ppm; Th/U: 0.4750) as an external standard [42]. 207Pb/206Pb and 206Pb/238U ratios were calculated using the ICP-MS DataCal 4.3 program. Common Pb was not corrected for because of the high 206Pb/204Pb.
Abnormally high 204Pb data were deleted. The Plešovice zircon was dated as an unknown and yielded a weighted mean 206Pb/238U age of 337 ± 2 Ma (2SD, n = 12), in good agreement with the recommended 206Pb/238U age of 337.13 ± 0.37 Ma (2SD) [43]. Age calculations were performed, and concordia diagrams generated, using the Isoplot/Ex 3.0 software [44].

Geochemistry

Whole-rock major, trace, and rare earth element concentrations were analyzed at the National Geological Experiment Test Center, Beijing. Whole-rock major elements were analyzed using a plasma spectrometer (PE8300). All results were normalized against the Chinese rock reference standard JY/T015-1996 [45]. The analytical uncertainties were less than ±2%.

Sr-Nd Isotopes

Fresh samples were ground with an agate mill, and the powders were spiked with mixed isotope tracers, dissolved in Teflon capsules with HF + HNO3 acid, and separated by conventional cation-exchange techniques. The isotopic measurements were performed on a VG-354 mass spectrometer at the Institute of Geology and Geophysics, Chinese Academy of Sciences [46]. The mass fractionation corrections for the Sr and Nd isotopic ratios were based on 86Sr/88Sr = 0.1194 and 146Nd/144Nd = 0.7219. Repeat analyses yielded an 87Sr/86Sr ratio of 0.71023 ± 0.00006 for the NBS-987 Sr standard and a 143Nd/144Nd ratio of 0.511845 ± 0.000012 for the La Jolla standard. Detailed descriptions of the analytical techniques can be found in [47] and references therein.

(2) G-1: The length/width ratios of the zircons are close to 1-2, and their sizes range from 100 to 150 µm (Figure 7). The U content ranges from 249.1 to 1094.1 ppm, the Pb content from 12.1 to 124 ppm, and the Th content from 132.1 to 1072 ppm. Th/U ratios are from 0.23 to 1.81, and 206Pb/238U ages range from 206.6 ± 6.3 to 232.9 ± 7.1 Ma. The concordia age of the zircon grains is 230.8 ± 1.6 Ma (MSWD = 0.31).

(3) G-3: Zircons are columnar crystals with grain sizes ranging from 50 to 150 µm, typical of acidic magmatic zircons, with Th/U ratios of 0.12-2.07, Pb contents from 17.9 to 265.71 ppm, Th contents from 54.46 to 1425.03 ppm, and U contents from 295.74 to 12,287.53 ppm. The obtained 206Pb/238U ages reveal two notably different groups: the first group ranges from 200.5 ± 3.51 to 203.9 ± 3.55 Ma with a concordia age of 203.1 ± 1.6 Ma (MSWD = 7.2); the 206Pb/238U ages of the second group range from 218.2 ± 4.11 to 226.8 ± 4.05 Ma, with a concordia age of 224.8 ± 1.6 Ma.
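For readers less familiar with the quantities quoted above, the 206Pb/238U date, the weighted mean, and the MSWD follow the standard definitions below; these are textbook relations, not formulas reproduced from this paper:

```latex
% Standard U-Pb age and weighted-mean statistics (textbook forms).
\begin{align}
t &= \frac{1}{\lambda_{238}}
     \ln\!\left(1 + \frac{^{206}\mathrm{Pb}^{*}}{^{238}\mathrm{U}}\right),
     \qquad \lambda_{238} = 1.55125\times10^{-10}\ \mathrm{yr}^{-1} \\
\bar{t} &= \frac{\sum_i t_i/\sigma_i^{2}}{\sum_i 1/\sigma_i^{2}},
     \qquad
\mathrm{MSWD} = \frac{1}{n-1}\sum_i \frac{(t_i-\bar{t}\,)^{2}}{\sigma_i^{2}}
\end{align}
```

An MSWD near 1 (e.g., the 0.31 quoted for G-1) indicates scatter consistent with the analytical uncertainties, while the value of 7.2 for the younger G-3 group signals scatter in excess of those uncertainties.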
Geochemistry

Thirteen samples from the Chuankou ore field were analyzed, and the results are listed in Table 3. G-1 is characterized by high SiO2 and high Al2O3 (12.91-16.47 wt.%), with low Na2O (0.12-3.54 wt.%) and ALK (3.52-6.91 wt.%) contents. G-4 has the lowest Na2O (0.08 wt.%) and MnO (0.04 wt.%) contents and the highest K2O contents (4.77-4.79 wt.%). In the SiO2 versus ALK diagram, the granitoids plot in the subalkaline granite field (Figure 9a). In the SiO2 vs. K2O diagram and the Si vs. ALK-Ca diagram, all the samples plot in the high-K calc-alkaline field (Figure 9b,d). In the A/NK-A/CNK diagram, the samples plot in the peraluminous field, implying that the granitoids of the Chuankou ore field belong to the high-K calc-alkaline and peraluminous series (Figure 9c).

The δEu values are low (0.06-0.23) (Figure 10a), and LaN/YbN values range from 0.93 to 2.39. Rb, Hf, and U are enriched while Ba, Sr, and Ti are depleted (Figure 11a). Rb/Sr ratios vary from 14.26 to 34.48, K/Rb ratios from 42.96 to 73.83, and Rb/Ba ratios from 6.04 to 7.88; Zr + Nb + Y + Ce values range from 97.9 to 155.75 ppm. The chondrite-normalized REE patterns of G-3 are similar to those of G-2: the δEu values of G-3 range from 0.02 to 0.45, and LaN/YbN values from 0.75 to 6.87 (Figure 10c). Rb, Hf, and Th are enriched while Ba, Sr, P, and Ti are depleted (Figure 11c). For G-4 (Figure 10d), Zr, Hf, Rb, Th, and U are enriched and Ba, Sr, P, and Ti are depleted (Figure 11d); Rb/Sr ratios vary from 13.06 to 13.20, K/Rb ratios from 87.19 to 87.83, and Rb/Ba ratios from 1.32 to 1.34, with Zr + Nb + Y + Ce values from 281.59 to 284.42 ppm.

Zircon Geochemistry and Ce4+/Ce3+ Ratios

The trace element compositions of zircon grains from the granitoids in the Chuankou ore field are shown in Table 4. Most of the Ti, Sr, and Ta contents of the zircon grains are close to the ranges proposed by Hoskin and Schaltegger [49] (Nb: up to 62 ppm; Sr ≤ 3 ppm; Ti: up to 75 ppm) and can be interpreted as normal magmatic zircon with various microscopic mineral inclusions, such as rutile and ferrotapiolite [50]. The ΣREE contents of G-1 range from 787.62 to 2080.54 ppm, those of G-2 from 935.37 to 11,137.50 ppm, and those of G-3 from 1387.73 to 4694.70 ppm. The chondrite-normalized REE patterns reveal an obvious enrichment of HREEs and depletion of LREEs, consistent with a magmatic origin [51]. All samples commonly show positive Ce anomalies and negative Eu anomalies in the zircons (Figure 12). However, there is an obvious difference in the degree of the Ce and Eu anomalies: G-1 and G-2 show more negative Eu anomalies and more positive Ce anomalies (δEu = 0.03-0.28; δCe = 1.56-189.58) than G-3 (δEu = 0.12-0.47; δCe = 1.04-8.81). Despite this difference, all zircon grains in the study appear to be magmatic in origin and show no geochemical evidence of metamorphic or hydrothermal overprinting or radiation-induced damage. Ballard et al. [52] proposed a detailed formula for calculating the Ce4+/Ce3+ ratio, in which the zircon-melt partition coefficients for Ce3+ and Ce4+ are estimated using the model described by Ballard [31,56].
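The paper cites this formulation without reproducing it; in its commonly published form (written here from the general literature, using the Ce contents of zircon and coexisting melt and the two partition coefficients), the ratio is:

```latex
% Ballard et al. (2002)-style zircon Ce oxybarometer, as commonly written;
% reproduced from the general literature since the source omits the equation.
\begin{equation}
\left(\frac{\mathrm{Ce}^{4+}}{\mathrm{Ce}^{3+}}\right)_{\mathrm{melt}}
= \frac{\mathrm{Ce}_{\mathrm{melt}}
        - \mathrm{Ce}_{\mathrm{zircon}}/D_{\mathrm{Ce}^{3+}}^{\,\mathrm{zircon/melt}}}
       {\mathrm{Ce}_{\mathrm{zircon}}/D_{\mathrm{Ce}^{4+}}^{\,\mathrm{zircon/melt}}
        - \mathrm{Ce}_{\mathrm{melt}}}
\end{equation}
```

The limiting behaviour is the useful check: a fully reduced melt (all Ce3+) drives the numerator to zero, while a fully oxidized melt drives the denominator to zero, so higher ratios record higher oxygen fugacity.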
Due to the absence of detailed field observations and robust constraints on geochronology, the magmatic process and evolution of the granitoids from the Chuankou ore field have remained unclear. In this study, zircon U-Pb geochronological analyses of the four main phases (G-1 to G-4) were carried out. G-1 is exposed at depth in the Maowan and Tangjiangyuan deposits; its formation age is 230.8 ± 1.6 Ma (MSWD = 0.31). G-2 is the dominant phase, representing approximately 70% of the granitoids in size; its formation age is 222.1 ± 0.56 Ma, similar within error to published results of 223.1-224.6 Ma [57]. G-3 intruded into G-2 as dykes, and two groups of concordia ages can be identified: the first group, 224.8 ± 1.6 Ma (MSWD = 0.047), is consistent with G-2 and suggests that these zircons might be xenocrysts; the second group, 203.1 ± 1.6 Ma (MSWD = 7.2), represents the formation age. G-4 intruded into G-2 as larger veins with widths from 0.5 to 3 m; the field observations and analytical results indicate that G-4 formed at 135.5 ± 2.4 Ma (MSWD = 1.3). In summary, the Chuankou ore field experienced at least four stages of magmatism, with the emplacement sequence G-1 (phase I), G-2 (phase II), G-3 (phase III), and G-4 (phase IV).

Genesis Type

The granitoids of the Chuankou ore field are peraluminous, reflected in both the major element ratios (A/CNK ranging from 1.110 to 4.238) and the secondary and accessory minerals (spessartine, muscovite, biotite and tourmaline). The granitoids are commonly enriched in Rb, Zr, Hf, Th, and U, whereas they are depleted in Ba, Sr, P, and Ti. In addition, the total alkali content ranges from 3.57 to 7.53 wt.%, FeOT/MgO ratios range from 2.40 to 13.98, and Zr + Nb + Ce + Y values range from 97.9 to 284.42 ppm. These indexes are significantly lower than the global average for A-type granite (350 ppm) [58]. In the Zr + Nb + Ce + Y vs. ALK and Zr + Nb + Ce + Y vs. FeOT/MgO diagrams, the samples plot in the FG field, suggesting that the granitoids from the Chuankou ore field have an affinity with fractionated I/S-type granite (Figure 13a,b). Finally, in the A (Al-Na-K)-C (Ca)-F (Fe2+ + Mg) ternary diagram, the samples plot in the S-type granite field, also indicating an S-type granite affinity (Figure 14).
Origin

In this study, the granitoids from the Chuankou ore field are characterized by high 87Rb/86Sr ratios (varying from 13.5591 to 152.3436) and extremely high 87Sr/86Sr ratios (from 0.751712 to 1.292048). The initial 87Sr/86Sr values range from 0.67995 to 0.85226, which is beyond the range of normal continental crust and primitive mantle. Thus, these data cannot be used to trace the source of the magma, owing to hydrothermal alteration during the W mineralization process. Conversely, the activities of Sm and Nd and the relevant isotopic compositions remain unchanged during evolution and alteration, so the Sm-Nd isotopic composition can be considered a reasonable indicator of the source region. The εNd(t) values of the granitoids from the Chuankou ore field are -10.77 for G-1, -7.74 to -9.3 for G-2, and -6.53 to -10.07 for G-3. The samples plot in the Cathaysia basement field in the T(Ma) vs. εNd(t) diagram (Figure 15b). The calculated TDM2 and εNd(t) values (2090 Ma for G-1, 1684 to 1764 Ma for G-2, and 1471 to 1669 Ma for G-3) reveal a crustal origin by partial melting. G-1 was derived from the metamorphic basement in the Paleoproterozoic Era, while G-2 and G-3 were of homogeneous origin in the Mesoproterozoic Era. Significantly negative correlations of the formation ages with TDM2 (2090 Ma → 1684 to 1764 Ma → 1471 to 1669 Ma) and εNd(t) (-10.77 → -9.3 to -7.74 → -10.07 to -6.53) indicate that the proportion of crustal components in the source area decreased gradually, whereas the mantle contribution shows an obvious increasing trend. In the AMF vs. CMF diagram, the granitoids plot near the region of metapelitic sources and metagraywackes, far from the metamorphic basalt and tonalite fields. This indicates that the source rocks of the granitoids from the Chuankou ore field are mainly crystalline schists and gneisses formed from metamorphosed Proterozoic mudstones and metagraywackes (Figure 15a).
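The εNd(t) notation used throughout follows the usual CHUR-normalized definition; this is the standard formula, not one stated in the source:

```latex
% Standard epsilon-Nd definition relative to CHUR at crystallization age t.
\begin{equation}
\varepsilon_{\mathrm{Nd}}(t) =
\left[
\frac{\left(^{143}\mathrm{Nd}/^{144}\mathrm{Nd}\right)_{\mathrm{sample}}(t)}
     {\left(^{143}\mathrm{Nd}/^{144}\mathrm{Nd}\right)_{\mathrm{CHUR}}(t)} - 1
\right] \times 10^{4}
\end{equation}
```

Negative values such as the -10.77 quoted for G-1 mean the melt's Nd is less radiogenic than the chondritic reference at the crystallization age, the signature of an old, enriched crustal source.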
Magmatic Process

During the granitic magmatism, Ti was mainly taken up by ilmenite, rutile, titanite, biotite and anatase, and the separation of Ti-bearing phases at relatively moderate to low temperatures would have led to a significant depletion of Ti, Nb, and Ta. Eu, Sr, and Ba were stably hosted by substitution into the K+ site in K-feldspar and/or the Ca2+ site in plagioclase, and P is the dominant component of apatite. The significant depletion of Sr, Ba, P, and Ti in the Chuankou granitoids therefore indicates obvious fractional crystallization of feldspar, biotite, Ti-bearing minerals, and apatite during the magmatic process [59]. In addition, the Eu/Eu* ratios, Rb/Sr ratios, Sr, and Ba can be used as markers of fractional crystallization. The correlations between Rb/Sr and Sr, Ba and Sr, and Eu/Eu* and Ba suggest that fractional crystallization of K-feldspar, plagioclase and biotite was the main genetic mechanism (Figure 16a-d). For the REEs (La and Yb), the carrier minerals include zircon, apatite, allanite, and monazite. The correlation between La and La/Yb suggests that the melt was constrained by fractional crystallization of allanite and monazite (Figure 16e). In addition, there are no obvious xenoliths (Proterozoic metamorphic slate) near the stratigraphic contact belt and no significant correlation between SiO2 content and εNd(t) values, implying that the felsic melt evolved by fractional crystallization rather than by an extensive assimilation-fractional crystallization (AFC) process (Figure 16f).
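The Eu/Eu* (δEu) values used above follow the conventional chondrite-normalized definition; the geometric-mean form below is the standard one, since the source does not spell out its normalization:

```latex
% Conventional Eu anomaly: measured Eu_N against the value interpolated
% between its chondrite-normalized neighbours Sm_N and Gd_N.
\begin{equation}
\mathrm{Eu}/\mathrm{Eu}^{*} = \frac{\mathrm{Eu}_{N}}{\sqrt{\mathrm{Sm}_{N}\,\mathrm{Gd}_{N}}}
\end{equation}
```

Values well below 1, such as the δEu of 0.02-0.45 reported here, record removal of Eu2+ into feldspar, which is why Eu/Eu* pairs naturally with Sr and Ba as an index of feldspar fractionation.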
Furthermore, the Zr + Nb + Y contents of the Chuankou complex vary from 75.57 to 187.05 ppm, and the Rb/Ba ratios range from 1.33 to 39.41. An obvious negative correlation is exhibited on the Zr + Nb + Y versus Rb/Ba diagram, coinciding with the Sandy Cope granite field and indicating behaviour typical of highly fractionated granites (Figure 17).

Relationships between Host Rocks and Tungsten Mineralization

There are three main substitution mechanisms by which scheelite incorporates REEs: (1) 2Ca2+ ↔ Na+ + REE3+, (2) Ca2+ + W6+ ↔ REE3+ + Nb5+, and (3) 3Ca2+ ↔ 2REE3+ + □ (vacancy) [60,61].
A comparative study between the REE patterns of G-1/G-2 from the Chuankou ore field and Sch-3 showed a high correlation [54]. In addition, the Sr isotopic composition (Isr) of G-1 (0.72109) is close to the intermediate composition of Sch-1 and Sch-3, which is derived from magmatic-hydrothermal conditions without significant fluid/rock interaction and fluid mixing. Moreover, G-1, G-2, and G-3 are highly fractionated S-type granites and contain W concentrations several to ten times higher than average crustal concentrations (1.9 ppm and 0.6 ppm, respectively [62]). This characteristic is very similar to the host rocks of the well-known Dahutang superlarge W deposits [63].

To date, the Chuankou W deposit has been identified as the largest Indosinian W deposit in the SCB and contains quartz vein-type, veinlet-type, and altered granite-type W ore bodies. Cai et al. obtained a formation age of 224.6 ± 1.31 Ma for the altered two-mica monzogranites [57], which are generally thought to be the host rocks of the disseminated wolframite and scheelite. The ore formation ages of the quartz vein-type mineralization range from 224 to 230 Ma [31,56,64]. These data are consistent with the 206Pb/238U ages of G-1 (230.8 ± 1.6 Ma) and G-2 (222-224 Ma). Field observations have also shown a close spatiotemporal relationship between G-1, G-2, and W mineralization. However, the ages of G-3 and G-4 are 203.1 ± 1.6 Ma and 135.5 ± 2.4 Ma (MSWD = 1.3), respectively; these intrusions were seemingly emplaced after W mineralization.

Systematic evidence indicates that the host rocks of the Chuankou W ore field were G-1 and G-2. However, how did W separate from the intrusions and become so concentrated in a limited spatial area? Generally, rutile is the main W-bearing mineral during the early stage of magmatic activity, while wolframite and scheelite dominate the later, magmatic to hydrothermal, stage. Because six-coordinated Ti4+ can be substituted by W6+, accompanied by a coupled substitution of Fe to maintain charge balance [65], W can be concentrated in large amounts in rutile and thus significantly depleted in the residual melt and fluid. However, the granitoids from the Chuankou ore field (G-1 and G-2) contain 0.26-0.35 wt.% MgO and 1.29-1.77 wt.% FeOT and belong to the normal ilmenite-series granites, indicating an obvious absence of rutile among the early crystallizing phases [63,66]. In addition, W is a lithophile element in the bulk silicate Earth (BSE), and multiple stages of partial melting and fractional crystallization would have strongly concentrated W in the late residual melt phase. Thus, the G-1 and G-2 granitoids had significant potential for W mineralization. Moreover, with increasing oxygen fugacity, the mineralization series Sn → W → Mo → Cu (Mo) → Cu (Au) develops in succession [67]. The occurrence of W mineralization can be attributed to reduced granitic magmas that typically belong to the ilmenite series [68,69]; a possible contribution from W4+ may occur only at the very lowest oxygen fugacities accessible to experimental methods in the melt [70-72]. Zircon is a common accessory mineral in intermediate-acid igneous rocks and is stable during later hydrothermal alteration and physicochemical processes.
Due to its similar ionic radius and electrovalence, Ce4+ is more easily incorporated into zircon crystals, occupying the site of Zr4+ under oxidizing conditions, than light rare earth ions such as Ce3+. Hence, zircon can be invoked as a tracer for the evaluation of relative oxygen fugacity based on its Ce4+/Ce3+ ratios. In this paper, the Ce4+/Ce3+ values were calculated as 0.33-93.28, which is much lower than those of the host rocks of well-known, large-scale porphyry Cu-Au deposits, such as Chuquicamata-El Abra [50], and of typical Cu-Au (Mo) deposits from the SCB, such as the Dabaoshan porphyry Mo deposit (Ce4+/Ce3+ = 356-1300; Li et al. [73]) and the Dexin porphyry Cu deposit (Ce4+/Ce3+ = 495-1922) [53]. In contrast, the Ce4+/Ce3+ ratios are closer to those of W- and Sn-bearing granitoids, such as the Guposhan, Qitianling, and Xuehuading granitoids, suggesting a significant metallogenic potential for W and Sn [69] (Figure 18).

Figure 18. The Ce4+/Ce3+ versus EuN/EuN* diagram. The data for the blue field (Porphyry Cu-Mo-Au) are from [49,70] and references therein; the orange field is from [74].
Blevin [75] carried out important work on the granites of the Lachlan fold belt and proposed parameters (∆Ox1 and ∆Ox2) to estimate the redox state of granite. The calculated results show that the redox state (∆Ox1) of G-1 ranges from 0.03 to 0.31, that of G-2 from 0.09 to 0.91, that of G-3 from 0.41 to 1.68, and that of G-4 is 0.35. The ∆Ox2 of G-1 ranges from −1.19 to −0.16, that of G-2 from −0.70 to 0.32, that of G-3 from −0.06 to 0.56, and that of G-4 is −0.13. Obviously, G-1 and most of G-2 had the lowest degree of oxidation. This condition provides an opportunity to remove substantial W from the magma to hydrothermal fluids. Indeed, the slightly higher values of ∆Ox1 and ∆Ox2 in G-3 and G-4 indicate that W would have remained in biotite or muscovite by substitution at the Al3+ and/or Ga3+ site instead of being expelled from the melt. Further investigation is needed into the relationship between the G-3 and G-4 granitoids and regional W mineralization.

Metallogenesis and Geodynamic Implications

During the early Middle Triassic, the intense collision and extensive metamorphism between the Indo-China block and the Sibumasu-Qiangtang block exerted far-reaching effects on the SCB [76,77]. In addition, the southeastward subduction and collision of the North China block (NCB) with the South China block (SCB) overlapped with these effects due to the closure of the Paleo-Tethys Ocean. The SCB experienced multidirectional compression and extensive shortening, accompanied by thickening of the continental lithosphere [78-82]. During the late Mesozoic, as the tectonic regime transformed from Paleo-Tethys-dominant to paleo-Pacific-dominant, the tectonic axis changed from the E-W direction to the NE-SW direction [40,83]. The tectonic regime is characterized by multiple stages of compression and extension, resulting in extensive magmatism and mineralization [9,39,84-86]. The Indosinian W deposits show a distinctive spatial distribution (Figure 1b): a possible "V"-shaped distribution model in the region indicates that the central belts of W deposits are relatively older than the others. The western and eastern parts have significantly younger ages than the central part, which may represent the reactivation of the Proterozoic Qin-Hang tectonic belt under the Indosinian collisional orogenic regime of SC.

Regional Sr-Nd isotopic compositions show that the εNd(t) values of Indosinian granitoids range from −14.4 to −8 [17,90]. The two-stage depleted mantle model ages (TDM2) of Indosinian granitoids range from 1.63 to 2.09 Ga [17,90]. In general, the TDM2 values better match the formation ages of the Paleoproterozoic metamorphic basement of the SCB [82]. On the other hand, Yanshanian TDM2 values range from 1.04 to 2.28 Ga, especially in Northeast Jiangxi. The Nanling area and the coastal zone of Fujian and Zhejiang Provinces show multiple belts of low TDM values (<1.6 Ga) and high εNd(t) values (>−9), which might match the Mesoproterozoic basement [38,91-93]. Numerous research data confirm that the main source of Yanshanian W mineralization was the Mesoproterozoic metamorphic basement, such as the Shuangqiaoshan group [81,94,95], which is abnormally enriched in W at about ten times the average crustal concentration (11.7 ppm for the Shuangqiaoshan group) [96]. The more ancient basement identified in this study suggests a relatively deeper derivation of the Indosinian W mineralization.
Many valuable insights have been reported regarding the tectonic mechanism of W mineralization in the SCB, and the consensus suggests that the large-scale Yanshanian W mineralization in the SCB was constrained closely by the paleo-Pacific plate regime, mainly involving the extension of the Shi-Hang belt [38], a mantle plume [7,97], back-arc extension and lithospheric thinning [98], and slab subduction [99,100]. However, a distinct dynamic mechanism was identified in which Indosinian magmatism and mineralization extended approximately east-west in a zone that formed under the extension of a post-collisional setting, which could have been linked to the closure effects of the ancient Tethys Ocean. This setting reflects a relatively "free" extension space within the overall compression regime [40,101]. Studies have recently revealed two dominant mineral assemblages and two stages of tectonic regimes in the Indosinian in SC [48,95]. G-1, G-2, and G-3 formed about 20-10 Ma later than the peak period of the orogeny triggered by the amalgamation of the SCB, the North China craton, and the Indo-China block. This reflects a post-collisional setting, which parallels the contemporaneous A-type granites in the SCB. In the late stage of the magmatic processes of G-1 and G-2, fertile magmatic fluid converged on the upper part of the granitoids and filled the internal fissures of the slate, forming extensive greisenization along with granite-type wolframite (Maowan, Wubeichong, and Baishui) and quartz vein-type wolframite (Huanglong, Nanwan, and Sanjiaotan) in the interior contact belt. The ore-forming fluid then migrated continuously upward into the interbedded limestone and shale of the Devonian Yanglinao formation (D2y). Adequate fluid-rock interactions and an abundant Ca2+ ion reservoir from the strata made it possible for large-scale disseminated and veinlet scheelite to form (Figure 19b).

Conclusions

(2) Granitoids from the Chuankou ore field have significantly high contents of Si and Al and low contents of alkalis, Fe, Mg, Mn, and Ca. The granites are commonly enriched in Rb, Zr, Hf, Th, and U but depleted in Ba, Sr, P, and Ti, indicating obvious highly fractionated S-type granite affinities. The Chuankou complex was derived from the partial melting of the Cathaysia basement and underwent significant fractionation of K-feldspar, plagioclase, biotite, Ti-bearing minerals (except rutile), zircon, apatite, allanite, and monazite.

(3) G-1 and G-2 showed a more reduced state than G-3 and even than typical host rocks of porphyry copper deposits, and were identified to have an obvious correlation with the W mineralization of the Chuankou ore field.
(4) The Indosinian W deposits were formed in a post-collisional setting triggered by the collisional orogeny of SC in the late Paleozoic to early Mesozoic. In contrast, the Yanshanian W deposits reflect strengthened crust-mantle interactions resulting from the multistage extension of the SCB caused by the westward subduction of the paleo-Pacific plate.
Feature Selection for Small Sample Sets with High Dimensional Data Using Heuristic Hybrid Approach

Feature selection can be decisive when analyzing high dimensional data, especially with a small number of samples. Feature extraction methods do not perform well under these conditions. With small sample sets and high dimensional data, exploring a large search space and learning from insufficient samples becomes extremely hard. As a result, neural networks and clustering algorithms perform poorly on this kind of data. In this paper, a novel hybrid feature selection technique is proposed, which can drastically reduce the number of features with an acceptable loss of prediction accuracy. The proposed approach operates in multiple stages, starting by removing irrelevant features with low discrimination power and then eliminating the ones with a low variation range. Afterward, among each set of features with high cross-correlation, a single feature that is strongly correlated with the output is kept. Finally, a Genetic Algorithm with a customized cost function is provided to select a small subset of the remaining features. To show the effectiveness of the proposed approach, we investigated two challenging case studies with sample set sizes of about 100 and more than 1000 features. The experimental results look promising, as they showed a decrease of more than 99% in the number of features, with a prediction accuracy of more than 92%.

INTRODUCTION

One of the challenges in data mining is high dimensional data analysis [1-7]. Having a small sample set adds to the difficulty of the problem. Feature selection can be an effective solution to this problem by removing noisy, irrelevant, and redundant features from a large feature set. Moreover, it is evident that with a smaller number of features, it is easier to avoid overfitting and obtain a more accurate classifier [1]. However, selecting an appropriate feature selection technique, if one exists, is not a straightforward task. When there is a large sample set, such that the number of samples is larger than the number of features, applying neural networks or multivariable regression analysis can lead to favorable results. The problem begins when there are a small number of samples, each of which has a large number of features. In some environments, calculating and generating features is even expensive in money or time [8,9]. Therefore, dimensionality reduction has significant importance. Dimensionality reduction can be managed by two completely different approaches: (I) feature selection and (II) feature extraction [2,10]. Feature selection approaches try to select a subset of relevant or effective features from the original feature set. On the other hand, feature extraction approaches project the original feature space to another feature space with lower dimensionality or better discrimination ability. The new features are usually a linear/nonlinear combination of the original features. As a result, the analysis of these features is harder than that of the original features, because their relevance to the problem statement is not directly assessable [2]. Filter-based methods select features based on statistical measurements that are independent of the learning algorithm and need less computational time.
Some examples of these measurement criteria are as follows: Pearson's correlation [23], information gain [17], Mutual Information (MI) [24,25], the Chi-square test [17], the Fisher score, and the variance threshold [1]. Wrapper methods wrap around a classifier and utilize it as a cost function to select the best possible subset of features. They use a learning algorithm to test the quality of the filtered features. As a consequence, their performance is affected by the classifier's accuracy. Furthermore, wrapper methods are more accurate but computationally more expensive than filter-based methods. Recursive feature elimination [19] and evolutionary algorithms are well-known examples of wrapper methods [1]. Embedded methods employ hybrid learning and ensemble learning algorithms. These methods usually have better accuracy than the previous two categories, since they use a collective decision. Boosting and bagging [26] are examples of embedded methods. The proposed method is considered an embedded method since it makes use of some filter-based methods together with a genetic algorithm (GA).

In this paper, a novel hybrid feature selection approach is proposed. The suggested approach can be applied to small sample sets with high dimensional data, where traditional methods are not applicable. Our hybrid approach is made of four stages. Firstly, features with low discrimination ability are eliminated. Secondly, features with a small variation range are omitted. Thirdly, among the features with high cross-correlation, all of them except one are removed. Finally, a customized GA with a novel cost function is applied to the remaining features, and an acceptable minimum number of features is selected. Two case studies with a small number of samples and a high number of features are investigated to demonstrate the performance of the proposed method. For comparison purposes, a feed-forward neural network is considered with the initial feature set and with the reduced feature subset. The experimental results indicate the superiority of the proposed method. The organization of the paper is as follows: the proposed approach is described in section 3. The experimental results are provided in section 4. Finally, the paper is concluded in section 5.

THE PROPOSED APPROACH

The proposed approach contains four stages, each of which tries to purify/reduce the original feature set of length N. The goal is to select at most K features, 1 ≤ K ≤ N. Figure 1 displays the flowchart of the proposed approach. The four stages of the proposed approach are discussed in the following. Furthermore, a technique is employed to determine a reasonable minimum number of features; this method is discussed in section 2.5.

1. Stage 1: Discrimination Ability

The features with low discrimination ability are disregarded. By our definition, a feature with many flat areas in its plot across different samples has low discrimination power. Figure 2 depicts a sample plot of four different features and the output values over the samples. It is evident that the output value grows across the samples, so a discriminative feature should change across the samples too. In order to detect the flat areas in a feature plot, the histogram of the feature is calculated. A feature whose count of non-zero bins is smaller than a threshold T is marked as a non-discriminative feature and will be removed. A value of T equal to two fifths of the number of histogram bins is an appropriate empirical threshold for the minimum number of non-zero bins. A sketch of this filter is given below.
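As an illustration of stage 1, the following minimal sketch filters features by counting non-zero histogram bins. The bin count and the two-fifths fraction are treated as tunable parameters here, since the extracted text does not fix the number of bins, and the function names are ours, not the paper's.

```python
import numpy as np

def discriminative_mask(X, n_bins=20, frac=2/5):
    """Return a boolean mask of features that pass the stage-1 filter.

    X      : (n_samples, n_features) array.
    n_bins : number of histogram bins per feature (assumed, not fixed by the paper).
    frac   : minimum fraction of bins that must be non-empty (2/5 as in the text).
    """
    n_samples, n_features = X.shape
    threshold = frac * n_bins               # minimum number of non-zero bins
    keep = np.zeros(n_features, dtype=bool)
    for j in range(n_features):
        counts, _ = np.histogram(X[:, j], bins=n_bins)
        keep[j] = np.count_nonzero(counts) >= threshold
    return keep

# Example: a constant (flat) feature is rejected, a varying one is kept.
rng = np.random.default_rng(0)
X = np.column_stack([np.full(100, 3.0), rng.normal(size=100)])
print(discriminative_mask(X))   # [False  True]
```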
Feature 4 in Figure 2 is an example of a non-discriminative feature; it has many flat areas, which are detectable in the histogram represented in Figure 3.

2. Stage 2: Variation Range

To detect features with a small variation range, the first step is to normalize the feature set. Min-Max normalization is used, as stated in Equation (1): x̂_i = (x_i − min(x_i)) / (max(x_i) − min(x_i)), where x_i = (x_i1, x_i2, …, x_im) is the i-th feature, m is the number of samples, and x̂_i is the i-th normalized feature. In the second step, the features with a standard deviation smaller than a threshold are omitted. We utilized 0.15 as a good empirical threshold.

3. Stage 3: Cross-correlation

The relevancy of a feature is measured based on the characteristics of the data, not by its value. There are several statistical measures that show the relations between features [1,27]. Usually, some features have a high correlation with each other. There are different types of correlation, but the one of interest here is linear correlation. If some features are highly correlated, there is no need to include them all in the final feature subset, so we can select one of them and eliminate the others. In order to filter out this kind of feature, the cross-correlation between feature i, i ∈ {1, …, N}, and every other feature j, j ∈ {1, …, N} − {i}, is measured, and if the result is greater than a threshold, one of the two features is eliminated; N represents the number of features. The employed experimental threshold value was 0.99. Between features i and j, the one with the higher cross-correlation to the output is kept. Algorithm 1 demonstrates the pseudo-code for the proposed method, where xcorr(a, b) calculates the cross-correlation between vectors a and b.

4. Stage 4: The Best Features

In the previous stages, a number of features were eliminated. From the remaining ones, K features will be selected. To pick the approximately best K features, a customized GA is proposed. The implemented binary genetic algorithm tries to pick at most K features that minimize the proposed cost function, as presented in Equation (2). The cost function combines three terms. R² is the coefficient of determination, shown in Equation (3), and is employed to compute the accuracy of the estimation by the selected features: R² = 1 − Σ_i (y_i − ŷ_i)² / Σ_i (y_i − ȳ)², where y_i and ŷ_i are the i-th output value and estimated value, respectively, and ȳ is the average of y. The term xcorr′ measures the average cross-correlation between the selected features (Equation (4)). NRMSE is the normalized root mean square error, calculated as stated in Equation (5). The R² output is in the range [0, 1]: R² = 0 means a completely wrong estimation, and R² = 1 indicates an exact estimation. xcorr′ is in the range [0, 1], where xcorr′ = 0 shows no cross-correlation. In order to estimate how bad a feature set is, the NRMSE term is added to Equation (2) only when R² = 0. In other circumstances, R² is sufficient, because it contains an approximation of NRMSE. In the proposed GA, a chromosome is an N-dimensional vector of boolean values that determines whether each feature is selected or not. The goal of the GA is to pick at most K features, so a chromosome cannot have more than K ones. If a newly generated chromosome has more than K ones, the surplus ones are randomly chosen and set to zero. The flowchart of the proposed GA is provided in Figure 4; a sketch of the cost function follows below.
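The stage-4 cost function can be assembled from the terms described above. The additive combination below is an assumption, since the exact form of Equation (2) is not recoverable from the extracted text; the R² and NRMSE expressions follow their standard definitions, and xcorr′ is taken as the mean absolute pairwise Pearson correlation.

```python
import numpy as np

def r_squared(y, y_hat):
    """Coefficient of determination (Equation (3)), clipped to [0, 1]."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return max(0.0, 1.0 - ss_res / ss_tot)

def avg_xcorr(X_sel):
    """Average absolute pairwise correlation among selected features (xcorr')."""
    k = X_sel.shape[1]
    if k < 2:
        return 0.0
    c = np.abs(np.corrcoef(X_sel, rowvar=False))
    return (c.sum() - k) / (k * (k - 1))   # mean of off-diagonal entries

def nrmse(y, y_hat):
    """RMSE normalized by the output range (assumed form of Equation (5))."""
    return np.sqrt(np.mean((y - y_hat) ** 2)) / (np.max(y) - np.min(y))

def cost(y, y_hat, X_sel):
    """Assumed additive combination of the terms described for Equation (2)."""
    r2 = r_squared(y, y_hat)
    c = (1.0 - r2) + avg_xcorr(X_sel)
    if r2 == 0.0:
        c += nrmse(y, y_hat)   # penalize completely wrong estimates
    return c
```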
K-fold cross-validation is employed for the cost function calculation in the proposed GA as follows: (I) the sample set is divided into K folds; (II) the cost function is evaluated K times, each time using K − 1 folds for training and 1 fold for testing; and (III) the results are averaged over the K runs to give the final cost function value (a code sketch of this loop is given below). The parameter configuration employed in the proposed GA is demonstrated in Table 1.

5. Optimal Minimum Number of Features

A technique is recommended to select a reasonable minimum number of features [28]. This technique divides the sample set into a training set and a test set, and then defines three criteria: training estimation accuracy (TEA), testing estimation accuracy (TAR), and training error (TE). Practically, this technique wraps around the proposed GA, which is called for different values of K, starting from 1 up to N (the number of features). In each iteration, the three criteria are evaluated and plotted, until the three lines remain almost parallel to the X-axis. TEA and TAR are calculated using the R² measure on the training and test set, respectively. TE is computed by the NRMSE measure on the training set. Figure 5 displays an example with values of K = {1, 2, …, 7}. For each value of K, the GA is called, and the criteria are measured and plotted. After the K = 4 point, all the lines are approximately parallel to the X-axis, so K = 4 is picked as the optimal minimum number of features.

THE EXPERIMENTAL RESULTS

We were provided two chemical datasets by Nekoei et al. [28] that are well suited to analysis by the proposed approach. Both datasets have a small sample size with high-dimensional data. In the following, the two case studies based on these datasets are discussed in detail. Table 2 presents the proposed algorithm configuration used for both case studies.

1. Chemical Molecules Case Study

The first case study focuses on a chemical molecules dataset, which contains 81 molecules with 1056 physicochemical properties or theoretical molecular descriptors (Figure 6). Every molecule has a response value measured based on the descriptors. The goal is to find a linear QSAR-based model to predict the response variable with a subset of features. The descriptors and response values are all numerical, and their values may not be in the same range. Figure 7 shows the minimum number of descriptors suggested by the proposed technique. The average values of the R² measure over the training and test sets for different numbers of selected features, together with the selected features themselves, are summarized in Table 3. It is evident from Table 3 that selecting more than four features yields only slight variations in the R² response values. Therefore, the suggested optimal minimum number of features is four. It is worth noting that feature number 188 must have a significant contribution to the linear model, as it is selected in all six suggested feature sets. The linear model found by the proposed GA for K = 4 using multiple linear regression (MLR) is given by Equation (6). The model was used to predict the response variable, and the average result measured by K-fold cross-validation was compared with a feed-forward neural network (NN), once trained with the initial feature set and once with the reduced feature set. The comparison results are presented in Table 4. Additionally, the regression plot is demonstrated in Figure 8.
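The K-fold cost evaluation described at the start of this section can be realized compactly as sketched below; scikit-learn's KFold and LinearRegression stand in for the paper's MLR fitting, and the fold count is a free parameter since the text does not fix it.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression

def cv_cost(X, y, selected, cost_fn, n_folds=5):
    """Average the GA cost function over K folds for a candidate feature subset.

    selected : boolean mask over the feature columns (a GA chromosome).
    cost_fn  : callable(y_true, y_pred, X_sel) -> float, e.g. the cost() sketched earlier.
    """
    X_sel = X[:, selected]
    folds = KFold(n_splits=n_folds, shuffle=True, random_state=0)
    costs = []
    for train_idx, test_idx in folds.split(X_sel):
        model = LinearRegression().fit(X_sel[train_idx], y[train_idx])
        y_pred = model.predict(X_sel[test_idx])
        costs.append(cost_fn(y[test_idx], y_pred, X_sel[test_idx]))
    return float(np.mean(costs))
```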
2. Chemical Drugs Case Study

This case study includes a chemical drugs dataset with 103 samples of 1482-dimensional data. Like the previous dataset, each sample has a response value. The goal is to build a linear model to predict the response variable employing just a subset of descriptors. The optimal minimum number of features suggested for this dataset, as depicted in Figure 9, is six. Moreover, the average values of the R² term over the training and test sets for 1 to 6 selected features are presented in Table 5. The linear model suggested by the proposed GA for K = 6 using MLR is given by Equation (7). Similar to the previous section, the average prediction accuracy and error of the model are compared with the NN, and the results are provided in Table 6. The regression plot is shown in Figure 10.

Discussion

Applying linear regression to the initial high dimensional data leads to poor results because there are many noisy and irrelevant features. The NN performs a little better in terms of average accuracy (Tables 4 and 6). However, due to the small number of samples, the NN overfits and thereupon shows a sudden accuracy decrease on the test data. By reducing the number of input features drastically, the performance of the NN grows significantly on both datasets. Still, the best result is obtained by the proposed approach, which utilizes multiple linear regression internally. As can be seen in Tables 4 and 6, the accuracies of the proposed method on the training and test sets are very close together. This indicates that overfitting has not happened when training the model, and the built model is robust and accurate.

CONCLUSION

In this work, a heuristic hybrid approach for feature selection is proposed. The approach reduces the number of features significantly in four consecutive stages. In the early stages, some of the irrelevant and less discriminative features are omitted. In the final stage, the approximately best feature subset of length K is chosen by a GA that uses a customized cost function. The proposed cost function simultaneously maximizes the prediction accuracy and minimizes the prediction error and the cross-correlation between the selected features. Two case studies with high-dimensional data were analyzed to demonstrate the performance of the proposed approach. Firstly, the proposed method was applied to a chemical molecules dataset and reduced the number of features from 1056 to 4 with a prediction accuracy of R² = 0.92. Secondly, a similar configuration was used for the next dataset, which led to the reduction of 99.6 percent of the features with a prediction accuracy of R² = 0.93. The experimental results indicate that the proposed method is better suited for small sample sets with high dimensional data than neural networks. Additionally, our approach can be employed as a preprocessing step in other methods, as we demonstrated by the performance boost of the NN when fed with our reduced feature set.
The impact of fluence dependent 120 MeV Ag swift heavy ion irradiation on the changes in structural, electronic, and optical properties of AgInSe2 nano-crystalline thin films for optoelectronic applications

Swift heavy ion (SHI) irradiation in thin films significantly modifies the structure and related properties in a controlled manner. In the present study, the 120 MeV Ag ion irradiation of AgInSe2 nanoparticle thin films prepared by the thermal evaporation method and the induced modifications in the structure and other properties are discussed. The ion irradiation led to the suppression of GIXRD and Raman peaks with increasing ion fluence, which indicated amorphization of the AgInSe2 structure along the path of the 120 MeV Ag ions. Poisson fitting of the ion fluence dependence of the normalized area under the GIXRD peak of AgInSe2 gave the radius of the ion track as 5.8 nm. Microstructural analysis using FESEM revealed a broad bi-modal distribution of particles with mean particle sizes of 67.5 nm and 159 nm in the pristine film. The ion irradiation led to the development of uniform particles on the film surface with a mean size of 36 nm at high ion fluences. The composition of the film was checked by an energy dispersive X-ray fluorescence (EDXRF) spectrometer. UV-visible spectroscopy revealed an increase of the electronic bandgap of the AgInSe2 films with increasing ion fluence due to quantum confinement. The Hall measurement and EDXRF studies showed that the unirradiated and irradiated AgInSe2 films have n-type conductivity that varies with the ion fluence. The changes in the films were tuned with different ion fluences and are favorable for both optical and electronic applications.

Introduction

The response of materials to intense excitation, such as high temperature, high pressure, or particle irradiation, is of considerable interest both for fundamental studies and technological applications. These excitations can modify the structure and many other properties of a material, which can provide new functionalities and hence make the material useful for many applications. Among the different post-deposition excitations employed for modification of materials in thin films, irradiation by energetic ions with energies of more than 1 MeV/amu is unique owing to its capability to instantaneously deposit a very high energy density in a highly localized columnar region, a few nm in radius and a few tens of micrometers in length, along the ion path in the material. Present accelerators can deliver particles with energies ranging from a few keV to several hundreds of GeV, and depending upon the energy imparted to the material, ion beams obtained from these accelerators are classified into two types. Low-energy ion beams have energies ranging from a few keV to a few hundred keV and lose energy in their passage through elastic collisions with the atoms of the material. The second type is swift heavy ions (SHI), which are positively charged ions with large atomic mass and hundreds of MeV of energy. When these ions pass through a material medium, they interact with electrons and with nuclei, and possibly also with the medium as a whole. Generally, the interaction of SHI with matter results in the transfer of energy of the incident ions to the electrons, since the velocity of the ions can be comparable to the Bohr velocity of the electrons. The ions thereby modify the structural, electrical, optical, optoelectronic, transport, and many other properties of materials. 1
The SHI traversing a material loses energy mainly through two nearly independent processes: (i) elastic collisions with the target nuclei and (ii) electronic excitation and ionization of the target atoms. 2 The first process is called the nuclear energy loss, S_n, and leads to atomic displacements through the direct elastic knock-on process. 3-5 The second process is called the electronic energy loss, S_e = (dE/dx)_e, which prevails for SHI and creates significant atomic rearrangements in various types of materials by transferring energy from the excited electrons to the lattice atoms along the path of the ions. 6-9 In the case of crystalline materials, when the value of S_e exceeds a threshold value S_eth, i.e., S_e > S_eth, the electronic energy loss process leads to the formation of an amorphous latent track along the ion trajectory. The S_eth is material dependent, and if S_e is less than S_eth, some additional defects may appear, and sometimes the pre-existing defects may anneal out. 10,11 From previous studies, Ag ions are mostly used for irradiation to obtain significant results, which are beneficial for many fields including antibacterial applications, photo-catalytic performance, photonic crystals, surface coatings, and enhancing solar cell efficiency. 12,13 Hence, we have preferred Ag ions over other ions for implantation, as silver diffuses easily into chalcogenide materials. One of the prominent members of the I-III-VI chalcogenide family of semiconductors is AgInSe2, which has found extensive applications in many areas like photovoltaics, 14 preparation of Schottky diodes, 15 optoelectronic devices such as hetero-junctions, 16 non-linear optical devices, 17 light emitting and detecting devices, 18 solar cell absorbing material, 19 etc. Most of the applications of this material are realized in its thin film form. Material modification by various external energy sources like laser irradiation, 20 thermal annealing, 21 γ irradiation, 22 proton irradiation, 23 etc. brings significant changes in the optical as well as structural properties. Though SHI irradiation has been extensively used for the modification of thin films of a variety of materials, only a few studies have been undertaken on SHI induced effects in AgInSe2 films. 24-26 In one study, Pathak et al. reported the formation of AgInSe2 nanorods under 200 MeV Ag ion irradiation at a fluence of 5 × 10¹¹ ions per cm². 24 In another study, the same authors reported unexpected complete damage of the chalcopyrite structure of AgInSe2 at extremely low ion fluences (5 × 10¹⁰ ions per cm²). 25 In the present study, we have tried to observe the modifications induced by 120 MeV Ag ions at fluences of (i) 1 × 10¹¹ ions per cm², (ii) 1 × 10¹² ions per cm², and (iii) 1 × 10¹³ ions per cm², of which each of the latter two is an order of magnitude higher than the previous one. We have previously reported that each 140 MeV Ni ion traversing the AgInSe2 medium creates two structural modifications: (i) an amorphous column of 1.6 nm radius along the ion path and (ii) a radially compressed crystalline column of 8.2 nm radius surrounding the amorphous column. 26 However, the optical and electronic modifications due to structural change by SHI irradiation at different fluences have not been investigated in AgInSe2 films. In the present work, we have examined the 120 MeV Ag ion irradiation-induced modifications of the structural, electrical, and optical properties of AgInSe2 nano-crystalline thin films grown on glass substrates by the thermal evaporation method.
The irradiation was done at three different fluences in order to investigate the fluence dependent changes in the properties. The structural transformation was studied through grazing incidence X-ray diffraction (GIXRD) and Raman spectroscopy. The morphological change was analyzed by field emission scanning electron microscopy (FESEM). The compositional check was carried out by an energy dispersive X-ray fluorescence (EDXRF) spectrometer. The optical study was done by UV-visible spectrometry, which revealed the increase in the optical bandgap. We also demonstrate that the particles on the surface of the pristine films, with a wide distribution in their size, develop into particles of much smaller, uniform size in the irradiated films, with an associated increase of the bandgap due to quantum confinement. The Hall measurement study was done to measure the conductivity of the films.

Experimental procedure

Thin films of AgInSe2 were deposited on glass substrates by the thermal evaporation method from a bulk target. The bulk synthesis involved melt quenching of a mixture of highly pure Ag (99.99%), In (99.9%), and Se (99.95%) powders in stoichiometric proportions (1 : 1 : 2) taken in a quartz ampoule, which was evacuated at a pressure of 10⁻⁵ torr and sealed. The temperature of the quartz ampoule placed in a furnace was slowly raised to 1000 °C and maintained at that temperature for 36 h with continuous rotational shaking of the ampoule to ensure homogeneous mixing of the different constituent elements. The powder in the ampoule melted, as the furnace temperature was higher than the melting temperature of AgInSe2 (780 °C). The melt was then quenched by dropping the ampoule into ice-cooled water to obtain the target in the form of an ingot for thin film preparation. Thin films of AgInSe2 were deposited on glass substrates by the thermal evaporation method using a Hind High Vacuum coating unit (Model-12A4D). The substrate was initially cleaned with a detergent solution and then ultrasonically cleaned with distilled water and acetone. The AgInSe2 ingot was placed in a molybdenum boat kept about 15 cm below the substrate in a vacuum of ~1 × 10⁻⁵ torr. The deposition was done at room temperature. The films thus developed had a thickness of ~0.6 μm as measured by the quartz crystal monitor attached inside the coating unit. The as-prepared films were annealed at 200 °C for 1 h in a selenium atmosphere. The annealed films were irradiated by 120 MeV Ag ions using the 15 UD tandem Pelletron accelerator at IUAC, New Delhi. The ions bombarded the sample surface perpendicularly. For uniform irradiation, the ion beam was made to scan a 1 cm × 1 cm area of the sample surface. To prevent sample heating, a thick copper ladder was used to mount the samples using silver paste, and a low ion flux φ (~1 × 10⁹ ions per cm² per s) was maintained. 4 At this flux, the increase in temperature on the sample surface during irradiation, as estimated using the Fourier heat conduction equation, 27 was found to be below 5 K. Thus, the beam heating effect can be ruled out as accounting for the observed modifications induced by the ion beams. During irradiation, the target ladder was placed inside a high vacuum chamber (10⁻⁶ torr). The irradiation was done at three different fluences: 1 × 10¹¹, 1 × 10¹², and 1 × 10¹³ ions per cm². Structural characterization of the pristine and irradiated films was carried out in the GIXRD mode of a Bruker D8 Advanced X-ray diffractometer with a Cu Kα (λ = 1.5401 Å) radiation source.
The incident angle of the X-rays was kept at 2° with respect to the sample surface. GIXRD patterns were recorded over the 2θ range from 20° to 55° in steps of 0.02° at a scan speed of 1° per minute. The transmittance spectra were recorded using a UV-vis spectrophotometer (Bruker IFS 66v/S). Raman spectra of the films were taken using a HORIBA T64000 Raman spectrometer with 514 nm radiation from a 10 mW argon ion laser at room temperature. The microstructural and compositional characterizations were done using an FESEM microscope (ZEISS SIGMA-40) and an EDXRF spectrometer (SHIMADZU-7000), respectively. The electrical study was done by Hall measurement using an Ecopia HMS-3000 Hall measurement system.

Energy loss of 120 MeV Ag ions in the AgInSe2 medium

To analyze the evolution of the AgInSe2 phase with 120 MeV Ag ion fluence, we computed the irradiation parameters S_n, S_e, and range R of 120 MeV Ag ions in AgInSe2 and the glass substrate using the SRIM code. 28 Fig. 1 gives the variation of S_n and S_e with depth as the 120 MeV Ag ions penetrate first into the AgInSe2 thin film and then the glass substrate. The ions lose energy along their path in the film and the substrate and finally get implanted at a depth of about 14.2 μm. Fig. 1 also shows that S_e dominates up to a depth of 12 μm, beyond which S_n increases and peaks at the end of the range before the projectile ions get implanted in the glass substrate. Since the implantation occurs far beneath the surface of the substrate, the implanted ions do not contribute to the modifications induced in the film. Further, S_e is about 180 times larger than S_n in the film. Therefore, the modification induced in the AgInSe2 films is primarily due to the S_e of the 120 MeV Ag ions. S_e shows only a small decrease, from 16.5 keV nm⁻¹ to 16 keV nm⁻¹, as the ions traverse the thickness (0.6 μm) of the AgInSe2 thin films, and then a sudden decrease to 13.7 keV nm⁻¹ as they enter the glass substrate. The S_e induced modification in the film is thus uniform along the path of the projectile ions.

Fig. 1 Variation of the electronic energy loss S_e and the nuclear energy loss S_n of 120 MeV Ag ions with depth as the ions traverse the thickness of the AgInSe2 thin films.

Compositional analysis of AgInSe2 thin films by EDXRF

The EDXRF spectrum (Fig. 2) of the AgInSe2 thin films revealed the presence of selenium, silver, and indium by their characteristic Kα peaks at 11.20 keV, 22.10 keV, and 24.14 keV, respectively. Analysis of the EDXRF spectra gave the concentrations of these elements in the ingot as well as in the films made out of it by thermal evaporation and then irradiated by 120 MeV Ag ions at different fluences (Table 1). The ingot had a stoichiometric composition of Ag, In, and Se in the ratio ~1 : 1 : 2, corresponding to the AgInSe2 phase. However, the films, both pristine and irradiated, showed an off-stoichiometric composition of these elements (Table 1), with a very low Ag concentration compared to the concentrations of In and Se. The off-stoichiometry is not much affected by the 120 MeV Ag ion irradiation. The defect chemistry model of ternary compounds 29 gives the off-stoichiometry parameter Δy (= [2Se/(Ag + 3In)] − 1) for the different samples (Table 1). The parameter Δy is related to the electronic defects: for films with p-type conductivity, Δy > 0, and for those with n-type conductivity, Δy < 0. The negative Δy in all our films indicates that these are n-type, as was confirmed by the Hall measurement discussed later.

Fig. 2 The line scans of Se Kα, Ag Kα, and In Kα in the EDXRF spectrum of the pristine AgInSe2 thin film.
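The off-stoichiometry parameter defined above is straightforward to compute from the EDXRF atomic fractions. The snippet below is a minimal sketch; the element percentages used are placeholders for illustration, not the values of Table 1.

```python
def off_stoichiometry(ag, indium, se):
    """Delta-y = [2Se / (Ag + 3In)] - 1; negative values indicate n-type films."""
    return 2.0 * se / (ag + 3.0 * indium) - 1.0

# Placeholder atomic fractions (at.%), chosen to mimic an Ag-poor film
print(off_stoichiometry(ag=10.0, indium=35.0, se=55.0))  # ~ -0.04, i.e. n-type
```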
Structural analysis of AgInSe2 thin films irradiated by 120 MeV Ag ions

The XRD spectra of the AgInSe2 ingot, the films made out of the ingot, and those annealed and irradiated by 120 MeV Ag ions at different fluences are shown in Fig. 3a. The position and relative intensity of all the peaks of the XRD spectrum of the ingot (Fig. 3a(i)) match those of the polycrystalline AgInSe2 powder reported in the ICDD file (card no. …). The film deposited by the thermal evaporation technique using this ingot as the target did not show any GIXRD peak, which indicated its amorphous structure (Fig. 3a(ii)). Earlier observations have also confirmed the amorphous nature of as-deposited AgInSe2 films. 30 Annealing at 200 °C for 1 h in a selenium atmosphere led to the development of a peak at 2θ = 25.51° (Fig. 3a(iii)), which corresponds to the (112) peak of AgInSe2. Annealing at the very low temperature of 200 °C formed the nano-crystalline AIS (AgInSe2) phase. However, annealing at a higher temperature (250 °C) formed all the peaks of AIS, like Ag2Se (002, 120, 102, 121, 013) and AgInSe2 (112), as reported in our earlier paper, 31 where the method of preparation was different (DC magnetron sputtering). So, the absence of the Ag2Se peaks in the current sample might be due to the preparation method (thermal evaporation) and the low annealing temperature (200 °C). In the present study, we have taken this annealed film as a reference to compare its GIXRD spectrum with those of the films irradiated by 120 MeV Ag ions at different fluences (Fig. 3a(iv-vi)). Irradiation of the AgInSe2 films by 120 MeV Ag ions led to suppression of the intensity of the (112) peak with increasing ion fluence. The peak went below the noise level at the fluence of 1 × 10¹³ ions per cm². Suppression of the GIXRD peak with increasing ion fluence indicates 120 MeV Ag ion irradiation-induced damage in the structure of the AgInSe2 thin film, while the disappearance of the peak at high ion fluences indicates amorphous track formation along the ion path. 32-34 Though the value of S_eth for the creation of amorphized latent tracks in AgInSe2 is not yet known, our observations reveal its value to be less than the S_e (16.5 keV nm⁻¹) of 120 MeV Ag ions, which completely amorphized this compound at high ion fluences. To extract the radius of the ion tracks, we have fitted the fluence versus normalized area under the (112) GIXRD peak to the Poisson equation

A(φt) = A∞ + (1 − A∞) exp(−σφt)  (1)

Here A(φt) is the area under the (112) GIXRD peak at ion fluence φt, normalized with respect to its value for φt = 0, A∞ is the saturated value of the area at high ion fluences (φt → ∞), and σ is the damage cross-section. Fitting of the variation of A(φt) with φt (Fig. 4) to eqn (1) gave the value of σ as 107 nm². Assuming cylindrical geometry, the radius r of the amorphized ion tracks obtained from σ = πr² is found to be 5.8 ± 1.9 nm. The value of A∞ similarly approaching zero at high ion fluences (Fig. 3b) clearly indicates that each 120 MeV Ag ion completely amorphized AgInSe2 along its path, and at very high ion fluences these amorphized columnar regions overlapped, leading to complete loss of crystallinity.
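The track-radius extraction described above can be reproduced with a standard nonlinear least-squares fit. The sketch below assumes the single-impact Poisson form of eqn (1), and the fluence/area pairs are illustrative placeholders rather than the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def poisson_damage(phi_t, sigma, a_inf):
    """Normalized (112) peak area vs fluence: A = A_inf + (1 - A_inf) * exp(-sigma * phi_t)."""
    return a_inf + (1.0 - a_inf) * np.exp(-sigma * phi_t)

# Illustrative fluences (ions/cm^2) and normalized areas, not the measured values
fluence = np.array([0.0, 1e11, 1e12, 1e13])
area = np.array([1.0, 0.90, 0.35, 0.0])

(sigma, a_inf), _ = curve_fit(poisson_damage, fluence, area, p0=[1e-12, 0.0])
radius_nm = np.sqrt(sigma / np.pi) * 1e7   # sigma in cm^2 -> radius in nm (1 cm = 1e7 nm)
# Full-coverage fluence for non-overlapping tracks is roughly 1/sigma (~9e11 ions/cm^2 here)
print(f"sigma = {sigma:.2e} cm^2, track radius = {radius_nm:.1f} nm")
```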
Raman spectroscopy study

Raman spectroscopy is a nondestructive chemical analysis method, which gives information about chemical structure, phase, crystallinity, and molecular interactions. It is based upon the interaction of light with the chemical bonds within a material, and the scattered light is used to identify the vibrational modes of the sample. Fig. 3c shows the Raman spectra of the pristine and 120 MeV Ag ion irradiated AgInSe2 thin films. All the films except the one irradiated at the fluence of 1 × 10¹³ ions per cm² exhibit a band at 146.9 cm⁻¹, which corresponds to the B2 mode of AgInSe2. The B2 mode arises due to the anti-phase motions between the In and Se atoms of the chalcopyrite structure. 35 This peak persists up to 1 × 10¹² ions per cm² and disappears at the 1 × 10¹³ ions per cm² fluence. The disappearance of the Raman peak at high ion fluences indicates irradiation-induced amorphization. The Raman result thus agrees with the GIXRD result and gives information regarding amorphization and the presence of the AgInSe2 phase.

Micro-structural analysis

Fig. 4a and b show the FESEM images of the pristine AgInSe2 film and of that irradiated by 120 MeV Ag ions at a high fluence (1 × 10¹³ ions per cm²). The minimum fluence necessary for the tracks of radius 5.8 nm (Fig. 3b) to cover the whole film surface, assuming non-overlap, is 9 × 10¹¹ ions per cm². Thus, at the fluence of 1 × 10¹³ ions per cm², multiple overlaps of amorphized ion tracks would occur, and the film surface would be completely amorphized. The FESEM images therefore represent the surface topography of crystalline AgInSe2 in the pristine film and of the amorphous state in the irradiated films. Both distinctly large and very small-sized particles are present in the pristine film. The histogram (Fig. 4c) revealed a bi-modal distribution of particle size on the surface of this film, with mean particle sizes of 67.5 nm and 159 nm. Irradiation led to the complete disappearance of the large particles, and the small particles became still smaller. The surface of the irradiated films was covered with uniform-sized small particles of mean particle size 36 nm (Fig. 4d). The 120 MeV Ag ion irradiation has thus fragmented the crystalline grains of AgInSe2, as seen in a few other systems. 27,36,37 In addition, irradiation drastically reduced the width of the size distribution of the AgInSe2 particles, leading to the formation of uniform size particles. These particles, as revealed from the GIXRD study (Fig. 3a), are amorphous.

Study of optical properties by UV-visible spectrometer

Fig. 5a shows the transmittance spectra, in the wavelength range of 550-1200 nm, of the as-deposited films, the films annealed at 200 °C for 1 h, and those irradiated by 120 MeV Ag ions at different fluences. The spectra exhibit interference fringes due to the uniformity of the film thickness. 38,39 The as-deposited and the annealed films before irradiation had the highest and lowest transmittance, respectively (Fig. 5a). The as-deposited film was amorphous, and crystallization developed on annealing (Fig. 3). The transmittance of amorphous chalcogenide thin films has been shown to be higher than that of crystalline ones. 40 The high transmittance of the as-deposited film in the present study thus seems to be associated with its amorphous nature, and it was reduced on annealing as crystallization developed. 120 MeV Ag ion irradiation, as shown above, amorphized the films (Fig. 3a and 4b). Consequently, the transmittance increased in the irradiated films (Fig. 5a).
The amplitude of oscillation interestingly did not follow a monotonic variation with irradiation fluence. It showed a sharp decrease at the intermediate fluence of 1 × 10¹² ions per cm² and then regained its initial value at the high fluence of 1 × 10¹³ ions per cm². The amplitude of the oscillations relates directly to the reflection of the radiation from the surface of the film and from the film-substrate interface, and hence to their smoothness. In addition to creating amorphous columns along their path, irradiation by 120 MeV Ag ions is also expected to corrugate these reflecting surfaces and destroy their smoothness, as seen in many SHI irradiated systems. 41,42 At low ion fluences, where tracks do not overlap, these corrugations would be more pronounced due to random protrusion of material from the track region 43 or even due to material sputtered out of the point of impact of the ions on the surface. 44,45 As a consequence, the amplitude of the oscillation is reduced at the fluence of 1 × 10¹² ions cm⁻², where the surface is highly roughened due to incomplete overlap of ion tracks (Fig. 5a). At a fluence ten times higher than this value, complete overlap of the ion tracks would lead to flow of material and smoothening of the surface, as reported, 46-49 resulting in the increased amplitude of oscillations observed (Fig. 5a).

The bandgap (E_g) was estimated from the absorption spectra, which in turn were obtained from the transmission spectra following the relation of the absorption coefficient (α) with transmittance (T) 50 for a film of thickness d, as given below:

α = (1/d) ln(1/T)  (2)

AgInSe2 has three bandgaps, i.e., the fundamental bandgap (1.24 eV), the spin-orbit splitting bandgap (1.34 eV), and the crystal field split bandgap (1.6 eV). 51 These were determined using the Tauc relation 52

αhν = B(hν − E_g)^n  (3)

where hν is the photon energy, B is a constant (the Tauc parameter), and n depends on the nature of the transition between the valence band and the conduction band. For indirect transitions, n = 2 or 3, while for direct transitions, n = 1/2 or 3/2, depending on whether they are allowed or forbidden, respectively. Since AgInSe2 is a direct allowed bandgap semiconductor, eqn (3) was best fitted with n = 1/2. Extrapolating the best fit of the Tauc plot, hν versus (αhν)², to the energy axis at zero absorption coefficient, as shown in Fig. 5b, gave the bandgap of the pristine AgInSe2 film and of those irradiated at different 120 MeV Ag ion fluences. The bandgap of the pristine film was found to be 1.58 eV. This value matches that due to crystal field splitting of the uppermost valence band. 51 The fluence dependence of the bandgap of AgInSe2 (Fig. 5c) clearly indicates an increase from 1.58 eV for the pristine film to 1.69 eV for the film irradiated at the high fluence of 1 × 10¹³ ions per cm². As stated above, the FESEM study showed a decrease in particle size with increasing irradiation fluence (Fig. 4b). So, the observed increase of the bandgap at high ion fluences is a consequence of the quantum confinement seen in small-size particles. 53,54
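The Tauc extraction described above can be scripted directly from transmittance data using eqn (2) and eqn (3) with n = 1/2. The sketch below uses a synthetic spectrum for a ~1.6 eV gap; the linear-region window is a deliberately crude placeholder for what is normally a manual choice.

```python
import numpy as np

H_C_EV_NM = 1239.84  # photon energy conversion: E(eV) = 1239.84 / wavelength(nm)

def tauc_bandgap(wavelength_nm, transmittance, thickness_cm, fit_window):
    """Estimate a direct allowed bandgap from the (alpha*h*nu)^2 vs h*nu plot.

    fit_window : (E_lo, E_hi) in eV, the energy range assumed to be linear.
    """
    e = H_C_EV_NM / wavelength_nm                    # photon energy, eV
    alpha = np.log(1.0 / transmittance) / thickness_cm   # eqn (2)
    y = (alpha * e) ** 2                             # direct allowed: n = 1/2 in eqn (3)
    mask = (e >= fit_window[0]) & (e <= fit_window[1])
    slope, intercept = np.polyfit(e[mask], y[mask], 1)
    return -intercept / slope                        # extrapolation to (alpha*h*nu)^2 = 0

# Synthetic spectrum for a ~1.6 eV gap and a 0.6 um film (placeholders, not measured data)
wl = np.linspace(550, 1200, 200)
E = H_C_EV_NM / wl
alpha_true = 1e4 * np.sqrt(np.clip(E - 1.6, 0, None)) / E   # cm^-1, consistent with eqn (3)
T = np.exp(-alpha_true * 0.6e-4)
print(f"E_g ~ {tauc_bandgap(wl, T, 0.6e-4, (1.7, 2.0)):.2f} eV")
```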
The reported spin-orbit splitting bandgap (1.34 eV) could not be extracted in the present study due to the onset of interference fringes in this energy region. The presence of the same interference fringes also did not permit a straightforward determination of the fundamental bandgap from the absorption curves in the low-energy region. Had the surface not been smooth, we would have had a linear decrease of the absorption with decreasing energy instead of the interference fringes. We therefore did a linear fitting of the region containing the interference pattern and extracted the bandgap from the intersection of this line with the energy axis, as shown in the inset of Fig. 5a and as discussed in our previous paper. 30 This bandgap was found to be 1.21 eV for the pristine film, which matches the reported fundamental bandgap of 1.24 eV. 55

Electrical conductivity by Hall measurement

The electrical parameters of the AgInSe2 thin films, namely the Hall mobility (μ), resistivity (ρ), conductivity (σ), and carrier concentration (n), were determined by Hall measurement and are given in Table 2. The negative sign of the Hall coefficient indicates the n-type conductivity of the films. This result agrees with the negative value of the off-stoichiometry parameter Δy (= [2Se/(Ag + 3In)] − 1) estimated from the EDXRF compositional analysis. Fig. 6 represents the variation of the mobility and carrier concentration of the AgInSe2 films with 120 MeV Ag ion irradiation fluence. This figure shows an increase in mobility with irradiation fluence up to 1 × 10¹² ions per cm², which may be due to ionization induced recovery of intrinsic defects created during the film deposition process; the mobility is then reduced at a fluence of 1 × 10¹³ ions per cm², which may be due to the accumulation of displacement-induced defects in the thin film. 56 The electrical conductivity is proportional to the product of the mobility and the carrier concentration in a material. In polycrystalline materials, grain boundaries and confined interface charges generally produce inter-grain band bending and potential barriers. 57 Electrons may be trapped in these potential wells between grain boundaries and thus cannot contribute to the conduction mechanism. Upon irradiation, there is a significant rise in carrier concentration, and hence the conductivity improves drastically. Since the mobility is inversely related to the carrier concentration here, the variation of the carrier concentration with fluence is the reverse of the variation of the mobility with fluence. The Hall conductivity is directly proportional to the Hall mobility and the carrier concentration. Thus, in the low fluence regime, the enhanced carrier concentration improved the conductivity, which was suppressed at high ion fluence, where amorphization sets in. The Hall coefficient is negative in all the films due to their n-type conductivity, with electrons as the charge carriers.

Conclusion

Thermally evaporated AgInSe2 thin films annealed at 200 °C were irradiated by 120 MeV Ag ions at different fluences. The GIXRD study revealed the creation of amorphous latent tracks along the path of the 120 MeV Ag ions in the AgInSe2 thin films. Poisson fitting of the variation of the area under the GIXRD peaks with ion fluence gave the radius of the tracks as 5.8 nm. The Raman spectroscopy of these samples also showed irradiation-induced amorphization of the AgInSe2 films at high fluence. Microstructural analysis by FESEM indicated irradiation-induced fragmentation of grains, leading to a uniform grain size distribution as compared to the large-sized particles with widely varying sizes in the unirradiated film. In keeping with the reduction of particle size in the irradiated films, the UV-visible study indicated an increase in the electronic bandgap due to quantum confinement in the small-sized particles obtained at high ion fluences.
Conclusion

Thermally evaporated AgInSe2 thin films annealed at 200 °C were irradiated with 120 MeV Ag ions at different fluences. The GIXRD study revealed the creation of amorphous latent tracks along the paths of the 120 MeV Ag ions in the AgInSe2 thin films. Poisson fitting of the variation of the area under the GIXRD peaks with ion fluence gave a track radius of 5.8 nm. Raman spectroscopy of these samples also showed irradiation-induced amorphization of the AgInSe2 films at high fluence. Microstructural analysis by FESEM indicated irradiation-induced fragmentation of grains, leading to a uniform grain-size distribution compared with the large particles of widely varying size in the unirradiated film. Consistent with the reduction of particle size in the irradiated films, the UV-visible study indicated an increase in the electronic bandgap due to quantum confinement in the small particles obtained at high ion fluences. Both the EDXRF and Hall measurement studies indicated the n-type conductivity of the films. The tunability of both the optical and electrical parameters with ion fluence makes these films suitable for optoelectronic applications.

Conflicts of interest

There are no conflicts of interest for this manuscript.
MUSIC CUED EXERCISES FOR PEOPLE LIVING WITH DEMENTIA: A SYSTEMATIC REVIEW

Background: Dementia can be associated with motor and non-motor disorders such as cognitive impairment, depression, and behavioral disturbance. The symptoms typically progress gradually over time. Music-cued exercises have been of therapeutic interest in recent years, especially to enable people with chronic neurological diseases to move more easily and to experience greater well-being. Objective: To investigate whether music-cued exercises are more effective than usual care for the management of motor and non-motor symptoms in people living with dementia. Methods: Systematic searching of the international literature was conducted in January 2018.

INTRODUCTION

…in dementia, which could help to explain the beneficial responses of some people living with dementia to familiar music [42]. To gain advantages of music when combined with movement, it appears to be particularly helpful to follow a strong and clear rhythm [48]. There is preliminary evidence that external rhythms can enhance motor performance, especially when a person enjoys moving to music [38,48,49]. This review evaluates studies on the effects of music-cued exercises for people living with dementia. Two main questions were addressed: (1) Does music-cued exercise have more beneficial effects on motor and non-motor signs of dementia compared to usual care? (2) What are the motor and non-motor outcomes of music-cued exercises?

METHODS

The protocol for this systematic review was published in 2017 (DOI: 10.15621/ijphy/2017/v4i1/136167) [50]. The review complied with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [51]. Meta-analysis was not applicable due to the lack of homogeneity of the studies reviewed and the limited number of eligible studies.

Eligibility criteria

Study design. Studies included RCTs, quasi-randomized trials, and controlled clinical trials. Comparative studies without randomization, case-controlled studies, and cohort studies were also considered. Studies were excluded if they were clinical reports, single case studies, monographs, or protocols, or if they were not published in English.

Participants. The sample included people diagnosed with dementia of any type, at any stage or severity. All ages, medications, and comorbidities were included.

Intervention. Studies were included if they used any type of music, combined with any form of physical exercise and any duration of the exercise intervention. They were excluded if music was used alone, as mental practice, or with activities other than physical exercises.

Comparator. Studies were included if the interventions were compared with control conditions, such as usual care or usual physical activities.

Outcomes. The motor outcomes included variables related to gait, mobility, and balance. Non-motor outcomes such as depression, anxiety, behavior, and other psychological and cognitive impairments were also reviewed.

Information sources

A search was conducted in January 2018 involving major electronic databases related to health, physical therapy, exercise, art therapy, music, and engineering. Included databases were MEDLINE, CINAHL, PubMed, AMED, Embase, PEDro, PsycINFO, Scopus, the Web of Science, the Cochrane Central Register of Controlled Trials, Science Direct, Wiley Online Library, and JOVE. The grey literature and the reference lists of relevant articles were also searched.

Search strategy

Two groups of keywords were selected. The first was dementia OR Alzheimer(s), and the second group included keywords related to music (e.g., rhythm and auditory) and exercise (e.g., movement, mobility, training). The search was limited to English-language studies within the last 30 years. An example of the search strategy for MEDLINE (OVID) is in Appendix 1. The resulting citations were downloaded to EndNote [52], which was first used to delete duplicates.
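Schematically, the two keyword groups combine with a Boolean AND, along the lines of (dementia OR alzheimer*) AND (music OR rhythm* OR auditory OR exercis* OR movement OR mobility OR training), limited to English-language studies from the last 30 years. This is an illustrative reading of the groups described above, not the exact MEDLINE (OVID) syntax, which is given in Appendix 1.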
Study identification and selection

Eligibility criteria were applied to the title and abstract of every citation, followed by full-text screening for the final filtering of citations.

Data collection process and data items

A data extraction sheet was developed by the authors. Data were extracted by two reviewers (YG and RG). Any disagreements between reviewers were resolved by discussion until consensus was reached. The information extracted from each study included: (i) general characteristics of the study, such as the title, authors, source of publication, and type of study; (ii) participant characteristics, such as age, gender, type and severity of dementia, and co-morbidities; (iii) intervention characteristics, including the type of music, criteria for its selection, type and dosage of exercise, co-interventions, and the duration of each session; and (iv) outcome assessment, including the outcome and its type (motor or non-motor), the measurement tools, and the length of the follow-up period.

Methodological quality assessment

The Cochrane Collaboration tool [53] was used to assess the quality and risk of bias of the RCTs. The modified Downs and Black checklist [54] was used to assess the quality of non-RCTs. The same two reviewers (YG and RG) conducted the quality appraisal of the included studies independently.

Study selection

The initial search of databases yielded 3187 citations. After removing duplicates, 1824 citations remained, and their titles and abstracts were screened for eligibility. Screening resulted in the removal of 1803 articles that did not meet the inclusion criteria. The full texts of the remaining 21 articles were screened, excluding a further nine articles. The final yield was 12 articles meeting the inclusion criteria (figure 1). Hand searching and grey-literature searching did not yield any additional eligible studies.

Study characteristics

Methods and design. Of the 12 included studies, four were RCTs [55-58] and eight used other designs, such as repeated-measures and cross-over designs [38,59-65]. The study duration was not mentioned in the trial by Clair and O'Konski (2006) [62]. For the other trials, the duration ranged from three weeks to six months, except for Wittwer et al. (2013) [61], which was a one-session study with a repeated-measures design (table 1). Only one study reported a follow-up test after treatment [58].

Participants. The total number of participants was 595. All were over 65 years of age. The included studies involved participants of both sexes, except for the study by Van de Winckel et al. (2004) [56], which only tested females. Alzheimer's disease was the most common dementia type, and all levels of severity were included (table 1). Co-morbidities were not documented, except in the study by Cheung et al. (2016) [58], for which participants with symptoms of anxiety were selected.

Intervention. Investigations were conducted in different countries.
Six were from the United States of America, and the others were from Australia, Belgium, China, Japan, Taiwan, and the United Kingdom (table 1). The criteria for choosing suitable music centered on rhythm and tempo, as well as participant preferences. Music was instrumental or vocal, and included popular genres such as wartime music, folk music, blues, country, jazz, pop, and music from the 1920s-1950s. The types of physical exercise included seated exercises, movement in standing, flexibility exercises, walking, and strengthening. One study used other interventions (body awareness and functional mobility training) in addition to the music-cued exercises [64]. The frequency ranged from daily sessions to once per week, and the duration of sessions was 15-40 minutes. The music selection criteria and physical exercise components are summarized in table 2.

Outcomes. Four studies measured cognitive, behavioral, and other psychological sequelae [56-58,60]. Another four assessed the level of exercise participation [38,59,63,65]. Three studies measured motor outcomes, including gait parameters [61,62] and mobility skills (Southampton Assessment of Mobility) [64]. One trial measured both motor (e.g., activities of daily living) and non-motor outcomes (e.g., cognition and behavior) [55]. Outcomes are summarized in table 3.

Risk of bias within the studies

Three RCTs showed a moderately high risk of bias [55-57] and one had a low risk of bias according to the Cochrane criteria [58] (table 4). The overall risk of bias was judged according to the relative importance of each domain. The RCTs with a high risk of bias had only one domain answered with "no"; however, these domains were critical and affected the overall risk of bias. For example, in the study by Sung et al. (2006) [57], the assessment was performed by the nursing staff who provided daily care for the participants. This prevented blinding of the outcome assessors and increased the risk of bias. For the non-RCTs, the maximum score obtained on the modified Downs and Black checklist [54] was 15/28. Studies with an overall score of less than 50% (i.e., ≤14/28) are regarded as poor quality [66-69]. In the current review, one study scored 15, i.e., approximately 54%, just above this threshold [60] (table 5).

Key results for individual trials

Some data related to the results of individual trials were not reported. For example, means and standard deviations were not reported in the studies by Cheung et al. (2016) [58] and Hanson et al. (1996) [65]. Therefore, a qualitative summary for each of these is presented in table 3.

Motor effects of music-cued exercise

The three studies investigating the effects of music-cued exercises on motor performance were non-RCTs [61,62,64] of comparatively low quality (modified Downs and Black scores ranged from 9 to 14). Two trials measured gait variables after music-cued, metronome-cued, and un-cued gait training [61,62]. One implemented an ambulatory program under three conditions (rhythmic music, metronome, and no auditory stimulation) interchangeably over nine sessions [62]. The other [61] tracked changes over a very short period (one session). Both trials reported no significant changes associated with music-cued exercises. One study measured general mobility using the Southampton Assessment of Mobility scale [64] and showed significant improvement in mobility with music-cued exercises, despite participants having severe dementia.
Non-motor effects of music-cued exercises

Four trials investigated the effect of music-cued exercise on non-motor signs [56-58,60], and one measured various non-motor outcomes in addition to movement disorders [55]. Four were RCTs [55-58], and one was a repeated-measures experimental design [60]. Three RCTs had high levels of bias, and the non-RCT had a score of 15/28 on the Downs and Black checklist, indicating fair quality [66-69]. Several studies measured cognitive and behavioral changes in response to music-cued exercises, compared to control groups [56,57,60] or other interventions [55,58]. The non-motor results varied widely. For example, in the trial by Moore (2010) [60], no significant improvements in agitated behavior were seen in response to music-cued exercises. In contrast, the study by Sung et al. (2006) [57] showed large reductions in agitation with music-cued exercises when familiar and preferred music was delivered for four weeks. In a trial by Van de Winckel et al. (2004) [56], folk songs were used with exercises (e.g., upper- and lower-extremity exercises, strengthening, and balance) in daily sessions for three months. They showed significant improvement in cognition as measured by the Mini-Mental State Examination (MMSE). There was no improvement in behavior, including depression, when compared to the control group. In contrast, Cheung et al. (2016) [58] showed improvements in depression in response to movement (e.g., batting balloons and waving ribbons) with popular and religious music. The studies showed significant improvements in cognitive and behavioral functions within the music-with-exercise groups, but these varied in significance when compared with other groups. Satoh et al. (2017) [55] investigated both motor and non-motor signs. It mainly measured non-motor outcomes, including cognition and behavior, using a wide range of neuropsychological test batteries. It also considered motor outcomes associated with the Functional Independence Measure (FIM). This trial had the longest session duration (40 min) and the longest overall duration (six months) among the included studies. The exact music used was not reported. The trial did not involve comparison with a control group, and most of the outcomes showed non-significant differences when music-cued movements were compared to cognitive stimulation. Only visuospatial function and atrophy in the medial temporal lobes improved in the between-groups comparison. Improvements in psychomotor speed, increased medial temporal lobe volumes, and preserved ADL skills were observed for the group that had music-cued exercises.

Level of participation

In the four trials investigating the level of participation, there were large differences in participant characteristics, study designs, and the duration of music-cued exercises. Two used similar types of music (jazz, blues, and folk) [38,59]. One used newly composed rhythmic music [63], and one did not mention the type of music delivered [65]. Participation levels increased in two trials [38,59]. Clair et al. (2005) [63] did not detect improvements in participation levels across the three music activity conditions (music with movement, the rhythmic playing of music, or singing). That study had a short session duration of 15 minutes, at a frequency of once per week. Hanson et al. (1996) [65] investigated the effects of exercise type and difficulty, and the stage of cognitive functioning, on the quality of participation, generating a wide range of results.
Participation levels were assigned to six categories, from most to least purposeful. The most purposeful participation occurred during music-cued movements, mainly for high-demand tasks. The high-demand tasks required more expressive or receptive verbal skills, involved more active participation, and were more complex than the activities classified as low demand.

DISCUSSION

This systematic review and critical analysis of the literature showed a small amount of emerging evidence for the beneficial effects of music-cued exercises on motor and non-motor disorders in people living with dementia. A key finding was that the dosage of exercise and the design of therapy programs are important determinants of the success of music-cued exercises for people with different forms of dementia. Long-duration and frequent music-cued exercise sessions appear to be most helpful and appear to enable some people with dementia to respond better to music-cued exercises [29,70]. Most studies selected music with a clear rhythm that was matched to the exercise tempo. For example, the trial by Mathews et al. (2001) [38] used original, instrumental music pieces with strong rhythmic beats for each different exercise to improve physical activity and participation. Other studies used music that met the needs and preferences of participants. The severity of dementia did not always determine the success of the intervention. Some studies which included participants with mild to moderate dementia showed promising results [58] and others did not [63]. Contrary to expectations, some of the studies with severely affected participants reported beneficial effects. For example, the study by Pomeroy (1993) [64] showed significant improvements in motor performance despite severe and advanced dementia. That study was of 12 weeks' duration with three classes per week, which was comparatively frequent. The inclusion of body awareness training and functional mobility training in addition to music-cued exercise could have enhanced the results, so the positive findings cannot be attributed solely to exercising with music. Motor performance was examined in only a limited number of studies [61,62,64]. Two of these showed no positive results, highlighting the need for more studies investigating the effect of music and movement on mobility. Several non-motor outcomes responded well to music-cued exercises [55,56,58]. Significant improvements were found for some cognitive and behavioral functions (e.g., memory and depression) with music-cued exercises [56,58]. The effects of music-cued exercises on agitation were, however, inconsistent. Sung et al. (2006) [57] showed a reduction in agitation, whereas the study by Moore (2010) [60] reported no change. Both studies used the Cohen-Mansfield Agitation Inventory as an outcome measure, but the intervention dosage was slightly higher, and the study design stronger, in the RCT by Sung et al. (2006) [57]. Severe cognitive impairment can sometimes compromise the ability of people living with dementia to respond fully to music-cued exercises. Two studies assessed global cognitive function using the MMSE [56,58], and both showed significant improvement in the music-cued exercise group. Regarding the level of participation, two investigations showed significant improvements [38,59]. Hanson et al. (1996) [65] highlighted the need to consider the effects of cognitive impairment on the outcome when designing movement-to-music classes. The study by Clair et al. (2005) [63] showed no change in participation.
This might have been associated with the relatively short duration of the sessions (5 minutes for exercises to music and 10 minutes for singing and instrument playing). Some included studies reported the benefits of incorporating visual cues, such as mimicking the therapist's movements, to enhance motor performance during music-cued exercise classes [56,65]. Whereas verbal cues require considerable language and cognitive skills, which can be impaired in dementia [56,65], visual cues appear to facilitate the automatic performance of well-learned motor skills [71,72]. This systematic review had some limitations. More than half of the studies did not use a control group. Those that did showed significant beneficial effects of music-cued exercises [56,57,59,60,64]. However, the comparatively small number of participants and the low quality and high risk of bias in many of the studies restrict the generalizability of the results to the population of people with dementia as a whole. The eligibility criteria were broad and included studies of different designs and outcomes. Moreover, our review included only articles published in English. It is possible that people with dementia from other cultures might respond differently to musical cues and exercise classes. The absence of effect-size measures and the lack of homogeneity of trial designs precluded meta-analysis in this systematic review.

Conclusion

The results of this systematic review show a growing body of evidence that music-cued exercises may improve some motor and non-motor impairments associated with dementia, including mobility, cognition, and level of participation. The most effective music appeared to have a clear rhythmical beat to which exercises could be synchronized. Music that people enjoyed was also important, to overcome lack of motivation and increase levels of participation. Increasing the frequency and duration of sessions was associated with better outcomes. Further high-quality studies are needed, with large sample sizes, control groups, long duration, follow-up measures, and evidence-based, reliable, and relevant outcome measurement tools, to corroborate these findings.

Conflicts of Interest

The authors report no conflicts of interest.

Table 2 (excerpt): movements are cued rhythmically to music; activities to improve range of movement, movement quality, and rhythm, such as tapping the feet to music, moving upper- and lower-limb joints, and rowing actions; 5 min moving in time to rhythmical music, using instruments; 5 min music-cued flexibility and motor actions; 5 min songs.
Earth Observation, Remote Sensing, and Geoscientific Ground Investigations for Archaeological and Heritage Research

Building upon the positive outcomes and evidence of dissemination across the community of the first Special Issue "Remote Sensing and Geosciences for Archaeology", the second edition of this Special Series of Geosciences, dedicated to "Earth Observation, Remote Sensing and Geoscientific Ground Investigations for Archaeological and Heritage Research", collects a varied body of original scientific research contributions showcasing the technological, methodological, and interpretational advances that have been achieved in this field of archaeological and cultural heritage sciences over the last years. The fourteen papers, published after rigorous peer review, allowed the guest editor to make considerations on the capabilities, limitations, challenges, and perspectives of Earth observation (EO), remote sensing (RS), and geoscientific ground investigations with regard to: (1) archaeological prospection with high-resolution satellite SAR and optical imagery; (2) high-resolution documentation of archaeological features with drones; (3) archaeological mapping with LiDAR towards automation; (4) digital fieldwork using old and modern data; (5) field and archaeometric investigations to corroborate archaeological hypotheses; (6) new frontiers in archaeological research from space in contemporary Africa; and (7) education and capacity building in EO and RS for cultural heritage.

Introduction

The first Special Issue on "Remote Sensing and Geosciences for Archaeology", which I was invited to lead as guest editor by the journal Geosciences in 2017, collected 21 high-quality peer-reviewed papers (plus the editorial) outlining the state of the art of research in the fields of archaeological remote sensing and geosciences. The contributions published in that Special Issue provide a wide portfolio of methodologies, data, and techniques proving that remote sensing and geosciences for archaeology are currently vibrant research and practice domains, with expertise spread across the globe, and teams fully exploiting the capability of remote sensing to investigate sites and landscapes in different geographic, social, and environmental contexts [1]. After one year of publication, the metrics of the Special Issue summarized in Table 1 can be considered promising for assessing the degree of dissemination of these papers across the specialist community. We also need to account for the fact that the Special Issue was the first in Geosciences dedicated to remote sensing and archaeology, and the journal itself was not as well known to the specialist readership as it is nowadays. In particular, it is worth mentioning that two of the published papers, i.e., Traviglia & Torsello [2] and Agapiou et al. [3], have repeatedly been listed in the dynamic ranking of the 10 most-cited papers of Geosciences in the last 24 months. Building upon the positive outcome achieved in 2017, and in order to continue this Special Series, in March 2018 I launched the call for papers for a second edition of the Special Issue with the title "Earth Observation, Remote Sensing and Geoscientific Ground Investigations for Archaeological and Heritage Research".
Comparing the titles of the two editions of this Special Series, it clearly emerges that, in this second edition, I intentionally: (1) broadened the spectrum of the topics to include Earth Observation (EO), to acknowledge that satellite imagery is nowadays regarded by the archaeological and heritage communities as a resource of spatial and temporal information (see the majority of the papers published in the first edition of the Special Issue: [3-11]); (2) cited "heritage" alongside "archaeology" to be more inclusive of the various disciplines and domains of geoscientific research focusing on cultural subjects; and (3) included geoscientific ground investigations, in the hope of receiving submissions highlighting not only new methods for ground-based surveying, archaeological prospection, and diagnostic investigation, but also validation of signals, parameters, features, and marks extracted from EO and remote sensing (RS) analyses with ground-truth data collected in the field. The topics that I envisioned to cover for the submissions to this second edition included:

Facts and Figures of the Special Issue

A total of 21 submissions were received for consideration for publication in the Special Issue from April 2018 to January 2019. After rigorous editorial checks and a peer-review process involving external and independent experts in the field, the acceptance rate was 67%. The published Special Issue therefore contains a collection of 14 research articles. Figure 1a shows the countries where the study areas of the papers published in the Special Issue are located, while Figure 1b shows the spatial distribution of these study areas, distinguished between cultural landscapes and individual heritage sites. By comparison with Figure 1b published in [1], it is apparent that in this second edition the study areas are more widespread across the globe, while in the first edition the majority were concentrated in Europe and in the Middle East. The latter region, alongside Peru and Germany, is still of research interest. However, this time the archaeology of the Indian subcontinent and the African continent gathered specific attention from the research community. It is also worth mentioning that one of the contributions [22] provides an overview of space law and space sciences for archaeological and heritage research in contemporary Africa. Thus, the African continent has been marked in grey in Figure 1a to signify the wider geographic focus of this paper.
This geographic distribution could not be predicted, was not intentional, and was indeed the random result of the call for papers and the following peer review. However, some considerations can be made. The remote location and vastness of the study areas covered by the majority of the published papers once again prove the impact that EO and RS can generate in facilitating archaeological research, by making investigations more cost-effective and less risky for the operators. Furthermore, it can rightly be said that with EO and RS there is no frontier for archaeological and heritage research. On the contrary, unexplored regions and areas with limited literature are ideal geographic locations for exercises of archaeological mapping and site-discovery studies. Finally, the predominance of landscape studies compared to site investigations (7 vs. 5; Figure 1b) highlights a growing interest in using EO and RS for regional and wide-area mapping. This trend has recently been observed and commented on by several authors in the literature (e.g., [23,24]).
Overview of the Published Papers

As manuscripts were submitted and processed for peer review, it progressively became clear that this Special Issue was taking shape not only along the topics that I had delineated in the call for papers (see Section 1), but also following other unexpected topics, including automation in archaeological prospection, methodological reflections on the use of old and new remote sensing data for digital fieldwork, and legal aspects of archaeological research. A summary of the published papers is reported in the following sections.

Archaeological Prospection with High Resolution Satellite SAR and Optical Imagery

The two papers published by Wiig et al. [25] and Zanni & De Rosa [26] respectively seem to contrast the controversial statements (sometimes written in the literature or claimed at conferences) that archaeologists are not familiar with satellite Synthetic Aperture Radar (SAR) imagery as a source of information for archaeological prospection, due to difficulties with access, processing, and interpretation of these data, and that high-resolution (HR) satellite optical imagery (i.e., 5-30 m) is of marginal usefulness in archaeology (see also [23,27]).

Wiig et al. [25] add a novel contribution to the still open discussion of whether satellite SAR sensors operating at short wavelengths (i.e., in C- and X-band, 5 to 3 cm wavelength) can penetrate the subsurface in arid regions. The authors compared the observations made at the site of 'Uqdat al-Bakrah (Safah), Oman, with HR TanDEM-X bistatic and RADARSAT-2 images that were acquired at different polarizations and incidence angles at scene center (from 27° to 53°), and then processed to achieve pixel spacings of 0.87-1.14 m and 2.1-2.95 m, respectively. In particular, the authors' attention was concentrated on a subsurface paleo-channel that was not visible on the ground surface, but was first identified through Ground Penetrating Radar (GPR) survey and later verified by test excavations at a depth of 0.6-0.7 m. Although it is still unclear whether the microwaves are penetrating to the specific depth at which this paleo-channel was found, the findings are significant, as this paper is one of the very few studies where features found in satellite SAR images were verified in the field.

Zanni & De Rosa [26] tested different combinations of the spectral information collected in the 13 bands of the Multispectral Instrument (MSI) onboard the Sentinel-2A satellite of the Copernicus programme, to investigate the capabilities of these satellite data for the detection of buried features belonging to Roman roads. The experimental trials were run in the Srem District in Serbia, part of the original Roman itinerary between Aquileia (Italy) and Singidunum (Belgrade). Sentinel-2A images acquired in the summer season of 2016 were first carefully selected from the available catalogue and then processed to extract the Normalized Difference Vegetation Index (NDVI), the Normalized Archaeological Index (NAI), the combination of Red and NIR (RN), and Crop Coefficient 3 (CC3). The visual assessment of the obtained maps and the comparison with the same processing outputs of a matching WorldView-2 image led to the identification of 60 crop-marks in the portion of territory stretching from Sremska Mitrovica to Zemun. Of these, during the in-situ validation surveys, 13 were found to correspond to already known archaeological sites and stretches of the Roman road, whereas 47 crop-marks remained unmatched, thus highlighting the benefits and limitations of Sentinel-2 and WorldView-2 observations.
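For reference, the NDVI named above has the standard definition (NIR - Red)/(NIR + Red); for Sentinel-2 the red and near-infrared reflectance bands are B4 and B8. The sketch below is purely illustrative, computing NDVI on a synthetic reflectance array and flagging local anomalies of the kind sought as crop-marks; it does not reproduce the authors' NAI, RN, or CC3 processing, whose formulations follow the cited papers.

import numpy as np

# Minimal NDVI computation for a Sentinel-2-like scene.
# b4 = red reflectance, b8 = near-infrared reflectance (synthetic arrays here).
rng = np.random.default_rng(0)
b4 = rng.uniform(0.05, 0.15, size=(100, 100))   # red
b8 = rng.uniform(0.30, 0.60, size=(100, 100))   # NIR

ndvi = (b8 - b4) / (b8 + b4 + 1e-9)             # small epsilon avoids division by zero

# Crop-marks are often sought as local anomalies relative to the field mean.
anomaly = ndvi - ndvi.mean()
candidates = np.argwhere(np.abs(anomaly) > 2 * ndvi.std())
print(f"NDVI range: {ndvi.min():.2f}..{ndvi.max():.2f}; "
      f"{len(candidates)} anomalous pixels flagged")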
High Resolution Documentation of Archaeological Features with Drones

In the current practice of archaeological remote sensing, where small Unmanned Aerial Vehicles (UAV)/Remotely Piloted Aircraft Systems (RPAS) are increasingly used by archaeologists as data acquisition platforms and (semi-)autonomous measurement instrumentation, the paper published by Pavelka et al. [28] demonstrates the agility of this RS solution in arid environments and the opportunity that it can offer for fascinating discoveries while documenting cultural landscapes. The authors exploited very high resolution (VHR) satellite data and super-resolution data from the drone to improve the digital documentation of the "Pista" geoglyph in Palpa, Peru, and to refine the knowledge and interpretation of this geoglyph, which had been researched several times by archaeologists but still poses some open questions. Through the description of the methodological workflow of data capture, processing, and post-processing, the authors present the final vector map that they generated, achieving a more detailed delineation of surviving archaeological features than older outputs based on satellite or old aerial data. The surveys also offered the opportunity to discover unknown geoglyphs (a bird, a guinea pig, and other small drawings), thus adding new information in an area of well-known geoglyphs. While dating these new geoglyphs remains a challenging task, the digital record of the newly found geoglyphs allowed the authors to observe similarity in the iconography compared with other well-known geoglyphs.

Archaeological Mapping with LiDAR towards Automation

There is no doubt about the great value of airborne LiDAR (Light Detection and Ranging) for archaeological mapping [29], nor about the high degree of appreciation that this technology finds across the archaeological community.
The contribution by Moyes & Montgomery [30] adds further proof of the usefulness of this technology for exploring the Maya lowlands and other tropical regions, where dense vegetation usually prevents archaeologists from conducting extensive surveys or, at least, makes this type of archaeological survey less cost-effective. In particular, the authors describe a method for locating potential cave openings using local relief models that requires only a working knowledge of relief visualization techniques. This method was exploited in the Chiquibul Forest Reserve, a heavily forested area in western Belize, where caves were utilized by the ancient Maya people as ritual spaces. Almost all previous attempts to find caves using LiDAR data focused on locating sinkholes that lead to underground cave systems, but caves in Chiquibul can be entered in some cases by sinkholes, and in others via vertical cliff faces or by dropping into small shafts. Therefore, the authors aimed to locate and investigate not only sinkholes but also other types of cave entrances using point-cloud modeling. Validation was undertaken through an opportunistic survey to verify selected caves identified on the LiDAR, and a systematic pedestrian survey that was completed over two six-week field seasons in the summers of 2017 and 2018, using two to three crews of three people each. The opportunistic survey led to an 86% success rate with only three false positives, verifying 26 cave openings, and proved LiDAR to be expedient in meeting the project goals of locating and investigating unknown cave sites.

Regional and national LiDAR collections are increasingly made available by territorial administrations under open data policies for land management and scientific research purposes. Although these data are generally acquired in the context of flood or other hazard management, it is envisaged that their continuous release to the public will only further increase the impact of airborne LiDAR on archaeological research and heritage management [31]. While these initiatives are welcome, as they provide an extraordinary source of spatial data, there is lively discussion about the impact that automation can bring to improve the operator's capabilities to handle huge quantities of LiDAR data for archaeological mapping of large regions. It cannot be neglected, however, that the development of automation methods and approaches in archaeological prospection is still in its infancy.
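Conceptually, a local relief model of the kind used for cave prospection is the DEM minus a low-pass-filtered copy of itself, so that small depressions such as sinkholes and shafts stand out as negative residuals. A minimal sketch on a synthetic DEM follows; the smoothing window and detection threshold are assumptions, not the authors' parameters.

import numpy as np
from scipy.ndimage import uniform_filter

# Synthetic DEM: a gentle regional slope with a small sinkhole-like depression.
x, y = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
dem = 100 + 20 * x                                    # regional slope (m)
dem -= 3.0 * np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.0005)  # 3 m deep pit

# Local relief model: DEM minus a smoothed (low-pass) version of itself.
# The 25-cell window is an assumed smoothing scale.
lrm = dem - uniform_filter(dem, size=25)

# Candidate cave openings / sinkholes appear as strong negative residuals.
threshold = lrm.mean() - 3 * lrm.std()
pits = np.argwhere(lrm < threshold)
print(f"{len(pits)} pixels flagged as potential depressions")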
Towards this direction, Meyer et al. [32] exploited the LiDAR datasets acquired between 2008 and 2010, and later in 2016, and made available to archaeologists in North Rhine-Westphalia, Germany, by the provincial government according to the Open Geodata principle, to assess the potential for automated classification using Object-Based Image Analysis (OBIA). Three types of field monuments were considered: ridge-and-furrow areas (of early medieval fields), burial mounds (Bronze and Iron Ages), and motte-and-bailey castles. The latter two are classified not as binary, but in multiple classes, depending on their degree of erosion. After a detailed description of the methodology and processing workflow, the authors focus their discussion of the results on the challenge of discriminating between true and false positives in situations where the terrain becomes complex and more anthropogenic influence is present. On the other side, the detection rate of field monuments with OBIA is ~90%, although this technique is vulnerable to distortions and frequently has to be implemented in commercial software, which may limit its accessibility to archaeologists due to funding constraints.

Digital Fieldwork and Reflections on Challenges of Archaeological Mapping with Old and Modern Data

One of the main objectives of this Special Issue was to capture the state of the art of the methods of digital fieldwork in remote and inaccessible areas. The picture emerging from the collection of papers described in this section is that archaeologists from different countries are making efforts to develop rigorous and robust methodologies for archaeological mapping which are at the same time systematic, accurate, reliable, and cost-effective. Digital fieldwork is undertaken as a desk-based task with a view to precisely planning ground-truth and validation surveys, to optimize resources and prioritize in-situ inspections in areas of higher archaeological potential.

In this regard, Nsanziyera et al. [33] present a predictive model based on GIS and remote sensing data to locate areas with high potential to be archaeological sites. The authors apply a multi-criteria decision-making method, the analytic hierarchy process (AHP), that integrates archaeological data and environmental factors, geospatial analysis, and predictive modeling, to identify possible tumuli locations in Awsard (total study area of 980 km²), southern Morocco. The result is a prediction map with a gain of 92.8%, on a scale where 1 indicates a highly predictive model and 0 indicates no predictive power. Interestingly, 56.87% of all sites were found to be located in only 4.04% of the total study area. This method proves effective for prioritizing areas for archaeological expeditions.
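These figures are mutually consistent if the reported gain is Kvamme's gain statistic, G = 1 - (proportion of area)/(proportion of sites); this is an assumption on my part, since the formula is not restated above. With the reported values,

G = 1 - 0.0404/0.5687 ≈ 0.929,

i.e., approximately the 92.8% quoted, with values near 1 indicating a strongly predictive model and values near 0 indicating none.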
Smith & Chambrade [34] showcase the results of a systematic analysis of the arid "Black Desert" of north-eastern Jordan, which they conducted in the framework of the Western Harra Survey (WHS) archaeological project, using the full VHR Google Earth coverage released in 2017, together with further GeoEye and CNES/Airbus satellite imagery becoming available, as well as DigitalGlobe products appearing in Bing Maps. The high spatial resolution of such datasets enabled a clearer definition of the structural differences between the types of prehistoric structures (e.g., enclosures, "wheels", "pendants", "kites", and meandering walls). The major benefit of this satellite digital fieldwork was the precise planning of ground surveys, with advance knowledge of which sites were vehicle-accessible and of how to efficiently visit a stratified sample of different site types. The fieldwork-derived data were then fed back into the satellite imagery survey, helping the authors to interpret what can be seen in remote sensing more accurately for future investigations.

However, the advent of new EO and RS data, visualization platforms, and processing technologies does not mean that archaeologists and heritage scientists disregard historical mapping resources. On the contrary, the community is working on bringing these old-fashioned resources back to light, standardizing the methodology for their use and interpretation, and combining the information extracted with modern data, to achieve a diachronic and dynamic reconstruction of the evolution of cultural landscapes in time.

Petrie et al. [35] and Garcia et al. [36] are two interlinked papers that need to be read in conjunction, because they were conceived and published in the framework of the TwoRains, WaMStrIn, and Marginscapes projects. Petrie et al. [35] advocate the value and importance of the Survey of India 1"-to-1-mile map series, an historical mapping resource which was under-utilized and which, with this paper, gains the attention it deserves, since it is a precious reservoir of spatial information on topographic features and elevated mounds that were visible at the time of the surveys but were damaged or destroyed by the expansion of irrigation agriculture and urbanism, and are no longer visible. The authors present a method for accurately georeferencing these maps and review the symbology that was used to represent elevated mound features that have the potential to be archaeological sites. Certainly, this method will be very useful in supporting further studies by other scholars willing to use this mapping resource alongside modern RS data, as is well demonstrated by the accompanying paper by Garcia et al. [36]. In the latter paper, the authors investigate the historical inundation that hit the city of Dera Ghazi Khan, in Punjab, Pakistan, in 1909. Historic news reports, books, and maps are used to undertake a regressive analysis reconstructing the historical dynamics between the urban settlement and the river morphodynamics in the Indus alluvial plain. Declassified CORONA images, multispectral Landsat time series, and microtopographic data derived from the ALOS Global Digital Surface Model "ALOS World 3D-30 m (AW3D30)" using the Multi-Scale Relief Model (MSRM) are combined to examine: (1) how historical hydrological dynamics are reflected in RS data; (2) the implications of river morphodynamics for the interpretation of settlement patterning; and (3) the documented socio-political responses to the geomorphological change of the local environment.
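As a generic illustration of the kind of control-point georeferencing involved (this does not reproduce the authors' workflow, and all coordinates below are invented), an affine transformation from map-sheet pixels to world coordinates can be estimated by least squares from a handful of ground control points:

import numpy as np

# Ground control points: (col, row) pixel positions on the scanned map sheet
# and their assumed real-world coordinates (illustrative values only).
pixel = np.array([[100, 120], [900, 140], [880, 950], [120, 930]], float)
world = np.array([[71.02, 30.05], [71.10, 30.05], [71.10, 29.98], [71.02, 29.98]])

# Affine model: world = [col, row, 1] @ params; solve the 6 parameters by least squares.
design = np.hstack([pixel, np.ones((len(pixel), 1))])
params, *_ = np.linalg.lstsq(design, world, rcond=None)   # shape (3, 2)

def to_world(col, row):
    """Map a pixel position on the sheet to estimated world coordinates."""
    return np.array([col, row, 1.0]) @ params

print(to_world(500, 500))   # e.g., a mound symbol near the sheet centre
rmse = np.sqrt(((design @ params - world) ** 2).mean())
print(f"control-point RMSE: {rmse:.4f} degrees")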
If old mapping data preserve an otherwise vanishing memory, they have to be handled carefully, especially if they were collected by different operators and for different study purposes. In this context, the feature paper by Banaszek et al. [37] will, in the author's opinion, be a reference piece of research, since it provides a practical discussion of the challenges that archaeologists need to deal with in creating systematic datasets of national-scale archaeological mapping, where the standards to which these datasets were created are explicit, and against which the reliability of the knowledge of the material remains of the past can be assessed. With the focus on Scotland, the authors start by acknowledging that the National Record of the Historic Environment (NRHE) is an inventory of what has been recorded over the years and reflects the interests and recording policies of those who created it, with bias in content as a result. The lack of scalability in traditional approaches to large-area mapping, which rely heavily on human resources and field visits, is definitely a constraint to deal with. The authors use the Isle of Arran as an outdoor laboratory for scoping their approach to rapid large-area mapping and test how airborne laser scanning derivatives and orthophotographs, supplemented by field observations, can help to increase the records of known monuments. This exercise demonstrated the strengths and weaknesses of remotely sensed data acquired for general purposes, the variability of desk-based interpretation between individuals, and the necessity for targeted field observations in areas with poor data coverage and where background noise obscures the visibility of archaeological features in the visualizations derived from the airborne laser scanning surveys.

Field and Archaeometric Investigations to Corroborate Archaeological Hypotheses

In a multidisciplinary perspective, geoscientific ground investigations and laboratory analyses remain essential to achieve an insightful knowledge of the near surface in archaeological and heritage sites, as well as of objects and findings, that EO and RS alone would not be able to document or investigate. While most of the analytical techniques and research methodologies in geo-archaeology and archaeometry are well established and standardized, there are always opportunities to employ advanced approaches and collect elements to support or modify existing archaeological hypotheses.

Festa et al. [38] is an archaeometric paper presenting the results of non-destructive analyses carried out on 36 Sumerian pottery fragments found in the settlement of Abu Tbeirah (3rd millennium BC), southern Iraq. The analysis aimed to characterize the crystallographic composition of the ceramic material, to shed light on the ancient technology and manufacturing techniques. Combining non-invasive neutron diffraction (ND) with chemometrics such as Principal Component Analysis (PCA) and Cluster Analysis (CA), the authors observed a general uniformity of the raw materials and could suggest a local origin of the clay used for the Sumerian vases, by comparison with modern clay collected from the canal near the excavated site. The secondary minerals found, and their marker formation temperatures, are compatible with two different ranges of firing temperature that never exceeded 1000 °C. In the absence of kiln traces in the archaeological site of Abu Tbeirah, it appears reasonable to hypothesize that the analyzed pottery was produced with pit-firing techniques rather than kiln firing. Because kilns have been documented in the Mesopotamian archaeological record for earlier periods, the findings of this research would suggest the coeval presence of different firing methodologies, which has been neglected by archaeologists so far.
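To make the chemometric step concrete: the typical pattern is to assemble a samples-by-phases matrix of diffraction-derived fractions, reduce it with PCA, and cluster the scores; sherds that group with the local clay reference are consistent with (though do not prove) local provenance. The sketch below uses an invented matrix; the phase columns, group means, and the added non-local group are all hypothetical and serve purely to illustrate the workflow.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

# Synthetic phase-fraction matrix: rows = samples, cols = mineral phases.
# All values are invented for illustration.
rng = np.random.default_rng(1)
local = rng.normal([0.45, 0.20, 0.15, 0.20], 0.02, size=(28, 4))     # sherds
nonlocal_grp = rng.normal([0.30, 0.35, 0.20, 0.15], 0.02, size=(5, 4))  # contrast group
clay = rng.normal([0.45, 0.20, 0.15, 0.20], 0.02, size=(3, 4))       # modern clay reference
X = np.vstack([local, nonlocal_grp, clay])

scores = PCA(n_components=2).fit_transform(X)          # dimensionality reduction
labels = AgglomerativeClustering(n_clusters=2).fit_predict(scores)

# If the modern clay falls in the same cluster as most sherds, that supports
# a local origin of the raw material; in the study the sherds were uniform.
print("clay cluster labels:", labels[-3:])
print("sherd cluster labels (first 10):", labels[:10])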
Delle Rose et al. [39] attempt to find stratigraphic evidence corroborating (or confuting) the hypothesis that the ceremonial center of Cahuachi, Rio Grande de Nazca, in southern Peru, was first severely damaged and then completely buried by catastrophic river floods resulting from two Mega El Niño events, which occurred around 600 Common Era (CE) and 1000 CE, respectively. The occurrence of such catastrophic events would be proved by the presence of a conglomerate layer in the stratigraphy. Therefore, during the 2012 archaeological excavation works at Cahuachi, the geological substratum close to the Piramide Sur was temporarily exposed, and stratigraphic, grain-size distribution, and petrographic investigations were carried out. No fundamental discontinuity was found in the studied stratigraphic interval, which instead, owing to its lithological features, matches common regional successions (i.e., the Changuillo or Changuillo-Canete Formations) of the pampa of Nazca rather than deposits related to El Niño-Southern Oscillation (ENSO) events.

New Frontiers in Archaeological Research from Space in Contemporary Africa

As recalled in Figure 1a, the last paper published in the Special Issue [22] provides an overview of space law and space sciences for archaeological and heritage research in contemporary Africa, which could become a new frontier for activities of discovery and preservation in this continent. This paper also reminds the reader that there are far more diverse categories of heritage and archaeological features than those commonly studied with EO and RS. Indeed, Oduntan [22] articulates a series of insightful reflections on the legal aspects of EO and RS, trying to answer questions about the impact that these aspects of space law and space sciences have in relation to: (a) international boundary disputes and demarcation activities; (b) management and preservation of the African heritage; and (c) disaster and conservation management. In particular, the paper tests the hypothesis that it is crucial for the development of the African continent that states sustain and increase investment in the following areas: archaeological prospection; condition assessment of heritage assets; Geographic Information System (GIS) analysis of spatial settlement patterns in modern landscapes; and assessment of natural or human-induced threats to conservation. Through a critical, comparative, and socio-legal methodology, the author focuses on the space-active African states and the emergent patterns in African domestic space-related policies and space-dedicated legislation. The connection with the EO and RS practice of archaeological and heritage research lies in the area of the reconstruction of African territories from space, the demarcation of boundaries, and geodetic ground investigations, not only to resolve disputes but also to preserve state boundaries and ancient African "relict boundaries". The latter term refers to antecedent boundaries which were abandoned for political purposes but are still evident in the cultural landscape and, as such, manifest themselves in space
by, among other features, direct border remains such as border stones, mounds, ancient walls, border roads, clearings, customs houses, and watch-towers. These are among the less known African heritage treasures that EO and RS can help to unveil, document, and preserve within national and international legal frameworks and space policies.

Education and Capacity Building in EO and RS for Cultural Heritage

All the papers summarized above were published by expert scientists and researchers who are extremely familiar with, and competent in, EO, RS, geoscientific ground investigations, and laboratory analytical techniques. Knowledge transfer and capacity building for heritage stakeholders and early beginners are still challenging tasks, and require a specialist educational preparation that is not obvious. Showcasing the ability of a technology to support a specific operational task (e.g., condition assessment of heritage sites) does not mean that the potential users of that technology will be able to use it themselves or, after training, will recognize the value of that technology and will seek it out in their daily duties. In the current context, where more work is definitely required to reach users and stakeholders and generate real impact on archaeological and heritage practice, the paper by Matusch et al. [40] is proof that some initiatives are ongoing. The authors present the e-learning module Space2Place, which they developed in the framework of the project "Space4Geography", carried out between 2013 and 2017, with the aim of empowering UNESCO site stakeholders to incorporate EO into their working routines. This e-learning module is contextualized in the current situation of knowledge gaps among users, limited technical and financial facilities, and the lack of ready-to-use data, despite the abundance of satellite data and user-oriented services made available by EO programs such as the European Commission's Copernicus. Space2Place is therefore a capacity-building initiative to give heritage stakeholders a substantial introduction to EO and help them overcome the knowledge barriers that may exist. One of the key features of this paper is the discussion of the results of an expert survey that the authors ran with the participation of 11 experts from various institutions. The survey provides insights into the main barriers and expected benefits that stakeholders perceive in the use of EO to address specific threats to the conservation of cultural heritage (e.g., climate change, natural hazards, intentional destruction, and warfare). Of all the interesting elements emerging from this direct feedback, two are worthy of mention. First, not all EO data are appropriate for every task; stakeholders therefore need to be able to choose the appropriate EO sensor(s) themselves with regard to their specific needs, the study time, and the size and location of the site to be observed. This approach will make stakeholders aware and critical users of these technologies. Second, there is a clear demand for up-to-date information with high cost-efficiency that can be used in support of daily and routine tasks, such as the detection of impacts, the evaluation of interventions, and the early detection of critical changes in heritage sites. However, accessibility in terms of finance, infrastructure, and human resources remains a constraint.
Figure 1. (a) Countries where the study areas of the papers published in the Special Issue are located; (b) geographic distribution of the study areas distinguished by typology ("landscape" in the case of regional archaeological mapping and wide-area archaeological prospection; "site" in the case of site-focused studies and investigations in a single location). The African continent is marked in grey because one of the published papers [22] provides an overview of space law and space sciences for archaeological and heritage research in contemporary Africa.

Table 1. Article metrics of the papers published in the first edition of the Special Issue as of 01/04/2019 (source: Geosciences).
Resources and Costs Associated with the Treatment of Advanced and Metastatic Gastric Cancer in the Mexican Public Sector: A Patient Chart Review

Background: Little evidence is available on the management and cost of treating patients with advanced or metastatic gastric cancer (GC). This study evaluates patient characteristics, treatment patterns, and resource utilization for these patients in Mexico. Methods: Data were collected from three centers of investigation (tertiary level). Patients were ≥18 years of age, diagnosed between 1 January 2009 and 1 January 2015, had advanced or metastatic GC, received first-line fluoropyrimidine/platinum, and had ≥3 months of follow-up after discontinuing first-line treatment. Data were summarized using descriptive statistics. Results: The study sample totaled 180. Patients' mean age was 57.2 years (±12.4) and 57.0% were male; 151 (83.9%) patients received second-line chemotherapy. A total of 16 and 19 regimens were identified in first- and second-line therapy, respectively. Of the sample, 51 (28.3%) received third-line therapy, and <10% received more than three lines of active chemotherapy. Supportive care received in first- and second-line chemotherapy included pain interventions (12.2 and 7.9%), nutritional support (3.3 and 1.3%), radiotherapy (6.1 and 16.6%), and transfusions (13.3 and 10.6%), respectively. Using Mexican Institute of Social Security (IMSS) tariffs, the average total cost per patient-month in first- and second-line therapy was US$1230 [95% confidence interval (CI) 1034-1425] and US$1192 (95% CI 913-1471), respectively. Administration and acquisition of chemotherapy comprised the majority of costs. Conclusions: This study shows considerable variation in the first- and second-line chemotherapy regimens of patients with advanced or metastatic GC. Understanding GC treatment patterns in Mexico will help address unmet needs.

Introduction

Over the last 50 years, the reduction in the incidence of, and mortality due to, gastric cancer (GC) worldwide has been significant [1]. Despite this, GC remains highly ranked in both cancer incidence and mortality; in 2012 the World Health Organization (WHO) ranked it as the fifth most common malignancy and the third most common cause of cancer death worldwide [2]. In particular, GC has a high burden of disease in developing countries, where approximately 60% of all cases are detected [3], and where stomach cancer ranks among the most frequent cancer types in terms of incidence and mortality [4]. The high mortality rate associated with GC is in part due to its lack of distinct symptoms, which allows GC to go unnoticed until advanced stages, when treatment options are limited [5]. While surgery is considered standard treatment for early-stage GC, the chemotherapy recommended for advanced stages remains relatively nonstandardized in terms of regimen selection. International guidelines recommend a two-drug combination of a fluoropyrimidine and a platinum in first-line treatment, without recommending specific regimens or specific product endorsements [6-8]. In second-line chemotherapy, ramucirumab, paclitaxel, docetaxel, and irinotecan are labeled as the preferred treatment options by the National Comprehensive Cancer Network (NCCN; 2014); however, only ramucirumab has formal approval in this indication. Nonetheless, in terms of real-world experience, little evidence is reported on GC management practices that identifies the most frequently implemented strategies from the wide range of options available.
This lack of data is surprising as, according to the few published studies available, the economic burden of GC is relatively high, and data regarding treatment patterns could potentially identify areas of cost saving. In a retrospective study presented as an abstract in 2011, Knopf et al. collected monthly resource utilization data on patients with GC versus nondiagnosed controls for the period 2007–2009. The mean monthly cost for patients with GC was US$10,653, versus US$571 for the control group [9]. Knopf et al. posited that while GC has a low prevalence in the US, the cost per patient has a disproportionate impact on the cost of care when compared with other cancers. In a study by the US National Cancer Institute, the cost of GC was estimated to be US$1.82 billion in 2010 [10]. In a third US study, by Yabroff et al., it was observed that the cost of GC in terms of initial care was approximately US$5348, while for the last year of patient care the cost rose to US$7435 [11].

Mexico is currently considered a medium-risk area for GC, as defined by incidence rate; in 2012, the WHO estimated a rate of 7.9 per 100,000 inhabitants [2]. Nevertheless, national resources rank GC as the second cause of death associated with cancer, and the first cause of mortality in the country due to digestive tract neoplasms [12]. However, similar to the international literature, there is little published evidence on patient management, resource utilization, and economic burden, and none on the economic impact of the disease [13]. This study was developed to better understand the current treatment patterns of patients diagnosed with GC in Mexico in order to support public policy regarding GC treatment programs. The primary objectives of the study were to (1) describe the demographic and clinical characteristics of the target patient population in Mexico, and (2) identify and describe treatment patterns used in standard practice. Secondary objectives were to estimate direct costs associated with the treatment of these patients. This study focuses on second-line therapy.

Methods
A retrospective, observational study was designed to collect data regarding patient characteristics and institutional resource use from medical records in the Mexican public system. The target population was patients diagnosed with metastatic/unresectable GC (including gastroesophageal junction) between 1 January 2009 and 1 January 2015, and treated in tertiary-level centers of investigation. We defined the index date as the date recorded for diagnosis of advanced or metastatic GC. A sample size of 200 was set as a target. Ethics approval was obtained, and data capture respected international patient privacy regulations. Inclusion criteria were:
• Patients completed first-line chemotherapy that included a platinum analog and a fluoropyrimidine, with or without another medication, and continued with either second-line treatment or palliative therapy;
• Patients were ≥18 years of age at the time of diagnosis;
• Medical records were required to have a follow-up of ≥3 months following the last administration of first-line treatment, except those recording a documented death. This criterion was applied because a pilot analysis of data collected within the first 3 days showed that the number of patients with either (1) less than 3 months of follow-up, or (2) documented death within 3 months, exceeded one-third of the collected sample.
The prespecified criterion regarding minimum follow-up was included in order to allow for analysis of the types of agents used in second-line treatment in a situation where a large percentage of patients was being lost to follow-up. This loss to follow-up was experienced in a similar study conducted by the sponsor in Brazil, where a high percentage of patients left third-level facilities once receiving best supportive care (BSC) [14]. As such, the sponsor recommended the inclusion of this criterion in order to prioritize the capture of resource utilization of patients who remain in tertiary-level hospitals over the calculation of the percentage of patients treated in second-line with an active chemotherapy and the resource utilization of BSC. Exclusion criteria consisted of patients who had participated, or were currently participating, in any controlled clinical study, and patients with a second malignant disease diagnosed before or after the diagnosis of metastatic/unresectable GC.

Data Collection
Data were captured using a paper Data Report File (DRF) that had been validated by the principal investigator against the Mexican Institute of Social Security (IMSS) patient files. DRFs were monitored for completeness and precision by a third-party monitor and transferred to an electronic database. Variables collected included patient demographics and clinical characteristics, treatment received, adverse events, hospitalization and outpatient visits, and resource utilization.

Outcomes
The primary outcomes of the analysis were defined as the demographic and clinical characteristics of the patient population in Mexico, the proportion of patients treated with each chemotherapy regimen per treatment line, and resource utilization, while secondary outcomes were defined as cost per patient-month and the distribution of cost per patient-month by category. Start and end dates, as well as duration, were calculated using dates reported in the patient files. The number of days from diagnosis to treatment was calculated from the index date of diagnosis to the day of the first chemotherapy treatment in first-line. In order to include all resources used in each line of therapy, lines of treatment were calculated from the first day of treatment to the day before the following line of chemotherapy; for the first line of treatment, the start was defined as the first date of hospitalization, radiotherapy, or chemotherapy following a diagnosis of advanced or metastatic GC. End of treatment was defined by the last date registered before loss to follow-up, death, or the cut-off date of data capture. Healthcare utilization rates were categorized by the type of medical resource or service: specifically, acquisition of chemotherapy and premedication products, administration, adverse events, radiotherapy, inpatient hospitalization, outpatient visits, and use of supportive care procedures and tests. Medicine use, both chemotherapy and premedication, was calculated as the number of cycles multiplied by total milligrams per cycle, based on reported posology; costs per milligram were then applied. Costs of administration, radiotherapy, and supportive care units were calculated from the reported number of sessions or units. The number of grade 3 and 4 adverse events was collected, and the cost of treatment was calculated by adding reported treatments and procedures.
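Purely as an illustration of the line-duration and acquisition-cost arithmetic described above, the following is a minimal Python sketch. The function names, record fields, and example figures are hypothetical placeholders, not the study's DRF variables or actual data.

```python
from datetime import date
from typing import Optional

def line_duration_days(line_start: date,
                       next_line_start: Optional[date],
                       last_date_on_record: date) -> int:
    """A line of therapy runs from its first treatment day to the day
    before the next line starts, or else to the last registered date
    (loss to follow-up, death, or the data cut-off)."""
    if next_line_start is not None:
        # days from line_start up to the day before next_line_start, inclusive
        return (next_line_start - line_start).days
    return (last_date_on_record - line_start).days + 1

def medicine_cost(cycles: int, mg_per_cycle: float, cost_per_mg: float) -> float:
    """Acquisition cost = number of cycles x total mg per cycle x cost per mg."""
    return cycles * mg_per_cycle * cost_per_mg

# Hypothetical example: 6 cycles at 8,000 mg per cycle, MXN$0.50 per mg,
# for a line running 1 Mar 2012 until the next line starts on 15 Jul 2012.
print(medicine_cost(6, 8_000, 0.50))                                   # 24000.0 MXN
print(line_duration_days(date(2012, 3, 1), date(2012, 7, 15), date(2015, 8, 8)))
```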
Hospitalization for adverse events was included in costs for inpatient hospitalization; inpatient hospitalization days were calculated according to reported admission and discharge dates only. Outpatient costs were calculated according to the reported type of visit, specifically emergency room (ER) visits, pain clinic, or oncology clinic/consultation. Supportive care resources included tests and procedures not captured in adverse events, but did not include medication, as posology data were not captured for supportive care. Treatment prior to diagnosis of advanced or metastatic GC was not included in the analysis. This study estimated direct medical costs from a payer perspective, from the index date to the last recorded data point or to the end of data collection (8 August 2015). Cost estimates were calculated using resource utilization data and their corresponding unitary costs. Unit costs were taken from the IMSS unitary costs list for procedures (2015) [15], while the acquisition cost of medication was taken from the IMSS public tenders (2015) [16]. A limited number of supportive-care costs not published by the IMSS were taken from the National Institute of Cancer (INCAN) 2015 unitary costs list [17] (n = 16 variables); however, due to the number and generally low cost of the affected variables, this had little impact on results. All estimated costs were in 2015 Mexican pesos (MXN$) and then converted to 2015 US dollars (US$). The exchange rate was 0.06317, calculated as the average exchange rate of 2015 from the database of the Bank of Mexico (1 January to 31 December).

Statistical Methods
Statistical analysis was descriptive due to the observational nature of the study, and was performed for all main variables collected. The mean, median, mode, and standard deviation were calculated for continuous variables, and frequency and proportion were calculated for categorical variables. All measures were assessed using complete case analysis, with missing values being omitted in the final analysis of each variable. Descriptive analyses were completed in Excel 2010 (Microsoft Corporation, Redmond, WA, USA), and cost analysis was completed in STATA v11 (StataCorp LLC, College Station, TX, USA).

Results
Demographic and Clinical Characteristics
The final sample size of the study was 180, collected from three tertiary-level centers of investigation. Due to the low incidence, all patients meeting the inclusion criteria in the participating institutes were included. In the IMSS hospital, Centro Médico Nacional (CMN) Siglo XXI, patient files are organized according to consulting office; all patients who met the inclusion criteria of the two consulting offices were included. The majority of patients were treated in IMSS CMN Siglo XXI (n = 167; 92.8%), followed by the Secretary of the Navy (SEMAR) (n = 7; 3.9%) and patients found through the investigation center IBiomed (n = 6; 3.3%). The patient selection process is outlined in Fig. 1. The demographic and clinical characteristics of patients are summarized in Table 1. The maximum level of education reported was generally low (n = 137): no schooling, 7.3%; primary school (age 6–12 years), 29.2%; secondary school (age 12–14 years), 27.0%; high school, 21.9%; and completed university or postgraduate studies, 14.6%. The majority of patients (57.1%) reported no comorbidities, 14.9% reported diabetes, and 9.1% reported idiopathic hypertension; five patients had missing values. All other comorbidities affected <5% of the sample.
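To make the cost-per-patient-month and currency-conversion steps concrete, here is a minimal sketch. Only the 0.06317 MXN-to-USD factor comes from the text above; the patient figures are invented, and the normal-approximation 95 % CI is an assumption about how such intervals could be derived, not a statement of the authors' exact method.

```python
import statistics

MXN_TO_USD_2015 = 0.06317  # average 2015 rate, Bank of Mexico (as stated above)

def cost_per_patient_month(total_cost_mxn: float, months_on_line: float) -> float:
    """Monthly cost for one patient in one line of therapy, in 2015 US$."""
    return (total_cost_mxn / months_on_line) * MXN_TO_USD_2015

def mean_with_95ci(values: list[float]) -> tuple[float, float, float]:
    """Mean and a normal-approximation 95% CI (assumed method)."""
    mean = statistics.mean(values)
    se = statistics.stdev(values) / len(values) ** 0.5
    return mean, mean - 1.96 * se, mean + 1.96 * se

# Hypothetical first-line patients: (total cost in MXN, months in line)
patients = [(120_000, 5.5), (80_000, 4.0), (200_000, 9.0)]
monthly_usd = [cost_per_patient_month(c, m) for c, m in patients]
print(mean_with_95ci(monthly_usd))
```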
Smoking status was reported by 173 patients: 52.0% were active smokers and 3.5% were former smokers. Almost all patients included in the study were diagnosed with adenocarcinoma, totaling 98.9% of the 179 patients reporting data on the variable. Of the 173 patients with data, 96.1% were diagnosed with metastasis: approximately one-third of patients (28.9%) in two or more locations, 8.9% in three or more sites, and only 2.2% in four sites. Tumor characteristics of both the primary location and the metastatic location are shown in Fig. 2. Patients had an average waiting time of 30.5 days [95% confidence interval (CI) 19.4–41.7] from diagnosis to first-line treatment.

Treatment Patterns
A total of 16 and 19 unique treatment regimens were identified for first- and second-line active treatment, respectively. Each of the five most used regimens in first-line therapy represented ≥10% of the total population, summing to 93.9% of all selected therapies. Selection of second-line treatment showed each of the three most frequently used regimens representing ≥10% of the overall population, and 64.9% of all regimens. Of the sample, 29 patients went on to receive BSC after first-line treatment; however, due to the small sample and lack of data on resource use, results for these patients are not presented. No obvious tendency of cycle length per regimen could be observed; however, due to the small sample size of each regimen, statistical differences were not tested for. Details of treatment characteristics and frequently used regimens are included in Tables 2 and 3.

Resource Utilization
The majority of hospitalization in first- and second-line treatment was associated with surgery. In second-line treatment, inpatient care was also linked to treatment of toxicity and adverse events. Supportive care was most associated with pain treatment (pain clinic and narcotics) and the use of endoscopies (see Table 4 for details). Of the sample, 151 (84%) patients received two lines of chemotherapy, 51 (28.3%) received three lines, and <10% received more than three lines of active chemotherapy. The maximum number of lines of therapy received by a patient was seven. Estimated average costs per patient-month are presented in Table 5.

Discussion
Given the limited information that currently exists on patients diagnosed with advanced or metastatic GC in Mexico, this study was designed to better understand patient characteristics, real-world treatment patterns, and healthcare resource use. Additionally, this study estimated the average cost per patient-month. The patient population of Mexico follows international tendencies for late- or advanced-stage diagnosis, with the majority of patients being diagnosed with stage III and IV GC. Of the study population, approximately 80% of patients started first-line treatment with an Eastern Cooperative Oncology Group (ECOG) performance status of 1. While this number decreased to 56.6% in second-line treatment, this reflects patients who had ≥3 months of follow-up, and likely represents patients who had a more positive response to treatment. Nonetheless, the data further suggest that few patients are treated once reaching ECOG 3. The findings of this study show a wide variety of regimens used in both first- and second-line treatment, with a total of 16 and 19 unique treatment regimens, respectively. Variability in treatment patterns has been demonstrated to be an internationally consistent trait, as shown in similar studies conducted in the US, Taiwan, and South Korea [18][19][20].
The variation in Mexico remains comparatively high, which may reflect the lack of hospital-specific guidelines for the institutions included in the analysis. However, the fact that the most frequently used regimens in first- and second-line therapy represent 93.9 and 64.9% of patients, respectively, suggests a level of conformity in the selection process. When the investigating physicians were consulted on the results, it was submitted that regimen selection is influenced primarily by the availability of specific chemotherapies in the hospital pharmacy at the time of prescription, and by their form of administration. The preference towards prescribing orally administered products may be a reflection of this latter variable; specifically, capecitabine was administered to 58.3 and 60.3% of patients in first- and second-line treatment, respectively. Patients were purposefully switched to capecitabine from first- to second-line therapy when the viability of oral treatment was improved by first-line intravenous regimens. Oral chemotherapy may also be a demonstration of cost-constraining policies being implemented in the main hospital of the study, i.e. the IMSS, as an attempt by doctors to reduce hospital visits. Administration of chemotherapy was identified as the main cost driver of this analysis, and a reduction of visits for this purpose would have an important impact on overall costs. Additionally, the study estimated that, on average, patients wait 30.5 days from diagnosis of metastatic GC to treatment. This may be another cost-constraining effort, as the IMSS manages a global budget and a delay in treatment may increase cash-flow flexibility for administrators. However, it is important to note that this may be institute-specific, as wait times for elective services are reported to be highly variable across and within institutes [21]. Finally, the study saw a lack of follow-up in tertiary-level care institutes for patients receiving BSC, as the demand for resources at the high-volume IMSS hospital requires that these patients move to primary care units for follow-up care.

In terms of overall cost estimates, the administration schedule of the selected regimen had the largest impact on the average cost per patient, representing 35–48% of total costs in first and second lines of treatment. This, compared with the 14–21% for the cost of drug acquisition, highlights the generic status of the products used. Inpatient hospitalization and supportive care used a comparable proportion of resources as medication. Supportive care medication and outpatient care were minimal and were primarily associated with analgesics and narcotics (morphine, buprenorphine, and tramadol), as well as nutritional support. The total average costs per patient-month of first- and second-line care were very similar, at US$1230 and US$1192, which is far lower than the US$10,653 per patient-month published by Knopf et al. for patients diagnosed with GC in the US [9]. In comparison, Yabroff et al. estimated that the last year of care for a patient with GC in the US cost US$7435 [11]. This may be comparable with the results of a recently published paper in Mexico that estimated the cost per patient-year of late-stage breast cancer in the IMSS for stages III and IV at MXN$154,018 and MXN$199,274, respectively, equivalent to US$9729 and US$12,587 using the previously stated 2015 average exchange rate [22].
While firm conclusions are hard to draw given the different time horizons of the analyses, considering the short overall survival of GC patients it may be expected that overall spending on advanced GC in Mexico is similar to that on breast cancer. This is the first health resource utilization study completed in Mexico focusing on the standard of care for patients diagnosed with advanced and metastatic GC, and it can be seen as a step towards providing information regarding treatment patterns and estimating the overall costs of these patients. This is of particular importance in a field where the majority of treatment options remain generic and where the recent and future development of innovative products will increase overall treatment costs for public providers. Healthcare in Mexico is provided by multiple public institutes that deliver full or subsidized care depending on employment status. However, provision of care between institutes is not equal; each institute makes its own decisions on the benefits and products to be provided, given the available resources. The results presented in this study are primarily a reflection of treatment practices and costs in the IMSS, the largest public healthcare provider in Mexico, which covers private sector employees and their dependents [24]. These differences in spending are accounted for by the social/economic differences of the contributing populations.

While the study provides an important starting point for data collection in GC in Mexico, it is important to note certain limitations. In particular, the small sample size combined with the range of regimens identified in both first- and second-line therapy limited the ability to estimate costs per regimen and to compare results across regimens. Furthermore, the generalization of these results to all public institutes is limited. While the IMSS is the largest public institute in Mexico (insuring approximately 32% of the population [25]), as can be inferred from the different levels of expenditure between institutes, treatment patterns may vary across institutes. The treatment patterns presented here illustrate the IMSS as an institute rather than national tendencies. Similarly, the unitary costs are representative of the IMSS, and care should be taken when applying them to other institutes with different cost structures. A direct comparison between the price lists of the IMSS and the Secretariat of Health shows that the costs published for INCAN and the National Institute of Medical Science and Nutrition Salvador Zubirán (INCMNSZ) are approximately 5 and 12% of the cost of the IMSS.

[Notes to Table 4: CI confidence interval. (a) Pain interventions include use of the pain clinic and use of narcotics, including morphine, buprenorphine, and tramadol. (b) The radiotherapy data presented in this paper differ from the poster presented on the same study (Jones et al. [23]); differences were found between the reporting of posology, necessary for the cost calculations included in this paper, and the general resource use included in the poster. This paper reports resource use using posology data due to the greater level of detail presented and in order to maintain consistency between resource use and costs. Differences are potentially due to capture error (transfer from the paper Data Report File to the electronic database) and to patients who received radiotherapy but did not have posology details on file.]
This difference has the potential to change both the total costs and the distribution of costs, even while resource use remains constant across institutes. Importantly, the protocol planned for the possible loss of patients to the system and included a criterion of a minimum of 3 months of follow-up in cases where more than one-third of all patients were lost before 3 months. This prioritized the capture of resource utilization of patients treated in tertiary care hospitals over the calculation of the percentage of patients treated in second-line with an active chemotherapy, and introduced an important bias, as patients are frequently sent to local health units to receive BSC. As a result, these patients were not captured in the study and it was not possible to calculate the proportion of patients treated with active care in second-line versus BSC. Finally, future investigations should look to expand the objective patient population to all patients, increase the study size, and include additional public institutes of interest in order to calculate more universally applicable results.

Conclusion
To our knowledge, this is the first study to look at patient management and resource use for patients with advanced or metastatic GC in Mexico, with results showing considerable variation in first- and second-line chemotherapy regimens. Understanding GC treatment patterns in Mexico will help measure the impact of new innovations in treatment practice and create opportunities to harmonize treatment options.

Author contributions
… content; provided final approval of the version to be published. JAT: Substantial contributions to conception and design of the study; acquisition, analysis, and interpretation of data; drafting and critically revising the manuscript for important intellectual content; provided final approval of the version to be published. DN: Substantial contributions to conception and design of the study; acquisition, analysis, and interpretation of data; critically revising the manuscript for important intellectual content; provided final approval of the version to be published. KJ: Substantial contributions to conception and design of the study; acquisition, analysis, and interpretation of data; drafting the manuscript; provided final approval of the version to be published. BSB: Substantial contributions to conception and design of the study; acquisition, analysis, and interpretation of data; critically revising the manuscript for important intellectual content; provided final approval of the version to be published. JAS: Substantial contributions to conception and design of the study; acquisition of data; critically revising the manuscript for important intellectual content; provided final approval of the version to be published.

Compliance with Ethical Standards
Data availability statement: The data are not made available at this time as they are currently being analyzed for further publications. Patient consent was not required as the study was a retrospective, observational, patient chart review. Consent for publication: Data capture respected international patient privacy regulations. Data that allowed for patient identification were not collected. Funding: Funding for this study was provided by Eli Lilly and Company. Conflict of interest: Miguel Quintana has received professional fees as a speaker on issues of GC for Eli Lilly and Company in Mexico. Diego Novick declares he is an employee of and owns stock in Eli Lilly and Company.
Kyla Jones has received professional fees to conduct both the current study and additional studies for Eli Lilly and Company. Brenda S. Botello is an ex-employee of Eli Lilly and Company who was employed during data acquisition, analysis, and development of the manuscript. She has no current competing interests.

Open Access
This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Evaluation of Raw Material Inventory in a Socks Home Industry Using Economic Order Quantity (EOQ)

This study aims to evaluate the purchasing of raw materials by a socks home industry using the Economic Order Quantity (EOQ) method, to compare it with the company's actual method, and to find out whether the EOQ method is effective when applied to the company. The data used in this study are notes, reports, and company documentation for the periods 2017, 2018, and 2019, provided by the owner through observation, interview, documentation, and literature study. The usage data, raw material prices, and raw material purchase data are obtained from company records. The analysis techniques in this study are the EOQ formula, standard deviation, safety stock, Total Holding Cost, Total Ordering Cost, and Total Inventory Cost. The result of this research is that the EOQ method is effective for socks home industry companies to control inventory more efficiently, because it can minimize inventory costs. Key-words: Economic Order Quantity, Safety Stock, Total Ordering Cost, Total Holding Cost, Total Inventory Cost.

Introduction
Covid-19 has had a global and local impact on the economy, so that many projects or businesses are not running smoothly, resulting in a sizeable deficit. This has happened to large and small companies alike, including Small and Medium Industries (SMI); in textile products, for example, 80% of companies stopped their activities (https://mediaindonesia.com). The effect also extends to the small textile industry that operates at home, commonly called the home industry. One of the SMIs affected by this pandemic is a home industry producing socks, located in Bekasi district, South Tambun sub-district. Socks are one of these processed textile products; as demand for socks decreases, it becomes difficult for business actors to estimate the costs of procuring raw materials and the quantity of raw materials needed, because the capital available from profit is limited. Therefore, this research focuses on the supply of the raw material used in producing socks, namely yarn. The types of yarn used in the production of socks are polyester, rubber, spandex, and PE yarns; these are the dominant raw materials in the manufacture of socks. The socks home industry was selected as the object of research because its inventory control system is still simple: the owner orders by traditional estimate, that is, when the amount of available raw material starts to run low or runs out, the company immediately orders raw materials again so that the production process does not stop. No calculation method has yet been implemented in the home industry's management; business actors do not know the optimal quantity of raw materials for one order, because they have neither sufficient knowledge nor experience in inventory control. Data on raw material purchases obtained from company records show an increase over the last 3 years, due to an increase in finished socks production; prices per kilogram of raw materials were taken from the same records. Data on the use of raw materials are obtained by calculating the amount of raw material used for every dozen socks produced, multiplied by the number of sales in each month of a period. The use of raw materials per dozen is detailed in Table 5.
Therefore, it is necessary to use the sales data for the last 3 years, processed in Table 4, to obtain the raw material usage of the socks production process. Looking at sales in the last 3 years, the company's sales of finished socks look good, because sales increased fairly strongly every year. Table 5 shows the raw material required per dozen of each product:

Socks Type      | Poly  | Span  | PE    | Rubber
office socks    | 120 g | 120 g | 120 g | 48 g
MK adult male   | 84 g  | 72 g  | 84 g  | 60 g
MK adult ladies | 60 g  | 48 g  | 60 g  | 24 g
TOTAL           | 264 g | 240 g | 264 g | 132 g
Source: Data processed

There are 3 types of socks produced, namely office socks, adult male ankle socks, and adult female ankle socks. As can be seen in Table 5, each dozen of each product requires different amounts of raw materials. This study aims to determine the economic order quantity for each raw material order; to calculate the Safety Stock and Re-Order Point; to determine storage and ordering costs; to find the economical order frequency; to determine the total cost needed to reach an efficient level of raw material supply; and to account for the value-added tax of 10%. From the description above, the researchers are motivated to use the EOQ method for inventory control, because this method is well known and often applied in various companies. In addition, the researchers chose the Economic Order Quantity method because it can answer a question that often arises in companies, namely determining the order quantity that matches the company's needs.

Literature Review
Inventory management is part of an inventory system to achieve minimum costs (Panday et al.). The related quantities are the order quantity, the reorder time, the number of items to be ordered, and the average inventory level. Inventory management aims to serve customers, to anticipate and meet demand, to maximize the efficiency of purchasing, to minimize stock cost, and to maximize profits. Too much inventory causes increased holding costs, while too little inventory results in increased ordering costs and excessive ordering frequency (Panday et al.). The types of inventory according to (Heizer, J., & Render), (Hillier and Lieberman), and (Blumenfeld) are raw material inventory, work-in-process inventory, maintenance/repair/operating inventory, and finished-goods inventory.

EOQ (Economic Order Quantity)
EOQ is one of the oldest and most widely known inventory management models; it determines the quantity of goods/raw materials to order at minimal cost (Heizer, J., & Render) (Hillier and Lieberman) (Blumenfeld) (Kalaiarasi). The goal of the EOQ method is efficiency of inventory levels in terms of low cost and optimal needs. The EOQ method has been used in several studies, for example by (Yuliana et al.).

Prior Research
Research using EOQ to optimize inventory and costs includes (Wahyudi) on sandals inventory at Samarinda's New Era store, and (Nurhasanah).

Method
This research is a quantitative study, conducted in Tambun Selatan, Bekasi, from September 2020 to January 2021. The calculation method uses Economic Order Quantity. The data needed are inventory data, which include sales data, ordering costs, holding costs, and product ordering data. After collecting all data, a calculation is made using the Economic Order Quantity formula (a computational sketch of the quantities involved is given below). Then a comparative analysis is carried out against the company's actual inventory management.

Result and Discussion
Before calculating the EOQ, the storage and ordering costs per unit are determined.
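As referenced in the Method section above, the following is a minimal Python sketch of the quantities named there (EOQ, safety stock, reorder point, total inventory cost), together with the per-dozen usage calculation from Table 5. The Rp 125,000 ordering cost matches the figure derived in the next paragraph; the sales volumes, holding cost, service level, demand variability, and lead time are hypothetical assumptions, not the company's data.

```python
import math

def annual_demand_kg(grams_per_dozen: list[float], dozens_sold: list[float]) -> float:
    """Yearly usage of one yarn type: per-dozen grams x dozens sold, summed
    over the three sock products and converted to kilograms."""
    return sum(g * d for g, d in zip(grams_per_dozen, dozens_sold)) / 1000

def eoq(demand: float, order_cost: float, holding_cost: float) -> float:
    """Economic order quantity: sqrt(2DS / H)."""
    return math.sqrt(2 * demand * order_cost / holding_cost)

def safety_stock(z: float, demand_std: float) -> float:
    """Safety stock = service-level factor x standard deviation of demand."""
    return z * demand_std

def reorder_point(avg_monthly_demand: float, lead_time_months: float, ss: float) -> float:
    """Reorder when stock falls to expected lead-time demand plus safety stock."""
    return avg_monthly_demand * lead_time_months + ss

def total_inventory_cost(demand: float, q: float,
                         order_cost: float, holding_cost: float) -> float:
    """TIC = ordering cost (D/Q x S) + holding cost (Q/2 x H)."""
    return (demand / q) * order_cost + (q / 2) * holding_cost

# Polyester example: per-dozen grams from Table 5, with hypothetical yearly
# sales of 4,000 / 3,000 / 2,500 dozen for the three products.
D = annual_demand_kg([120, 84, 60], [4000, 3000, 2500])   # kg of polyester/year
S, H = 125_000, 5_000          # Rp per order (from the text); Rp/kg/year assumed
q_star = eoq(D, S, H)
ss = safety_stock(1.65, 40)    # ~95% service level; sd of monthly demand assumed
rop = reorder_point(D / 12, 0.5, ss)
print(round(q_star), round(rop), round(total_inventory_cost(D, q_star, S, H)))
```

Note that at the EOQ the ordering and holding components of total inventory cost are equal; this balance is exactly the cost-minimizing property that the comparison below relies on.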
The Home Socks Industry purchases yarn raw materials independently, using a private car, once a month; purchasing raw materials in the Bandung Regency area involves a fairly long trip from Bekasi Regency. The costs incurred by the company are gasoline, Rp 300,000; toll fees, Rp 150,000 per trip; and consumption on the trip, Rp 50,000 — giving Rp 500,000 per month, or Rp 6,000,000 in one period (year). The ordering cost per order of each raw material is therefore:

Ordering cost per order = (Total order cost / Ordering frequency) / Number of raw material types = (Rp 6,000,000 / 12) / 4 = Rp 500,000 / 4 = Rp 125,000

where 4 is the number of types of raw materials ordered, so the ordering cost incurred is Rp 125,000 per order per raw material.

Comparison of Actual Procedures and EOQ
The results of the computations carried out previously for the two methods are then compared, to give the socks home industry company a decisive basis for choosing the most effective and efficient method. Comparing the company-procedure table with the EOQ procedure shows that the EOQ method produces a larger optimal order quantity, which exceeds the warehouse capacity of only 500 kg. Therefore, the company must increase the capacity of its existing warehouse before the EOQ method can be implemented. With EOQ, the company also gains a safety stock and a reorder point indicating when it must reorder. Under the EOQ method, the ordering frequency per period is reduced, the total cost of inventory becomes more economical, and the value-added tax also decreases. With the EOQ method, the company can save costs of Rp 3,733,498 in the 2017 period, Rp 3,898,388 in the 2018 period, and Rp 4,068,296 in the 2019 period.

Conclusions
Based on the results obtained by using the Economic Order Quantity (EOQ) method at the home socks industry company, the following conclusions can be drawn:
High-dose intravenous selenium does not improve clinical outcomes in the critically ill: a systematic review and meta-analysis

Background Selenium (Se) is an essential trace element with antioxidant, anti-inflammatory, and immunomodulatory effects. So far, several randomized clinical trials (RCTs) have demonstrated that parenteral Se may improve clinical outcomes in intensive care unit (ICU) patients. Since publication of our previous systematic review and meta-analysis on antioxidants in the ICU, reports of several trials have been published, including the largest RCT on Se therapy. The purpose of the present systematic review was to update our previous data on intravenous (IV) Se in the critically ill. Methods We searched MEDLINE, Embase, and the Cochrane Central Register of Controlled Trials. We included RCTs with parallel groups comparing parenteral Se as single or combined therapy with placebo. Potential trials were evaluated according to specific eligibility criteria, and two reviewers abstracted data from original trials in duplicate independently. Overall mortality was the primary outcome; secondary outcomes were infections, ICU length of stay (LOS), hospital LOS, ventilator days, and new renal dysfunction. Results A total of 21 RCTs met our inclusion criteria. When the data from these trials were aggregated, IV Se had no effect on mortality (risk ratio [RR] 0.98, 95 % CI 0.90–1.08, P = 0.72, heterogeneity I² = 0 %). In addition, when the results of ten trials in which researchers reported on infections were statistically aggregated, there was no significant treatment effect of parenteral Se (RR 0.95, 95 % CI 0.88–1.02, P = 0.15, I² = 0 %). There was no positive or negative effect of Se therapy on ICU and hospital LOS, renal function, or ventilator days. Conclusions In critically ill patients, IV Se as monotherapy does not improve clinical outcomes. Electronic supplementary material The online version of this article (doi:10.1186/s13054-016-1529-5) contains supplementary material, which is available to authorized users.

Background
Selenium (Se) is an essential trace element with anti-inflammatory and immunomodulatory properties, and it is currently considered the cornerstone of the antioxidant defense system [1,2]. Over the past 30 years, a large number of basic and clinical studies have revealed the crucial role of Se in the maintenance of immune, metabolic, endocrine, and cellular homeostasis, which is attributed to its presence in selenoproteins, as the 21st amino acid selenocysteine [3]. These selenoenzymes are involved in redox signaling, antioxidant defense, thyroid hormone metabolism, and immune responses [4]. Critical illness with systemic inflammation and multiple organ dysfunction syndrome has been shown to be associated with an early reduction in plasma/serum Se and glutathione peroxidase (GPx) activity, with both parameters correlating inversely with the severity of illness and clinical outcome [5,6]. Over the last two decades, researchers in several randomized clinical trials (RCTs) have evaluated the role of parenteral inorganic selenocompounds such as sodium selenite or selenious acid, either as a single-agent strategy or in combination with other antioxidant micronutrients (antioxidant cocktails), and using different dose regimens, in critically ill patients with systemic inflammation. These studies have shown beneficial results in terms of reduction of infections, mortality, and other relevant clinical outcomes in the critically ill.
In 2005, the authors of the first comprehensive systematic review and meta-analysis on antioxidant nutrients in the critically ill [7] demonstrated that Se supplementation could be associated with a reduction in mortality, while nonselenium antioxidants had no effect on mortality. More recently, authors of other systematic reviews and meta-analyses [8][9][10][11][12][13][14] on Se therapy in intensive care unit (ICU) patients found that pharmaconutrition with parenteral Se monotherapy may significantly reduce mortality in patients with sepsis, particularly when an intravenous (IV) bolus was provided and daily doses higher than 500 μg were administered. In addition, Se substitution was more effective in those patients with a higher risk of death [13]. Nonetheless, since 2013, several studies of the effects of parenteral Se supplementation as single or combined therapy have been published [7,[14][15][16][17]. The REducing Deaths due to Oxidative Stress (REDOXS) trial [16] investigators were unable to find a therapeutic benefit of a combined Se supplementation regimen (300 μg enteral plus 500 μg parenteral). The most recent and largest RCT on Se monotherapy in severe sepsis and septic shock, the Sodium Selenite and Procalcitonin Guided Antimicrobial Therapy in Severe Sepsis (SISPCT) study [17], further demonstrated that high-dose IV sodium selenite was not associated with improved survival. Therefore, with the aim of elucidating the overall efficacy of parenteral Se as single or combined therapy (antioxidant cocktails) in adult critically ill patients, we performed an update of our previous systematic review and meta-analysis of the literature.

Study identification
We conducted a systematic review of the literature published between 1980 and 2015 using the computerized databases of MEDLINE, Embase, the Cochrane Controlled Trials Register, and the Cochrane Database of Systematic Reviews. Text words or MeSH headings containing "randomized," "blind," "clinical trial," "parenteral," "intravenous," "selenium," "sodium selenite," "selenious acid," "antioxidant cocktails," "critical illness," and "critically ill" were used without any language restriction. We also reviewed our personal files and comprehensive reviews for additional original studies.

Study selection criteria
Original studies were included if they met the following criteria:
1. Randomized controlled trial study design with a parallel group
2. A population of critically ill adult patients (>18 years old), defined as patients admitted to an ICU (if the study population was unclear, we considered a mortality rate higher than 5 % in the control group to be consistent with critical illness)
3. The administration of parenteral Se in the intervention arm (either with or without an initial bolus) as a single-agent strategy or in combination with other antioxidant micronutrients, compared with a control group with a placebo
4. The evaluation of clinically relevant outcomes such as mortality, infectious complications, ICU or hospital length of stay (LOS), length of mechanical ventilation (MV), and new renal dysfunction, including the requirement of renal replacement therapy

Clinical studies that reported only biochemical, metabolic, or immunologic results were excluded. All original studies were evaluated and abstracted in duplicate independently by two reviewers using a data abstraction form that had been used previously [18]. A discussion was held and consensus was obtained between the reviewers when a disagreement occurred.
When additional data were needed, we attempted to contact the authors of the published article. We scored the methodological quality of the original trials on a scale from 0 to 14, using the following high-quality criteria: (1) the extent to which randomization was concealed, (2) intention-to-treat (ITT)-based analysis, (3) extent of blinding, (4) baseline comparability of groups, (5) extent of follow-up, (6) description of treatment protocols and cointerventions in both arms, and (7) definition of clinical outcomes [18]. We designated studies as level I if all of the following criteria were fulfilled: concealed randomization, blinded outcome adjudication, and an ITT analysis, which are the strongest methodological tools to reduce bias. A study was considered level II if any one of the above-described characteristics was unfulfilled.

Data analysis
The primary outcome was overall mortality. Hospital mortality, when available, was used for the statistical analysis. If not reported, we used ICU mortality or 28-day mortality. When not specified, mortality was assumed to be hospital mortality. Secondary outcomes included infections, hospital and ICU LOS, MV days, and new renal dysfunction as defined by the authors of the original articles. We used the definition of infections employed by each author. The data from all trials reporting the specific outcome were combined to calculate the pooled risk ratio (RR) for mortality and infections, and the pooled weighted mean difference (WMD) for LOS, both with 95 % CIs. All analyses were conducted using Review Manager (RevMan) 5.3 software, except for the test for asymmetry. Pooled RRs were calculated using the Mantel-Haenszel estimator, and WMDs were estimated by the inverse variance approach. The random effects model of DerSimonian and Laird [19] was used to estimate variances for the Mantel-Haenszel and inverse variance estimators. When possible, studies were aggregated on an ITT basis. Heterogeneity in the data was tested by a weighted Mantel-Haenszel chi-square test and quantified using the I² statistic implemented in RevMan 5.3 software. Differences between subgroups were analyzed using the test of subgroup differences described by Deeks et al. [20], and the results were expressed using P values. Funnel plots were generated to assess the possibility of publication bias, and the Egger regression test was used to measure funnel plot asymmetry [21]. Asymmetry was calculated using Comprehensive Meta-Analysis 3.0 statistical software (Biostat Inc., Englewood, NJ, USA). P values <0.05 and <0.10 were considered statistically significant and indicative of a trend, respectively.

A priori hypothesis testing
Significant differences in the protocols of the original studies were expected. Thus, several prespecified hypothesis-generating subgroup analyses were performed to identify potentially more beneficial treatment strategies. First, we compared the results of trials in which investigators administered parenteral Se as monotherapy with studies in which researchers provided parenteral Se in antioxidant cocktails. Based on previous RCTs showing a beneficial effect of an initial loading dose, RCTs using an initial loading dose as an IV bolus of Se were then compared with trials that did not.
In addition, because researchers in previous trials found that daily doses higher than 500 μg were associated with better outcomes, we compared the results between three subgroups with different daily doses: lower than 500 μg, equal to 500 μg, and greater than 500 μg. Moreover, on the basis of a possibly larger treatment effect in patients with a higher risk of death, we compared studies including patients with higher mortality vs. those with lower mortality. Mortality was considered high or low based on whether it was greater or less than the mean control group mortality of all the trials. Additionally, we postulated that trials with lower quality (level II studies) might demonstrate a greater treatment effect than trials with higher quality (level I studies). Furthermore, as current evidence showed benefits in terms of reduction in mortality in septic patients, the results of RCTs performed only with patients with sepsis were compared with RCTs performed with heterogeneous patient populations (nonsepsis studies). We also assessed the effect of Se in soils according to the geographical region where the trial was conducted; for this purpose, we compared RCTs performed in Se-deprived regions (Europe, South America, and Asia) versus trials performed in nondeprived regions (North America). Finally, given the interaction between Se and procalcitonin (PCT) in the SISPCT study [17], we conducted a sensitivity analysis excluding the PCT guidance group of patients.

Study identification and selection
A total of 41 relevant citations were identified in the search of computerized bibliographic databases and a review of reference lists in related articles. Of these, we excluded 20 for the following reasons: 8 trials did not include ICU patients (mostly surgery patients) [22][23][24][25][26][27][28][29]; 1 study did not evaluate clinical outcomes [30]; 1 study compared high-dose with low-dose Se [31]; 3 articles were duplicates [32][33][34]; 4 articles were systematic reviews; 1 trial was published as an abstract [35], and we were unable to obtain the data from the authors to complete our data abstraction process; 1 study was not an RCT [36]; and in 1 trial Se was not given intravenously [37]. Ultimately, 21 studies [14-17, 38-54] met our inclusion criteria and were included; they comprised a total of 4044 patients (Tables 1 and 2). The reviewers reached 100 % agreement on the inclusion of the trials. The mean methodological score of all trials was 9 of a maximum possible score of 14 (range 4-13). Randomization was concealed in 9 (43 %) of 21 trials; ITT analysis was performed in 14 (67 %) of 21 trials; and double-blinding was done in 7 (33 %) of the 21 studies. There were 6 level I studies and 15 level II studies. The details of the methodological quality of the individual trials are shown in Table 1.

Primary outcome: mortality
When the results of the 21 trials in which researchers reported mortality were aggregated, no statistically significant difference was found between Se supplementation and placebo (RR 0.98, 95 % CI 0.90-1.08, P = 0.72, heterogeneity I² = 0 %) (Fig. 1). In the sensitivity analysis, after excluding the PCT guidance group of the Bloos et al. study, we found that the new RR was 0.95 (95 % CI 0.79-1.15, P = 0.63, I² = 35 %) (Additional file 1: Figure S1).
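The pooled estimates above were produced in RevMan; purely as an illustration of the random-effects pooling described in the data analysis section, here is a minimal Python sketch using inverse-variance weighting of log risk ratios with a DerSimonian-Laird between-trial variance — a close relative of, but not identical to, the Mantel-Haenszel estimator RevMan applies. The trial counts are hypothetical placeholders, and no continuity correction for zero cells is included.

```python
import math

def dersimonian_laird_rr(trials):
    """Pool risk ratios across trials with a DerSimonian-Laird random-effects
    model on the log-RR scale. Each trial is (events_treat, n_treat,
    events_ctrl, n_ctrl); all event counts are assumed nonzero."""
    logs, variances = [], []
    for a, n1, c, n2 in trials:
        logs.append(math.log((a / n1) / (c / n2)))
        variances.append(1 / a - 1 / n1 + 1 / c - 1 / n2)  # var of log RR
    w = [1 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, logs)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, logs))  # Cochran's Q
    df = len(trials) - 1
    c_term = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c_term)            # between-trial variance
    w_star = [1 / (v + tau2) for v in variances]  # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, logs)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se),
            math.exp(pooled + 1.96 * se),
            i2)

# Three hypothetical trials: (deaths_Se, n_Se, deaths_placebo, n_placebo)
print(dersimonian_laird_rr([(30, 100, 35, 100),
                            (12, 60, 10, 58),
                            (150, 540, 160, 545)]))
```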
Secondary outcomes
Overall effect on new infectious complications
When data from ten studies were included in the meta-analysis, no significant effect of Se supplementation on infections was found (RR 0.95, 95 % CI 0.88-1.02, P = 0.15, I² = 0 %) (Fig. 2).

Overall effect on ICU and hospital length of stay and ventilator days
When ten RCTs in which researchers reported ICU LOS were statistically aggregated, there were no significant differences between the groups (WMD 0.32, 95 % CI −0.80 to …).

Overall effect on new renal dysfunction
After aggregating the data from eight RCTs in which researchers reported new renal dysfunction, Se therapy was not associated with a significant reduction in the incidence of renal dysfunction (RR 0.79, 95 % CI 0.57-1.08, P = 0.14).

Subgroup analyses
PN selenium monotherapy vs. combined therapy
The RR associated with parenteral Se monotherapy for mortality was 0.91 (95 % CI 0.79-1.04, P = 0.16, I² = 12 %) (Fig. 1), compared with 1.08 for studies in which researchers used Se as part of combination therapy (95 % CI 0.93-1.25, P = 0.33). The test of subgroup differences was not significant (P = 0.10). There was no effect on infections with either Se monotherapy or combined therapy (test for subgroup differences, P = 0.59) (Fig. 2).

PN selenium loading dose vs. no loading dose
There was no treatment effect difference in mortality between studies in which researchers used a parenteral loading dose of Se as an IV bolus (RR 0.90, 95 % CI 0.75-1.08, P = 0.30, I² = 18 %) and those not using a loading dose (RR 1.01, 95 % CI 0.90-1.14, P = 0.83) (Fig. 3). The test for subgroup differences was not statistically significant (P = 0.30). There was also no effect on infectious complications in studies in which researchers used a parenteral loading dose (RR 0.99, 95 % CI 0.90-1.09, P = 0.84, I² = 0 %), whereas there was a significant reduction in infections in studies without a parenteral Se loading dose (RR 0.87, 95 % CI 0.77-0.99, P = 0.03, I² = 0 %) (Fig. 4); the test for subgroup differences was not significant (P = 0.11).

PN selenium high dose vs. low dose
There were no significant treatment effect differences in mortality (test for subgroup differences P = 0.89) or infections (test for subgroup differences P = 0.52) when we compared studies with high-dose vs. low-dose parenteral Se supplementation (data not shown).

Higher vs. lower mortality
The mean hospital mortality rate (or ICU mortality when hospital mortality was not reported) in the control group of all the trials was 33 %. After aggregating 11 studies with a higher mortality rate, we found that IV Se did not have an effect on mortality (RR 0.94, 95 % CI 0.84-1.05, P = 0.26, I² = 17 %). In addition, IV Se did not have an effect on mortality in eight studies with lower mortality (RR 1.08, 95 % CI 0.90-1.08, P = 0.24, I² = 0 %). The test for subgroup differences was not significant (P = 0.17) (see Additional file 2: Figure S2). In addition, there were no significant treatment effect differences in infections (test for subgroup differences P = 0.35) when we compared studies with high vs. low mortality rates in the control group (see Additional file 2: Figure S3).
Study quality on outcomes
When we evaluated study quality on outcomes, a statistically significant effect of IV Se on the reduction of mortality was found in the low-quality trials (RR 0.79, 95 % CI 0.66-0.94, P = 0.007, I² = 0 %), whereas trials with higher methodology scores did not show any significant effect (RR 1.06, 95 % CI 0.95-1.19, P = 0.27, heterogeneity I² = 0 %). The overall tests for significance revealed statistically significant differences between these subgroups (P = 0.004) (see Additional file 3: Figure S4). Neither high- nor low-quality trials demonstrated any effect on infections (test for subgroup differences P = 0.72) (see Additional file 3: Figure S5).

Effect of geographic representation of study patients on outcomes
We found no significant treatment effect differences in mortality (test for subgroup differences P = 0.33) or infections (test for subgroup differences P = 0.57) when we compared studies in which researchers administered IV Se in North America vs. other geographic regions (see Additional file 4: Figs. S6 and S7).

Publication bias
There was no indication that publication bias influenced the observed aggregated results. Funnel plots were created for each study outcome, and the tests of asymmetry showed a trend for mortality (P = 0.08), although the data were not significant for overall infections (P = 0.19), hospital LOS (P = 0.50), or MV days (P = 0.37). However, the test for asymmetry was significant for ICU LOS (P = 0.047).

Discussion
In this updated systematic review and meta-analysis of the effects of parenteral Se as single or combined therapy on clinical outcomes, which includes the two largest trials conducted to date [16,17], the main finding is that there was a lack of treatment effect when critically ill patients were treated with IV Se as single or combined therapy. In fact, we were unable to demonstrate any effect of IV Se supplementation on mortality or any significant effect on infections, ventilator days, or ICU and hospital LOS. Furthermore, our a priori defined subgroup analyses did not show any treatment effects on mortality. However, we found a significant effect of Se therapy on infectious complications in those studies without an initial IV loading dose, as well as a similar effect in trials conducted in nonseptic patients. Over the last few years, different meta-analyses of Se supplementation in the critically ill have been published [8][9][10][11][12][13]. The differences from our present review are due largely to variations in the studies included, as our systematic review and meta-analysis is the first to include the two largest trials done to date on Se therapy in the critically ill. Our present findings do not support the concept of pharmaconutrition, by which micronutrients such as Se are provided in high (i.e., supraphysiological) doses in order to derive a pharmacological effect. Conversely to previous findings regarding IV high-dose Se monotherapy [8,9], we did not find an overall effect of Se on infectious complications in the critically ill, although parenteral Se without an initial bolus significantly reduced infections. Nonetheless, the overall point estimate on infections of Se monotherapy is primarily and largely influenced by the Bloos et al. [17] study, the largest trial (n = 1089) on pharmaconutrition with high-dose Se monotherapy and PCT-guided antibiotic therapy in patients with severe sepsis and septic shock.
After giving an initial IV loading dose of 1000 μg sodium selenite followed by a continuous infusion of 1000 μg sodium selenite daily for no longer than 21 days, Bloos et al. [17] found that secondary infections were similar in both groups of patients. Interestingly, the SISPCT study [17] demonstrated that high-dose Se supplementation had no therapeutic benefit in septic patients, although plasma Se depletion at baseline was restored to the normal range by treatment day 1, suggesting that correction of the plasma Se concentration may have no beneficial value. According to this study, plasma Se levels in the Se or placebo groups were not affected by allocation to the PCT guidance or non-PCT guidance group. Our sensitivity analysis showed that, after excluding the PCT guidance arm, the new RR was 0.95 (previous RR 0.98), which confirms that excluding the PCT arm of the Bloos et al. study [17] did not affect the overall result of our analysis. In contrast to previous knowledge, we were unable to find beneficial effects with daily doses higher than 500 μg or with providing an initial loading dose as an IV bolus (usually 1000-2000 μg over 30 minutes to 2 h). However, the absence of a significant test of subgroup differences weakens any inferences drawn from this subgroup analysis, although it likely shows the ineffectiveness of employing a loading dose with the aim of improving outcomes. It has been proposed that a loading dose of 1000-2000 μg Se as pentahydrate sodium selenite has prooxidant and cytotoxic effects [1,53,55]. In a previous pharmacokinetic study [31], it was demonstrated that an initial IV bolus of 2000 μg followed by a continuous infusion of 1600 μg/day was the most effective dose for returning serum Se to physiologic levels and safely maximizing extracellular GPx activity, and therefore the antioxidant capacity, in critically ill patients. According to current knowledge, a very high Se concentration may produce an inhibition of nuclear factor-κB binding to DNA, controlling gene expression and the synthesis of proinflammatory cytokines [56,57], and may also induce apoptosis and cytotoxicity in activated proinflammatory cells [58]. In addition, another study [59] demonstrated that an IV Se bolus improved hemodynamic status, decreased inflammation biomarkers, and reduced mortality. Meanwhile, contrary to our present data, in 2012 we found that a parenteral loading dose showed a trend toward reduction in mortality, whereas studies that did not use a bolus-loading dose did not show any effect on mortality. Similarly, Huang et al. [9], after aggregating nine RCTs on Se monotherapy, demonstrated that an IV bolus was associated with a significant reduction in mortality (RR 0.73, 95 % CI 0.58-0.94, P = 0.01). However, neither of these previous meta-analyses considered the SISPCT study [17]. To date, most clinical studies using Se at low doses have been underpowered and have involved Se administered in a cocktail approach; thus, positive results in those trials cannot be clearly attributed solely to Se supplementation. Notwithstanding this, according to the concept of nutrient replacement, by which micronutrient substitution is aimed at replenishing losses and restoring physiological function [60], Se should be supplemented at standard doses by the enteral (77-100 μg/day) or the parenteral (100-400 μg/day) route [61], because the results of our meta-analysis do not refute previously recommended Se substitution doses.
Despite earlier results of the Angstwurm study [41], which demonstrated that IV high-dose Se substitution in septic patients with an Acute Physiology and Chronic Health Evaluation II score higher than 15 significantly reduced the requirements for renal replacement therapy (P = 0.035), the post hoc analysis of the REDOXS study [16] demonstrated that patients with renal failure might have a worse outcome when treated with high-dose antioxidants. In fact, Heyland and coworkers [16] demonstrated that both glutamine and antioxidants appeared harmful in patients with baseline renal dysfunction, showing a higher 28-day mortality (OR 3.39, 95 % CI 1.41-8.17, and OR 3.07, 95 % CI 1.24-7.59, for antioxidants alone and glutamine plus antioxidants, respectively). Nonetheless, in the recently published SISPCT study [17], researchers did not find any risk of increased harm in patients with baseline renal failure. In addition, in the present study, we were unable to find any deleterious effect of Se therapy on renal function in the critically ill. Thus, it remains unclear why there is a lack of therapeutic effect of IV Se therapy in critically ill patients and patients with severe sepsis.

Notwithstanding this, Se therapy could show benefits in other patient populations that were not considered in our meta-analysis. In fact, it is currently known that circulating Se levels significantly decrease in the perioperative period of cardiac surgery [62]. Also, in a nonrandomized interventional trial, Stoppe et al. [63] demonstrated that high-dose sodium selenite therapy as a pharmaconutrient strategy was effective in preventing the decrease of Se levels and that clinical outcomes may be superior in supplemented patients compared with a historical control group. The SodiUm SeleniTe Administration IN Cardiac Surgery trial (SUSTAIN CSX® trial, ClinicalTrials.gov identifier NCT02002247), an RCT aimed at evaluating the effects of perioperative high-dose Se supplementation in high-risk cardiac surgical patients undergoing complicated open heart surgery, is currently recruiting participants [64]. According to current evidence derived from recent trials and our meta-analysis, the updated version of the Canadian Clinical Guidelines [65] recommends not using IV Se alone or in combination with other antioxidants in critically ill patients, which means that this strategy has recently been downgraded.

The strength of our meta-analysis is based on the fact that we used several methods to reduce bias (comprehensive literature search, duplicate data abstraction, a comprehensive search strategy using specific criteria, and inclusion of non-English-language articles), we contacted trial authors to obtain additional data and refine our analysis, and we ultimately focused on clinically important primary outcomes in ICU patients. In addition, given the wide variety of clinical diagnoses and the heterogeneous population of ICU patients included in this systematic review (sepsis, septic shock, trauma, pancreatitis, surgical ICU patients), the results and conclusions may be applied to a broad and heterogeneous group of critically ill patients. While the fairly large overall sample size and low heterogeneity make the estimate quite robust, our subgroup analyses are limited by the small number of trials.
Conclusions

In this updated systematic review and meta-analysis, we found that parenteral Se as single or combined therapy with other antioxidant micronutrients had no effect on mortality, infections, renal function, or ICU and hospital LOS.
A Systematic Literature Review of Virtual Reality Locomotion Taxonomies

The change of the user's viewpoint in an immersive virtual environment, called locomotion, is one of the key components in a virtual reality interface. Effects of locomotion, such as simulator sickness or disorientation, depend on the specific design of the locomotion method and can influence the task performance as well as the overall acceptance of the virtual reality system. Thus, it is important that a locomotion method achieves the intended effects. The complexity of this task has increased with the growing number of locomotion methods and design choices in recent years. Locomotion taxonomies are classification schemes that group multiple locomotion methods and can aid in the design and selection of locomotion methods. Like locomotion methods themselves, there exist multiple locomotion taxonomies, each with a different focus and, consequently, a different possible outcome. However, there is little research that focuses on locomotion taxonomies. We performed a systematic literature review to provide an overview of possible locomotion taxonomies and an analysis of possible decision criteria such as impact, common elements, and use cases for locomotion taxonomies. We aim to support future research on the design, choice, and evaluation of locomotion taxonomies and thereby support future research on virtual reality locomotion.

INTRODUCTION

Locomotion allows users to change their viewpoint in an immersive virtual environment (IVE) and is therefore part of most user interfaces for virtual reality (VR) systems. A locomotion method (LM) realises locomotion in VR and can lead to different advantages and disadvantages such as simulator sickness [1], [2] or disorientation [3]. Over the recent years the number of LMs has risen [4] to meet new requirements, due to new possibilities enabled by technical advances, or because of new insights into how LMs affect users. The rising number of LMs presents researchers and designers with a novel challenge: How can LMs and the related knowledge be structured, e.g., to implement VR applications, identify research gaps, or deduce insights from the knowledge gathered by multiple authors?

Several researchers addressed this challenge by proposing a taxonomy, i.e., a knowledge representation [5] in the form of a classification scheme [5], [6] that has a hierarchical structure [5], [7], [8], [9]. Locomotion taxonomies can consist of higher-level locomotion concepts (e.g., Walking Techniques) or axes of the design space (e.g., Input Conditions). Thus, taxonomies can group locomotion methods or provide a basis to compare them. Researchers developing a taxonomy have to identify potential use cases, which also build the basis for evaluating the taxonomy later on [10], [11], [12], [13], [14], [15]. Currently, there exist several different VR locomotion taxonomies [16]. Thus, researchers who want to use a VR locomotion taxonomy or the contained locomotion concepts in surveys [17], related works [18], or knowledge databases [19] first need to choose one. A well-founded decision requires an overview of all potential choices and a decision basis. The work of Al Zayer et al. yields a short introduction into 12 VR locomotion taxonomies [16]. Di Luca et al. [19] provide content-wise insights by describing similar nodes of 13 taxonomies. In a previous work we provided potential decision aids for 28 VR locomotion taxonomies based on publication data including the year and impact [20].
However, there exists no systematic content-wise overview and analysis of VR locomotion taxonomies, their evolution, or an identification of possible use cases. Moreover, the amount of identified taxonomies in our previous work suggested that there are taxonomies that have not been considered in existing analyses.

We performed a systematic literature review (SLR) of VR locomotion taxonomies including an overview and analysis. Our aim is to support a well-founded choice and the development of new taxonomies based on insights into the research field, providing knowledge from previous work, and by presenting common use cases that enable a user-centred design approach. The insights into the research field focus on presenting where researchers agree and where possible gaps or less explored areas may exist. This is achieved by analysing the agreement among taxonomy authors by extracting common elements and forming clusters of taxonomies. Moreover, interest in the taxonomies from other researchers is considered by means of impact. Since knowledge can change over time, the common elements, the taxonomy clusters, and the impact are also considered over time. In addition to providing insights into the evolution of the research field and the knowledge that has already been acquired, use cases can provide the basis for creating, choosing, and evaluating taxonomies in a user-centred approach. Usefulness of taxonomies is one of the most common quality criteria [10] and is often evaluated based on use cases [11], [13], [14], [15]. To support this approach, we have extracted several use cases by extracting the goals described by the authors in the taxonomy publications, which were subsequently fused by several researchers. The identified intentional use cases described by the taxonomy authors can be used for future analyses of actual use cases.

Overall, our work provides readers with an overview of existing locomotion concepts, their similarities, and their evolution over time as well as common use cases for locomotion taxonomies. Our contributions are the following: an overview of existing locomotion elements such as concepts presented by taxonomies and a comparison of how they are related, a temporal analysis that can be utilised by researchers who are interested in the history of VR locomotion research, and an identification of common use cases for VR locomotion taxonomies to enable a user-centred design approach and future validation of new taxonomies. Our contributions are intended to help researchers in the design, choice, and evaluation of VR locomotion taxonomies and concepts and thus drive the research of locomotion in virtual reality.

Our SLR shows that researchers can choose between 27 different VR locomotion taxonomies that have been introduced between 1994 and 2020. We extracted three clusters of taxonomies with different elements: decomposition of LMs based on the control elements, grouping of LMs based on the metaphor, and a discrimination between the interaction fidelity or plausibility. The temporal analysis shows a recent trend to the second group of taxonomies and a greater interest in the first group in earlier years. We identified five common use cases by fusing the aims that are described by taxonomy authors. Among the use cases, the exploration of the design space as well as the design and evaluation of LMs were the most frequent ones. Our results suggest that there exist differences in the applicability to some use cases for the identified taxonomy clusters.
Overall, our results show that there are different VR locomotion taxonomy clusters. Within these clusters there is consensus among researchers with respect to the aims of the taxonomy, the use cases, and the common elements, while they differ between the taxonomy clusters. In addition, we found that the knowledge and perspectives on knowledge change over time, leading to different locomotion taxonomies.

BACKGROUND & RELATED WORK

We conducted an initial non-systematic literature review to get an overview of already existing work and background knowledge of VR locomotion taxonomies and taxonomies in general. Al Zayer et al. list some VR locomotion taxonomies, subdivided into general and specific taxonomies [16]. Di Luca et al. provide an introduction into top-level elements of common VR locomotion taxonomies and later integrate some of them as filter options into their VR locomotion database [19]. However, a systematic literature research or analysis of VR locomotion taxonomies was beyond the scope of both works. In a previous work [20], we performed an analysis of the publication data for VR locomotion taxonomies but did not analyse the taxonomies and their evolution content-wise. Content-wise analyses can provide insights into the knowledge of researchers that is the basis for VR locomotion taxonomies. Kersten-Oertel et al. [21] conduct a content-wise analysis of a mixed reality taxonomy by comparing its components against text corpus statistics.

Other works examine how taxonomies in general can be evaluated. Szopinski et al. [10] performed an SLR of evaluation criteria for taxonomies and found usefulness to be the most frequent one. Nickerson et al. [11] argue that taxonomies should be evaluated with respect to their usefulness, i.e., how well they serve identified use cases or purposes, by including users. The taxonomy evaluation methods identified by Szopinski et al. [13] include case studies where user experience (UX) methods are used, illustrative scenarios where the taxonomy is applied and evaluated, e.g., with respect to its completeness or usefulness, and action research where the taxonomies are introduced into the work process to assess their usefulness. Oberländer et al. [14] reviewed several evaluation methods; the most frequent ones were cluster analysis and case study research. Schöbel et al. [15] validate a gamification taxonomy in two use case studies. Thus, content-wise analyses as well as evaluations with respect to usefulness based on use cases can help researchers compare and evaluate VR locomotion taxonomies.

RESEARCH QUESTIONS

The state of the art in Section 2 served as a basis for formulating our research questions. In addition, the procedure for identifying the research questions involved group discussions among VR researchers. Our goal is a thorough SLR and later analysis of VR locomotion taxonomies to provide an overview and support the choice, creation, and evaluation of locomotion taxonomies. The basis for a later analysis is the identification of VR locomotion taxonomies, leading to the first research question:

R1: What are existing taxonomies or categorisations for LMs?
Previous works [16], [19], [20] identified different taxonomies, where some taxonomies were not identified by other authors and vice versa. This suggests that the identification of taxonomies could be incomplete, such that both works do not give a thorough answer to R1. A reason for this could be that the taxonomies are a means to an end in both works and, consequently, the authors did not conduct a systematic literature review to identify as many taxonomies as possible. Thus, we performed an SLR to identify taxonomies as completely as possible and answer R1.

Research question R1 builds the basis for the subsequent analysis of VR locomotion taxonomies to provide content-wise insights. Taxonomies consist of multiple elements that can differ due to different perspectives and estimations of which aspects of locomotion are important. Taxonomy elements that have been identified by multiple authors are more likely to be important parts of VR locomotion. Usually, meta-analyses in SLRs fuse different medical studies or user studies with the same research question by deriving a common estimate as an answer that is closer to the truth. The meta-analysis approach can be adapted to taxonomies to extract a common estimate of important elements for classifying VR locomotion methods by dense areas of overlap where researchers are in general agreement. Thus, our second research question is:

R2: What are common elements of these taxonomies?

Common elements are key elements that many researchers identified in their previous works. At the same time, less common elements are part of less explored areas that could be potential research gaps. Thus, researchers get an insight into important elements and research gaps of taxonomies. In addition, we identify clusters of similar taxonomies based on the common elements of the taxonomies. These represent dense areas of overlap where researchers are in general agreement.
Apart from the agreement among authors of a VR locomotion taxonomy, there also exists the interest of the overall community, e.g., researchers applying, adapting, or reviewing the proposed taxonomy. The interest of other researchers in a taxonomy is not considered when regarding the overlap between the taxonomies. This interest shows how much a taxonomy is approved and can therefore be an indication of a useful taxonomy incorporating important key elements. Thus, taxonomies with a higher impact might be interesting when choosing taxonomies, e.g., if the taxonomy is used as a common reference. Therefore, the third research question considers the impact:

R3: What impact do these taxonomies have?

So far, our research questions analyse the current state of the research field. However, taxonomies are knowledge representations, and knowledge can evolve over time. Novel LMs can be introduced that cannot be assigned to previously considered categories of locomotion, changing the perspective of VR researchers. Disruptive technologies might open new possibilities that have not been considered before. User studies can shift the focus to different categories of LMs. As a result, novel taxonomies are introduced to fill research gaps and cover new trends. Thus, apart from the current state of the research field described by R2 and R3, the evolution over time is important to identify possible trends and gaps, i.e.:

R4: How did the research field of taxonomies evolve?

Research question R4 involves the impact of taxonomies (R3) over time as well as the temporal evolution of common elements and taxonomy clusters (R2). Research questions R2, R3, and R4 help to choose and design taxonomies based on the agreement among taxonomies and their impact. However, less explored areas are not necessarily promising research gaps. Similarities between taxonomies and their impact determine the usefulness of the taxonomies based on the opinions of researchers, who do not have to be the users of taxonomies. Thus, a user-centred approach can provide further insights into the usefulness of existing VR locomotion taxonomies as well as reveal gaps that have not been covered yet. In a user-centred approach the context of use, including use case scenarios, is analysed to derive requirements for which solutions are designed and subsequently evaluated. Thus, use cases can enable a user-centred approach for deriving requirements for taxonomies, designing taxonomies, and evaluating taxonomies in future works. This motivated a further research question:

RU: What are common use cases described by the authors of VR locomotion taxonomies?

Research question RU focuses on applying an existing taxonomy and omits objectives for designing taxonomies. The use cases are based on the objectives the authors describe for applying a VR locomotion taxonomy.

METHOD

SLRs can help to reduce bias [22] and enhance comprehensibility and reproducibility since they are documented. SLRs are frequently used in medicine, and most procedures are described for medical research [23], [24], [25], [26]. Kitchenham adapted these procedures for software engineering [26], and the resulting guidelines have been applied in software engineering and, more specifically, in VR research [4], [27], [28], [29]. Thus, we followed Kitchenham's procedure for performing an SLR consisting of three steps: planning, conducting, and reporting the review [26].
During the planning phase, the need for an SLR is identified and the review protocol is specified. In the following, we describe the review protocol before describing the screening process and results for the identification of articles.

Search Strategy

The overall research topic of the above research questions and the SLR is VR locomotion taxonomies. Thus, the three main keywords we added to our search strategy were virtual reality, locomotion, and taxonomy. We followed Kitchenham [26] by adding similar terms (e.g., synonyms, abbreviations, or alternative spellings) for each of these keywords to the query keywords. Our preliminary literature review showed that more recent papers tend to use the term immersive virtual environments, while earlier works rather use the term virtual reality. Thus, we included both terms as synonyms. Next, all query keywords were combined by ANDs and ORs to generate the subsequent search strings: (taxonomy OR classification scheme OR survey) AND (locomotion OR travel) AND (virtual reality OR immersive virtual environments). We executed the queries in multiple search databases: ACM Digital Library [30], CiteSeerX [31], dblp [32], Google Scholar [33], IEEE Xplore [34], Scopus [35], and Semantic Scholar [36]. Following Badampudi et al. [37], we retrieved the first ten results for each search string. The SLR database queries were carried out during February and September 2020. In a subsequent step, we performed backward snowballing [38], i.e., we retrieved and screened the publications cited in the primary sources. Since the SLR focuses on the whole history and evolution of taxonomies, there was no restriction on the publication year.
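To make the query construction concrete, the following minimal Python sketch assembles the boolean search string from the synonym groups named above. The group lists mirror the search strings of the review protocol, but the helper names and the enumeration of plain conjunctive variants are illustrative assumptions, not part of the original protocol.

```python
from itertools import product

# Synonym groups taken from the search strategy above; variable names are ours.
keyword_groups = [
    ["taxonomy", "classification scheme", "survey"],
    ["locomotion", "travel"],
    ["virtual reality", "immersive virtual environments"],
]

def boolean_query(groups):
    """Join synonyms with OR and groups with AND, as in the review protocol."""
    return " AND ".join("(" + " OR ".join(g) + ")" for g in groups)

def plain_queries(groups):
    """Enumerate simple conjunctive variants for engines without boolean
    syntax (an illustrative assumption, not described in the protocol)."""
    return [" ".join(combo) for combo in product(*groups)]

print(boolean_query(keyword_groups))
# (taxonomy OR classification scheme OR survey) AND (locomotion OR travel)
#   AND (virtual reality OR immersive virtual environments)
print(len(plain_queries(keyword_groups)))  # 3 * 2 * 2 = 12 variants
```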
Study Selection

In group discussions among VR experts, selection criteria were identified and further refined during the study. Papers identified by the search strategy above were narrowed down to relevant publications using selection criteria, i.e., inclusion and exclusion criteria. A directly applied exclusion criterion concerned patents, since the initial literature review did not yield any patents introducing locomotion taxonomies and their exclusion increases the probability of relevant publications among the first ten search results. Additional criteria selected relevant studies among the retrieved search results. First, duplicates, found using different search strings or retrieved from different databases, were omitted. Second, non-English papers were excluded due to understandability and comparability issues. Third, two researchers screened the remaining publications for an explicit categorisation or taxonomy of LMs in general or of a subcategory of LMs. The first researcher labelled the publications as red, yellow, or green. Publications marked in red most certainly contained no VR locomotion taxonomies, e.g., publications on the travel and holiday industry. Yellow publications were publications in the VR research field that contained a taxonomy or classification but described it very implicitly. Green publications contained explicit VR locomotion taxonomies. The second researcher was given the evaluation of the first researcher, assessed the labels based on the title, and screened all yellow and green publications for a VR locomotion taxonomy. Subsequently, both researchers discussed the publications and decided by consensus which publications were included. In the last step, reintroductions of taxonomies or taxonomy parts were excluded. Only the first publication of a taxonomy was included. This enabled assessing the impact of taxonomies based on a similar measure: the number of citations of the publication they were first introduced in.

Study Quality Assessment

Currently, there exists no common procedure to evaluate the quality of locomotion taxonomy papers or the introduced taxonomy. Thus, we did not enforce any quality criteria.

Data Extraction

The required data were extracted non-automatically from the retrieved publications and Google Scholar. We extracted the full text and reference as well as the following data: the proposed taxonomy (addressing R1, R2, and R4); the title, authors, publication type (book or book chapter, journal paper, conference paper, or miscellaneous), conference, and year as suggested by Isenberg et al. [39] (addressing R3 and R4); and the number of citations for each year between 1994 and 2021 (addressing R3).

Data Synthesis

After extraction, the data had to be synthesised to answer the research questions R2, R3, R4, and RU, which is described in the following. For research question R1, addressing existing taxonomies, the extracted taxonomies already provide the necessary data.

Common Elements (R2). We used JSON as a human- and machine-readable standard to collect the data and structure of all identified taxonomies. We have made the JSON file publicly available on Zenodo to enable other researchers an integration into their research projects [40]. We also provide the source text or images from which the taxonomies were extracted to allow easy traceability of the taxonomy extraction. Word frequencies can provide a first idea of keywords present among all taxonomies. However, words can have a different spelling, or the concept linked to a word might be referred to via synonyms and antonyms. A semantic similarity measure is required to cluster words meaning the same key concept. Common approaches to determine the semantic similarity between two words include computed measures based on databases [41], [42], [43], [44], [45], a text corpus [43], [46], or search engine results [43]. Another option is user studies, which yield human similarity measures, e.g., by asking participants for the perceived similarity of words [46], [47], [48]. User studies can be considered the gold standard [42], [45] but require many participants [46]. This makes user studies especially difficult for VR locomotion taxonomies, which might require expert knowledge to assess the semantic similarity of domain-specific concepts. Outcomes between user studies might also differ, resulting in a correlation of up to 0.9 between human similarity measures [45], [46]. Computed similarity measures can have a correlation of 0.65-0.8 against human similarity measures and require less effort [45]. Among the computed similarity measures, using the lexical database WordNet [49], [50] is the de-facto standard [41] that is commonly used to semantically annotate benchmark datasets [51]. For small datasets with only few words, there is less variation in the results when different statistical algorithms are used [52]. Since the number of words contained in the taxonomies is small compared with large text corpora, we expect both elaborated and simple statistics to yield similar results. Thus, we use simple statistics on WordNet, a low threshold of 0 distance between at least 3 synsets to prevent wrong semantic clusters, and an additional human estimation of the identified clusters afterwards.
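As an illustration of the JSON taxonomy model and the first word-extraction steps described below, the following Python sketch encodes one hypothetical taxonomy fragment and splits its node labels into filtered words. The node names, the schema keys, and the stop-word list are illustrative stand-ins; the actual schema of the published Zenodo file [40] may differ.

```python
import re

# Hypothetical taxonomy fragment in the spirit of the JSON model;
# keys and labels are illustrative, not the published schema [40].
taxonomy = {
    "name": "Example locomotion taxonomy",
    "nodes": [
        {"label": "Walking Techniques",
         "children": [{"label": "Redirected Walking"},
                      {"label": "Walking in Place"}]},
        {"label": "Direction/Target Selection"},
    ],
}

STOP_WORDS = {"of", "to", "and", "or", "the", "a", "in", "from", "yes", "no", "for"}

def node_labels(node):
    """Recursively collect all node labels of a taxonomy (sub)tree."""
    for child in node.get("children", []) + node.get("nodes", []):
        yield child["label"]
        yield from node_labels(child)

def extract_words(labels):
    """Steps 1-2 of the synthesis method: split at space, slash, and comma;
    drop stop words and numbers."""
    words = []
    for label in labels:
        for token in re.split(r"[ /,]+", label.lower()):
            if token and token not in STOP_WORDS and not token.isdigit():
                words.append(token)
    return words

print(extract_words(node_labels(taxonomy)))
# ['walking', 'techniques', 'redirected', 'walking', 'walking', 'place', ...]
```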
Overall, our method contains the following steps to synthesise the taxonomy data into word clusters, given the JSON-modelled taxonomies:
1) Extract single words from taxonomies by separating at space, slash, and comma signs.
2) Omit simple words (i.e., of, to, and, or, the, a, in, from, yes, no, for), ellipses, and numbers from the analysis.
3) Cluster step 1 (misspellings): cluster words that have more than two characters with a Levenshtein distance of up to one.
4) Cluster step 2 (alternative spellings): cluster all different spellings of a word (e.g., walk, walking, walked, ...).
5) Manually check the correctness of the first two cluster steps.
6) Cluster step 3 (semantics): cluster words that appear together in at least 3 different synsets in WordNet.
7) Manually check the correctness of the third cluster step.
We used WordNet 3.1 and a revised version of Google's WordNet-Blast [53] to access it. We dissolved clusters in step 5 but did not manually cluster words to avoid biased and subjective clusterings. Clusters are also difficult to separate and easier to form ex post by researchers based on the presented results.

In the next step, we identified the word clusters that were used by many authors. For each taxonomy only one occurrence per word cluster was counted to calculate the frequency, and the ten most frequent word clusters were extracted. Taxonomies consist of edges and nodes that contain one or multiple words. In addition to single words, we also aimed to extract similar taxonomy nodes between the taxonomies. For each pair of taxonomy nodes the similarity was computed as the sum of the word similarities between each word of the first node and all words of the second node. We defined the word similarity as one if the two words were in the same word cluster. If this was not the case, we computed the mean synset similarity between each synset of the first word and all synsets of the second word. The synset similarity is based on the shortest path distance (SPD) computed using WordNet-Blast [53]: WordNet-Blast traverses the synset tree path for all ancestors common to both synsets and returns the shortest calculated path. We calculated a normalised node similarity measure where the node similarity is divided by the maximum word count of both nodes. For all normalised node similarities from one node to all other nodes, we calculated the z-score based on the normalised similarity measure to one node and the mean and standard deviation of the normalised similarity measures to all other nodes. Identifying a taxonomy node that is more similar to a node than others equals upper-tailed hypothesis testing. For upper-tailed hypothesis testing, a level of significance of .001 equals a z-score above 3.902. Thus, if node B has a z-score above 3.902 for a node A, it can be considered more similar to node A than other nodes on a p-value level of < .001. To calculate the similarity from one taxonomy to another taxonomy, the scores of all similar nodes are added, i.e., all z-score similarities between their nodes that are above 3.902. This sum is divided by the multiplied number of nodes in both taxonomies. If the similarity values differ, we take the minimum of both similarities. To get the z-score values of the taxonomy similarities, the mean and standard deviation from one taxonomy to all other taxonomies are calculated. The z-score similarity between two taxonomies is based on the average mean and average standard deviation of each taxonomy to all other taxonomies. In contrast to the node similarity z-scores, where only nodes on a p-level of .001 are considered, we analyse taxonomies with similarities on a p-level of .05.
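The following Python sketch illustrates two of the computational ingredients described above: the Levenshtein-based misspelling clustering (step 3) and the upper-tailed z-score test with the 3.902 cutoff used to flag similar nodes. It is a minimal sketch under stated assumptions: the similarity values are placeholders for the WordNet-Blast path measure, the z-score is computed over all candidate nodes at once, and all names are illustrative.

```python
from statistics import mean, stdev

def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def cluster_misspellings(words):
    """Step 3: group words of length > 2 whose edit distance is at most 1."""
    clusters = []
    for w in words:
        for c in clusters:
            if len(w) > 2 and any(levenshtein(w, v) <= 1 for v in c):
                c.add(w)
                break
        else:
            clusters.append({w})
    return clusters

def similar_nodes(similarities, cutoff=3.902):
    """Flag upper-tail outliers (z > 3.902, i.e., p < .001) among the
    normalised node similarities to one target node.
    `similarities` maps node name -> normalised similarity (placeholder
    values standing in for the WordNet-Blast measure)."""
    values = list(similarities.values())
    if len(values) < 2:
        return {}
    mu, sigma = mean(values), stdev(values)
    return {n: (s - mu) / sigma for n, s in similarities.items()
            if sigma > 0 and (s - mu) / sigma > cutoff}

print(cluster_misspellings(["steering", "stearing", "walking", "walk"]))
# [{'steering', 'stearing'}, {'walking'}, {'walk'}]
```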
Taxonomy Impact (R3). The number of citations can give an estimation of the impact of the extracted locomotion taxonomies. The overall accumulated number of citations is difficult to compare since it rises over time. Thus, we observed the number of citations for each year between 1994, when the first taxonomy was introduced, and 2021, the last completed year. Long papers or books can have a substantially higher number of citations than shorter papers.

Research Field Evolution (R4). Our analysis with respect to the research field evolution focuses on the impact and common elements. Together, they provide an idea of uprising ideas, elements, and whole taxonomies. In contrast to research questions R2 and R3, we focus on the temporal evolution of impact and common elements. To analyse the evolution of the impact, we observe the change in the number of citations between March 2020 and August 2021, yielding an estimation of the recently gained impact. Our analysis of the evolution of common elements consists of computing the top ten common elements for each year, starting with the year where at least three taxonomies had been introduced.
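A minimal sketch of the per-year impact computation described above: it turns raw citation counts into per-cluster shares of each year's total, as used for the proportionate depiction in Fig. 7. The counts and cluster assignments below are invented for illustration and do not reproduce the actual data.

```python
from collections import defaultdict

# Invented example data: publication -> (cluster label, {year: citations}).
citations = {
    "Mine 1995":          ("Cluster 1", {2019: 30, 2020: 28}),
    "Jerald 2015":        ("Cluster 2", {2019: 90, 2020: 120}),
    "Slater & Usoh 1994": ("Cluster 3", {2019: 12, 2020: 10}),
}

def cluster_shares(citations):
    """Percentage of each year's total citations attributable to each cluster."""
    per_year = defaultdict(lambda: defaultdict(int))
    for cluster, by_year in citations.values():
        for year, count in by_year.items():
            per_year[year][cluster] += count
    return {year: {c: 100 * n / sum(counts.values())
                   for c, n in counts.items()}
            for year, counts in per_year.items()}

for year, shares in sorted(cluster_shares(citations).items()):
    print(year, {c: round(p, 1) for c, p in shares.items()})
# 2019 {'Cluster 1': 22.7, 'Cluster 2': 68.2, 'Cluster 3': 9.1}
# 2020 {'Cluster 1': 17.7, 'Cluster 2': 75.9, 'Cluster 3': 6.3}
```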
Use Cases (RU). We extracted text passages where use cases are described and clustered them into common objectives. In a next step, we described each use case in our own words and added a title. The use cases were then reviewed and improved in two steps. In the first step, feedback was provided by a researcher with domain-specific knowledge of locomotion taxonomies. In the second step, the use cases were given to a researcher in human-computer interaction without a focus on VR.

Screening Process and Results

In the following, we describe the process of identifying VR locomotion taxonomy articles using the search protocol described in Section 4. Fig. 1 visualises the described process based on the PRISMA 2020 statement [54]. The queries retrieved 587 publications (ACM Digital Library: 120, CiteSeerX: 119, dblp: 3, Google Scholar: 120, IEEE Xplore: 43, Scopus: 62, and Semantic Scholar: 120). Among these results were 460 duplicates, for which the 132 original articles were included while the 328 duplicates were excluded. In addition to the 132 originals, 127 articles without any duplicates were included, such that 259 articles remained. Two papers, written in Korean and Portuguese, were excluded, as well as two papers for which the text was not available and requests to the authors were not answered. In the next step, 232 articles were excluded since they did not contain a VR locomotion taxonomy or categorisation. The remaining 23 publications contained a VR locomotion taxonomy. Two of the 23 publications reintroduced a VR locomotion taxonomy and were excluded, resulting in 21 publications. Two further taxonomy publications were discarded after a review and discussion of their structure among three researchers. The taxonomy in the publication by Arns and Cruz-Neira [55] merely takes up parts of a taxonomy that was already introduced by Arns [56] in a previous publication. Yi et al. [57] apply the taxonomy by Boletsis [4] and do not propose their own taxonomy. Some of the remaining 19 publications were retrieved from multiple search databases, resulting in hit rates from 0.83% to 33.33% (ACM Digital Library: 1/120 (0.83%), CiteSeerX: 3/119 (2.52%), dblp: 1/3 (33.33%), Google Scholar: 14/120 (11.67%), IEEE Xplore: 5/43 (11.63%), Scopus: 5/62 (8.07%), and Semantic Scholar: 8/120 (6.67%)). Backward snowballing yielded eight further publications. Overall, 27 publications were used in our analysis, containing 19 publications found directly via queries and 8 publications found via backward snowballing.
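As a quick arithmetic check of the screening flow reported above, the following sketch recomputes the counts at each stage; the stage names are ours, the numbers are taken from the text.

```python
# PRISMA-style screening flow recomputed from the counts in the text.
retrieved = 587
duplicates_excluded = 328                           # 460 duplicates -> keep 132 originals
unique = retrieved - duplicates_excluded            # 259 records screened
after_language_and_access = unique - 2 - 2          # non-English, text unavailable
with_taxonomy = after_language_and_access - 232     # 23 contain a taxonomy
after_reintroductions = with_taxonomy - 2           # 21 first introductions
after_structure_review = after_reintroductions - 2  # 19 found via queries
total = after_structure_review + 8                  # plus backward snowballing
assert (unique, with_taxonomy, total) == (259, 23, 27)
print(total)  # 27 publications analysed
```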
RESULTS

In the following, we describe the results of our analysis according to the method described in Section 4 for the 27 identified VR locomotion taxonomies. The subsections each address one of the research questions outlined in Section 3.

Locomotion Taxonomies

Figs. 2, 3, and 4 depict the extracted VR locomotion taxonomies based on the clusters identified in Section 5.2 to provide insights into their content and structure. Common elements (see Section 5.2) are coloured according to Fig. 6 to allow the reader an easy localisation and exploration of common elements.

Common Elements

According to the method described in Section 4, we extracted the ten most common words among all VR locomotion taxonomies. In addition to the most common elements and the taxonomies referencing them, we also extracted the taxonomy nodes that include the identified common elements. Depicting the different taxonomy nodes shows the different perspectives and descriptions that the taxonomies provide on these common elements. Fig. 6 shows a word cloud of the taxonomy nodes including concepts that have been referenced by more than eight taxonomies. The size of the cloud elements was chosen according to the number of taxonomies including them as a node. Below each taxonomy node the reference to the taxonomy paper is given. The colour depicts which common element (Walk, Technique, Locomotion, User, Virtual, or Travel) the cloud element includes.

Taxonomy Similarities

In addition to the word clusters, we computed the taxonomy similarities and node similarities, which are described in the following. We found the strongest relation between the taxonomies of Arns (Fig. 3f, [56]) and Nabiyouni and Bowman (Fig. 3g, [81]). Both taxonomies have Rotation and Translation nodes with a similar structure. Translation is related to a DoF node with child nodes that list different DoFs. Both taxonomies have a Position(-based) node in relation with the Velocity and Acceleration Selection or the Speed of the Input and Output. Arns lists Sliding Sandals as an Interaction Device, while Nabiyouni and Bowman have Sliding as a Walking Movement Style. Arns attached a Body node as a child of Physical Rotation. Nabiyouni and Bowman list different body parts that can be tracked and used as input properties. Nabiyouni and Bowman describe the Input and Input Properties Sensed and Arns the Input Conditions. While Nabiyouni and Bowman designed their taxonomy for Walking-based Locomotion Techniques, many nodes are similar to Arns' taxonomy, which describes locomotion in general. Both taxonomies also integrate nodes specifically for walking, e.g., Walking Surface, Scaled Walking, and Regular Walking. Many of the similarities are due to the integration of the taxonomy by Bowman et al. (Fig. 3b, [72]) into Arns' taxonomy, which is the second strongest relation (z = 3.6041).

Another highly significant similarity was found between the taxonomy of Nabiyouni and Bowman and the taxonomy by Tan et al. (Fig. 3e, [74]). Both have a node for Speed and Orientation or View (Orientation). In the taxonomy of Tan et al. they are children of the Travel Control, while Nabiyouni and Bowman attached Speed to the Input as well as Output and related Orientation to the tracked input properties. Both taxonomies integrate multiple nodes to describe the input: Input, Input Properties Sensed (Nabiyouni and Bowman) and Audio Input, Input Mechanism (Tan et al.). In addition, both taxonomies contain a node Mapping (for the Rotation and Translation) and Control Mapping with a child Constant (1:1) and 1:1, respectively. The taxonomy of Tan et al. is also related to the taxonomy by Arns (z = 1.9616). Both have a node for Discrete and Continuous. Tan et al. integrated them as categories for the Control Frequency, while Arns integrated the taxonomy part of Bowman et al. where the two nodes are children of the Explicit Selection node of the Velocity/Acceleration Selection. The discrimination between Explicit Selection, Automatic/Adaptive, and Constant Velocity and/or Acceleration of Arns and Bowman et al. can also be found in the taxonomy of Tan et al., where the Control Mapping is divided into the nodes Constant (1:1) and Variable (modal), which is again divided into Explicit (user) and Implicit (system). Tan et al. distinguish different display types based on the degree of immersion, while Arns integrated several Display Devices into her taxonomy. While Tan et al. split Simultaneous Views and Existence into Single and Multiple, Arns divided the Projection display device into single and multiple walls.

Two significantly related taxonomies are the ones by Fisher et al. (Fig. 4e, [76]) and Boletsis (Fig. 2g, [4]) with z = 2.0484. However, this is due to the use of locomotion and interaction in both taxonomies and thus can be ignored. The analysis reveals three clusters where taxonomies are connected by strong similarities within the cluster and non-significant similarities to other taxonomies. The first cluster consists of the taxonomies (Fig. 3) by Mine [77], Bowman et al. [72], Tan et al. [74], Arns [56], and Nabiyouni and Bowman [81] and focuses on the segmentation of a single LM based on the direction/path, speed/velocity/acceleration, position, orientation, input, or output. The second cluster (Fig. 5) contains the taxonomies of Jerald [73], Suma et al. [69], and Bozgeyikli et al. [71]. These taxonomies focus on grouping LMs with similar patterns and metaphors, especially walking and steering concepts. The third cluster (Fig. 4) consists of the taxonomies of Slater and Usoh [75] and Nilsson [70], which categorise LMs based on the metaphor plausibility of the interaction as mundane or magical.

Research Field Evolution

To analyse the evolution of the research field, the change of impact and common elements as well as the similarities between the taxonomies can show a shifting interest in certain taxonomies and a different focus in the taxonomies themselves. To assess the impact of taxonomies, we retrieved the number of citations for each year between 1994 and 2021 from Google Scholar [33] as described in Section 4. Fig. 7 shows the proportionate number of citations per year for all taxonomy publications, where the colour depicts the taxonomy cluster. During the first years, approximately until 1998, the work of Mine [77] and especially the work of Slater and Usoh [75] made up the majority of citations. Between 1998 and 2016, Cluster 1 taxonomies were the most prominent publications, followed by a rise of interest in Cluster 2 taxonomies in 2009 that surpassed the citation share of Cluster 1 taxonomies in 2017. Since 2017, the book by Jerald [73], which introduced a Cluster 2 taxonomy, has the greatest part of all citations. Unclustered taxonomies made up a substantial part of the number of citations until 1998. A major part can be related to the work of Hand [58] and later to the work of Boletsis [4], which had the second highest number of citations in 2021 after Jerald [73].

Fig. 8 shows a temporal depiction of the VR locomotion taxonomies, with the clusters identified in Section 5.2, together with the change of the ten most common elements since 1996. In the following, we examine the temporal evolution of the taxonomy clusters and their impact on common words in VR locomotion taxonomies. The first taxonomy cluster, depicted in blue in Fig. 8, starts with the taxonomy by Mine in 1995 and ends with the taxonomy by Nabiyouni and Bowman in 2016. However, Cluster 1 can rather be placed in the period from 1995 to 2002, since all taxonomies but the last one have been published in this time period. The same holds for the second cluster in green, where all taxonomies but the one by Bowman et al. in 2004 have been published in the time period from 2010 to 2019. The third cluster in orange consists of the taxonomies by Slater and Usoh in 1994, by Nilsson in 2015, and by Fisher et al. in 2017. The taxonomies of Cluster 1 shape the period from 1995 to 2010 and classify LMs based on the way different Control elements (Common Element in Fig. 8: 1999-2010, 2015-2016; Mine 1995, Bowman et al. 1999, Templeman et al. 1999, Tan et al. 2001) have been chosen, i.e., how the Input (2001-2010, 2016; Bowman et al. 1996, Templeman et al. 1999, Tan et al. 2001, Arns 2002, Nabiyouni and Bowman 2016) has been designed. Many contain a discrimination based on the specification of the Direction (1996-2001, 2008-2010; Mine 1995, Bowman et al. 1996, Templeman et al. 1999) and Position (2001-2010, 2016; Bowman et al.). The taxonomies of Cluster 2 shape the period from 2010 to 2020, when elements that were often used in taxonomies of Cluster 1, e.g., the input, direction, or acceleration, were discarded in favour of grouping LMs based on common metaphors or design patterns. The foundation was laid by Arns in 2002 with the first integration of a walking category and Bowman et al. in 2004, where the focus was a discrimination merely based on metaphor, e.g., as steering and target-based, or type of Technique (2010-2020; Bowman et al. 2004, Suma et al. 2010, Bozgeyikli et al. 2019). Taxonomies in Cluster 2 integrated different Walking methods (2008-2020; Suma et al. 2010, Wendt 2010, Nilsson et al. 2013, Jerald 2015, Ferracani et al. 2016, Bozgeyikli et al. 2019), e.g., via a Treadmill (2010, 2015-2016; Wendt 2010, Jerald 2015). The most common ones are Redirected Walking, Walking in Place, and Real Walking (see Fig. 6). The second most common category are Steering methods (1999-2001, 2004-2020; Bowman et al. 2004, Suma et al. 2010, Jerald 2015). The third cluster contains taxonomies which categorise LMs as mundane or natural and magical. While this categorisation was first introduced in 1994 by Slater and Usoh, it has not been adapted for over two decades. Only two more recent taxonomies embedded such a categorisation: the one by Nilsson in 2015 and the one by Fisher et al. in 2017. Thus, a mundane/magical discrimination is not reflected in the common words depicted in Fig. 8.

Use Cases for VR Locomotion Taxonomies

The previously described results provide data on the taxonomies and publications themselves. One of the follow-up questions that arose from these results concerned the motivation and intention for VR locomotion taxonomies and how they can be applied. To answer this question, we derived use cases based on the method described in Section 4. Overall, we extracted five use cases in which VR locomotion taxonomies can be applied and depicted them in Fig. 9. The use cases were ordered based on how a process with a locomotion method or component could look: first, one explores the design space. Subsequently, one either finds an existing method or component or, if none of the existing ones satisfies the requirements, one creates a new method or component. Afterwards, this locomotion method or component is evaluated, and the taxonomy can be used as a common reference to transfer its design and idea.
This process is similar to parts of the user-centred design process according to the ISO 9241-210 standard [82], where a design solution is created first, e.g., by exploring the design space and finding or creating a locomotion method or component, before evaluating it. We found that most authors described exploring the design space and evaluating locomotion methods or their components as a use case for VR locomotion taxonomies, followed by the use case of creating a locomotion method or component, finding a locomotion method or component, and using the taxonomy as a common reference.

The exploration of the design space was identified by authors introducing a Cluster 1 taxonomy, i.e., Mine [77], Bowman et al. [66], Tan et al. [74], and Nabiyouni and Bowman [81], by Jerald [73], who introduces a Cluster 2 taxonomy, and by Boletsis [4], who introduces a non-clustered taxonomy. Design spaces consist of multiple dimensions which represent the design possibilities and potential choices [83], [84]. The design space can be defined in a systematic approach by identifying similarities and differences of multiple existing designs to define the dimensions of the design space [83], [84]. These identified design spaces can be useful for design space exploration, where multiple designs or design options are compared by designers with respect to the given requirements [83]. Cluster 1 taxonomies focus on describing components of LMs and possible design choices instead of clusters of LMs, i.e., they "partition the design space" [66]. Thus, they are designed to allow an exploration of the design space. Mine and Bowman et al. argue that their Cluster 1 taxonomies provide "a good understanding of the types of interaction that are possible" [77] and help to "understand the space of possible techniques" [66] by decomposing it into "smaller, more easily understandable pieces" [66]. Nabiyouni and Bowman point out that this decomposition and identification of design space parts enables users to "analyze the components of [...] locomotion techniques". Tan et al.
explicitly state that their taxonomy is meant to "drive the exploration of the design space" [74]. In contrast, the taxonomy of Jerald does not decompose LMs and thus does not contain different dimensions of the design space. Instead, it identifies groups of LMs and allows the user to "identify possible design choices," i.e., discard or further examine whole groups of LMs. Boletsis' [4] approach for defining a taxonomy is equivalent to the process of defining a design space: in an SLR, existing design solutions, i.e., LMs, are identified and subsequently analysed and compared to "map the VR locomotion research field [and] identify research gaps in the field that warrant further exploration". During the analysis, different design space dimensions are identified and integrated into the taxonomy. Zielasko et al. [85] provide an example for this use case by using the taxonomy of Suma et al. [63] for their design space exploration.

Four authors suggested that taxonomies can help finding an already existing locomotion method or component [56], [66], [73], [81] by helping with the choice [66], [81]. In order to make a well-founded choice, one needs a given set of requirements as well as an overview of all possible choices to prevent skipping a possible solution that could have fulfilled the requirements better than all considered solutions. A taxonomy can provide such an overview by describing the "types of locomotion available" [56]. Jerald suggests that taxonomies can also help when searching for alternatives "when a specific technique fails" [73], since users can then consider "other techniques within the same pattern" [73]. In the same way, taxonomies can help to choose a component [66], [81].

A related use case is the creation of a locomotion method or component [56], [66], [67], [74], [81]. By separating LMs into their components, locomotion taxonomies allow a more modular workflow where single components can be replaced [56] or multiple components can be combined [56], [81]. Other authors point out that taxonomies can guide [56], [66] and inspire [74] during the creation process. Nabiyouni [86] provides an example for creating a novel locomotion method based on a taxonomy.

The evaluation of an LM or its components was described by six authors as an application for VR locomotion taxonomies [2], [56], [66], [71], [73], [81]. The suggested ways that taxonomies could support evaluations range from helping with the planning of the experiment [66], over supporting comparisons [2], [56], [73], to making the results more understandable [81]. Arns [56] and Bowman et al. [66] propose to use their taxonomies as a "framework". Bowman et al. [66] further specify that, in addition to the taxonomy, performance metrics and outside factors are also part of the framework. In this framework, taxonomies can help to "generate ideas for experimental evaluation" [66] by "[d]esigning experiments that vary particular components systematically and independently". In addition, locomotion methods can be compared to each other [56], [73] by classifying them using a taxonomy, which reveals similarities and differences [56]. When evaluating LMs, different results can be attributed to the identified differences, which helps to "understand the effects of design choices" [81]. An example of this use case is the user study by Dewez et al. [87], where the taxonomy by Boletsis [4] was used to choose the locomotion methods evaluated in the user study.
A less frequently described use case is to use taxonomies as a common, standardised description [4] or a "common reference" [66]. This supports the communication between researchers [4], [73] by using "broader pattern names and concepts" [73], such as Flying. "Communication on a more abstract level conveys the rough idea without giving a more time-consuming description that clearly distinguishes one technique from another" [4]. A taxonomy can be especially helpful for "interaction aspects and functionalities that were previously difficult to describe" [4]. For example, Martinez et al. [17] use the taxonomy by Bowman et al. [68] to structure their systematic review, and Di Luca et al. [19] use taxonomy elements for their VR locomotion online database called LocomotionVault.

DISCUSSION

To address R1 (What are existing taxonomies or categorisations for LMs?), we retrieved 27 VR locomotion taxonomies that have been introduced between 1994 and 2020. The overlap between the taxonomies identified in the previous works of Al Zayer et al. [16], Di Luca et al. [19], and our previous work [20] shows the difficulty of identifying and extracting locomotion taxonomies. Smaller overlaps can be due to different methods, foci, and different understandings of what a taxonomy is. We retrieved 12/12 (100%) of the taxonomies found by Al Zayer et al. and 11/14 (79%) of the taxonomies found by Di Luca et al. (see Section 5). The three works included in the analysis by Di Luca et al. that were not included in our analysis were most likely not found since they describe locomotion methods instead of categories [88], [89], [90]. We discarded two of the taxonomies included in our previous work because one was a slightly altered re-introduction and one applied an already existing taxonomy. Al Zayer et al. found 12 of our 27 taxonomies (44%), but at least four were published after their publication (12/23, 52%). Di Luca et al. found 8 of the 27 taxonomies we analysed (30%). The comparison to previous work shows that we found 79%-100% of the locomotion taxonomies detected previously, suggesting that we provide a thorough answer to R1.

To answer research question R2 (What are common elements of these taxonomies?), we identified common elements among all taxonomies but also common elements that were predominantly found in clusters of taxonomies. Some common elements over all taxonomies identified by us were too general to provide much insight, e.g., Locomotion, while others, e.g., Input, were predominantly used by specific clusters of taxonomies. Some of the identified common elements overlap with the ones identified by Di Luca et al. (Walking, Move/Motion, Input, Continuous). Other elements identified by Di Luca et al. do not occur in our list of common elements; e.g., the discrimination between egocentric and exocentric has only been used by Hand et al. [58]. We found that some of the identified elements of Di Luca et al.
were frequently used in Cluster 1 taxonomies but are less common when regarding all taxonomies (Control, Velocity/Speed, Acceleration, Direction). As our results show, taxonomies are not homogeneous but form groups with unique common elements and a different focus, e.g., grouping or decomposing LMs. Thus, it is more meaningful to extract common elements for taxonomy clusters, and there is not a single answer to R2 but rather multiple answers for each cluster of taxonomies. Our analysis revealed three clusters of taxonomies: Cluster 1 taxonomies focused on the decomposition of LMs, Cluster 2 taxonomies grouped LMs based on the metaphor, and Cluster 3 taxonomies separated concepts between Mundane/Natural and Magical. The elements Speed/Velocity, Acceleration, Selection, Constant, and Object are used exclusively by Cluster 1 taxonomies. Position, Control, and Direction are used mainly by Cluster 1 taxonomies. Cluster 2 taxonomies often contained elements related to Technique, Walking, and Steering. The focus on walking and steering suggests that other metaphors such as teleportation are currently underrepresented and could be considered more closely. Taxonomies in Cluster 3 all contain the elements Mundane or Natural and Magical.

Our analysis of the citation data (R3: What impact do these taxonomies have?) estimates how the impact evolves over time. We found that recently the interest in taxonomies in general increased but also shifted from decomposing taxonomies to metaphor-based taxonomies. Our results are based on the citation data and merely estimate the impact, since higher citation numbers can be due to multiple reasons. However, our results are consistent with previous work, where we found a similar rise of interest in 2015 based on the rising number of taxonomies that were introduced.

Our results for research question R4 (How did the research field of taxonomies evolve?) suggest that the knowledge that taxonomies model evolves over time. Decomposing taxonomies have been mostly introduced between 1995 and 2002, while metaphor-based taxonomies have been a more recent trend. With the introduction of new taxonomies, the importance of common elements that are present in all taxonomies can increase or decrease. In our previous work we also found a shift to more specific taxonomies instead of taxonomies for locomotion in general. This change in VR locomotion taxonomies can be due to novel knowledge or a changing understanding of the knowledge. Di Luca et al. pointed out that knowledge changes over time and proposed a database for locomotion methods that evolves over time, i.e., enables users to add novel locomotion methods. To provide a further ability to adapt to the changing knowledge, the underlying knowledge model, i.e., the taxonomy, should also be capable of changing over time.
To answer research question RU (What are common use cases described by the authors of VR locomotion taxonomies?), we identified five use cases. The use cases we present can be used in a user-centred approach to design novel taxonomies but also to evaluate existing taxonomies, as proposed in several related works [11], [13], [14], [15]. We found that some use cases were only described for specific taxonomy clusters. The use case of creating a method based on components was only described for decomposing Cluster 1 taxonomies. The authors of Cluster 2 taxonomies mainly described the use cases of exploring the design space and using the taxonomy as a common reference. Thus, our results can help to identify which use cases might be applicable to a given taxonomy. The rising interest in metaphor-based taxonomies could lead to an increased interest in associated use cases such as exploring the design space and using a common reference. With a growing design space and accumulating existing knowledge due to the rising number of locomotion methods over the recent years [4], the importance of these use cases can increase.

CONCLUSION & FUTURE WORK

Locomotion is part of most virtual reality applications, and over the recent years the amount of knowledge on VR locomotion has risen. While knowledge representations such as taxonomies can help to structure this knowledge and many VR locomotion taxonomies have been introduced, there exists no survey and in-depth analysis of VR locomotion taxonomies. We performed an SLR to retrieve VR locomotion taxonomies and further analysed them by visualising their structure and outlining their scope (R1), extracting common elements, similarities, and clusters of taxonomies that are similar (R2), comparing their impact based on citation data (R3), analysing the temporal evolution of common elements together with the temporal evolution of taxonomy clusters (R4), and extracting common use cases (RU). Our work provides researchers, developers, and designers with a visual overview and analysis of current VR locomotion taxonomies and the locomotion concepts contained within them. The locomotion concepts within taxonomies support several use cases, including the common use cases we identified and described.

Our SLR provides a systematic overview of locomotion taxonomies and concepts as well as insights into gaps, such as little emphasis on teleportation, and emerging trends, such as the increasing focus on groups of LMs based on metaphors instead of splitting LMs into components. Our analysis supports the decision for locomotion concepts or whole taxonomies, e.g., when structuring locomotion knowledge or communicating locomotion methods. Researchers and designers can use these insights to create novel locomotion methods, e.g., by using metaphor-based design approaches or focusing on less explored areas such as teleportation. The temporal analysis of VR locomotion elements shows how locomotion concepts evolved over time and can be used by researchers interested in the history of VR locomotion. Together with other temporal depictions, such as the introduction of locomotion methods [19], it can provide additional insights into the temporal evolution of the research field.
The identified use cases support a user-centred taxonomy design and can be utilised later on to evaluate the created taxonomy.We found that the structure, focus and interest in taxonomies changes over time and suggest to enable future taxonomies to adapt and evolve.The use cases also enable researchers to evaluate and compare multiple locomotion concepts such as metaphors with respect to their usefulness for the identified use cases. Our work provides insights into how researchers aim to structure the rising knowledge in VR locomotion research and what their main objectives are.We hope to inspire and drive future work in the area of VR locomotion and the structuring of VR locomotion knowledge. Future work could focus on an extension of the use cases by use cases described in other research areas, as, e.g., using taxonomies for learning and teaching [91].Since our analysis of citation data merely estimates the impact of taxonomies, further insights into the impact of VR locomotion taxonomies are required.This could be achieved by analysing how taxonomies have been applied and what the results were, e.g., user studies and novel locomotion methods.This analysis could also provide interesting examples for the identified use cases and taxonomy preferences for specific use cases.Additionally, we are interested in evaluating how well already introduced taxonomies perform for the identified use cases to provide a decision basis based on use cases for researchers applying locomotion taxonomies.The integration of VR locomotion taxonomies into typical workflows as the user-centred design process [92] as suggested by Schweißet al. for an AR taxonomy [93] could further motivate the usage of locomotion taxonomies.We are interested in how taxonomies could change over time to adapt to a changing knowledge or shifting interest. Tintu Mathew received the BTech degree in computer science and engineering from the SCT College of Engineering, India, and the MSc degree in autonomous systems from Hochschule Bonn-Rhein-Sieg, Germany.She is currently leading the research group Intelligent and Immersive Systems with Fraunhofer FKIE.Her research interests include user-centred design of augmented and virtual reality applications with a focus on natural user interfaces, navigation in virtual environments, and human-robot interaction. Benjamin Weyers (Member, IEEE) received the PhD degree from the University of Duisburg-Essen, Germany, in 2011.He is currently Assistant Professor for Human-Computer Interaction with University of Trier, Germany.His research interest lies in development and investigation of interactive systems in work-related context with a specific focus on virtual and augmented reality, persuasive systems as well as the application of formal modeling methods. Fig. 1 . Fig.1.The study identification and screening process depicted as a flow chart based on the PRISMA 2020 statement[54]. Fig. 6 . Fig.6.In the elements referenced by at least nine taxonomies are depicted in descending order, each with a different colour.The taxonomy nodes containing these elements are depicted in the same colour with the reference to the taxonomies below them. in 2004 have been published in the time period from 2010 to 2019.The third cluster in orange consists of the taxonomies by Slater and Usoh in 1994, by Nilsson in 2015, and by Fisher et al. in 2017.The taxonomies of the Cluster 1 shape the period from 1995 to 2010 and classify LMs based on the way different Control elements (Common Element in Fig. 
8: 1999-2010, 2015-2016; Mine 1995, Bowman et al. 1999, Templeman et al. 1999, Tan et al. 2001) have been chosen, i.e., how the Input (2001-2010, 2016; Bowman et al. 1996, Templeman et al. 1999, Tan et al. 2001, Arns 2002, Nabiyouni and Bowman 2016) has been designed. Many contain a discrimination based on the specification of the Direction (1996-2001, 2008-2010; Mine 1995, Bowman et al. 1996, Templeman et al. 1999) and Position (2001-2010, 2016; Bowman et al.

Fig. 7. Percentage of citations per year between 1994 and 2021 for the publications in which the taxonomies were introduced. Unclustered taxonomies are displayed in grey, Cluster 1 taxonomies in blue, Cluster 2 taxonomies in green, and Cluster 3 taxonomies in orange. The black line shows the overall number of citations per year for all introduced taxonomies.

The foundation was laid by Arns in 2002, with the first integration of a walking category, and by Bowman et al. in 2004, where the focus was a discrimination based merely on metaphor, e.g., steering and target-based, or on the type of Technique (2010-2020; Bowman et al. 2004, Suma et al. 2010, Bozgeyikli et al. 2019). Taxonomies in Cluster 2 integrated different Walking methods (2008-2020; Suma et al. 2010, Wendt 2010, Nilsson et al. 2013, Jerald 2015, Ferracani et al. 2016, Bozgeyikli et al. 2019), e.g., via a Treadmill (2010, 2015-2016; Wendt 2010, Jerald 2015). The most common ones are Redirected Walking, Walking in Place, and Real Walking (see Fig. 6). The second most common category are Steering methods (1999-2001, 2004-2020; Bowman et al. 2004, Suma et al. 2010, Jerald 2015). The third cluster contains taxonomies which categorise LMs as mundane (or natural) and magical. While this categorisation was first introduced in 1994 by Slater and Usoh, it was not adopted again for over two decades. Only two more recent taxonomies embedded such a categorisation: the one by Nilsson in 2015 and the one by Fisher et al. in 2017. Thus, a mundane/magical discrimination is not reflected in the common words depicted in Fig. 8.

Fig. 9. Identified use cases for applying VR locomotion taxonomies with the title, description, and citations of mentions in the taxonomy publications.

Both taxonomies have Rotation and Translation nodes with a similar structure. Translation is related to a DoF node with child nodes that list different DoFs. Both taxonomies have a Position(-based) node in relation to the Velocity and Acceleration Selection or the Speed of the Input and Output. Arns lists Sliding Sandals as an Interaction Device, while Nabiyouni and Bowman have Sliding as a Walking Movement Style. Arns attached a Body node as a child of Physical Rotation. Nabiyouni and Bowman list different body parts that can be tracked and used as input properties. Nabiyouni and Bowman describe the Input and Input Properties Sensed, and Arns the Input Conditions. While Nabiyouni and Bowman designed their taxonomy for Walking-based Locomotion Techniques, many nodes are similar to Arns' taxonomy, which describes locomotion in general. Both taxonomies also integrate nodes specifically for walking, e.g., Walking Surface, Scaled Walking, and Regular Walking.
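As an illustration of the citation analysis behind Fig. 7, a minimal sketch follows; it assumes one list of citation years per publication and normalises per publication, which is our reading of the figure rather than the authors' documented procedure:

```python
from collections import Counter

def citation_share_per_year(citation_years, all_years=range(1994, 2022)):
    """Percentage of a publication's citations falling in each year,
    roughly as plotted in Fig. 7 (the authors' exact normalisation
    may differ from this proxy)."""
    counts = Counter(citation_years)
    total = sum(counts.values())
    return {y: 100.0 * counts.get(y, 0) / total for y in all_years}

# Hypothetical citation years for one taxonomy publication:
shares = citation_share_per_year([1996, 1999, 1999, 2005, 2018])
print(round(shares[1999], 1))  # 40.0 (2 of 5 citations fall in 1999)
```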
Tan et al. also relate Speed and Orientation to travel control; in their taxonomy, they are children of the Travel Control, while Nabiyouni and Bowman attached Speed to the Input as well as the Output and related Orientation to the tracked input properties. Both taxonomies integrate multiple nodes to describe the input: Input and Input Properties Sensed (Nabiyouni and Bowman), and Audio Input and Input Mechanism (Tan et al.). In addition, both taxonomies contain a node Mapping (for the Rotation and Translation) and a node Control Mapping, with a child Constant (1:1) and 1:1, respectively. The taxonomy of Tan et al. is also related to the taxonomy by Arns (z = 1.9616). Both have nodes for Discrete and Continuous. Tan et al. integrated them as categories for the Control Frequency, while Arns integrated the taxonomy part of Bowman et al. where the two nodes are children of the Explicit Selection node of the Velocity/Acceleration Selection. The discrimination between Explicit Selection, Automatic/Adaptive, and Constant Velocity and/or Acceleration of Arns and Bowman et al. can also be found in the taxonomy of Tan et al., where the Control Mapping is divided into the nodes Constant (1:1) and Variable (modal), the latter again divided into Explicit (user) and Implicit (system). Tan et al. distinguish different display types based on the degree of immersion, while Arns integrated several Display Devices into her taxonomy. While Tan et al. split Simultaneous Views and Existence into Single and Multiple, Arns divided the Projection display device into single and multiple walls. Both taxonomies also
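The pairwise comparisons above rest on a structural similarity between taxonomies, reported as z-scores (e.g., z = 1.9616). A minimal sketch of one possible proxy, shared node labels under Jaccard overlap, is shown below; the survey's actual similarity measure and its z-score computation are not reproduced here:

```python
def node_overlap_similarity(nodes_a, nodes_b):
    """Jaccard overlap of node labels between two taxonomies.
    This is an illustrative proxy only, not the authors' measure."""
    a = {n.lower() for n in nodes_a}
    b = {n.lower() for n in nodes_b}
    return len(a & b) / len(a | b)

# Node labels taken from the comparison in the text above:
arns = {"Rotation", "Translation", "DoF", "Walking Surface", "Scaled Walking"}
nabiyouni_bowman = {"Rotation", "Translation", "DoF", "Input", "Walking Movement Style"}
print(node_overlap_similarity(arns, nabiyouni_bowman))  # 0.4286 (3 shared of 7 total)
```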
Contacts in the last 90,000 years over the Strait of Gibraltar evidenced by genetic analysis of wild boar (Sus scrofa)

Contacts across the Strait of Gibraltar in the Pleistocene have been studied in different research papers, which have demonstrated that this apparent barrier has been permeable to human and faunal movements in both directions. Our study, based on the genetic analysis of wild boar (Sus scrofa), suggests that there has been contact between Africa and Europe through the Strait of Gibraltar in the Late Pleistocene (at least in the last 90,000 years), as shown by the partial analysis of mitochondrial DNA. Cytochrome b and the control region from North African wild boar indicate a close relationship with European wild boar, and some specimens even belong to a haplotype that is common in Europe. The analyses suggest the transformation of wild boar phylogeography in North Africa through the emergence of a natural communication route in times when sea levels fell due to climatic changes, and possibly through human action, since the contacts coincide with both the Last Glacial period and increasing human dispersion via the strait.

Introduction

At present, Africa and Europe are geographically very close to each other, separated by only 14 km across the Strait of Gibraltar. However, it is known that major falls in sea level (~100 metres) related to glacial periods, and the consequent emergence of islands, reduced this distance to smaller marine barriers of less than 5 km each [1-3]. In this situation, interaction between both sides of the strait seems possible, despite it being a barrier for some species [4-8]. Evidence of contacts across major marine distances is not new. Human dispersal across a marine barrier 0.88 million years ago (MYA) has been demonstrated on Flores Island (Indonesia) [9,10]. During glacial periods, the Strait of Sicily would not have acted as a major geographical barrier for some species [11,12]. For the Strait of Gibraltar, there are some documented cases of movements of hominids and fauna across this permeable barrier [2,10,13-17]. For example, the arrival of humans and vertebrate fauna in the Iberian Peninsula from Africa has been recorded at the sites of Orce (southeast Spain) as early as the Plio-Pleistocene boundary [2,10,18].

Due to the complex biogeographic histories of some species, it may be complicated, or even impossible, to distinguish the cause of movements in the Late Pleistocene. The sea level was lower until the Last Glacial Maximum (LGM), between some 25,000 and 18,000 years ago. Thus, contacts could have taken place through natural migrations or colonisations, or through anthropogenic introductions [17,19]. In any case, the North African wild boar (Sus scrofa) is closely related to the European wild boar, which indicates a strong gene flow [20,21], but no studies have focused on the possible routes of these contacts. Very few studies of African populations exist, and the history of the native wild boar in North Africa is poorly known. We found some references from historical and paleontological records about its possible origin [22-24], one study about the genetic structure of the wild boar population of Tunisia [25], and a number of studies about African pigs [26,27].
In GenBank, we found only three cytochrome b sequences and four control region sequences that were identified exclusively in Morocco or that belonged to sequences also found in wild boars from other areas. In this study, five Moroccan wild boar samples were analysed and incorporated into GenBank [28]. This dataset sufficed to allow us to test the hypothesis of the present work.

The present study aims to elucidate contacts between Africa and the Iberian Peninsula across the Strait of Gibraltar by considering the genetic similarity of the wild boar populations on both sides of the strait. We decided to analyse the mitochondrial DNA (mtDNA) cytochrome b and control region because the latter is more hypervariable. The first analysis of Y-chromosome polymorphisms of a Moroccan wild boar is also provided.

Samples and DNA extraction

Hair and tissue were obtained from five wild boar individuals: four females and one male (WBMoroc2) sampled from the Middle Atlas in Morocco. Samples were collected during the 2014 and 2015 hunting seasons. Throughout the study area, no special permits were required to legally hunt wild boars, only a general hunting licence. Animals were killed for other purposes and no authors were involved in hunting. Samples were obtained directly from licenced hunters. Mitochondrial DNA was extracted from hair roots and tissue samples using the materials and protocols for DNA isolation of the Invisorb® Spin Forensic Kit (STRATEC Biomedical AG, Berlin).

Mitochondrial DNA amplification and sequencing

Mitochondrial DNA was amplified using the primers and amplification profiles described by Alves et al. [29] (S1 Table). The thermocycling profile was the same for both cytochrome b and the control region: one cycle at 94˚C for 2 min, followed by 30 cycles of 94˚C for 45 seconds, 55˚C for 45 seconds and 72˚C for 1 min, and finally an extension step at 72˚C for 10 min. PCR products were purified and sent to Macrogen (http://www.macrogen.com/eng/) for sequencing. We obtained two complementary fragments for each region, which were assembled using BioEdit 7.2.5 [30]. The Y-chromosome markers were amplified under the following conditions: 95˚C for 10 min and 35 cycles of 94˚C for 1 min, the annealing temperature (Tm) (S1 Table) for 1 min and 72˚C for 1 min, and finally an extension step at 72˚C for 15 min [21,27]. PCR products were purified and sent to Macrogen for sequencing.

Mitochondrial DNA analyses

In all, 1,152 base pairs (bp), including the entire cytochrome b, were obtained for the five analysed wild boar samples (GenBank accession numbers: KU664546, KU6645407, KU66454, KU664553 and KU608293) and were aligned with the 358 wild boar sequences available in GenBank (Table A in S2 Table). Bearded pig (Sus barbatus), Celebes wild boar (Sus celebensis), Philippine warty pig (Sus philippensis) and common warthog (Phacochoerus africanus) were employed as the outgroups. The partial control region (995 bp) was amplified for the five analysed wild boar samples (GenBank accession numbers: KU664554-KU664558) and aligned with the 1,210 sequences from GenBank (Table B in S2 Table). Celebes wild boar (S. celebensis) and common warthog (P. africanus) were employed as outgroups. The sequences used for the analysis comprised wild boar with a wide geographical distribution, including Europe, Africa, the Near East and Asia. These regions represent the geographic areas of interest for our study. From the available GenBank sequences, we selected those from wild boar.
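As a practical aside, public sequences like these can be retrieved programmatically; the following is our illustration using Biopython's Entrez interface, not part of the authors' stated workflow (the contact e-mail is a placeholder required by NCBI):

```python
from Bio import Entrez, SeqIO

Entrez.email = "researcher@example.org"  # placeholder; NCBI requires a contact address

# KU664546 is one of the cytochrome b accessions reported in this study.
handle = Entrez.efetch(db="nucleotide", id="KU664546", rettype="gb", retmode="text")
record = SeqIO.read(handle, "genbank")
handle.close()

print(record.id, len(record.seq))  # accession and sequence length
```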
We ruled out most sequences from clones, feral pigs, and mixed and archaeological specimens, as well as the sequences classified as unverified or predicted. The size of the final dataset used for the analyses varied after aligning all the selected sequences and removing the positions that contained gaps and missing data (N). Sequences were aligned using BioEdit 7.2.5 and the ClustalW alignment tool included in this software. The number of haplotypes was calculated using the DnaSP 5.10 software [31]. The cytochrome b haplotypes obtained here were given the code "CB", and those obtained for the control region were coded as "CR". The best nucleotide substitution models were selected using jModelTest 2.1.7 [32] under the Bayesian Information Criterion (BIC). Pairwise genetic distances between sequences were calculated with MEGA6 [33], with 1,000 bootstrap replicates and a gamma distribution (shape parameter = 0.5).

Time of divergence (T) was estimated using the molecular clock equation T = K/(2r) [34], where T = divergence time in years, K = genetic distance and r = rate of nucleotide substitution. The genetic distance (K) between P. africanus and the genus Sus was calculated with the Tamura 3-parameter model and a gamma distribution (shape parameter = 0.5) using MEGA6, for both cytochrome b and the control region. We assumed a substitution rate (r) of 1 × 10^-8 per site per year for cytochrome b. This rate was previously estimated for complete mtDNA [35]. We used a higher substitution rate (r) of 1.37 × 10^-8 per site per year, as estimated by Pesole et al. [36], for the control region in mammals [37,38].

Bayesian phylogenetic trees were constructed using BEAST 1.8.2 [39]. We assumed a strict clock and a coalescent prior with constant size. The evolutionary parameters were given by jModelTest. At least two independent Markov chain Monte Carlo (MCMC) chains were run for 50 million generations, and parameter values were sampled every 1,000 generations. We examined the results using Tracer 1.6 [40]. We used TreeAnnotator 1.8.2 [39] to obtain the consensus trees. The first 10% of the sampled trees were discarded as burn-in, and the resulting trees were visualized in FigTree 1.4.2 (http://tree.bio.ed.ac.uk/software/figtree/). Two median-joining networks [41] were generated and visualized using Network 5.0.0.1 (http://www.fluxus-engineering.com). For cytochrome b, whose data in the final dataset were more complex than those of the control region, we used the star contraction option and epsilon = 20.

Results

For methodological reasons, the analyses were carried out with wild boar samples from Europe, Africa, the Near East and Asia, but we focused on the analysis of the clades most closely related to Africa and Europe. Asian sequences were included to improve the analysis of the divergence times and to better understand the spread of wild boar from Asia to Europe.

Cytochrome b analyses

Except for the network (1,030 bp), we used an 897 bp fragment corresponding to the cytochrome b gene for the analyses. In all, 107 haplotypes were identified from 363 wild boars (S7 Table). The Moroccan and Tunisian samples are included in haplotypes CB9 and CB90 (Table A in S2 Table). CB9 is the commonest haplotype in Europe and is shared by some North African wild boars, 33 European (including Italian) wild boars, four Asian wild boars and four wild boars from Near Eastern countries. When focusing on the Iberian Peninsula, on the other side of the Strait of Gibraltar, we found that 8 of the 13 Spanish wild boars were included in this haplotype.
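Returning to the molecular clock equation given in the Methods, a minimal sketch makes the arithmetic concrete; the rates are those stated above, while the distance value K = 0.14 is a hypothetical input chosen so that the output matches the ~7 Myr Sus/Phacochoerus split reported in the Results:

```python
def divergence_time_years(k, r):
    """Molecular clock: T = K / (2r), with K the pairwise genetic
    distance and r the substitution rate per site per year."""
    return k / (2.0 * r)

R_CYTB = 1e-8     # rate assumed for cytochrome b (complete mtDNA estimate)
R_CR = 1.37e-8    # rate assumed for the mammalian control region

# Hypothetical K = 0.14 reproduces the reported ~7 Myr split for cytochrome b:
print(divergence_time_years(0.14, R_CYTB))  # 7000000.0 years

# The two African isolation estimates reported later (63,800 and 116,600 years)
# average to roughly the ~90,000-year figure used in the Discussion:
print((63_800 + 116_600) / 2)  # 90200.0
```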
For the phylogenetic analysis, the best model for cytochrome b was the Generalised Time-Reversible evolutionary model with a gamma distribution (GTR + G) [46]. The Bayesian phylogenetic tree revealed the previously observed clades: E1 (European clade), E2 (Italian clade), NE (Near Eastern clade) and A (Asian clade) (Fig 1A). The haplotypes with sequences from Morocco and Tunisia belonged to the European clade (E1). The pairwise genetic distances between clades ranged from 0.0085 to 0.0138 (Table A in S3 Table). The mean distances between the haplotypes included in the European clade (E1) were calculated (Table B in S3 Table). The MoroccoA haplotype (CB90), exclusive to Africa, differs from the rest more than CB9 does, CB9 being the other haplotype containing African sequences.

The cytochrome b gene contains six single nucleotide polymorphisms (SNPs) that allow the differentiation of the European and Asian haplogroups [26,37,47]. In order to understand the genetic diversity in African wild boar populations, the variable sites of their sequences were analysed (S5 Table). The cytochrome b sequences (1,047 bp) have seven nucleotide polymorphic sites in the analysed fragment: five transitions, one transversion and one deletion.

The time of divergence estimated between the genus Sus and P. africanus was 7 million years (between 5.95 and 8.05 million) for cytochrome b (S6 Table). From the Bayesian phylogenetic tree, it was deduced that the isolation between the Asian and the other clades occurred approximately 818,300 years ago. The isolation of the European clade (E1) occurred some 429,500 years ago. The beginning of the isolation of the haplotypes found in North Africa took place 63,800 years ago.

Networks were constructed to better visualize the relationships among clades (Fig 2A). We used fewer sequences with more base pairs to better understand the existing relationships and to check for possible variations when using a larger segment size (1,030 bp). North African, Near Eastern and European wild boars clustered in the same way in both the network and the Bayesian phylogenetic tree.

Control region analyses

For the phylogenetic analysis, the best model for the control region was the Generalised Time-Reversible model with invariant sites and a gamma distribution (GTR + I + G) [46]. The Bayesian phylogenetic tree revealed the same clades seen for cytochrome b (Fig 1B). The sequences from Morocco and Tunisia clustered in the European clade (E1), except for the CR182 haplotype from Tunisia. The control region sequences from Egypt (CR46), Sudan (CR164) and Tunisia (CR182) belong to the Near Eastern clade (NE). Egypt shares a haplotype with some sequences from Iran. The pairwise genetic distances between clades ranged from 0.0168 to 0.0293 (Table A in S4 Table). The mean distances between the haplotypes included in the European clade (E1) (Table B in S4 Table) showed a longer distance between the haplotypes found in Morocco (between CR1 and CR140/141) than between some Moroccan haplotypes and the Tunisian one (between CR1 and CR181). The higher values of the mtDNA control region distances can be explained by it being a more hypervariable region than cytochrome b. The analysed partial control region (406 bp) has 12 nucleotide polymorphic sites, including 11 transitions and one deletion (S5 Table).

The time of divergence estimated between the genus Sus and P. africanus was 6.75 million years (between 5.51 and 7.99 million) for the control region. The isolation between the Asian and the other clades occurred approximately 1,234,100 years ago (S6 Table).
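To illustrate how variable sites are classified into the transitions, transversions and deletions reported above, a small sketch follows; the sequences in the example are toy fragments, not the wild boar data:

```python
PURINES, PYRIMIDINES = {"A", "G"}, {"C", "T"}

def classify_substitutions(seq1, seq2):
    """Count transitions, transversions and indels between two aligned
    sequences of equal length ('-' marks an alignment gap)."""
    ts = tv = indels = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a == b:
            continue
        if a == "-" or b == "-":
            indels += 1
        elif ({a, b} <= PURINES) or ({a, b} <= PYRIMIDINES):
            ts += 1  # purine<->purine or pyrimidine<->pyrimidine
        else:
            tv += 1  # purine<->pyrimidine
    return ts, tv, indels

# Toy aligned fragments:
print(classify_substitutions("ACGTACGT-A", "ACATACGTCA"))  # (1, 0, 1)
```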
The time of divergence of the European clade (E1) was 696,000 years ago, and the beginning of the isolation of the haplotypes found in North Africa took place 116,600 years ago, according to the phylogenetic tree. Finally, we generated the network (Fig 2B). In this case, we included only the sequences belonging to clades E1 (Europe), E2 (Italy) and NE (Near East) to focus on the connections between these groups. The relationships we found were similar in both the network and the Bayesian phylogenetic tree.

Y-chromosome haplotype

For the Y-chromosome analysis, we sequenced the partial AMELY, USP9Y and UTY (UTYin1 and UTYin9) regions in one male from Morocco (WBMoroc2) in order to compare them with the published sequences of previous studies [21,48,49] and to identify their haplotype. There were three defined haplotypes, and our results showed that, according to Ramírez et al. [21], our sample belonged to the HY2 haplotype.

Discussion

We explored the relationships via the Strait of Gibraltar based on an analysis of the partial mtDNA of wild boar (S. scrofa). We agree with the presence of one European clade (E1) that is widely distributed in Europe and North Africa (Morocco and Tunisia), one exclusive to Italy (E2), and another with most Near Eastern sequences (NE). These results are congruent with those reported by Larson et al. [45] and Meiri et al. [50]. The control region phylogenetic tree also shows some Asian haplotypes in the basal clade, which coincides with the results of Larson et al. [45]. Regarding genetic distances, in both cases the European clade is closer to the Near Eastern clade than to the Italian one. When considering the distances only in cytochrome b, the Asian clade displays the same distance to both the Italian and the Near Eastern clades, which gives rise to the different distributions in the corresponding trees. However, cytochrome b shows some Asian sequences that are closer to the European ones, which is due to the specific dispersal process of these populations and their contacts with others [51]. Finally, the analyses confirm that the modern wild boars from Morocco and Tunisia share European haplotypes. Only the Tunisian wild boar with haplotype CR182, which is less represented in our results, belongs to the Near Eastern clade, as do the wild boars from Egypt and Sudan. The median-joining networks showed the same relationships when fewer sequences with more base pairs were used.

We obtained an interesting finding for the Moroccan wild boar (Table B in S4 Table). The estimated genetic distance between the sequences belonging to the CR1 haplotype from Morocco and the Tunisian CR181 is 0.0025. The estimated distance to the CR140 and CR141 haplotypes from Morocco is 0.0051 and 0.0077, respectively. CR1 also displays short distances (0.0025) to some haplotypes found in Portugal, Spain and even Greece (haplotypes CR12, 21, 48, 52, 60, 100, 148). A long distance is seen between haplotypes CR141 and CR142. The differences between Moroccan populations are consistent with the results of previous studies, and might indicate different origins or isolation due to geographical barriers in the Maghreb [6,52]. However, other studies provide very little information on the Moroccan samples used to obtain mtDNA. The specimens belonging to our study (CR1) are from the province of Khenifra. Morocco1 (CR1) is from Taforalt (Oujda) [45], and Morocco2 (CR140) and Morocco4 (CR1) are labelled with the location Atlas/Rabat [53].
Although these last two samples are referenced with the same location, it covers a wide area, so the specimens could belong to different populations. As we do not have further information, and without knowing the location of Morocco3 (CR142), more samples will be needed to confirm the hypothesis.

When focusing on the sequences from Israel, we find that they are included in the European clade (E1) and within the most frequent haplotype (CB9). Larson et al. [45] obtained a similar result for one wild boar from Armenia, and suggested that it might be due to introgression from European wild boars or feral pigs. This latter possibility does not seem plausible because Far Eastern haplotypes have a 29% frequency in international pig breeds [8,37], but there is no evidence of these haplotypes appearing in wild boars from Armenia, Israel or North Africa. The most logical explanation is the interchange of haplotypes between wild boars through human action or natural movements. In any case, even though the Near Eastern and African wild boars share this haplotype, they do not seem directly related to one another. Neither the phylogenetic trees nor the networks indicate any similarity between the two populations, except for one haplotype from Tunisia (CR182), which is located in the Near Eastern clade. Therefore, the commonest haplotype might have been transmitted to North Africa by contacts with the European wild boar.

The six SNPs located at specific mtDNA cytochrome b positions, used to differentiate the origin of samples, confirm that the origin of all the sequences from Morocco and Tunisia used for these analyses is European. The results for the variable sites in the African wild boar sequences are representative of a strong transitional bias, usually found in mammalian mitochondrial evolution [54-56].

From the genetic distances and Bayesian phylogenetic trees included in this study, we can estimate that the Near Eastern clade originated between 857,100 and 429,500 years ago (S6 Table). After this event, the wild boar arrived in North Africa, possibly through Egypt, which would have isolated it from the Near Eastern clade. The presence of wild boar fossils from the Middle Pleistocene in North Africa and in several Near Eastern countries [24,57] supports our results and the idea of wild boar forming part of the fauna of the Maghreb during this period. Nevertheless, the North African wild boar currently has European haplotypes.

Since the control region is more hypervariable than cytochrome b, and owing to differences such as the size of the mtDNA segment used and the fact that, in most cases, the analysed sequences of the two regions do not belong to the same specimens, the years of divergence shown in S5 Table vary within a narrow range. For the haplotypes found in Morocco and Tunisia, our analyses gave an approximation of their isolation of 63,800 years ago for cytochrome b and of 116,600 years ago for the control region. Not until these haplotypes appeared could their dispersion between Europe and Africa have begun, causing the genetic similarity. Therefore, taking into account all these conditions and the years obtained, we offer an average approximation of 90,000 years ago (in the Late Pleistocene) for the time when a major gene flow started between populations on both sides of the Strait of Gibraltar. This process must have given rise to the modern African wild boar, and something similar seems to have happened in Israel [50].
The Y-chromosome analysis shows that our sample belongs to haplotype HY2, which is present in at least Tunisia, Spain, Russia, Iran and Japan. Haplotypes HY1 and HY2 have been documented in Tunisia [21]. With the information available from the Moroccan and Tunisian Y-chromosome sequences, it seems that they might be related to the European or Near Eastern wild boar. Haplotype HY3 is relatively abundant in Far Eastern specimens and has been detected in Kenyan and Mukota pigs [21], but is absent in the North African wild boar. These results support the hypothesis that there was no direct gene flow between Asian and African wild boars, which would be congruent with the fact that the commonest mtDNA haplotype would not be present in Africa due to introgression by pigs, as previously mentioned. Although these data are not enough to draw definitive conclusions, given the lack of information on the patrilinear history of Morocco, we feel the analysis has been valuable.

The role of the Strait of Gibraltar as a permeable marine barrier between Europe and Africa

The results are interesting because it would be more logical for North African wild boars to share their haplotypes with those of Egypt or Sudan, or even with those of the Near East, due to connectivity by land. However, all their haplotypes are European, except for CR182. According to Manlius and Gautier [57], the so-called wild boars from Sudan are feral pigs, and the modern wild boars from Egypt were probably introduced by humans in the Neolithic period, like sheep and goats. Even so, the presence of native wild boar at low densities in the past cannot be ruled out, and natural colonisation through Egypt is logical and supported by the existence of fossils found in North Africa. Accordingly, if the only contacts had been made by land through Near Eastern countries, the African wild boar would form part of the Near Eastern clade, or would at least differ from European populations. Therefore, isolation between the populations from Egypt and North Africa in the past (in the Late Pleistocene), and contacts with the European wild boar across the Strait of Gibraltar, the most likely route, could have had a strong effect on the mtDNA of the African populations. Obviously, we cannot rule out contacts prior to the Late Pleistocene, during former glacial periods. The absence of a genetic footprint could be due to the gene flow not being strong enough to have endured [49]. The CR182 haplotype found in Tunisia should be the result of either genetic introgression with the Near Eastern wild boar or a trace from the past; indeed, this haplotype is older than the rest by at least 81,000 years.

Contacts across the Strait of Gibraltar could have been possible during glacial periods, when the sea level fell and the strait was easier to cross. The Last Glacial Maximum, the maximum extent of glaciation during the Last Glacial period, occurred during the interval between 25,000 and 18,000 years ago [17,19], and the Last Glacial period finished 11,700 years ago. The Late Glacial, the beginning of the modern warm period, began approximately 13,000 years ago [58], but the rise in temperatures was gradual. During cold periods, the currents in the strait would have been minimal or nonexistent, which would have facilitated crossing it [2]. As for the causes of the movements, if they took place from approximately 90,000 years ago onward, it is difficult to know, without further data, whether the contacts were made by natural colonisations, by human action, or by a combination of both [48,49].
This is due to several events occurring at the same time. In addition, it would appear that the southern region of the Iberian Peninsula and North Africa form part of a refugial subcentre denominated Atlanto-Mediterranean, from which species could have recolonised areas in the north of Europe at the beginning of the post-glacial periods [5,11,17]. In fact, the existence of human contacts across the strait during the period between 12,000 and 10,500 years ago, when sea levels were still rising, has been proven [13,59]. Human contacts have occurred ever since. The presence in the Iberian Bronze Age of cattle with a haplotype characteristic of African breeds suggests that contacts over the Strait of Gibraltar arose from interactions among communities, their cultures and their livestock in prehistory [14]. Besides, other animals, like the genet (Genetta genetta), the Barbary ape (Macaca sylvanus) and the Egyptian mongoose (Herpestes ichneumon), are accepted as introductions between Europe and North Africa [24]. The movement of people between the two continents, who took pigs with them from the 15th century onward on exploratory or commercial routes, with the consequent genetic introgression, is another possible explanation for the similarity [20,21]. In our case, it is more likely that the migrations occurred naturally. The accessible information suggests that people transported domestic varieties of livestock, such as cattle or pigs, but no information on the transport of wild boar is available. Similarly, the arrival of the wild boar from mainland Asia to the Ryukyu Islands in the Late Pleistocene seems to be a fact [51]. A much closer geographic proximity between the two regions when sea levels were lower could account for this dispersion. Finally, a strong gene flow might have persisted over time, and a population decline or a displacement of the original population, followed by the expansion of new haplotypes, could have occurred. As a result, the African wild boar forms part of the European clade, at least according to its mtDNA.

Therefore, we suggest that the Strait of Gibraltar acted as a bridge for the dispersal of wild boar (S. scrofa) in the Late Pleistocene. In this case, the dispersion of specimens accompanied by a strong gene flow would have occurred from at least 90,000 years ago onward. The genetic analyses, the history of S. scrofa, and the fact that the Last Glacial period finished 11,700 years ago all suggest natural dispersion, but we cannot rule out contact through human action.

Supporting information

S1 Table. Tables with information about the sequences obtained from GenBank and those from this study: cytochrome b (Table A), the control region (Table B) and the Y-chromosome (Table C)
Dynamical Equations and Lagrange-Ricci Flow Evolution on Prolongation Lie Algebroids

The approach to nonholonomic Ricci flows and the geometric evolution of regular Lagrange systems [S. Vacaru: J. Math. Phys. 49 (2008) 043504 & Rep. Math. Phys. 63 (2009) 95] is extended to include geometric mechanics and gravity models on Lie algebroids. We prove that such evolution scenarios of geometric mechanics and analogous gravity can be modelled as gradient flows characterized by generalized Perelman functionals if an equivalent geometrization of Lagrange mechanics [J. Kern, Arch. Math. (Basel) 25 (1974) 438] is considered. The R. Hamilton equations on Lie algebroids describing Lagrange-Ricci flows are derived. Finally, we show that geometric evolution models on Lie algebroids are described by effective thermodynamical values derived from statistical functionals on prolongation Lie algebroids.

Introduction

The Ricci flow theory [1,2] became attractive for research in mathematics and physics after G. Perelman successfully carried out his program [3,4,5], which resulted in proofs of the Thurston and Poincaré conjectures; see reviews of the results in Refs. [6,7,8]. The profound impact of these results on understanding the topology and geometric structure of curved spacetime and the fundamental properties of classical and quantum interactions motivated the study of the geometric evolution of regular Lagrange systems [9,10] on tangent bundles and nonholonomic (pseudo) Riemannian and Einstein manifolds. We developed Ricci flow theories for classical and quantum solutions of the Einstein equations, with generalizations to noncommutative, Finsler, diffusion, fractional spaces etc.; see [11,12] and references therein. Effective Lagrange and Hamilton models, Lie algebroid and almost Kähler and Dirac structures are considered, for instance, in quantum gravity and modified gravity theories, where Ricci flows on parameters are derived from a renormalization procedure with running/evolution of physical parameters etc. [13,14,15].

One of the important tasks in modern geometry and physics is to elaborate and analyze the flow evolution of more complex geometries and physical systems with nontrivial topology, generalized symmetries, nonholonomic constraints etc. So, performing generalizations of the Ricci flow theory for Lagrangians/Hamiltonians on Lie algebroids is not merely an academic exercise of "pure" geometric interest. Fundamental properties of spacetime topology seem to be related to a series of important questions on the dimensions of real mechanical systems and physical interactions, analogous gravity modelling, the renormalizability of certain quantum theories, possible modifications of gravity derived from modern cosmological observations etc. We need rigorous studies of the evolution of theories with rich geometric structure, generalized and deformed symmetries, symplectic structures and nonholonomic constraints. Specifically, the goal of this paper is to elaborate a model of the geometric evolution of Lagrange mechanics and analogous gravity theory on Lie algebroids, using certain constructions proposed and developed in Refs. [16,17,18,19,20].
The key idea considered in our works is that physical theories can be encoded into the geometry of generalized nonholonomic spaces (defined by corresponding classes of non-integrable constraints on fundamental dynamical and evolution equations) via "standard", or analogous, geometric objects like metrics, (almost) symplectic forms, nonlinear and linear connections, related curvatures and torsions, and their geometric flow evolution. A subclass of evolution scenarios is uniquely determined following geometric principles for entropy type functionals derived for families of generating Lagrange functions $L(x, y, \chi)$.^1 Hopefully, such assumptions on geometric evolution mechanics allow us to formulate an alternative and very different approach, and provide new possibilities to explore the properties of Lagrange systems using methods of geometric analysis.

^1 In certain "dual" and, in some sense, more general approaches, we can consider families of Hamiltonians $H(x, p, \chi)$, (almost) symplectic and/or Poisson structures with the associated cotangent bundle $T^*M$ etc. We also note that in our works left lower/upper indices are used as labels for some geometric objects and/or spaces.

Theories of Lagrange and Hamilton systems on Lie algebroids (and various discrete analogs on Lie groupoids, Poisson structures and algebroids etc.) were proposed [21,22] and actively developed during the last ten years; see original contributions and reviews of results in Refs. [13,23,24,25]. On Lie algebroid gravity and gauge interaction models, we cite [26,18,19] and references therein. The inclusive nature of the Lie algebroid formalism allows us to describe very different situations in mechanics and physics, such as Lagrangian systems with symmetry and nonholonomic constraints, theories with semidirect products and/or evolving on Lie algebras, and generalizations. It is possible in such cases to derive Lagrange/Euler-Poincaré, or Euler-Lagrange, equations and to geometrize such systems as generalized Poisson geometries etc. New tools have been introduced and new understanding has been provided, for instance, by the multi-symplectic formalism and the Poisson-Nijenhuis Lie algebroid theory.

Nevertheless, we have to consider additional and alternative constructions for the above mentioned algebroid models of geometric mechanics and classical/quantum field theories if we want to study the Ricci flow evolution of systems and spaces with "rich" geometric and physical structure, keeping a certain analogy with the Hamilton-Perelman theory. It is not clear how the standard formalism elaborated for Ricci flows of (semi) Riemannian and (almost) Kähler geometries can be extended to describe directly the flow evolution of the models of Lie algebroid mechanics developed in Refs. [21,22,23,24,25]. Our proposal is to use J. Kern's constructions on Lagrange spaces [16] (the term is due to that article, which developed in a "nonhomogeneous" manner M. Matsumoto's results on Finsler connections [17], see references therein; for further developments and applications in modern classical and quantum gravity see, for instance, [11,14,15]). In such an approach, the nondegenerate Hessian of a regular Lagrangian can be treated as a metric structure for the fibers of $TM$, which can be extended to the total space using so-called Sasaki lifts [28]. Also involved is a corresponding semi-spray structure inducing a canonical nonlinear connection (in brief, N-connection; the global definition is due to [27], see historical remarks and applications in modern mechanics and gravity in [14]). For such geometric data, a model of Lagrange-Ricci
flow theory [9,10] can be formulated in N-adapted form, via corresponding generalizations of Perelman's functionals, on tangent bundles and/or nonholonomic (semi) Riemannian manifolds.

The paper is organized as follows. In section 2, we survey the geometry of Lie algebroids and prolongations and the geometrization of Lagrange mechanics on such spaces, following the approach of [13,23,24,25]. We summarize the necessary tools from the geometry of N-connections on prolongation Lie algebroids in section 3. The constructions are performed in metric compatible form, which allows us to formulate an analogous N-adapted gravity model on Lie algebroids. An alternative geometrization of regular Lagrange mechanics and analogous modelling of gravity, following the Kern-Matsumoto ideas extended to prolongation Lie algebroids, is provided. Section 4 is devoted to the Main Theorems for Lagrange-Ricci flows on prolongation Lie algebroids.

Lagrange Mechanics and Lie Algebroids

We outline basic concepts and definitions for Lie algebroids and geometric mechanics with regular Lagrangians on prolongations of Lie algebroids over bundle maps; see Refs. [13,23,25] and references therein.

Linear connections and metrics on Lie algebroids

A Lie algebroid $E = (E, \lfloor\cdot,\cdot\rfloor, \rho)$ over a manifold $M$ is a triple defined by 1) a real vector bundle $\tau: E \to M$, together with 2) a Lie bracket $\lfloor\cdot,\cdot\rfloor$ on the space of global sections $Sec(\tau)$ of the map $\tau$, and 3) the anchor map $\rho: E \to TM$, defined as a bundle map over the identity and constructed such that the induced homomorphism $\rho: Sec(\tau) \to \mathcal{X}(M)$ of $C^\infty(M)$-modules satisfies the Leibniz condition
$$\lfloor X, fY \rfloor = f \lfloor X, Y \rfloor + \rho(X)(f)\, Y, \quad \forall X, Y \in Sec(\tau),\ f \in C^\infty(M).$$
For a Lie algebroid, the anchor map $\rho$ is equivalent to a homomorphism between the Lie algebras $(Sec(\tau), \lfloor\cdot,\cdot\rfloor)$ and $(\mathcal{X}(M), \lfloor\cdot,\cdot\rfloor)$. In local form, the properties of a Lie algebroid $E$ are determined by the local functions $\rho^i_\alpha(x^k)$ and $C^\gamma_{\alpha\beta}(x^k)$ on $M$, where $x = \{x^k\}$ are local coordinates on a chart $U \subset M$, with $\rho(e_\alpha) = \rho^i_\alpha(x)\,\partial_i$ and $\lfloor e_\alpha, e_\beta\rfloor = C^\gamma_{\alpha\beta}(x)\, e_\gamma$, satisfying the structure equations
$$\rho^j_\alpha \frac{\partial \rho^i_\beta}{\partial x^j} - \rho^j_\beta \frac{\partial \rho^i_\alpha}{\partial x^j} = \rho^i_\gamma\, C^\gamma_{\alpha\beta}, \qquad \sum_{cyclic(\alpha,\beta,\gamma)} \left( \rho^i_\alpha \frac{\partial C^\nu_{\beta\gamma}}{\partial x^i} + C^\nu_{\alpha\mu} C^\mu_{\beta\gamma} \right) = 0.$$

A linear connection $D$ on $E$ is defined as an $\mathbb{R}$-bilinear map $D: Sec(E) \times Sec(E) \to Sec(E)$ such that, $\forall f \in C^\infty(M)$ and $\forall X, Y \in Sec(E)$, this covariant derivative operator satisfies the conditions $D_{fX} Y = f D_X Y$ and $D_X(fY) = \rho(X)(f)\, Y + f D_X Y$. Locally, $D$ is given by its coefficients $\Gamma^\gamma_{\alpha\beta}$, defined by $D_{e_\alpha} e_\beta = \Gamma^\gamma_{\alpha\beta}\, e_\gamma$, where $X = X^\alpha e_\alpha$ and $Y = Y^\alpha e_\alpha$ for a local basis $\{e_\alpha\} \in Sec(E)$. A curve $a: I \to E$, given by a function $a(\tau) = a^\alpha(\tau) e_\alpha$ of a real parameter $\tau$, is said to be an auto-parallel of $D$ if $D_a a = 0$.

The exterior differential on $E$ can be defined in standard form using the operator $d$ on $E$, with $d: Sec(\Lambda^k \tau^*) \to Sec(\Lambda^{k+1}\tau^*)$ and $d^2 = 0$, where $\wedge$ is the antisymmetric product operator; see details in Refs. [29,23,25,13]. The local contributions can be seen from the formulas: for a smooth function $f: M \to \mathbb{R}$, $df(X) = \rho(X)f$, for $X \in Sec(\tau)$, with $dx^i = \rho^i_\alpha\, e^\alpha$ and $de^\gamma = -\frac{1}{2} C^\gamma_{\alpha\beta}\, e^\alpha \wedge e^\beta$. With respect to any section $X$, we can define the Lie derivative using the cohomology operator $d$ and its inverse $i_X$; see details in [30,25,13].

A metric $\varpi$ on $E$ is defined as a nondegenerate symmetric map $\varpi: Sec(E) \times Sec(E) \to C^\infty(M)$. Locally, $\varpi = \varpi_{\alpha\beta}(x)\, e^\alpha \otimes e^\beta$. We shall also use the inverse matrix/metric, $\varpi^{\alpha\beta}$. There is a "preferred" linear connection ${}^\varpi\nabla$ on $E$ (the analog of the Levi-Civita connection in Riemannian geometry) completely defined by a metric $\varpi$. This connection is uniquely determined by two conditions: it is torsionless, ${}^\varpi\nabla_X Y - {}^\varpi\nabla_Y X = \lfloor X, Y\rfloor$, and metric compatible, $\rho(X)\left(\varpi(Y, Z)\right) = \varpi({}^\varpi\nabla_X Y, Z) + \varpi(Y, {}^\varpi\nabla_X Z)$, $\forall X, Y, Z \in Sec(E)$.
The curvature of ${}^\varpi\nabla$ on $E$ (the analog of the Riemann tensor on standard manifolds) is defined in standard form,
$${}^\varpi R(X, Y)Z = {}^\varpi\nabla_X\, {}^\varpi\nabla_Y Z - {}^\varpi\nabla_Y\, {}^\varpi\nabla_X Z - {}^\varpi\nabla_{\lfloor X, Y\rfloor} Z.$$
Introducing in the above formulas $X = e_\alpha$, $Y = e_\beta$, $Z = e_\gamma$, we compute the coefficients of the torsion and curvature of this Levi-Civita type connection, respectively,
$$T^\gamma_{\ \alpha\beta} = \Gamma^\gamma_{\ \alpha\beta} - \Gamma^\gamma_{\ \beta\alpha} - C^\gamma_{\ \alpha\beta}, \qquad
R^\sigma_{\ \gamma\alpha\beta} = \rho^i_\alpha\,\partial_i \Gamma^\sigma_{\ \beta\gamma} - \rho^i_\beta\,\partial_i \Gamma^\sigma_{\ \alpha\gamma} + \Gamma^\mu_{\ \beta\gamma}\Gamma^\sigma_{\ \alpha\mu} - \Gamma^\mu_{\ \alpha\gamma}\Gamma^\sigma_{\ \beta\mu} - C^\mu_{\ \alpha\beta}\Gamma^\sigma_{\ \mu\gamma}.$$
In standard form, we define the Ricci tensor by contracting the respective indices, ${}^\varpi Ric = \{ {}^\varpi R_{\beta\gamma} := {}^\varpi R^\alpha_{\ \beta\gamma\alpha} \}$, and the scalar curvature, ${}^\varpi R := \varpi^{\alpha\beta}\, {}^\varpi R_{\alpha\beta}$. Such formulas are very similar to those of (pseudo) Riemannian geometry formulated in nonholonomic bases satisfying anholonomy relations with some nontrivial coefficients $C^\varphi_{\ \gamma\delta}$. For the case of Lie algebroids, the fundamental geometric objects are defined on the space $Sec(E)$. The above formulas for metrics, connections and Ricci tensors can be used for elaborating a Ricci evolution theory on Lie algebroids. Nevertheless, a number of additional assumptions and constructions are necessary in order to include in such a scheme models of Lagrange mechanics and classical and quantum field theories.

The prolongation of Lie algebroids and Lagrange mechanics

In Refs. [23,25,13], a geometric formalism for Lagrange mechanics on Lie algebroids was developed using the concept of the prolongation of a Lie algebroid over a fibration (in brief, prolongation Lie algebroid). Let us briefly outline some basic constructions. Consider a Lie algebroid $E = (E, \lfloor\cdot,\cdot\rfloor, \rho)$ and a fibration $\pi: P \to M$, both defined over the same manifold $M$. We denote local coordinates on $P$ in the form $(x^i, u^A)$ and write $\{e_\alpha\}$ for a local basis of sections of $E$. For our purposes, we can consider that $P = E$. The anchor map $\rho: E \to TM$ and the tangent map $T\pi: TP \to TM$ can be used to construct, for each $s \in P$ with $\pi(s) = x$, the subset
$$\mathcal{T}^E_s P := \{ (b, v) \in E_x \times T_s P : \rho(b) = T_s\pi(v) \}.$$
Globalizing the construction, we obtain another Lie algebroid, $\mathcal{T}^E P := \bigcup_{s \in P} \mathcal{T}^E_s P$, which is called the prolongation of $E$ over $\pi$. Equivalently, $\mathcal{T}^E P$ is called the E-tangent bundle to $\pi$; it is also a vector bundle over $P$, with projection $\tau^E_P$ just onto the first factor. It is also possible to define the projection onto the second factor (i.e. a morphism of Lie algebroids over $\pi$), $\mathcal{T}\pi: \mathcal{T}^E P \to E$. Sections $\{\mathcal{X}_\alpha, \mathcal{V}_A\}$ define a local basis of sections of $\mathcal{T}^E P$. In explicit form, such bases can be parametrized as $\mathcal{X}_\alpha = \mathcal{X}_\alpha(p) = \left( e_\alpha(\pi(p)),\ \rho^i_\alpha\,\partial_i|_p \right)$ and $\mathcal{V}_A = \left( 0,\ \partial_A|_p \right)$, where the partial derivatives are taken at a point $p \in S_x$. The Lie algebroid structure of $\mathcal{T}^E P$ is stated by the anchor map $\rho^\pi(Z) = \rho^i_\alpha Z^\alpha\,\partial_i + V^A\,\partial_A$, acting on sections $Z$ with associated decompositions of type $z = Z^\alpha \mathcal{X}_\alpha + V^A \mathcal{V}_A$, and by the Lie brackets
$$\lfloor \mathcal{X}_\alpha, \mathcal{X}_\beta \rfloor^\pi = C^\gamma_{\ \alpha\beta}\, \mathcal{X}_\gamma, \qquad \lfloor \mathcal{X}_\alpha, \mathcal{V}_B \rfloor^\pi = 0, \qquad \lfloor \mathcal{V}_A, \mathcal{V}_B \rfloor^\pi = 0.$$
Using the dual bases $\mathcal{X}^\alpha, \mathcal{V}^B$, we can perform an exterior differential calculus following the formulas $d\mathcal{X}^\gamma = -\frac{1}{2} C^\gamma_{\ \alpha\beta}\, \mathcal{X}^\alpha \wedge \mathcal{X}^\beta$ and $d\mathcal{V}^A = 0$; we shall write $d$ for the absolute differential.

Let us consider $P = E$, so that $\mathcal{T}^E P$ is the prolongation Lie algebroid of the bundle projection $\tau: E \to M$. We can formulate a mechanical model for a Lagrangian function $L \in C^\infty(E)$ and choose a vertical endomorphism $S: Sec(\mathcal{T}^E E) \to Sec(\mathcal{T}^E E)$. A model of Lie algebroid mechanics for a Lagrangian $L$ can be geometrized on $\mathcal{T}^E E$ in terms of three geometric objects (6): the Cartan 1-section $\theta_L := S^*(dL)$, the Cartan 2-section $\omega_L := -d\theta_L$, and the Lagrangian energy $E_L := \mathcal{L}_\Delta L - L$, where the Lie derivative (2) along the Liouville section $\Delta$ is considered in the last formula. Using these variables, the dynamical equations derived for $L$ can be geometrized as
$$i_{\Gamma}\,\omega_L = dE_L. \tag{7}$$
Such geometric equations define equivalently a regular Lagrange mechanics if $\omega_L$ is regular at every point as a bilinear form, i.e. if it is a symplectic section. For configurations with regular $L$ and $\omega_L$, there exists a unique solution $\Gamma_L$ and a form $\Omega_L$ satisfying the condition $i_{\Gamma_L}\Omega_L = dE_L$.
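For orientation, the following display (our addition, covering only the familiar special case $E = TM$, where $\rho^i_\alpha = \delta^i_\alpha$ and $C^\gamma_{\alpha\beta} = 0$) records what the Cartan sections and the symplectic equation reduce to:

```latex
% Special case E = TM: the Cartan sections reduce to the classical
% Poincaré–Cartan data of Lagrangian mechanics,
\[
\theta_L = \frac{\partial L}{\partial y^i}\, dx^i, \qquad
\omega_L = -\,d\theta_L, \qquad
E_L = y^i \frac{\partial L}{\partial y^i} - L ,
\]
% and the symplectic equation i_{\Gamma_L}\omega_L = dE_L recovers the
% classical Euler–Lagrange equations
\[
\frac{d}{d\tau}\Big(\frac{\partial L}{\partial y^i}\Big)
  - \frac{\partial L}{\partial x^i} = 0,
\qquad
y^i = \frac{dx^i}{d\tau}.
\]
```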
From equations (7), we obtain $i_{S\Gamma_L}\omega_L = i_\Delta\,\omega_L$. This states that $S(\Gamma_L) = \Delta$ (equivalently, $T\tau(\Gamma_L(a)) = a$, $\forall a \in E$), which constrains $\Gamma_L$ to be a SODE (second order differential equation) section, or semispray. Taking $\Omega_L = \omega_L$, for $P = E$, we can write the last equation as a symplectic equation (8) for $\Gamma_L \in Sec(\mathcal{T}^E E)$.

The above geometric objects (6) and equations (7) can be written in coefficient form. We introduce local coordinates $(x^i, y^\alpha) \in E$ for Lie algebroid structure functions $(\rho^i_\alpha, C^\gamma_{\ \alpha\beta})$ and choose a basis $\{\mathcal{X}_\alpha, \mathcal{V}_\alpha\}$. As a vertical endomorphism (equivalently, tangent structure) we can use the operator $S := \mathcal{X}^\alpha \otimes \mathcal{V}_\alpha$. The Euler-Lagrange section associated with $L$ is given by $\Gamma_L = y^\alpha \mathcal{X}_\alpha + \varphi^\alpha \mathcal{V}_\alpha$, where the functions $\varphi^\alpha(x^i, y^\beta)$ solve the system of linear equations
$$\frac{\partial^2 L}{\partial y^\alpha \partial y^\beta}\,\varphi^\beta + \frac{\partial^2 L}{\partial x^i \partial y^\alpha}\,\rho^i_\beta\, y^\beta + C^\gamma_{\ \alpha\beta}\, y^\beta \frac{\partial L}{\partial y^\gamma} - \rho^i_\alpha \frac{\partial L}{\partial x^i} = 0.$$
The condition of regularity is equivalent to the non-degeneracy of the Hessian
$$\varpi_{\alpha\beta} := \frac{1}{2}\frac{\partial^2 L}{\partial y^\alpha \partial y^\beta}. \tag{10}$$
For regular configurations, the semi-spray coefficients $\varphi^\alpha$ can be expressed algebraically through the inverse Hessian $\varpi^{\alpha\beta}$, where $\varpi^{\alpha\beta}$ is inverse to $\varpi_{\alpha\beta}$. If the condition $\lfloor \Delta, \Gamma_L \rfloor^E = \Gamma_L$ is satisfied, the section $\Gamma_L$ becomes a spray, which states that the functions $\varphi^\beta$ are homogeneous of degree 2 in $y^\beta$. A curve $c(\tau) = (x^i(\tau), y^\alpha(\tau)) \in E$, for a real parameter $\tau$, defines a solution of the Euler-Lagrange equations for $L$ if
$$\frac{dx^i}{d\tau} = \rho^i_\alpha\, y^\alpha, \qquad \frac{d}{d\tau}\left( \frac{\partial L}{\partial y^\alpha} \right) + C^\gamma_{\ \alpha\beta}\, y^\beta \frac{\partial L}{\partial y^\gamma} = \rho^i_\alpha \frac{\partial L}{\partial x^i}. \tag{12}$$
Similarly to the model of Lagrange mechanics on Lie algebroids defined by equations (7) and (12), it is possible to elaborate Hamilton/symplectic geometrizations; see details in [25,13]. However, in both cases, it is not clear how versions of the Perelman functionals for geometric flows should be constructed if we restrict our considerations only to Cartan's symplectic forms and the Lagrangian energy (6) and the related equations (7).

Lagrangians on Lie Algebroids & N-Connections

In order to elaborate Lagrange-Ricci evolution models on $TM$ and nonholonomic manifolds, we used in [9,10] a geometrization of mechanics in terms of canonical nonlinear and linear connections defined by a regular Lagrangian $L$. This section is devoted to a brief introduction to the geometry of nonlinear connections on Lie algebroids; see the former constructions [16,18,19].

N-connections and prolongations of Lie algebroids

A nonlinear connection (N-connection) structure for a vector bundle $P$ [27] can be defined as a Whitney sum $\mathbf{N}: TP = hTP \oplus vTP$. A couple $\mathbf{P} := (P, \mathbf{N})$ is called a nonholonomic vector bundle (equivalently, a vector N-bundle, with conventional horizontal, h, and vertical, v, splitting/decomposition). N-connections can be similarly introduced on prolongation Lie algebroids via a corresponding h-v-splitting,
$$\mathcal{N}: \mathcal{T}^E P = h\mathcal{T}^E P \oplus v\mathcal{T}^E P. \tag{13}$$
Such a bundle (and Lie algebroid) morphism $\mathcal{N}: \mathcal{T}^E P \to \mathcal{T}^E P$, with $\mathcal{N}^2 = \mathrm{id}$, defines an almost product structure for the projection $\pi: TP \to P$, i.e. for a smooth map on $TP \setminus \{0\}$, where $\{0\}$ denotes the set of null sections. An N-connection induces h- and v-projectors for every element $z = (p, b, v) \in \mathcal{T}^E P$, with $h(z) = hz$ and $v(z) = vz$, for $h = \frac{1}{2}(\mathrm{id} + \mathcal{N})$ and $v = \frac{1}{2}(\mathrm{id} - \mathcal{N})$. These operators define, respectively, the h- and v-subspaces, $h\mathcal{T}^E P = \ker(\mathrm{id} - \mathcal{N})$ and $v\mathcal{T}^E P = \ker(\mathrm{id} + \mathcal{N})$. Such structures on $TP$ and $\mathcal{T}^E P$ are compatible if $N^A_{\ \alpha} = N^A_{\ i}\,\rho^i_\alpha$. Using $N^A_{\ \alpha}$, we can generate sections $\delta_\alpha := \mathcal{X}_\alpha - N^A_{\ \alpha}\,\mathcal{V}_A$ as a local basis of $h\mathcal{T}^E P$. In general, this allows us to define an N-adapted frame structure $\mathbf{e}_{\overline\alpha} := (\delta_\alpha, \mathcal{V}_A)$ (15) and its dual $\mathbf{e}^{\overline\beta} := (\mathcal{X}^\alpha, \boldsymbol{\delta}^B := \mathcal{V}^B + N^B_{\ \alpha}\,\mathcal{X}^\alpha)$ (16), where the "overlined" small Greek indices split in the form $\overline\alpha = (\alpha, A)$ if an arbitrary vector bundle $P$ is considered, or $\overline\alpha = (\alpha, \alpha)$ if $P = E$. The N-adapted bases (15) satisfy nonholonomy relations; the nontrivial anholonomy coefficients entering these formulas define, by definition, the curvature of the N-connection $N^A_{\ \alpha}$.
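Before turning to d-connections, a worked illustration of the Euler-Lagrange equations (12) above may be useful (our addition, a standard special case): take $M$ to be a point, so that $E$ reduces to the Lie algebra $so(3)$ with $\rho = 0$ and $C^\gamma_{\alpha\beta} = \epsilon_{\alpha\beta\gamma}$; for the kinetic Lagrangian of a free rigid body, equations (12) become Euler's equations:

```latex
% M = point, E = so(3): structure constants C^γ_{αβ} = ε_{αβγ}, anchor ρ = 0.
% Kinetic Lagrangian of a free rigid body with principal inertias I_1, I_2, I_3:
\[
L(y) = \tfrac{1}{2}\left( I_1 (y^1)^2 + I_2 (y^2)^2 + I_3 (y^3)^2 \right).
\]
% Equations (12) reduce to the Euler–Poincaré form
% d/dτ(∂L/∂y^α) + C^γ_{αβ} y^β ∂L/∂y^γ = 0, i.e. Euler's rigid body equations:
\[
I_1 \dot{y}^1 = (I_2 - I_3)\, y^2 y^3, \quad
I_2 \dot{y}^2 = (I_3 - I_1)\, y^3 y^1, \quad
I_3 \dot{y}^3 = (I_1 - I_2)\, y^1 y^2 .
\]
```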
It should be noted that for $P = E$, the above formulas for the Lie d-algebroid $\mathcal{T}^E E$ mimic, on sections of $E$, the geometry of tangent bundles and/or nonholonomic manifolds of even dimension endowed with N-connection structure (on applications in modern classical and quantum gravity, with various modifications, and in nonholonomic Ricci flow theory, see Refs. [14,9,11]). If $P \neq E$, we model nonholonomic vector bundle and generalized Riemannian geometries on sections of $\mathcal{T}^E P$.

Linear connections and metrics on $\mathcal{T}^E P$

The Levi-Civita connection ${}^\varpi\nabla$ (4) on $E$ is not adapted to an N-connection structure on $\mathcal{T}^E P$. We have to introduce other classes of linear connections which involve the h-v-splitting of $\mathcal{T}^E P$.

Definition 3.2. A distinguished connection (d-connection) $\mathbf{D}$ on $\mathcal{T}^E P$ is a linear connection preserving under parallelism the N-connection (13).

Using the rules of absolute differentiation (5) for the N-adapted bases $\mathbf{e}_{\overline\alpha} := \{\delta_\alpha, \mathcal{V}_A\}$ and $\mathbf{e}^{\overline\beta} := \{\mathcal{X}^\alpha, \boldsymbol{\delta}^B\}$ and the d-connection 1-form $\Gamma^{\overline\gamma}_{\ \overline\alpha} := \Gamma^{\overline\gamma}_{\ \overline\alpha\overline\beta}\,\mathbf{e}^{\overline\beta}$, we can compute the torsion and curvature 2-forms on $\mathcal{T}^E P$. Let us consider sections $x, y, z$ of $\mathcal{T}^E P$, where (for instance) $z = z^{\overline\alpha}\mathbf{e}_{\overline\alpha} = z^\alpha \delta_\alpha + z^A \mathcal{V}_A$. The torsion of a d-connection $\mathbf{D}$, $\mathcal{T}(x, y) = \mathbf{D}_x y - \mathbf{D}_y x - \lfloor x, y \rfloor^\pi$, considered as a 2-form, is
$$\mathcal{T}^{\overline\alpha} := \mathbf{D}\mathbf{e}^{\overline\alpha} = d\mathbf{e}^{\overline\alpha} + \Gamma^{\overline\alpha}_{\ \overline\beta} \wedge \mathbf{e}^{\overline\beta}. \tag{17}$$
Following a straightforward N-adapted differential form calculus, we can prove explicit formulas for the N-adapted torsion coefficients. The curvature of $\mathbf{D}$, $\mathcal{R}(x, y)z := \left( \mathbf{D}_x \mathbf{D}_y - \mathbf{D}_y \mathbf{D}_x - \mathbf{D}_{\lfloor x, y\rfloor^\pi} \right) z$, can also be considered/computed as a 2-form, with coefficients
$$R^{\overline\alpha}_{\ \overline\beta\overline\gamma\overline\delta} = \mathbf{e}_{\overline\delta}\,\Gamma^{\overline\alpha}_{\ \overline\beta\overline\gamma} - \mathbf{e}_{\overline\gamma}\,\Gamma^{\overline\alpha}_{\ \overline\beta\overline\delta} + \Gamma^{\overline\varphi}_{\ \overline\beta\overline\gamma}\,\Gamma^{\overline\alpha}_{\ \overline\varphi\overline\delta} - \Gamma^{\overline\varphi}_{\ \overline\beta\overline\delta}\,\Gamma^{\overline\alpha}_{\ \overline\varphi\overline\gamma} + \Gamma^{\overline\alpha}_{\ \overline\beta\overline\varphi}\, W^{\overline\varphi}_{\ \overline\gamma\overline\delta}. \tag{19}$$
This results in a proof of the corresponding component formulas. We note that in the first two of those formulas the terms $L^\alpha_{\ \varepsilon\varphi} C^\varphi_{\ \beta\gamma}$ and $L^A_{\ B\varphi} C^\varphi_{\ \beta\gamma}$, respectively, vanish for a trivial Lie algebroid commutator structure, i.e. when $C^\varphi_{\ \beta\gamma} = 0$. In such a case, the geometry of $\mathcal{T}^E P$ endowed with an N-connection structure $N^C_{\ \gamma}$ mimics a similar one for the associated vector bundle $P$ with a nontrivial $N^A_{\ i}$. Using prolongations of Lie algebroids over fibration maps, we model tangent bundle geometries, but not in a completely equivalent form, because there are differences in the chosen nonholonomic structures and in the torsions and curvatures of d-connections.

Proof. The formulas for the h-v-components (20) are respective contractions of the coefficients (19).

Definition 3.3. A metric structure on $\mathcal{T}^E P$ is defined by a nondegenerate symmetric second rank tensor $\mathbf{g} = \{\mathbf{g}_{\overline\alpha\overline\beta}\}$. Such a tensor is called a distinguished metric (d-metric) if its coefficients are defined with respect to tensor products of the N-adapted frames (16),
$$\mathbf{g} = g_{\alpha\beta}\,\mathcal{X}^\alpha \otimes \mathcal{X}^\beta + g_{AB}\,\boldsymbol{\delta}^A \otimes \boldsymbol{\delta}^B. \tag{21}$$
We can define the inverse d-metric $\mathbf{g}^{\overline\alpha\overline\beta}$ and the inverse N-adapted h-metric $g^{\alpha\beta}$ and v-metric $g^{AB}$ by inverting, respectively, the matrix $\mathbf{g}_{\overline\alpha\overline\beta}$ and its block components $g_{\alpha\beta}$ and $g_{AB}$. The scalar curvature ${}_s R$ of $\mathbf{D}$ is, by definition, ${}_s R := \mathbf{g}^{\overline\alpha\overline\beta}\,\mathbf{R}_{\overline\alpha\overline\beta}$ (22). Using (20) and (22), we can compute the Einstein tensor of $\mathbf{D}$, $\mathbf{E}_{\overline\alpha\overline\beta} := \mathbf{R}_{\overline\alpha\overline\beta} - \frac{1}{2}\,\mathbf{g}_{\overline\alpha\overline\beta}\,{}_s R$. Such a tensor can be used for modelling effective gravity theories on sections of $\mathcal{T}^E P$ with nonholonomic frame structure [14,15,18,19].

Proof. It follows from a straightforward computation when the coefficients of the d-metric $\mathbf{g}_{\overline\alpha\overline\beta}$ (21) are introduced into $\mathbf{D}_y \mathbf{g} = 0$, for $y = y^{\overline\alpha}\mathbf{e}_{\overline\alpha} = y^\alpha \delta_\alpha + y^A \mathcal{V}_A$.

In this paper, we shall work with two "preferred" linear connections completely defined by a d-metric structure $\mathbf{g}$ on $\mathcal{T}^E P$. There is a canonical d-connection $\widehat{\mathbf{D}}$ for which $\widehat{\mathbf{D}}\mathbf{g} = 0$ and the h- and v-torsions (17) are prescribed, respectively, with coefficients $\widehat{T}^\alpha_{\ \beta\gamma} = C^\alpha_{\ \beta\gamma}$ and $\widehat{T}^A_{\ BC} = 0$, computed with respect to N-adapted frames.

Proof.
We can check by straightforward computations that the conditions of this theorem are satisfied if and only if $\widehat{\mathbf{D}}$ is taken with corresponding N-adapted coefficients (24). The nontrivial torsion values of $\widehat{\mathbf{D}}$, i.e. the N-adapted coefficients $\widehat{T}^\alpha_{\ \beta\gamma}$, $\widehat{T}^\alpha_{\ \beta A}$, $\widehat{T}^A_{\ \beta\alpha}$ and $\widehat{T}^A_{\ B\alpha}$, are computed by introducing the canonical d-connection coefficients (24) into formulas (17).

Theorem 3.4 (-Definition). There is a metric compatible Levi-Civita connection $\nabla$ which is completely defined by a d-metric structure $\mathbf{g}$ on $\mathcal{T}^E P$ following the condition of zero torsion. Its coefficients can be defined with respect to N-adapted frames for the same d-metric structure $\mathbf{g}$ used for constructing $\widehat{\mathbf{D}}$ (24), but with the additional constraint that all torsion coefficients (17) are zero. We can verify via straightforward computations with respect to (15) and (16) that the condition of the theorem is satisfied by a distortion relation
$$\nabla = \widehat{\mathbf{D}} + \widehat{\mathbf{Z}}, \tag{25}$$
where the distortion tensor $\widehat{\mathbf{Z}} = \{\widehat{Z}^{\overline\gamma}_{\ \overline\alpha\overline\beta}\}$ is given by N-adapted coefficients (26). The distortion coefficients (26) are such linear algebraic combinations of the torsion coefficients of $\widehat{\mathbf{D}}$ that the condition $\widehat{T}^{\overline\alpha}_{\ \overline\beta\overline\gamma} = 0$ is equivalent to $\widehat{Z}^{\overline\alpha}_{\ \overline\beta\overline\gamma} = 0$, and inversely. So, we can find an h-v-decomposition for which $\Gamma^{\overline\alpha}_{\ \overline\beta\overline\gamma} = \widehat{\Gamma}^{\overline\alpha}_{\ \overline\beta\overline\gamma}$ even though, in general, $\nabla \neq \widehat{\mathbf{D}}$; such connections are subjected to different rules of frame/coordinate transforms on $\mathcal{T}^E P$. We emphasize that $\nabla$ is not a d-connection and does not preserve under parallelism the N-connection structure. Nevertheless, all geometric data for $(\mathbf{g}, \nabla)$ can be transformed equivalently into similar data for $(\mathbf{g}, \widehat{\mathbf{D}}, \mathcal{N})$, since $\mathbf{g}$ and $\mathcal{N}$ define a unique N-adapted splitting $\nabla = \widehat{\mathbf{D}} + \widehat{\mathbf{Z}}$.

Corollary 3.2. Any metric $\mathbf{g}$ on $\mathcal{T}^E P$ can be represented equivalently as a d-metric $\mathbf{g}_{\overline\alpha\overline\beta}$ (21) or, with respect to a local dual base $dz^{\overline\beta} := \{\mathcal{X}^\alpha, \mathcal{V}^B\}$, in generic off-diagonal form,
$$\mathbf{g} = \underline{g}_{\overline\alpha\overline\beta}\, dz^{\overline\alpha} \otimes dz^{\overline\beta}, \tag{27}$$
with "non-boldface" coefficients $\underline{g}_{\overline\alpha\overline\beta}$. Additionally, one should consider some quadratic relations between the coefficients, with factors $\pm 1$ fixing a local signature for the metric on $\mathcal{T}^E P$. A metric (27) is called generic off-diagonal because it cannot be diagonalized by coordinate transforms.

Remark. Introducing the canonical d-connection coefficients (24) into formulas (19), (20) and (22), we compute, respectively, the coefficients of the curvature $\widehat{R}^{\overline\alpha}_{\ \overline\beta\overline\gamma\overline\delta}$, the Ricci tensor $\widehat{R}_{\overline\alpha\overline\beta}$ and the scalar curvature ${}_s\widehat{R}$. The distortions $\Gamma = \widehat{\Gamma} + \widehat{Z}$ (25) allow us to compute the distorting tensors ($\widehat{Z}^{\overline\alpha}_{\ \overline\beta\overline\gamma\overline\delta}$, $\widehat{Z}_{\overline\alpha\overline\beta}$ and ${}_s\widehat{Z}$), resulting in similar values for the (pseudo) Riemannian geometry on $\mathcal{T}^E P$ determined by $(\mathbf{g}, \nabla)$, i.e. allowing us to define $R^{\overline\alpha}_{\ \overline\beta\overline\gamma\overline\delta}$, $R_{\overline\beta\overline\gamma}$ and ${}_s R$. We do not present all technical details and component formulas for the geometric objects outlined in the above Remark. As an example, we provide the distortion relations for the Ricci tensor,
$$R_{\overline\alpha\overline\beta} = \widehat{R}_{\overline\alpha\overline\beta} + \widehat{Z}_{\overline\alpha\overline\beta}. \tag{29}$$
Such values are defined with respect to the N-adapted bases (15) and (16). Using frame transforms (28) and their duals, computed as inverse matrices $(e^{\ \alpha}_{\alpha'})^{-1}$, we can re-define the coefficients with respect to coordinate bases. Coordinate formulas are important in the theory of Ricci flows (allowing simplified proofs of a number of important results on geometric evolution) and for constructing, in explicit form, exact solutions in geometric mechanics and analogous gravity. Finally, we note that all values on prolongation Lie algebroids are uniquely determined by a d-metric $\mathbf{g}_{\overline\alpha\overline\beta}$ (21) (equivalently, by a generic off-diagonal $\underline{g}_{\overline\alpha\overline\beta}$ (27)) for a prescribed $N^B_{\ \alpha}$ (13). Elaborating a physical dynamical/evolution model for $(\mathbf{g}, \nabla)$, the same theory can be described in terms of the data $(\mathbf{g}, \widehat{\mathbf{D}}, \mathcal{N})$.
This property allows us to simplify, for instance, the proofs of the main results for Ricci flows on Lie algebroids, using similar results for (pseudo) Riemannian metrics and then nonholonomically transforming the constructions into the evolution of N-adapted values.

An extension of the Kern-Matsumoto approach for algebroid mechanics & gravity

Let us consider an alternative (second) approach to the geometrization of regular Lagrange mechanics [16,17] on Lie algebroids. All constructions are described in terms of generalized metrics, adapted frames and N- and d-connections. This is different from the Cartan variables (6) and equations (8) considered in section 2. The goal of this section is to show that there is a setting in which the canonical N- and d-connections and the d-metric on $\mathcal{T}^E E$, for $P = E$, considered in the previous section, are derived from a regular Lagrangian via the solutions of the Euler-Lagrange equations (12).

Lemma. There is an N-connection ${}^q\mathcal{N} := -\mathcal{L}_q S$ defined by a semispray $q = y^\alpha \mathcal{X}_\alpha + q^\alpha \mathcal{V}_\alpha$ and the Lie derivative $\mathcal{L}_q$, acting on any $X \in Sec(\mathcal{T}^E E)$ following the formula ${}^q\mathcal{N}(X) = -\lfloor q, SX \rfloor^\pi + S\lfloor q, X \rfloor^\pi$.

Theorem 3.5. Any regular Lagrangian $L \in C^\infty(E)$ defines a canonical N-connection $\widetilde{\mathcal{N}}$ on the prolongation Lie algebroid $\mathcal{T}^E E$, determined by semi-spray configurations encoding the solutions of the Euler-Lagrange equations (12).

Proof. It is a straightforward consequence of the above Lemma and (11).

The geometric data and dynamics of the symplectic equations (8) for the Cartan variables (6) can be encoded equivalently into a metric compatible geometry on a prolongation Lie algebroid.

Theorem. The geometric data $(\widetilde{\mathcal{N}}, \widetilde{\mathbf{g}}, \widetilde{\mathbf{D}})$ on $\mathcal{T}^E E$, for $\pi: E \to M$, with prescribed algebroid structure functions $\rho^i_\alpha(x^k)$ and $C^\gamma_{\ \beta\alpha}(x^k)$, are canonically determined by a regular Lagrangian $L \in C^\infty(E)$.

Proof. It follows from the following key steps in the definition of the fundamental geometric objects. Using $L(x, y)$, we construct the canonical N-connection $\widetilde{\mathcal{N}} = \{\widetilde{N}^\gamma_{\ \alpha}\}$ (30) and the induced N-adapted frames (15) and (16), respectively. At the next step, we construct a total metric of type (21), $\varpi \to \widetilde{\mathbf{g}}$, as a Sasaki lift (32) of the Hessian $\varpi_{\alpha\beta}$ (10). Introducing the coefficients of the d-metric (32) into formulas (24), we compute the coefficients of the canonical d-connection $\widetilde{\mathbf{D}}$ induced by $L$.

In general, we can use arbitrary frames of reference on $\mathcal{T}^E E$, with $e_{\gamma'} = e^{\ \gamma}_{\gamma'}\,\widetilde{e}_\gamma$ for any $\widetilde{e}_\gamma$ (31). Any N-connection and/or metric structure $(\mathcal{N}, \mathbf{g})$ can be related to some canonical data $(\widetilde{\mathcal{N}}, \widetilde{\mathbf{g}})$ determined by a regular Lagrangian: it is necessary to solve an algebraic quadratic system of equations in order to define $e^{\ \alpha}_{\alpha'}$ from some prescribed data $g_{\alpha'\beta'}$ and $\widetilde{g}_{\alpha\beta}$. Inversely, any off-diagonal metric $\underline{g}_{\overline\alpha\overline\beta}$ (27) can be transformed via N-adapted frame transforms (28) into a d-metric $\mathbf{g}_{\overline\alpha\overline\beta}$ (21) (for a prescribed $L$, parametrized in the form $\widetilde{\mathbf{g}}$ (32)); thus we can model analogous gravity theories on algebroids as effective Lagrange models. Following different approaches, algebroid models for analogous gravity and matter field interactions are studied in Refs. [14,15,18,19,26]. One of the most important problems for such theories is to provide a physical motivation for the type of linear connection which should be chosen for constructing such models. For certain classes of smooth functions, such a Claim can be proven using theorems on the decoupling and integration of the Einstein-Yang-Mills-Higgs equations [14,31]. Nevertheless, this Claim cannot be proven for all possible types of Lie algebroid configurations.
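For comparison, the following sketch (our addition) records the analogous canonical data on $E = TM$; on prolongation Lie algebroids, the formulas (30) and (32) are modified by the structure functions $\rho^i_\alpha$ and $C^\gamma_{\alpha\beta}$:

```latex
% On E = TM, a regular L(x, y) induces (Kern; Lagrange-space geometry):
% the Hessian, the canonical semi-spray and the canonical N-connection
\[
\varpi_{ij} = \frac{1}{2}\frac{\partial^2 L}{\partial y^i \partial y^j}, \qquad
G^i = \frac{1}{4}\,\varpi^{ik}\!\left( y^j \frac{\partial^2 L}{\partial y^k \partial x^j}
      - \frac{\partial L}{\partial x^k} \right), \qquad
N^i_{\ j} = \frac{\partial G^i}{\partial y^j},
\]
% and the Sasaki-type lift of the Hessian to a d-metric on the total space,
\[
\mathbf{g} = \varpi_{ij}\, dx^i \otimes dx^j
           + \varpi_{ij}\, \mathbf{e}^i \otimes \mathbf{e}^j, \qquad
\mathbf{e}^i = dy^i + N^i_{\ j}\, dx^j .
\]
```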
The gravitational and matter field equations on different curved spaces, including constructions with Lie algebroids, are very sophisticated nonlinear systems of partial differential equations. In general, such systems may have various stochastic, fractional, chaotic etc. properties. This gives us a reason to argue, following our experience, that a chosen class of Cauchy-type and/or stochastic etc. flows can be modelled by a corresponding effective Lagrange dynamics/evolution of Lie algebroid configurations. We cannot prove that all physically important cases can be described via such models, and it is not possible to state uniqueness criteria, completeness of solutions etc.

Lagrange-Ricci Evolution and Lie Algebroids

Following the Kern-Matsumoto geometrization of regular Lagrange mechanics and analogous gravity models on Lie algebroids, we can consider the problem of geometric flow evolution of such systems as an explicit example of a theory of Ricci flows on nonholonomic manifolds, as we stated in Refs. [9,10,11]. The goal of this section is to prove that Lagrange-Ricci flows on $\mathcal{T}^E E$ can be encoded into a model of gradient nonholonomic flows.

We can formulate an evolution model for a family of geometric data $\underline{g}(\tau), \nabla(\tau)$ on $\mathcal{T}^E E$ induced by a family of regular $L(\tau) \in C^{\infty}(E)$ with a flow parameter $\tau \in [-\epsilon, \epsilon] \subset \mathbb{R}$, where $\epsilon > 0$ is taken sufficiently small. Let us introduce on the space of $\mathrm{Sec}(E)$, for $\pi: E \to M$, $\dim E = n + m$ and $\dim M = n \geq 2$, the functionals $\mathcal{F}$ and $\mathcal{W}$ (34), where the volume form $dV$ and scalar curvature $R$ are determined by an off-diagonal metric $\underline{g}_{\alpha\beta}$ (27). The integration is taken over $V \subset \mathcal{T}^E E$, $\dim V = 2m$, corresponding to sections over a $U \subset M$. We can fix $\int_{V} dV = 1$, with $\mu = (4\pi\tau)^{-m} e^{-f}$, considering necessary classes of frame transforms and a parameter $\tau > 0$.

The Ricci flow evolution derived from (34) in the variables $(\underline{g}, \nabla)$ is a standard theory for Riemannian metrics [1,2,3,4,5], but restricted to the conditions that such metrics are induced by regular Lagrangians. The evolution in such variables is not adapted to an N-connection structure (13). It is possible to elaborate N-adapted scenarios if the above Perelman functionals are re-defined in terms of the geometric data $(\widetilde{\mathbf{g}}, \widehat{D})$ and the derived flow equations are considered in N-adapted variables. Both approaches are equivalent if the distortion relations $\nabla = \widehat{D} + \widehat{Z}$ (25) are considered for the same family of metrics, $\underline{g}(\tau) = \widetilde{g}(\tau)$, computed for the same set $L(\tau)$.

The theory of Lagrange-Ricci flows on $\mathcal{T}^E E$ is formulated as a model of evolving nonholonomic dynamical systems on the space of equivalent geometric data $(L: \underline{g}, \nabla)$ and/or $(L: \widetilde{\mathbf{g}}, \widehat{D})$ when the functionals $\mathcal{F}$ and $\mathcal{W}$ are postulated to be of Lyapunov type. Ricci flat configurations (the Ricci tensor can be computed for one of the connections $\nabla$ or $\widehat{D}$) are defined as "fixed" points in $\tau$ of the corresponding dynamical systems. We use $\widehat{\tau} = {}^{h}\tau = {}^{v}\tau$ for a couple of possible h- and v-flow parameters, $\tau = ({}^{h}\tau, {}^{v}\tau)$, and introduce a new function $\widehat{f}$ instead of $f$. The scalar functions are re-defined in such a form that the "sub-integral" formula in (34), under the distortion of the Ricci tensor (29), is re-written in terms of geometric objects derived for the canonical d-connection. For the second functional, with $\widehat{D} = ({}^{h}D, {}^{v}D)$, we re-scale $\tau \to \widehat{\tau}$ and write the result for some $\widehat{\Phi}$ and $\widehat{\Phi}_{1}$ for which $\int_{V} \widehat{\Phi}\, dV = 0$ and $\int_{V} \widehat{\Phi}_{1}\, dV = 0$.
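For orientation, with the normalization indicated above ($\dim V = 2m$, $\mu = (4\pi\tau)^{-m} e^{-f}$, $\int_{V}\mu\, dV = 1$), Perelman-type functionals of the form referred to as (34) would read as follows; this is a sketch following the standard Riemannian definitions, transcribed to the present setting:

$$
\mathcal{F}(\underline{g}, \nabla, f) = \int_{V} \big({}^{s}R + |\nabla f|^{2}\big)\, e^{-f}\, dV,
\qquad
\mathcal{W}(\underline{g}, \nabla, f, \tau) = \int_{V} \big[\tau\,({}^{s}R + |\nabla f|^{2}) + f - 2m\big]\, \mu\, dV.
$$

The N-adapted versions are obtained by substituting the distortions $\nabla = \widehat{D} + \widehat{Z}$ and $({}^{s}R,\, R_{\beta\gamma}) \to ({}^{s}\widehat{R} + {}^{s}\widehat{Z},\, \widehat{R}_{\beta\gamma} + \widehat{Z}ic_{\beta\gamma})$.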
These distortion rewritings provide a proof for:

Lemma 4.1. Considering the distortion relations for the scalar curvature and Ricci tensor determined by $\nabla = \widehat{D} + \widehat{Z}$ (25), Perelman's functionals (34) are defined equivalently in the N-adapted variables $(L: \widetilde{\mathbf{g}}, \widehat{D})$, where the new scaling function $\widehat{f}$ satisfies $\int_{V} \widehat{\mu}\, dV = 1$ for $\widehat{\mu} = (4\pi\widehat{\tau})^{-m} e^{-\widehat{f}}$ and $\widehat{\tau} > 0$.

In this section, we omit details and proofs which are straightforward consequences of those presented in [3,4,5,6]. For our constructions, we consider operators defined by $L$ via $\nabla$ on $\mathcal{T}^E E$. Using distortions to $\widehat{D}$ with $\widehat{Z}$ completely defined by $\widetilde{\mathbf{g}}$, we can study Lagrange-Ricci flows on prolongation Lie algebroids as canonical nonholonomic deformations of the Riemannian evolution on associated vector/tangent bundles. We can construct the canonical Laplacian operator, $\widehat{\Delta} := \widehat{D}\,\widehat{D}$, determined by the canonical d-connection $\widehat{D}$, a "standard" Laplace operator $\Delta = \nabla\nabla$, and consider the parameter $\tau(\chi)$, $\partial\tau/\partial\chi = -1$. For simplicity, we shall not include the normalized term. The distortion (25) results in

$$
\widehat{\Delta} = \Delta + {}^{\widehat{Z}}\Delta,\qquad {}^{\widehat{Z}}\Delta = \widehat{Z}^{\alpha}\widehat{Z}_{\alpha} + \big[\widehat{D}^{\alpha}(\widehat{Z}_{\alpha}) + \widehat{Z}^{\alpha}\widehat{D}_{\alpha}\big];\qquad (37)
$$
$$
R_{\beta\gamma} = \widehat{R}_{\beta\gamma} + \widehat{Z}ic_{\beta\gamma},\qquad
{}^{s}R = {}^{s}\widehat{R} + g^{\beta\gamma}\,\widehat{Z}ic_{\beta\gamma} = {}^{s}\widehat{R} + {}^{s}\widehat{Z},
$$
$$
{}^{s}\widehat{Z} = g^{\beta\gamma}\,\widehat{Z}ic_{\beta\gamma} = {}^{h}\widehat{Z} + {}^{v}\widehat{Z},\qquad
{}^{h}\widehat{Z} = g^{\alpha\beta}\,\widehat{Z}ic_{\alpha\beta},\qquad
{}^{v}\widehat{Z} = g^{AB}\,\widehat{Z}ic_{AB};
$$

where, for convenience, capital indices $A, B, C, \dots$ are used to distinguish v-components even though the prolongation Lie algebroid is constructed for $P = E$. Using such deformations and a proof similar to that of Proposition 1.5.3 in [6], we obtain:

Theorem 4.1. The Lagrange-Ricci flows for $\widehat{D}$, preserving a symmetric metric structure $\widetilde{\mathbf{g}}$ and the Lie algebroid structure of the prolongation $\mathcal{T}^E E$, can be characterized by an N-adapted system of geometric flow equations for the h- and v-coefficients of the d-metric and the scaling function, together with the property that the flows preserve the symmetry of the metric.

Proof. For the distortions (37), we can redefine the scaling functions from the above Lemma in a different form. Similarly to [9,10], we can construct on $\mathcal{T}^E E$ the corresponding system of Ricci flow evolution equations for $\widehat{D}$, which can be derived from the functional $\widehat{\mathcal{F}}(\widetilde{\mathbf{g}}, \widehat{D}, \widehat{f}) = \int_{V} ({}^{s}\widehat{R} + |\widehat{D}\widehat{f}|^{2})\, e^{-\widehat{f}}\, dV$.

The conditions $\widehat{R}_{\alpha A} = 0$ and $\widehat{R}_{A\alpha} = 0$ must be imposed in order to model evolution only with symmetric metrics. We note that under Ricci flows the N-adapted frames also depend on the parameter $\chi$, following certain evolution formulas. For $TM$, such a Corollary is proven in Ref. [10]. Re-defining indices for $\mathcal{T}^E E$, those formulas can be used for the flow evolution of frames of type (15) and (16).

Finally, we discuss the statistical model which can be elaborated for Ricci flows of mechanical systems. By definition, the functional $\mathcal{W}$ is analogous to minus entropy [3], and this property was proven for metric compatible nonholonomic and Lagrange-Finsler Ricci flows [9,10] with functionals $\widehat{\mathcal{W}}$ written for $\widehat{D}$. Similar constructions can be performed on $\mathcal{T}^E E$. Let us consider a partition function $Z = \int \exp(-\beta E)\, d\omega(E)$ for the canonical ensemble at temperature $\beta^{-1}$, defined by the measure taken to be the density of states $\omega(E)$. The thermodynamical values are the average energy, $\langle E\rangle := -\partial \log Z/\partial\beta$, the entropy $S := \beta\langle E\rangle + \log Z$, and the fluctuation $\sigma := \langle (E - \langle E\rangle)^{2}\rangle = \partial^{2}\log Z/\partial\beta^{2}$.

Theorem 4.2. Any family of Lagrangians under Ricci evolution on $\mathcal{T}^E E$ is characterized by such thermodynamic values.

Proof. Similar computations, in non-N-adapted or N-adapted form, are given in [6,9,10]. On prolongation Lie algebroids, we have to use the partition function $\widetilde{Z} = \exp\big\{\int_{V} [-\widehat{f} + m]\,\widehat{\mu}\, dV\big\}$.
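Since the explicit formulas in Theorems 4.1 and 4.2 follow the pattern of Refs. [9,10] and of Perelman's Riemannian theory, the following sketch records the form they would take under those conventions; the transcription of indices to $\mathcal{T}^E E$ and the replacement $n \to 2m$ in the dimensional terms are assumptions. The N-adapted flow equations read

$$
\frac{\partial g_{\alpha\beta}}{\partial\chi} = -2\widehat{R}_{\alpha\beta},\qquad
\frac{\partial g_{AB}}{\partial\chi} = -2\widehat{R}_{AB},\qquad
\frac{\partial \widehat{f}}{\partial\chi} = -\widehat{\Delta}\widehat{f} + |\widehat{D}\widehat{f}|^{2} - {}^{s}\widehat{R},
$$

with $\widehat{R}_{\alpha A} = \widehat{R}_{A\alpha} = 0$ along the flows, and the thermodynamic values of Theorem 4.2 would be

$$
\widehat{\mathcal{E}} = -\widehat{\tau}^{2}\int_{V} \Big({}^{s}\widehat{R} + |\widehat{D}\widehat{f}|^{2} - \frac{m}{\widehat{\tau}}\Big)\widehat{\mu}\, dV,\qquad
\widehat{S} = -\int_{V} \Big[\widehat{\tau}\big({}^{s}\widehat{R} + |\widehat{D}\widehat{f}|^{2}\big) + \widehat{f} - 2m\Big]\widehat{\mu}\, dV,
$$
$$
\widehat{\sigma} = 2\widehat{\tau}^{4}\int_{V} \Big|\widehat{R}_{\alpha\beta} + \widehat{D}_{\alpha}\widehat{D}_{\beta}\widehat{f} - \frac{1}{2\widehat{\tau}}\, g_{\alpha\beta}\Big|^{2}\widehat{\mu}\, dV.
$$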
A sufficient criterion for control of generalised error rates in multiple testing

Based on the work of Romano and Shaikh (2006) and Lehmann and Romano (2005), we give a sufficient criterion for controlling generalised error rates for arbitrarily dependent p-values. This criterion is formulated in terms of matrices associated with the corresponding error rates, and thus it is possible to view the corresponding critical constants as solutions of sets of certain linear inequalities. This property can in some cases be used to improve the power of existing procedures by finding optimal solutions to an associated linear programming problem.

Introduction

Consider the problem of testing $n$ hypotheses $H_1, \dots, H_n$ simultaneously. A classical approach to dealing with the multiplicity problem is to control the familywise error rate (FWER), i.e. the probability of one or more false rejections. However, when the number $n$ of hypotheses is large, the ability to reject false hypotheses is small. Therefore, alternative type I error rates have been proposed that relax control of the FWER in order to reject more false hypotheses (for a survey, see e.g. Dudoit and van der Laan (2007)). One such generalised error rate is the $k$-FWER, i.e. the probability of $k$ or more false rejections for some integer $k \geq 1$; the requirement $k\text{-FWER} \leq \alpha$ was considered by Hommel and Hoffman (1987) and Lehmann and Romano (2005). For $k = 1$ the usual FWER is obtained.

Alternatively, instead of controlling the absolute number of false rejections, it may be desirable to control the proportion of false rejections amongst all rejected hypotheses. This ratio is called the false discovery proportion (FDP). More specifically, if $R$ denotes the number of rejected hypotheses and $V$ the number of falsely rejected hypotheses, then $\mathrm{FDP} = V/R$ (and equal to 0 if there are no rejections). For the FDP, mainly two types of control have been considered in the literature. One aim might be to control the tail probability $P(\mathrm{FDP} > \gamma) \leq \alpha$ for some user-specified value $\gamma \in [0, 1)$. This error measure has been termed $\gamma$-FDP by Lehmann and Romano (2005), and tail probability for the proportion of false positives ($TPPFP(\gamma)$) in Dudoit and van der Laan (2007). Instead of controlling a specific tail probability, the false discovery rate (FDR) requires that $\mathrm{FDR} = E(\mathrm{FDP}) \leq \gamma$, i.e. control in the mean. As Romano and Wolf (2010) point out, probabilistic control of the FDP allows one to make useful statements about the realized FDP in applications, whereas this is not possible when controlling the FDR.

Recently, a number of methods have been proposed that control these generalised error rates under various assumptions. In this paper we focus on multiple testing procedures that are based on marginal p-values and are valid for finite sample sizes under no assumptions on the type of dependency of these p-values. For the $k$-FWER and $\gamma$-FDP, step-up and step-down methods have been obtained in Romano and Shaikh (2006a,b) and Lehmann and Romano (2005). For the FDR, Benjamini and Yekutieli (2001) have shown that a rescaled version of the original step-up procedure of Benjamini and Hochberg (1995) controls the FDR under arbitrary dependencies. Guo and Rao (2008) have extended these results and have also given corresponding upper bounds for step-down FDR procedures (see Guo and Rao (2008) and the references cited therein for more details). The aim of this paper is two-fold.
First, we present a sufficient condition for control of the $k$-FWER and $\gamma$-FDP based on matrices that are associated with a specific error rate and direction of stepping. This result is mainly a rephrasing of results obtained by Romano and Shaikh (2006a,b) and Lehmann and Romano (2005). In the second step we show how the rescaled procedures introduced by Romano and Shaikh (2006a,b) and Lehmann and Romano (2005) can in some cases be improved. In particular, we introduce a linear programming approach which uses the above-mentioned matrices.

The paper is organized as follows. First, we introduce some terminology and assumptions that will be used in what follows. In section three we state the main theoretical results, which will be used in the following section to define new modified FDP-controlling procedures. Section 5 contains the proof of the main theorem. In section 6 we investigate the power of the new modified procedures in a simulation setting, and in section 7 we apply them to the analysis of empirical data. The paper concludes with a discussion.

Notation, definitions and assumptions

In this section we introduce some terminology and assumptions that will be used in the sequel. When testing hypotheses $H_1, \dots, H_n$, we assume that corresponding p-values $PV_1, \dots, PV_n$ are available. For any true hypothesis $i$ we assume that the distribution of the p-value $PV_i$ is stochastically larger than a uniform random variable, i.e. $P(PV_i \leq u) \leq u$ for all $u \in [0, 1]$.

2.1. Generalized error rates. In the following definition we introduce the sets of $k$-FWER- and FDP-controlling procedures we consider in this paper. (a) For $1 \leq k \leq n$, define the set of step-up procedures that (strongly) control the $k$-FWER. (b) For $\gamma \in [0, 1)$, define the sets of step-up and step-down procedures that (strongly) control the FDP.

In order to formulate the main results of this paper we introduce subsets of $\mathcal{C}$ that are defined by $\mathcal{F}(A) = \{x \in \mathcal{C} : \|A \cdot x\|_{\infty} \leq 1\}$, where $A \in \mathbb{R}^{n \times n}_{+}$ and $\|x\|_{\infty} = \max_{1 \leq i \leq n} |x_i|$ denotes the maximum norm. The elements of $\mathcal{F}(A)$ can be interpreted as the set of feasible points given by a set of linear constraints (inequalities). We will show in Theorem 1 that for each error rate ($k$-FWER and FDP) and direction of stepping we can define an associated matrix $A$, such that any procedure in $\alpha \cdot \mathcal{F}(A)$ controls the corresponding error rate at level $\alpha$.

Main results

First we state the main results of this paper, which will serve as the starting point for modifying some existing MTPs. As the proof in section 5 shows, it is actually a rephrasing of results of Romano and Shaikh (2006a,b) in terms of the associated matrices introduced in section 2. The theorem provides generic sufficient conditions for control of generalised error rates, i.e. if $d \in \mathcal{F}(A)$ then $\alpha \cdot d$ controls the corresponding error rate at the desired level. Since the sets $\mathcal{F}(A)$ from the theorem are convex, it follows immediately that for any matrix $A$ from Theorem 1 and level $\alpha \in (0, 1)$ the set of procedures $\alpha \cdot \mathcal{F}(A)$ is also convex.

Guo et al. (2012) have introduced the $\gamma$-$k$FDP $= P(k\mathrm{FDP} > \gamma)$, where $k\mathrm{FDP} = V/R$ ($V$ and $R$ defined as in the introduction) if $V \geq k$, and 0 else. Under the assumption that $PV_i \sim U(0, 1)$ under any true hypothesis $i$, they obtain linear bounds for the $\gamma$-$k$FDP in the proofs of their Theorems 4.1 and 4.2. These bounds can again be used to define appropriate associated matrices and establish a result similar to the above theorem for the $\gamma$-$k$FDP, but we do not pursue this any further here.

One immediate consequence of the theorem is the following corollary: for any nondecreasing $c \geq 0$ with $D := \|A \cdot c\|_{\infty} > 0$, the rescaled constants $c/D$ satisfy $c/D \in \mathcal{F}(A)$, so that the procedure $\alpha \cdot c/D$ controls the corresponding error rate at level $\alpha$.
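For illustration, the membership condition defining $\mathcal{F}(A)$ and the rescaling of the corollary can be written in a few lines of R; this is a minimal sketch, not part of the paper's code, and the matrix A below is a nonnegative placeholder only, since the actual entries are fixed by the associated-matrix definitions of section 2:

```r
## Check x in F(A): nonnegative, nondecreasing, and ||A.x||_inf <= 1.
in_F <- function(A, x) {
  all(x >= 0) && all(diff(x) >= 0) && max(A %*% x) <= 1
}
## Rescale constants by D = ||A.c||_inf (Corollary 1). Because all entries
## of A and c are nonnegative, max(A %*% c) equals the maximum norm.
rescale <- function(A, c) {
  D <- max(A %*% c)
  c / D
}
## Example with Benjamini-Hochberg-type constants c_i = i/n and a
## placeholder matrix (for real use, build A per the definitions above):
n <- 5
A <- matrix(runif(n * n), n, n)   # placeholder only
c_bh <- (1:n) / n
d <- rescale(A, c_bh)
stopifnot(in_F(A, d))             # alpha * d then controls the error rate
```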
Thus we can always achieve control of generalised error rates by using the rescaling approach. The proof of the above theorem relies on two key tools. The first is the following generalised Bonferroni inequality due to Lehmann and Romano (2005).

Lemma 1. Let $PV_1, \dots, PV_t$ be p-values of true hypotheses and let $0 = \bar c_0 \leq \bar c_1 \leq \dots \leq \bar c_m$ for some $m \leq t$. (i) Then it holds that $P\big(\bigcup_{i=1}^{m}\{PV_{(i)} \leq \bar c_i\}\big) \leq t \sum_{i=1}^{m} (\bar c_i - \bar c_{i-1})/i$. (ii) As long as the right-hand side of (4) is $\leq 1$, the bound is sharp in the sense that there exists a joint distribution for the p-values for which the inequality is an equality.

The second step uses the observation that the generalised error rates considered here can all be bounded by probabilities of the type $P\big(\bigcup_{i=1}^{M(|I|)}\{PV_{(i)} \leq c_{t_i(|I|)}\}\big)$, where $|I|$ is the number of true hypotheses, $M(|I|) \in \{0, \dots, n\}$, $t_i(|I|) \in \{0, \dots, n\}$ is an increasing sequence in $i$ (depending on $|I|$), and the $PV$ in (5) are taken under the null hypotheses. Then the probability in (5) can be bounded using Lemma 1. We call the resulting bound the LR-bound of the corresponding error rate. For the procedures considered here, adjusted p-values can be defined in the generic way described in Dudoit and van der Laan (2007) for step-up and step-down p-values. In what follows we will focus on FDP-controlling procedures.

Modified FDP-controlling procedures

In addition to providing an easily verifiable condition for FDP-controlling procedures, Theorem 1 can be used to construct new or modify existing procedures. In this section we describe an approach based on linear programming. Our focus is on improving classical procedures based on rescaled constants as considered in Romano and Shaikh (2006a,b). First we define new modified FDP procedures as the solutions of a linear programming problem.

Definition 8. Let $A \in \mathbb{R}^{n \times n}_{+}$ and $c \in \mathcal{F}(A)$. Define the modified procedure $\xi = \xi(c)$ as the solution to the following linear programming problem (P): maximise $F(\xi) = \sum_{j=1}^{n} a_j \xi_j$ subject to (1) $(A \cdot \xi)_i \leq 1$ for $i = 1, \dots, n$, (2) $\xi_{j-1} \leq \xi_j$ for $j = 1, \dots, n$, and (3) $\xi_j \geq c_j$ for $j = 1, \dots, n$, where $\xi_0 = 0$ and $a_j = \sum_{i=1}^{n} A_{ij}$.

Note that the third constraint in (P) implies that $\xi \geq c$, while the first and second constraints guarantee that $\xi \in \mathcal{F}(A)$. Note also that if $c = 0$ then $\mathcal{F}(A)$ is identical with the feasible points of the optimisation problem, so that this approach could be used to find optimal solutions within the whole class $\mathcal{F}(A)$ instead of $\mathcal{F}(A) \cap \{\xi \geq c\}$. Since we are primarily interested in improving existing procedures, we do not pursue this any further. For problems like (P), standard numerical methods like the simplex algorithm (Dantzig, 1963) are available.

From a statistical viewpoint, it would be desirable to optimise the power of the MTP (defined in a suitable sense, see also section 6), subject to the given constraints. The rationale for using the objective function $F$ is the following. Let $b_i = \sum_{j=1}^{n} A_{ij}\,\xi_j$, so that by Theorem 1 under $|I| = i$ the error rate is bounded by $b_i$; the sum $b_1 + \dots + b_n = F(\xi)$ can thus be interpreted as the sum of the maximum significance levels of the procedure. Since we are aiming for a powerful procedure, it seems plausible to optimise this objective function, in the sense that the best we can do without violating the bounds from Lemma 1 is $F(\xi) = n$. Thus $F(\xi)$ may be thought of as a surrogate measure of power. It can also be interpreted in a Bayesian framework by observing that optimising it is equivalent to optimising the mean maximum level of significance if the number of true hypotheses $|I|$ is distributed uniformly on $\{1, \dots, n\}$. Thus, if prior knowledge is available for the distribution of $|I|$, we could also use the weighted objective function $F_w(\xi) = \sum_{i=1}^{n} w_i\, b_i$ with weights $w_i = P(|I| = i)$.
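The linear program (P) is small enough to be handed directly to a solver. The following is a minimal sketch (our own, not the paper's implementation) using Rglpk_solve_LP, the R function the paper reports using for the Hedenfalk analysis in section 7.3; the matrix A and the constants c are assumed to have been computed according to the definitions above, with n ≥ 2:

```r
library(Rglpk)

modified_constants <- function(A, c) {
  n <- ncol(A)
  a <- colSums(A)                  # objective coefficients a_j = sum_i A_ij
  ## Constraint block 1: (A xi)_i <= 1 for all i   (i.e. ||A.xi||_inf <= 1)
  ## Constraint block 2: xi_j - xi_{j-1} >= 0      (monotonicity, xi_0 = 0)
  M <- diag(n)
  M[cbind(2:n, 1:(n - 1))] <- -1   # row j: -xi_{j-1} + xi_j >= 0
  ## Constraint block 3: xi_j >= c_j
  mat <- rbind(A, M, diag(n))
  dir <- c(rep("<=", n), rep(">=", n), rep(">=", n))
  rhs <- c(rep(1, n), rep(0, n), c)
  sol <- Rglpk_solve_LP(obj = a, mat = mat, dir = dir, rhs = rhs, max = TRUE)
  sol$solution                     # the modified constants xi = xi(c)
}
```

By construction the returned vector satisfies all three constraint blocks of (P), so that $\alpha$ times the solution controls the FDP by the result stated next.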
Using Theorem 1, we immediately obtain the following result: with $\xi = \xi(c)$ as defined in Definition 8, $\xi \in \mathcal{F}(A)$, and therefore the procedure $\alpha \cdot \xi$ controls the FDP for any $\alpha \in (0, 1)$. This procedure is at least as powerful as the procedure $\alpha \cdot c$. Clearly, if $F(\xi) > F(c)$, then $\xi_j > c_j$ for at least one $j$. This means that this approach will always find a strict improvement over $c$ whenever one exists, and we may thus expect a gain in power. Since, by construction, $\xi$ cannot be improved uniformly within the class $\mathcal{F}(A)$, $\alpha \cdot \xi$ can be seen as an optimal procedure within the subset $\alpha \cdot \mathcal{F}(A)$ of all $\alpha$-controlling FDP procedures.

We now consider two specific types of critical constants in more detail: (a) the Benjamini-Hochberg constants $c^{BH}$ and (b) the Romano-Shaikh constants $c^{RS}$. In Romano and Shaikh (2006a,b), normalising constants $D(\gamma)$ were introduced for $c^{BH}$ and $c^{RS}$ for step-up and step-down procedures, and due to Corollary 1 the rescaled procedures $\alpha \cdot c/D(\gamma)$ all control the $\gamma$-FDP at level $\alpha$.

Example. Figure 1 illustrates the possible gains resulting from the optimisation approach for $n = 50$ and $\gamma = 0.05$. In all cases the modified procedures are strictly better than the rescaled procedures. To investigate where the gains come from, we consider the BH-SU procedure in more detail. For the rescaled procedure $c = c^{BH}/D^{BH\text{-}SU}(\gamma)$, we have $(A^{FDP\text{-}SU}(0.05) \cdot c)_{|I|} = 1$ for $|I| = 32$. The column entries in row 32 of the matrix $A^{FDP\text{-}SU}(0.05)$ are strictly greater than zero for columns 19 to 50, and therefore the associated critical constants $c_{19}, \dots, c_{50}$ cannot be improved upon (any increase would violate the constraint $\max(A \cdot c)_{|I|} \leq 1$). However, since $A_{32,1} = \dots = A_{32,18} = 0$, there is some potential for increasing the remaining critical constants $c_1, \dots, c_{18}$. This is exactly what the optimisation in the linear program (P) accomplishes. Ideally, this would result in a new procedure $\xi$ with $(A \cdot \xi)_{|I|} = 1$ for all $|I|$, yielding a completely unimprovable procedure within the class $\mathcal{F}(A)$. This happens e.g. for $A = A^{k\text{-FWER-SD}}$, when $\xi$ is the vector of Lehmann-Romano constants, see section 5.2. However, due to the structure of the matrix $A$, this is usually impossible. In the case of BH-SD we obtain $(A \cdot \xi)_{32} = \dots = (A \cdot \xi)_{50} = 1$ (see the uppermost right panel in Figure 1).

Figure 1 suggests that the gains derived from the modifications are considerably larger for the BH than for the RS procedures. This is also supported by the numerical values in Table 1. If we follow the arguments given above for justifying the choice of objective function, we would expect the modified BH-SD procedure to be the most powerful procedure (indicated by the highest values of $F(\xi)$), followed closely by the modified RS-SD procedure. This is also consistent with the simulation results in section 6.

Proofs

In this section we prove the statements of the theorem. Actually, the main work is to rephrase the results of Romano and Shaikh (2006a,b) in terms of the matrices introduced in section 2. The structure of the proofs is the same in all cases.

5.1. Proof of Theorem 1, part (a).
• For $|I| \geq k$ and $l = n$ the coefficient equals 1, as seen from equation (8).

Proof. To prove that $\alpha \cdot \mathcal{F}(A^{k\text{-FWER-SD}}(k)) \subset S^{k\text{-FWER-SD}}(\alpha, k)$, let $d \in \mathcal{F}(A^{k\text{-FWER-SD}}(k))$, define $c = \alpha \cdot d$, and let $I \subset \{1, \dots, n\}$ be the set of true hypotheses. From the proof of Theorem 2.2 in Lehmann and Romano (2005), it follows that the $k$-FWER is bounded by a probability of type (5), and by Lemma 1 with $t = |I|$, $m = k$ and $0 = \bar c_0 = \dots = \bar c_{m-1}$, $\bar c_m = c_{n-|I|+k}$, this probability can be bounded by $|I| \cdot c_{n-|I|+k}/k = \alpha \cdot |I| \cdot d_{n-|I|+k}/k \leq \alpha$. To prove that $S^{k\text{-FWER-SD}}(\alpha, k) \subset \alpha \cdot \mathcal{F}(A^{k\text{-FWER-SD}}(k))$, we use the optimality property of the Lehmann-Romano procedure. Let $c \in S^{k\text{-FWER-SD}}(\alpha, k)$.
By Theorem 2.3 (ii) in Lehmann and Romano (2005), it follows that $d^{LR} \in \mathcal{F}(A^{k\text{-FWER-SD}}(k))$, and the claim is proved.

5.3. Proof of Theorem 1, part (c). The following lemma is a rephrasing of Lemma 4.1 in Romano and Shaikh (2006b) and states that the event $\{\mathrm{FDP} > \gamma\}$ is a subset of the union of sets of the type considered in (5).

Lemma 2. Let the notation from Definition 4 be given. Consider testing $n$ null hypotheses, with $|I| \geq 1$ of them true. Let $PV_{(1)}, \dots, PV_{(|I|)}$ denote the sorted p-values under the null hypotheses, and let $\gamma \in [0, 1)$. Then the stated inclusion holds for the step-up procedure based on the constants $c_1 \leq \dots \leq c_n \leq 1$ given at the bottom of p. 1861 in Romano and Shaikh (2006b). With $\ell = n - j$, the index set is now $\ell = 1, \dots, n$ with $m(\ell) \leq |I|$, and so we obtain the bound, where the last equality follows from the definition of $g_{|I|}$ (see Definition 4), defined on $\{1, \dots, M(|I|)\}$. Clearly, $g_{|I|}$ is non-decreasing. Since $g_{|I|}(\ell + 1) - g_{|I|}(\ell) \leq 1$ and $g_{|I|}(1) = 1$, it follows that $g_{|I|}(\ell) \leq \ell$, and we use the definition of $t_k$: let $\ell \in g_{|I|}^{-1}(\{k\})$; by the definition of $t_k$ it follows that $\ell \leq t_k(|I|)$. We thus obtain the desired inequality since $\ell \leq t_k(|I|)$. Altogether this yields the claim.

Proof of Theorem 1, part (c). Let $d \in \mathcal{F}(A^{FDP\text{-}SU}(\gamma))$, define $c = \alpha \cdot d$, and let $I \subset \{1, \dots, n\}$ be the set of true hypotheses. By Lemma 2 we have a bound by a probability of type (5), and by Lemma 1 this probability can be bounded in terms of $A = A^{FDP\text{-}SU}(\gamma)$. Equality (11) can be verified by considering the following two cases:
• If $M(|I|) = 1$, the above upper bound equals $|I| \cdot c_{t_1(|I|)}$, which is identical with (11) due to the second case in Definition 5.

Proposition 1. Let the notation from Definition 6 be given and let $c \in \mathcal{C}$. For $1 \leq |I| \leq n$ define $\beta_{\ell} = \beta_{\ell}(|I|) = c_{k_{|I|}(\ell)}$, $\ell = 1, \dots, \lfloor\gamma n\rfloor + 1$. Then the stated bound holds.

Proof. Note that $N(i)$ from Definition 6 is identical to (3.11) in Romano and Shaikh (2006a), $k_i$ corresponds to $k(s, \gamma, m, |I|)$ on p. 42 there, and $\beta_{\ell}$ defined above agrees with $\beta_{\ell}$ in (3.15) in Romano and Shaikh (2006a). As noted by Romano and Shaikh (2006a), the arguments used in the proof of Theorem 3.4 do not depend on the specific form of the original constants. This implies, as in the proof of Theorem 3.4 (bottom of p. 40 and top of p. 41), the stated inequality, where the last bound is obtained by Lemma 1.

Proof. For (a), the claim follows by direct computation, where in the first equality of the second row the convention $\sum_{\emptyset} A_i = 0$ was used. For part (b), note that if $\|A^{FDP\text{-}SD} \cdot c\|_{\infty} \leq \alpha$, then by part (a) $\max(A_{1\cdot} \cdot \beta(1)^{t}, \dots, A_{n\cdot} \cdot \beta(n)^{t}) \leq \alpha$ for $\beta_m(i) := c_{k_i(m)}$, and the claim then follows from Corollary 3.

5.5. Comments. For the step-up $k$-FWER and FDP procedures, Romano and Shaikh (2006b) have proved that the choice $D = \|A \cdot c\|_{\infty}$ (with associated matrix $A$) is the smallest possible constant one can use for rescaled procedures of the form $c/D$ while still maintaining control of the corresponding error rates. The key ingredient of their proof is part (ii) of Lemma 1. For $k$-FWER step-down procedures, Lehmann and Romano (2005, Theorem 2.3 (ii)) show that none of the Lehmann-Romano constants $c_i = k/(n + k - i)$ for $i > k$ can be improved without violating the $k$-FWER. For FDP step-down procedures, Romano and Shaikh (2006a) give an example suggesting that $D = \|A \cdot c\|_{\infty}$ is very nearly the smallest possible constant $d$ such that $c/d$ still controls the FDP, but no proof is given that this constant possesses the same optimality property as in the step-up case. The modified FDP procedures introduced in section 4, by construction, cannot be improved without violating the LR bounds, i.e. without leading to $\|A \cdot \xi\|_{\infty} > 1$.
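For concreteness, the Lehmann-Romano constants just mentioned are easy to compute; the following small sketch (our own) writes them in normalized form, so that the procedure uses $\alpha \cdot d_i$, with the constant-for-small-$i$ branch $d_i = k/n$ taken from Lehmann and Romano (2005):

```r
## Lehmann-Romano k-FWER step-down constants in normalized form:
## d_i = k/n for i <= k and d_i = k/(n + k - i) for i > k
## (the two branches agree at i = k).
lr_constants <- function(n, k) {
  i <- seq_len(n)
  ifelse(i <= k, k / n, k / (n + k - i))
}
lr_constants(n = 10, k = 2)
```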
It is, however, unclear whether violating the LR bounds in this way must also imply $P(\mathrm{FDP} > \gamma) > \alpha$. In the step-up case, the arguments given by Romano and Shaikh (2006a) depend crucially on considering only linear modifications of the original procedures. Therefore these arguments do not seem applicable to investigating whether the modified procedures from section 4 can be improved any further.

Simulation study

In this section we investigate the power of the different FDP procedures in a simulation study. We consider the following procedures:
• FDP-BH-SU and its modified variant FDP-BH-SU (mod),
• FDP-RS-SU and its modified variant FDP-RS-SU (mod),
• FDP-BH-SD and its modified variant FDP-BH-SD (mod),
• FDP-RS-SD and its modified variant FDP-RS-SD (mod).

The goals of the study are three-fold: (1) to compare the power of the modified procedures with their original counterparts, (2) to compare the power between the modified procedures, and (3) to compare the best FDP procedure (if it exists) with FDR-controlling procedures. To make the last comparison more consistent, we use for the step-up direction the Benjamini and Yekutieli (2001) procedure FDR-BY-SU with critical constants $c_i = i/(n \sum_{j=1}^{n} j^{-1})$, which controls the FDR under arbitrary dependence. For the step-down direction we use the rescaled BH constants obtained by Guo and Rao (2008); we denote this approach by FDR-GR-SD.

Similarly to Romano et al. (2008), we control the median FDP as an alternative to controlling the FDR. We do this at the .05 level, i.e. $P(\mathrm{FDP} > 0.05) \leq 0.5$, while the FDR procedures control the expectation $E(\mathrm{FDP}) \leq 0.05$. As Romano et al. (2008) point out, the median FDP is a less stringent measure than the FDR, in the sense that the probability of the FDP exceeding 0.05 can be much bigger when the median FDP is controlled than when the FDR is controlled. For MTPs there are several ways to measure power, see e.g. Dudoit and van der Laan (2007, Section 1.2.10). We use average power, i.e. the average proportion of rejected false hypotheses, for comparing procedures.

We assume equicorrelated multivariate normal test statistics, i.e. $T = (T_1, \dots, T_n) \sim N(\mu, \Sigma)$ with $\mu_i = 0$ for $i = 1, \dots, |I|$, $\mu_i = d$ for $i = |I| + 1, \dots, n$, and $\Sigma_{ij} = 1$ for $i = j$ and $\Sigma_{ij} = 1/2$ else. For the parameter $d$, three nonzero values were used: $d = 0.1$, 1 and 3, reflecting small, moderate and large deviations from the null hypotheses. For each simulated vector of test statistics, p-values were calculated for the Gaussian test of the null hypotheses $H^0_i: \mu_i = 0$ (two-sided). The number of tests performed was set to one of the values 10, 50, 100 and 500, reflecting small, medium and (moderately) large multiplicity of tests. We used 20,000 simulations in the simulation study, which gives a uniform upper bound for the standard errors of 0.0035.
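A sketch of this data-generating design in R (the function name is ours; the design itself is as described above):

```r
library(MASS)

## Equicorrelated normal test statistics with correlation 1/2 and
## two-sided Gaussian p-values; the first n_true hypotheses are true.
simulate_pvalues <- function(n, n_true, d) {
  mu <- c(rep(0, n_true), rep(d, n - n_true))
  Sigma <- matrix(1/2, n, n)
  diag(Sigma) <- 1                         # Sigma_ij = 1 (i = j), 1/2 else
  t_stat <- mvrnorm(1, mu = mu, Sigma = Sigma)
  2 * pnorm(-abs(t_stat))                  # two-sided test of mu_i = 0
}
```

Average power for a procedure with constants $\alpha \cdot c$ could then be estimated by applying the step-up or step-down rule to such p-values over 20,000 replications and averaging the proportion of rejected false hypotheses.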
Figure 2 depicts the gains in average power of the modified FDP procedures over the original (rescaled) variants. For most constellations, the gains in power are considerably larger for the BH-type procedures than for the RS-type procedures. Put differently, the RS procedures perform so well that in many situations no or only little improvement is possible. Figure 3 presents a comparison of the four modified FDP-controlling procedures. The FDP-BH-SD procedure usually performs best and is followed closely by FDP-RS-SD and FDP-BH-SU. Figure 4 compares the modified procedure FDP-BH-SD (mod) with the FDR procedures BY-SU and GR-SD. The median FDP-BH-SD possesses the highest power for all constellations, while FDR-GR-SD and FDR-BY-SU perform very similarly. Altogether we conclude that
• modifying the rescaled FDP procedures resulted in increased power for all four procedures; the largest gains were achieved for the BH-type procedures,
• for the constellations considered here, FDP-BH-SD (mod) performed best, with FDP-RS-SD (mod) or FDP-BH-SU (mod) usually coming in a close second,
• the best modified median FDP procedure outperformed the FDR-controlling procedures that were rescaled in order to account for general dependence.

Empirical applications

In this section we compare the performance of the FDP and FDR approaches from the previous section on some empirical data.

7.1. Benjamini-Hochberg data. We revisit the data analysed in Benjamini and Hochberg (1995), consisting of 15 p-values from a study on myocardial infarction. Table 2 gives the numbers of hypotheses rejected by the median FDP and FDR procedures at levels $q = 0.05$ (note that in this case $\gamma$-FDP = FWER) and $q = 0.10$, i.e. $P(\mathrm{FDP} > q) \leq 0.5$ and $E(\mathrm{FDP}) \leq q$. For $q = 0.05$, the step-down procedures performed best, followed by the step-up FDP-BH and FDP-RS methods. The FDR procedures rejected the fewest hypotheses. Note that the FDP-RS-SU procedure rejects fewer hypotheses at level 0.10 than at level 0.05. This behaviour is due to the fact that both the original constants and the scaling constant $D$ depend on the parameter $\gamma$. In this special case it means that $c^{0.05}_i \leq c^{0.10}_i$ only for $i \in \{10, \dots, 14\}$. For the FDP-BH procedures this cannot happen, since the original constants do not depend on the parameter $\gamma$ and the scaling constants are increasing in $\gamma$.

7.2. Westfall-Young data. Westfall and Young (1993) use resampling methods to analyse data from a complex epidemiological survey designed to assess the mental health of urban and rural individuals living in central North Carolina. The data consist of 72 raw p-values (see Westfall and Young (1993), Table 7.42), with 25 of them below 0.05 and 9 of the adjusted p-values below 0.05. Table 3 displays the numbers of rejections when using the median FDP and FDR controlling procedures introduced above. All procedures reject at least one additional hypothesis. For level $q = 0.05$, all median FDP procedures except RS-SU perform better than the FDR procedures; the modified BH-SD procedure is the only procedure that rejects three additional hypotheses. For $q = 0.10$, the step-down FDP procedures seem to work best.

7.3. Hedenfalk data. The data come from the breast cancer cDNA microarray experiment of Hedenfalk et al. (2001). In the original experiment, a comparison was made between 3,226 genes of two mutation types, BRCA1 (7 arrays) and BRCA2 (8 arrays). The data included here are p-values obtained from a two-sample t-test analysis on a subset of 3,170 genes, as described in Storey and Tibshirani (2003). Table 4 gives the numbers of hypotheses rejected by the median FDP and FDR procedures at levels $q = 0.05$ and $q = 0.10$, i.e. $P(\mathrm{FDP} > q) \leq 0.5$ and $E(\mathrm{FDP}) \leq q$. Again, the step-down FDP procedures perform better than their step-up counterparts, the modified median BH-SD procedure rejecting the most hypotheses. While all FDP procedures except RS-SU reject more hypotheses than both FDR approaches, we might hope for more powerful procedures.
One alternative idea could be to use resampling methods in order to account for dependencies. However, as Pounds (2006) points out, the power of these methods "will be severely limited, when the sample size is small". When the dependency between the p-values is assumed to be strong and extensive, he tentatively recommends the FDR-BY-SU procedure.

Discussion

In this paper we have used results from Romano and Shaikh (2006a,b) to obtain sufficient criteria for generalised error rates under general dependence in terms of systems of linear inequalities. These systems of linear inequalities describe the set of feasible points of a suitable linear optimisation problem. This property can be used to obtain modified multiple testing procedures which can improve on the rescaled procedures introduced in Romano and Shaikh (2006a,b). In a simulation study we have observed that these modified procedures can possess considerably more power than the original procedures.

While the focus of this work was on developing more powerful multiple testing procedures, Hommel and Bretz (2008) have formulated additional desirable properties for such procedures. Even though all methods considered here satisfy the property of coherence, they are not particularly simple to describe and to communicate to non-statisticians. Since the modified procedures are obtained from a computationally complex numerical optimisation technique, the resulting sequence of critical constants will generally not exhibit any aesthetic mathematical patterns like, e.g., the Bonferroni-Holm procedures.

Another potential drawback from an aesthetic perspective may be related to what Hommel and Bretz (2008) describe as monotonicity properties of multiple testing procedures. While all procedures considered here yield monotonic decisions with respect to the corresponding type 1 error, additional monotonicity properties are conceivable that are not satisfied by some of them. As a case in point, reconsider for $n = 15$ the RS procedures for $\gamma = 0.05$ and $\gamma = 0.10$ (see section 7.1). A numerical evaluation shows that $c^{0.05}_i < c^{0.10}_i$ holds true only for $i = 10, \dots, 14$. Thus it may happen that more hypotheses are rejected for $\gamma = 0.05$ than for $\gamma = 0.1$ (at the same level of type 1 error), even though one would expect the requirement $\mathrm{FDP} \leq 0.05$ to be more stringent than $\mathrm{FDP} \leq 0.10$. The reason for this behaviour is that both the original critical constants and the scaling constant $D$ depend on the parameter $\gamma$. For the FDP-BH procedures the original critical constants do not depend on $\gamma$. Numerical computations suggest that the scaling constants for FDP-BH are increasing in $\gamma$, so this effect seems to be avoided by the FDP-BH procedures.

Another issue is the computational complexity of solving the linear programming problem needed to obtain the modified procedures. As a case in point, the calculation of the modified procedures used for analysing the Hedenfalk data with $n = 3170$ in section 7.3 took approximately nine hours on an Intel Xeon 5620 processor using the R function Rglpk_solve_LP. For multiple testing problems where the number of tests is significantly larger, we thus expect run-time problems, depending on the software and hardware available.

Finally, concerning other error rates like the FDR, it seems natural to ask whether there are similar ways of modifying existing procedures under arbitrary dependence of the p-values.
Recall that the key to modifying FDP procedures in section 4 was the observation that an improvement is possible whenever the $|I|^{*}$-th row of the matrix $A$ (with $|I|^{*} = \arg\max_{|I|} (A \cdot c)_{|I|}$) contains at least one zero entry. Guo and Rao (2008) have obtained bounds for the FDR that are similar to the bounds in Theorem 1, i.e. with $\mathrm{FDR}(c) \leq \|A \cdot c\|_{\infty}$ for a suitable matrix $A$. However, as their Theorems 4.2 and 5.2 show, the corresponding step-up and step-down matrices do not contain any zero elements. Therefore, while the linear optimisation approach could still be used to define new procedures, e.g. via an unconstrained linear program, it will not be possible to attain strict improvements along the lines of section 4.
Effects of Model-Based Interventions on Breast Cancer Screening Behavior of Women: a Systematic Review

Marzieh Saei Ghare Naz, Masoumeh Simbar*, Farzaneh Rashidi Fakari, Vida Ghasemi

Background: Breast cancer is a great concern for women's health; early detection can play a key role in reducing the associated morbidity and mortality. The objective of this study was to systematically assess the effectiveness of model-based interventions for the breast cancer screening behavior of women. Methods: We searched Scopus, PubMed, Web of Science, Science Direct, the Cochrane Library and the Google Scholar search engine for systematic reviews, clinical trials, and pre- and post-test or quasi-experimental studies (with publication dates limited to 2000-2017). Keywords were: breast cancer, screening, systematic review, trials, and health model. In this review, qualitative analysis was used to assess the heterogeneity of the data. Results: Thirty-six articles with 17,770 female participants were included in this review. The Health Belief Model was used in twenty-three articles as the basis for intervention. One article used both the Health Belief Model and the Health Promotion Model, 5 articles used the Health Belief Model and the Transtheoretical Model, 2 used the Health Belief Model and the Theory of Planned Behavior, 3 used the Transtheoretical Model, 1 used Social Cognitive Theory, and 1 used the Systematic Comprehensive Health Education and Promotion Model. The results showed that model-based educational interventions are more effective for the BSE, CBE and mammography screening behavior of women compared to interventions without a model base. The Health Belief Model was the most popular model for promoting breast cancer screening behavior. Conclusions: Educational model-based interventions promote self-care, create a foundation for improving the breast cancer screening behavior of women, and can increase policy makers' awareness of and efforts towards enhancing breast cancer screening behavior.

Introduction

Breast cancer is a prevalent disease of women (Abolfotouh et al., 2015) and a public concern that threatens the lives of women (Nergiz-Eroglu and Kilic, 2010). It is anticipated that more than one million new cases of breast cancer occur annually worldwide (Shiryazdi et al., 2014). Early detection of breast cancer increases women's survival rates after diagnosis and reduces the related mortality (İz and Tümer, 2016; Ardahan et al., 2015). Thus, promotion of breast cancer screening behavior decreases breast cancer morbidity and mortality through early diagnosis of the disease (Arrospide et al., 2015). There are three methods of breast cancer screening: breast self-examination (BSE), clinical breast examination (CBE) by medical personnel, and mammography (Calonge et al., 2009). Several factors, including the type of medical insurance and women's employment status (Tsunematsu et al., 2013), history of breast disease and familial history of breast cancer (BC) (Allahverdipour et al., 2011), and low knowledge and breast cancer literacy (Talley et al., 2016), are shown to affect the breast cancer screening behavior of women.
Health beliefs of women affect their breast cancer screening behavior (Ersin et al., 2015). Concerns about breast cancer (Hay et al., 2006), low perceived susceptibility (Petro-Nustas et al., 2013), low motivation, low perceived benefits and low self-efficacy (Hajian-Tilaki and Auladi, 2014), and lack of perceived benefit and low motivation for performing breast cancer screening (Veena et al., 2015; Dündar et al., 2006) are known barriers to screening behaviors (Tavafian et al., 2009). Overcoming these barriers and increasing perceived self-efficacy and motivation are important for promoting breast cancer screening behavior among women (Noroozi and Tahmasebi, 2011).

Theoretical models identify the factors that underlie health behaviors (Noar and Zimmerman, 2005); comprehensive, integrative psychosocial models are an essential first step for enhancing health behavior (Reid and Aiken, 2011). Some evidence indicates that health promotion interventions based on behavioral theories are more effective than those without a theoretical base (Glanz and Bishop, 2010). Different models of health behavior change have served as the basis of interventions to promote breast cancer screening behaviors (Ashing-Giwa, 1999). Educational cancer prevention programs are very cost-effective and empower people to adopt preventive behaviors (Changizi and Kaveh, 2017). Evidence shows that education about breast cancer prevention methods can improve the BCS behavior of women (Levano et al., 2014). There is a lack of reviews on model-based educational interventions for promoting breast cancer screening behavior (O'Mahony et al., 2017). This study aims to review the application of health behavior model-based educational interventions for promoting the breast cancer screening behavior of women. Hopefully, the review can help to plan effective model-based future strategies to improve the screening behavior of women and consequently reduce the mortality and morbidity of breast cancer among women.

Search Strategy

This study is a systematic review to determine the effects of model-based interventions to improve the breast cancer screening (BCS) behavior of women. All published articles (RCT, pre- and post-test design or quasi-experimental) in the English language from July 2000 to March 2017 were assessed. We searched the databases Scopus, PubMed, Science Direct and the Cochrane Library, and the Google Scholar search engine. The search was based on the following keywords: breast cancer, screening, health belief model, health promotion model, social cognitive theory, theory of planned behavior, Transtheoretical Model, PRECEDE-PROCEED model, Systematic Comprehensive Health Education and Promotion Model. Articles were therefore limited to the period 2000-2017.

Criteria for considering studies for this review

Selection of studies: Two authors reviewed the eligibility of all included articles and also evaluated the risk of bias. For each included article, data such as country of origin, demographic characteristics of participants, number of participants in each group, aim of study, design and duration of study, measurement tools, adverse effects of each intervention, type of educational intervention, and main results were extracted. Studies basing their educational programs for breast cancer screening on different models were considered to meet the inclusion criteria for the study.
All the trials used a standard, valid and reliable questionnaire for measuring the breast cancer screening behavior of women.

Types of participants: All clinical trials (RCT, pre- and post-test or quasi-experimental) including women without a previous diagnosis of breast cancer.

Types of interventions: All clinical trials (RCT, pre- and post-test or quasi-experimental) involving an educational program based on health models versus no intervention or versus another educational intervention.

Types of comparator/control: Another intervention or no intervention.

Types of outcome measures: Outcomes of educational interventions based on different health behavior models, including adverse outcomes related to false positive findings of symptoms, assessed by any validated scale.

Risk of bias: The EPHPP is a tool used to evaluate intervention design studies. This tool evaluates six domains: study design, blinding, selection bias, data collection method, confounders and dropouts. Each domain is rated as weak (1), moderate (2) or strong (3), and the total score is the average of the domain scores. Based on the total score, the quality of a study is rated as weak (1.00-1.50), moderate (1.51-2.50) or strong (2.51-3.00); the maximum total score is three (Thomas et al., 2004; Deeks et al., 2003; Armijo-Olivo et al., 2012). Two researchers performed the database search; abstracts were first assessed, and then selected articles underwent final assessment according to the EPHPP and the inclusion and exclusion criteria. According to these criteria, articles achieving a score of 1.51 or more were included in the study.

Data analysis: Qualitative analysis was used in this review due to the heterogeneity of the data.

Results

Thirty-six articles with 17,770 female participants from different countries and continents were included in this review. Twenty-three articles utilized the Health Belief Model (HBM), 1 article used both HBM and the Health Promotion Model (HPM), 5 articles used HBM and the Transtheoretical Model (TTM), 2 used HBM and the Theory of Planned Behavior (TPB), 3 used TTM, 1 used Social Cognitive Theory (SCT), and finally 1 used the Systematic Comprehensive Health Education and Promotion Model (SHEP). The results of our study showed that several health behavior models influence the BSE, CBE and mammography screening behavior of women.

Health Belief Model

The results of the present review showed that HBM-based educational interventions increase women's health motivation regarding BCS. In one study using a loss-framed message, women in the message group were 6 times more likely to obtain a mammogram. Özgül et al., (2009) reported that peer education based on the HBM increased BC knowledge and improved BSE performance. Gozum et al., (2010) mentioned that peer training based on the HBM had a positive effect on promoting the practice, beliefs and knowledge of women. Secginli and Nahcivan, (2011) reported that, in the intervention group, significant changes were seen in perceived susceptibility, benefits of BSE and mammography, and confidence (all increased), while perceived barriers to mammography decreased. The results of Cohen and Azaiza (2010) show that a culture-based intervention grounded in the HBM was effective for the BCS behavior of women.
Gursoy et al., (2009) indicated that HBM-based education delivered from daughter to mother enhanced women's knowledge about BSE. Lu et al., (2001) stated that their program significantly increased BSE accuracy, BSE frequency, the perceived benefit of BSE and perceived competence in BSE, and decreased perceived susceptibility to breast cancer and perceived barriers to practicing BSE.

The Transtheoretical Model (TTM)

According to this model, behavioral change occurs through a process of different stages (precontemplation, contemplation, action and maintenance, with the possibility of relapse and relapse risk). Farajzadegan et al., (2016) and Ghahremani et al., (2016) reported that educational interventions based on the TTM improve the BSE performance of women. Lin and Judith (2010) mentioned that a tailored intervention group based on the TTM had a better outcome and higher mean post-test scores relative to the standard intervention group (Lin and Effken, 2010). Lin and Wang (2009) reported that a complete tailored intervention produced significantly higher scores on intention to have a mammogram relative to the standard intervention group. Characteristics of TTM-based studies are shown in Table 2.

Social Cognitive Theory (SCT)

This theory holds that a multifaceted causal structure, in which goals, outcome expectations, and perceived environmental barriers and facilitators operate together, regulates behavior (Bandura, 2004). Goel and O'Conor, (2016) showed that a brief, pre-visit video based on SCT significantly increased mammography referrals. Characteristics of SCT-based studies are shown in Table 2.

Systematic Comprehensive Health Education and Promotion Model (SHEP)

SHEP is an innovative developmental method in the health promotion system; this model is based on "Knowledge Management" theory (Mirzaii et al., 2016). Mirzaii et al., (2016) mentioned that education based on SHEP had a positive effect on the attitudes and BCS behavior of women. Characteristics of SHEP-based studies are shown in Table 2.

Mixed models

In this study, eight articles used different mixed models. Decisions related to general health conditions such as breast cancer can be evaluated using the HBM (Aşcı and Şahin, 2011). According to this model, a woman decides to perform screening when she perceives susceptibility to BC and severity of BC, and weighs the benefits and barriers of breast cancer screening behavior (Dündar et al., 2006). Characteristics of Health Belief Model-based studies are shown in Table 1. Kocaöz et al., (2017) indicated that an education program based on the HBM increased the attitudes and BCS behaviors of women. Parsa et al., (2016) showed that an HBM-based intervention with the GATHER (Greet, Ask, Tell, Help, Explain, Return) consultancy technique could help to improve knowledge and beliefs about BCS and BSE performance. Heydari and Noroozi, (2015) reported that group education and multimedia education based on the HBM raised BC knowledge and participation in mammography. Kolutek et al., (2016) reported that an HBM-based intervention significantly increased the rate of performing BSE. Akhtari-Zavare et al., (2016) indicated that in the intervention group only three HBM subscale scores (benefits, barriers, and confidence in BSE) were significantly improved. Peterson et al., (2012) reported that in women with mobility impairments, HBM-based education was not effective on mammography screening behavior.
In the study of Eskandari-torbaghan et al., (2014), after an HBM-based intervention, awareness, perceived susceptibility and benefits, barriers and behavior in the intervention group were significantly higher than in the control group. Farma et al., (2014) reported that HBM-based education had a significant impact on improving BCS behavior. Rezaeian et al., (2014) reported that small-group education based on the HBM increased knowledge and health beliefs about BC and mammography. Hall et al., (2005) reported that HBM-based education was effective on knowledge and beliefs about breast cancer. Moodi et al., (2011) reported that HBM-based education improved the attitude and knowledge of female university students regarding BSE. Ceber et al., (2010) reported that the mean BC knowledge score of women in the experimental group was higher than in the control group; the experimental group was significantly more motivated and confident, but there were no significant differences in perceived susceptibility, seriousness of BC, or benefits of and barriers to BSE. Avci and Gozum, (2009) showed that both video- and model-based interventions grounded in the HBM were effective in changing the BCS health beliefs of women. Aghamolaei et al., (2010) reported that a health education program based on the HBM promoted BSE in women. Wang et al., (2012) reported that cultural and generic video interventions based on the HBM modified the mammography screening attitudes of Chinese immigrant women.

Table 1. Characteristics of Health Belief Model-based studies (entries list author/country, design and sample, intervention, main results, and EPHPP quality rating where available):
- Sadler et al.: in both groups, 3 months after the intervention, there was a significant difference between the mean scores of perceived benefits and barriers, health motivation, self-efficacy, and performing the screening; in the intervention group there was no significant difference between the mean scores of perceived susceptibility and severity. Moderate.
- Akhtari-Zavare et al.: the intervention group received 16 two-hour workshops; 6 and 12 months after the intervention, the mean total HBM score in the intervention group was significantly higher than in the control group.
- Heydari et al., (2016)/Iran: RCT, N=120 (n=60 group education, n=60 multimedia education); group education comprised two sessions lasting 45-60 minutes, while multimedia education based on the HBM was delivered through CDs and educational SMS messages; in the group-education arm, health motivation and perceived benefit were higher than in the multimedia arm, and 93.33% of the group-education arm and 83.33% of the multimedia arm intended to have a mammography. Moderate.
- Kolutek et al., (2016)/Turkey: quasi-experimental, N=153; training practices were conducted using lecturing, demonstration, and question-and-answer techniques, plus a telephone reminder intervention; after the training, mean scores for seriousness, benefits of BSE, self-efficacy, susceptibility, barriers to BSE and mammography, and benefits of mammography under the HBM Scale for BC Screening significantly increased. Moderate.
- Rezaeian et al., (2014)/Iran: population-based controlled trial, N=290 (control=145, intervention=145); the intervention group received an educational program (PowerPoint presentation, educational film, group discussion, brainstorming, question and answer, and pamphlet), while the control group received no intervention; after the intervention, the mean scores of perceived susceptibility, severity, benefits, barriers and self-efficacy of mammography and health motivation in the intervention group were significantly higher than in the control group. Strong.
- Eskandari-torbaghan et al., (2014)/Iran: interventional design, N=130 (65 intervention, 65 control); the intervention group received lectures, questions and answers, a PowerPoint presentation, a video and an educational booklet, while the control group received no intervention; after the intervention, awareness, perceived susceptibility and benefits, barriers and behavior in the intervention group were significantly higher than in the control group.
- (Mammography promotion program): after the first 6 months of the program's operation, women in the BC intervention group engaged in mammography screening with significantly greater frequency relative to the control group; consistent with the HBM, women in the BC intervention showed a shift in behaviors and increased BC screening.
- Ceber et al.: the experimental group received the educational program (small-group educational presentations, group videotapes on how to perform BSE, miniature lump model demonstration, and practice in BSE and CBE), while the control group received no intervention; the mean BC knowledge score of women in the experimental group was higher than in the control group, the experimental group was significantly more motivated and confident, their total score on the health belief scale was much better than that of the control group, and the application percentages of CBE and mammography were higher in the experimental group. Moderate.
- (Lecture-based program): educational sessions delivered by lecture; in the intervention group, all HBM subscales were significantly higher than in the control group.
- Aydin Avci et al., (2009)/Turkey: pretest-posttest, N=51 in the model group and 42 in the video group; the video group received a 20-minute videotape explaining BSE, CBE and mammography, while the scale-model group was shown the model and given oral information about BSE, mammography and CBE; after the education, the video group showed increases in perceived susceptibility, perceived self-efficacy and knowledge of BSE, and perceived benefits of mammography, while in the model group, susceptibility, perceived benefits of mammography and perceived self-efficacy of BSE increased.
- DeFrank et al., (2009)/Carolina: RCT, (1) N=847 enhanced usual care reminder (EUCR), (2) N=1355 automated telephone reminder (ATR), (3) N=1345 enhanced letter reminder (ELR); the EUCRs were delivered as mailed letters, the mailed ELR was a full-color, four-page booklet with a quilt graphic on the cover, and the ATRs were delivered as automated telephone calls by TeleVox software; women assigned to ATRs were significantly more likely to have had mammograms than women assigned to EUCRs.
- Gursoy et al., (2009)/Turkey: quasi-experimental design, N=200 students and 168 mothers; university students were trained about BSE by School of Health students through group training methods, and these trained university students were then asked to train their mothers about BSE; after training, the women's knowledge level increased 2-fold, and perceived benefits and confidence significantly increased.
- (Loss-framed message study): a loss-framed telephonic message; mammogram performance among women in the loss-framed message group was 6 times higher.
- Secginli et al., (2011)/Turkey: RCT, N=190 (intervention=97, control=93); the intervention group received education with a booklet, film, calendar and card, while the control group received no intervention; in the intervention group, significant increases from pre- to post-test were seen in perceived susceptibility, benefits of mammography and BSE, and confidence, while perceived barriers to mammography decreased; no significant changes were seen for perceived barriers to BSE.
- Jane Lu et al., (2001)/Taiwan: quasi-experimental design, N=198 women; monthly telephone reminders and BSE pamphlets; the program significantly increased BSE accuracy and frequency and the perceived benefit of and competence in BSE, and decreased perceived susceptibility and barriers.
- (Peer education program): one-to-one education by peer trainers and posters; after peer education, mean knowledge scores significantly increased, the rate of regular BSE significantly increased, perceived benefits and confidence of BSE increased, and perceived barriers significantly decreased.
- (Single-session program): a 60-70 minute educational program; in the intervention group, the mean scores of susceptibility, benefits and barriers of mammography and BSE, and confidence were significantly higher than in the control group.

Tuzcu et al., (2016) demonstrated that the rates of mammography, BSE and CBE in an intervention group based on the HBM-HPM were significantly higher than in the control group; in the intervention group, self-efficacy, perceived benefit and health motivation increased, while perceptions of barriers and susceptibility decreased. Farhadifar et al., (2016) reported that HBM- and TPB-based interventions had a positive effect on mammography screening behavior. Lee-Lin et al., (2015) mentioned that a culturally targeted educational program based on HBM-TTM significantly increased mammogram screening in women. Taymoori et al., (2015) reported that an educational intervention based on the HBM and TPB improved the mammography screening of women. The results of Cohen and Azaiza (2010) show that an HBM-TTM culture-based intervention reduced the barriers and improved BCS behavior. Champion et al., (2006) demonstrated that a tailored HBM-TTM-based education program is more effective than targeted messages (print or video format) for the mammography screening behavior of low-income African American women. Champion et al., (2007) reported that all interventions based on HBM-TTM had a positive effect on the mammography screening behavior of women. In the study of Champion et al., (2003), tailored interventions based on the HBM and TTM led to increased mammography screening in older women. Characteristics of mixed-model-based studies are shown in Table 2.

Discussion

This review provides new insight into the effectiveness of model-based interventions on the breast cancer screening behavior of women; our results showed that health behavior models can help to enhance the BCS behavior of women. About three-fourths of the included studies concerned HBM-based interventions with different educational components (including the GATHER consultancy technique, multimedia education, brainstorming, pamphlets, video and educational booklets, mailed letters, telephone reminders, reminder cards, etc.). Almost all of the HBM-based studies showed positive effects.

Table 2. Characteristics of mixed-model and other model-based studies (selected entries):
- (TTM study): in the intervention group, the TTM subscales (self-efficacy, stages of change, decisional balance) were significantly higher than in the control group.
- Lin and Judith (2010)/Taiwan: pretest-posttest design, N=128 (64 in each group), Transtheoretical Model; the tailored intervention group received feedback, personal testimonies, and role modeling, while the standard intervention group received mammography brochures; after the intervention, the tailored intervention group had significantly more positive perceptions of mammography and more intention to have mammography than the standard intervention group; the CTI earned a higher mean post-intervention score (60.56) than both the TMI (57.95) and the SI (53.26).
Mirzaii et al, (2016)/ Iran "randomized quasi-experimental N=120 " SHEP model SHEP-based educational intervention was implemented in the form of workshops and two four-hour sessions (total: 8 h) After the intervention, in the experimental group women had significantly higher attitude scores relative to the CON. Also in the experimental group significant increase was seen in the BSE scores. HBM-HPM "65minute training Consultation by telephone Reminder cards " In women in the intervention group after intervention the rates of mammography; BSE and CBE were significantly higher than women in the CON. -Lin et al, (2015)/ America "RCT N= 300 Intervention=147 Control=153 " HBM/TTM "Educational intervention class A scripted verbal presentation accompanied by PowerPoint slides Control group received No intervention " The result showed, that intervention group compared the CON was 9 times more likely to complete mammograms. Moderate Cohen& Azaiza (2010)/ Israeli "Pretest posttest,/ N=66 " HBM/TTM the intervention group received tailored telephone / The control group received any intervention After intervention 47.6% of women in intervention group and 12.5% of women in CON group scheduled or attended a CBE (p<0.05), 38% of the intervention group and 75% of the CON group had only irregularly attended or never CBE. (3) tailored print, or (4) tailored print and telephone counseling For contemplators, the combination of telephone and print was clearly the most effective intervention for promoting mammography; it appears that adding the printed material to the phone messaging hah and additive effect. Champion et al, (2006)/ America "prospective randomized intervention N=344 " HBM/TTM 1) pamphlet only (2) culturally appropriate video(3) interactive computer-assisted instruction program The result showed that adherence to mammography in the interactive computer-assisted instruction program group was greater than two other intervention groups. Moderate "Champion et al, (2003)/ America " "RCT N=773 " HBM/TTM 1)standard care(2)tailored telephone counseling, (3) tailored in-person counseling, (4) non tailored recommendation letter signed (by scanned signature) by their primary care physician(5) tailored telephone counseling plus non tailored physician recommendation letter(6) tailored in-person counseling plus non tailored physician recommendation letter ,usual care All intervention groups have higher odds of mammography relative to the usual care group. Women receiving a combination of physician recommendation and in-person counseling have a higher odds of mammography adherence relative to the physician recommendation group (OR = 1.84) and telephone counseling group (OR =1.78). HBM/TPB "There were 8 sessions for the HBM and TPB interventions that focused on perceived threat (lecture, Reminder card, small groups discussion, consulting) the CON group received pamphlets " Moderate In the intervention groups women perceived severity and susceptibility of BC and perceived benefits and self-efficacy of mammography use increased but perceived barriers about mammography use decreased . Women in intervention groups have greater perceived control and higher levels of positive subjective norms regarding mammography. The screening in women in the HBM group have significantly increased compare to CON group due to greater susceptibility, perceived control, and self-efficacy, and women in the TPBgroup have greater odds of performance mammograms compare to CON lead to increased self-efficacy and much reductions in barriers. 
The Transtheoretical Model (TTM), the Systematic Comprehensive Health Education and Promotion (SHEP) model, the Health Promotion Model (HPM) and the Theory of Planned Behavior (TPB) are the other models used in the different educational programs reviewed in our study, such as workshops, tailored mail and telephone interventions, apartment billboards, and peer education. Health models are useful for the health perceptions of women (Ergin et al., 2012). The Health Promotion Model categorizes the factors that influence human behaviors; this model surveys behavioral and situational factors and interpersonal relations (Galloway, 2003). The Theory of Planned Behavior addresses attitude toward the behavior, perceived behavioral control and subjective norms (Asare, 2015). The purpose of the Systematic Comprehensive Health Education and Promotion model is to increase health literacy and to mentor peer health educators (Mirzaii et al., 2016). The Transtheoretical Model helps planners design programs based on an individual's motivation and ability (Glanz et al., 2008). Our review showed that the Transtheoretical Model combined with other models was more successful than the other models alone, because this model is based on the stages of behavioral change whereas the other models describe the mechanism by which behavior is formed; for this reason, combining this model with the other models leads to more successful programs. The structure of the TTM is more complex than that of other models, and this model is effective in promoting both individual- and population-level health behavior change programs (Taylor, 2007). Cancer education and health behavioral counseling based on TTM can promote healthy lifestyles (McLaughlin et al., 2010). A meta-analysis reported that health models improve breast cancer screening in women (Ergin et al., 2012). A review article titled "Applying the Trans-theoretical Model to Cancer Screening Behavior" concluded that stage of change and decisional balance appear applicable to mammography performance (Spencer et al., 2005). Lawal et al. (2016), in their narrative review article, reported that among four health behavioral theories and models (the HBM, TPB, TTM, and the theory of care-seeking behavior), the theory of care-seeking behavior uses broader constructs and is effective for the participation of women in mammography screening. Ahmadian and Samah (2013), in their article reviewing several cognitive theories and models associated with BC screening, reported that few empirical studies in Asian women concerned the application of health theories in promoting BC prevention programs, and that few studies addressed the individual cognitive factors that are likely to motivate women to protect themselves against BC in Asia. Cancer education interventions lead to increased constructive health behaviors (Booker et al., 2014). Breast cancer screening education is a low-cost program with high benefit for women's health worldwide (Kennedy et al., 2016). Education about breast cancer can increase the BCS practice and knowledge of women (Gözüm et al., 2010). A strength of this review is that it is based on experimental studies performed in different areas of the world; further research in this area is recommended. In conclusion, educational model-based interventions promote self-care, create a foundation for improving the breast cancer screening behavior of women, and increase policy makers' awareness and efforts toward enhancing breast cancer screening promotion.
Model-based interventions are more successful than interventions that are not model-based because these programs are grounded in an understanding of the mechanism of health behavior change, and researchers with an accurate understanding of the mechanism or process of behavior change are more likely to plan successful programs. Limitations In this review, due to the heterogeneity of the data, we could not perform a meta-analysis. Another limitation of our study is that we reviewed only the common educational behavior change models, so further studies are needed to review the impact of other models on the breast cancer screening behavior of women. Conflict of Interest None.
Synthesis and selected transformations of 2-unsubstituted 1-(adamantyloxy)imidazole 3-oxides: straightforward access to non-symmetric 1,3-dialkoxyimidazolium salts Adamantyloxyamine reacts with formaldehyde to give N-(adamantyloxy)formaldimine as a room-temperature-stable compound that exists in solution in monomeric form. This product was used for reactions with α-hydroxyiminoketones leading to a new class of 2-unsubstituted imidazole 3-oxides bearing the adamantyloxy substituent at N(1). Their reactions with 2,2,4,4-tetramethylcyclobutane-1,3-dithione or with acetic acid anhydride occurred analogously to those of 1-alkylimidazole 3-oxides to give imidazol-2-thiones and imidazol-2-ones, respectively. Treatment of 1-(adamantyloxy)imidazole 3-oxides with Raney-Ni afforded the corresponding imidazole derivatives without cleavage of the N(1)–O bond. Finally, the O-alkylation reactions of the new imidazole N-oxides with 1-bromopentane or 1-bromododecane open access to diversely substituted, non-symmetric 1,3-dialkoxyimidazolium salts. Adamantyloxyamine reacts with glyoxal and formaldehyde in the presence of hydrobromic acid yielding symmetric 1,3-di(adamantyloxy)-1H-imidazolium bromide in good yield. Deprotonation of the latter with triethylamine in the presence of elemental sulfur allows the in situ generation of the corresponding imidazol-2-ylidene, which traps elemental sulfur yielding a 1,3-dihydro-2H-imidazole-2-thione as the final product. Introduction Imidazole N-oxides constitute a practically valuable class of five-membered aromatic N-heterocycles [1][2][3][4][5]. The subclass of 2-unsubstituted imidazole N-oxides 1 with diverse substituents located at N(1), C(4), and C(5) is of special interest as so-called 'nitrone like' reagents for the synthesis of more complex, imidazole containing systems (Scheme 1) [6]. These imidazole N-oxides are easily accessible via heterocyclization reactions comprising condensation of α-hydroxyiminoketones 2 with formaldimines 3. The latter are known to exist in monomeric form in the case of sterically crowded parent amines such as 1-aminoadamantane or tert-butylamine or, alternatively, as trimeric hexahydro-1,3,5-triazines 3' in the case of sterically less crowded amines [7]. In analogy to other azoles and in contrast to six-membered aromatic N-heterocycles, N-oxides 1 cannot be prepared via oxidation of the parent imidazoles by treatment with an oxidizing agent, e.g., with a percarboxylic acid [6]. Imidazole N-oxides are versatile substrates for the preparation of diverse imidazole derivatives and the most characteristic feature is their 1,3-dipolar reactivity analogous to aldonitrones, which enables their conversion via [3 + 2]-cycloadditions with dipolarophiles such as activated ethylenes [8,9], activated acetylenes [10,11] or isocyanates [11,12]. The initially formed [3 + 2]-cycloadducts undergo spontaneous secondary conversions leading to re-aromatization of the imidazole ring. The same mechanism governs sulfur-transfer reactions with cycloaliphatic thioketones yielding the corresponding imidazole-2-thiones [13]. The isomerization of N-oxides 1 into the corresponding imidazol-2-ones can be easily performed by treatment with acetic anhydride at room temperature [14]. An important reaction of 1 is the O-alkylation leading to alkoxyimidazolium salts, which display in some cases properties of 'room temperature ionic liquids' [15,16]. Finally, straightforward deoxygenation by treatment with Raney-Ni is also worth mentioning [17]. 
Condensations presented in Scheme 1 occur smoothly with formaldimines 3 derived from aliphatic amines, but in the case of primary aromatic amines harsher reaction conditions are required [6]. To date, no synthesis of 1-alkoxyimidazole N-oxides derived from alkoxyamines ('hydroxylamine ethers') has been reported. An alternative approach to these products comprises the alkylation (in most cases methylation) of 1-hydroxyimidazole N-oxides. However, the products of monomethylation could not be obtained; instead, symmetric N,N-disubstituted imidazolium salts resulting from double O-alkylation were isolated [18]. On the other hand, alkoxyamines are known not only as bioactive compounds [19,20] but also as important initiators of polymerization processes that have been studied extensively in the recent decade [21]. Among alkoxyamines, adamantyloxyamine (4) occupies a prominent position, and for that reason it was selected for experiments aimed at the preparation of a new group of 2-unsubstituted imidazole N-oxides bearing the adamantyloxy residue at N(1). The goal of the present study was the synthesis of some representatives of this type and a comparison of their properties with those of the 1-adamantyl analogues. Results and Discussion The starting adamantyloxyamine (4) was prepared from 1-bromoadamantane, which, in the presence of an equimolar amount of AgBF4 in boiling dimethoxyethane (DME), reacts with N-hydroxyphthalimide to give the O-alkylation product 5 [20] (Scheme 2). The latter was converted to 4, obtained in 86% yield, after hydrazinolysis with a slight excess of N2H4·H2O. The crude product was reacted with formaldehyde in boiling MeOH, and after 1 h, crystalline N-(adamantyloxy)formaldimine (6a) was isolated in 90% yield.
Scheme 2: Preparation of adamantyloxyamine (4) and its conversion into N-(adamantyloxy)formaldimine (6a); Ad = 1-adamantyl.
The spectroscopic data indicate that this imine exists in solution exclusively in monomeric form. Thus, the 1H NMR spectra confirmed this structure by the presence of two doublets located at 6.39 and 7.01 ppm with J = 12 Hz, characteristic of the =CH2 group. In addition, the 13C NMR spectrum showed the absorption of this group at 136.1 ppm. With imine 6a in hand, syntheses of a series of 2-unsubstituted 1-(adamantyloxy)imidazole 3-oxides 7a-e were performed in glacial acetic acid at room temperature. The crude products were transformed into their hydrochlorides by treatment with conc. hydrochloric acid, and subsequent neutralization over solid NaHCO3 led to crystalline products (procedure A). Following this procedure, the required N-oxides 7a-e were obtained in good to excellent yields (Scheme 3). In addition, based on the earlier described protocol [7], two imidazole N-oxides 7f and 7g, bearing the adamantan-1-yl moiety attached to N(1), were also obtained in high yields starting with (adamantyl)formaldimine (6b, Scheme 3). In this case, crude products were isolated as hydrochlorides not by treatment with hydrochloric acid but with gaseous hydrogen chloride (procedure B). Both compounds obtained by this method were fully characterized and described in an earlier publication [7]. In all cases, the 1H NMR spectra of the new products confirmed the structures 7a-d by the presence of the diagnostic HC(2) signal between 8.76 and 7.93 ppm. In the 13C NMR spectra, absorptions of the three imidazole C-atoms were found between 120 and 130 ppm. In addition, the signals of OC(1) of the adamantyl skeleton appeared in a narrow range at 89.0-86.9 ppm.
Unexpectedly, in the case of 7e, the same procedure led to the final, crystalline product as a hydrochloride. Apparently, hydrogen bonding in this molecule is strong enough to bind an HCl molecule, which cannot be removed by treatment with solid NaHCO3 in methanolic solution under standard conditions. The first question was whether the new imidazole N-oxides 7 can be deoxygenated with Raney-Ni without cleavage of the N(1)-OAd bond. The test reaction with 7a demonstrated that the reaction performed in MeOH solution at room temperature was completed after 2 h, and the main product, isolated in 39% yield, was the expected 1-(adamantyloxy)imidazole 8a (Scheme 4). In the product, the diagnostic HC(2) signal in the 1H NMR spectrum is shifted to higher field and appears at 7.34 ppm. Analogously, deoxygenation of N-oxides 7b-d led to the required imidazoles 8b-d in 66%, 62%, and 39% yield, respectively. As pointed out in the introduction, one of the most typical reactions of 2-unsubstituted imidazole N-oxides is their isomerization to imidazol-2-ones. In many reported cases, this transformation can be performed by treatment with acetic anhydride, by heating in a high-boiling solvent, or photolytically [6]. In a test experiment with 7b, the reaction with Ac2O in CHCl3 solution at room temperature overnight led to a crystalline product identified as the expected imidazole-2-one 9 in 36% yield (Scheme 4). In that case, the 1H NMR singlet located at 11.02 ppm belongs to HN(3). The 13C NMR spectrum confirmed the structure of 9 by the signal of the C(2)=O group found at 153.0 ppm. The corresponding band in the IR spectrum appeared at 1702 cm−1 as the strongest absorption. Thermal isomerization of 7b was also tested. However, boiling in bromobenzene for 15 min resulted in decomposition of the starting material, and in this case the expected imidazole-2-one 9 could not be detected in the crude reaction mixture. Sulfur-transfer reactions offer an attractive approach to imidazole-2-thiones starting from 2-unsubstituted imidazole N-oxides [13]. The importance of this procedure is reflected in the multistep reactions applied for the preparation of some bioactive imidazole derivatives [22,23]. However, the preparation of imidazole-2-thiones with N-alkoxy groups, starting from the corresponding 2-unsubstituted imidazole 3-oxides, has not yet been reported. It turned out that the imidazole N-oxide 7b can smoothly be converted into 1-(adamantyloxy)imidazole-2-thione 10b in the presence of 2,2,4,4-tetramethylcyclobutane-1,3-dithione (11a) as the sulfur-donating reagent. The reaction was carried out in CH2Cl2 at room temperature, and after 1 h, the desired product 10b was isolated in 55% yield (Scheme 5); its structure was confirmed spectroscopically. For example, in the 1H NMR spectrum a singlet located at 12.15 ppm was attributed to the HN(3) unit. Moreover, the 13C NMR spectrum revealed the C=S absorption at 159.6 ppm. Analogous transformations were achieved with 7a and 7c,d. A plausible explanation of the sulfur-transfer mechanism via the intermediate [3 + 2]-cycloadduct A is presented in Scheme 5. The eliminated monothione 11b enters an analogous reaction with the starting imidazole N-oxide 7, leading to a second molecule of 10 and 2,2,4,4-tetramethylcyclobutane-1,3-dione. In the present study, imidazole N-oxides 7 bearing either an adamantyloxy or an adamantyl moiety at N(1) were smoothly alkylated with 1-bromopentane (12a) or 1-bromododecane (12b) in CHCl3 at room temperature.
In the case of 7a and 7b, both reactions were completed after 24 h and a crystalline product was isolated in each case. The NMR spectra confirmed the expected structures of 1-adamantyloxy-3-alkoxyimidazolium bromides 13a and 13b, respectively (Scheme 6). In the 1H NMR spectra of both compounds, the most characteristic signal, that of HC(2), was shifted significantly to lower field and appeared at 11.46 and 12.12 ppm, respectively. In the 13C NMR spectrum of 13a, two signals at 83.9 and 91.5 ppm were attributed to CH2-O and C-O of the pentyloxy and adamantyloxy residues. In order to compare the course of the alkylation reactions of 7a and 7b with those of the structurally analogous 1-adamantylimidazole 3-oxides 7f and 7g, the latter were treated with 12a or 12b under the same conditions. Remarkably, these alkylations occurred more slowly, and their completion was established only after 3 d, leading to 1-adamantyl-3-alkoxyimidazolium bromides 13d-g. These results indicate that the 1-adamantyloxy substituent enhances the nucleophilicity of the N-oxides, and the alkylation therefore requires shorter reaction times. In extension of this study, the alkylation of 1-adamantyloxy-4,5-diphenylimidazole (8b) with 12b was also attempted under the same conditions. In that case, however, the expected N-alkylation did not occur at room temperature, and even after 2 d the starting materials were found unchanged in the reaction mixture. On the other hand, the attempted N-alkylation of 8b upon MW irradiation led to the formation of a mixture of starting materials and some unidentified decomposition products. Due to the great importance of both carbocyclic and heterocyclic systems functionalized with the adamantyl group [27], different types of adamantylation reactions (C-, N-, or O-adamantylation) attract attention, and typically 1,3-dehydroadamantane [28] or 1-haloadamantanes [29] are applied as alkylating reagents. Adamantan-1-yl carboxylates are also known as efficient adamantylating reagents [30]. In spite of the fact that adamantylations of some benzimidazoles and imidazoles have already been reported [31], similar reactions with heterocyclic N-oxides have not been published yet. For that reason, in the final part of the study, a preliminary experiment aimed at the O-adamantylation of imidazole N-oxide 7a was carried out under typical conditions (CHCl3 solution, rt) using 1-bromoadamantane as an alkylating agent. However, formation of the expected 1,3-bis(adamantyloxy)imidazolium salt was observed neither in the absence nor in the presence of AgBF4. Based on this observation, 1-bromoadamantane was replaced by adamantan-1-yl trifluoroacetate (Scheme 7). Unexpectedly, this test experiment, performed with 7a at room temperature, led after 24 h not to the expected symmetric 1,3-di(adamantyloxy)imidazolium salt 13h but to the trifluoroacetate of the starting material, i.e., compound 14, formed side by side with adamantan-1-ol. Apparently, the initially formed 13h underwent spontaneous hydrolysis in the presence of air moisture, leading to the mixture of both isolated products. Finally, in extension of the study focused on the preparation of new, symmetric 1,3-dialkoxyimidazolium salts, the synthesis of the 1,3-di(adamantyloxy)imidazolium salt, starting with glyoxal hydrate, adamantyloxyamine (4) and formaldehyde, using these reagents in a ratio of 1:2:1, in the presence of hydrobromic acid, was attempted.
The reaction performed overnight in acetic acid at room temperature led to the expected imidazolium bromide 15 in 41% yield (Scheme 8). Its structure was confirmed by spectroscopic data; for example, in the 1H NMR spectrum the characteristic singlets of H(2) and H(4)/H(5) appeared at 11.90 and 7.68 ppm, respectively, and the ratio of their intensities was 1:2. On the other hand, the 13C NMR spectrum revealed the absorptions of C(2) and C(4)/C(5) at 137.0 and 120.2 ppm, respectively. Conclusion The present study demonstrates that the heterocyclization reaction of α-hydroxyiminoketones with formaldimines leading to 2-unsubstituted imidazole 3-oxides can efficiently be performed with N-(adamantyloxy)formaldimine, and 2-unsubstituted 1-(adamantyloxy)imidazole 3-oxides were obtained in high yields. In general, they react similarly to their 1-alkyl analogues and undergo deoxygenation without removal of the adamantyloxy fragment. The sulfur-transfer reactions provide access to the new group of 1-(adamantyloxy)imidazole-2-thiones, which are potentially of interest for medicinal chemistry. The acetic anhydride assisted isomerization reaction of a new N-oxide leads to the corresponding imidazol-2-one. However, the attempted thermal isomerization of a 1-(adamantyloxy)imidazole N-oxide resulted in decomposition of the starting material. The alkylation experiments performed with 1-bromopentane and 1-bromododecane showed that 1-(adamantyloxy)imidazole 3-oxides are more reactive than their 1-adamantyl analogues, and the corresponding 1,3-dialkoxyimidazolium bromides were obtained in high yields. Attempted adamantylation of a 1-(adamantyloxy)imidazole 3-oxide with 1-bromoadamantane or adamantan-1-yl trifluoroacetate was unsuccessful. Nevertheless, the corresponding symmetric 1,3-di(adamantyloxy)imidazolium bromide was obtained in the reaction of adamantyloxyamine with glyoxal, formaldehyde and hydrobromic acid. By treatment with triethylamine, it was deprotonated to form the corresponding imidazol-2-ylidene, which reacts with elemental sulfur to yield the expected 1,3-dihydro-2H-imidazole-2-thione. Thus, the elaborated protocols provide straightforward access to diverse 1-(adamantyloxy)imidazole 3-oxides as well as to 1,3-dialkoxyimidazolium salts, which are attractive substrates for syntheses of other imidazole derivatives, including a new group of 1-alkoxy- and 1,3-dialkoxyimidazol-2-ylidenes. In all described cases, it is of interest to probe the influence of the alkoxy residues on the stability and reactivity of the hitherto unknown nucleophilic carbenes bearing these groups at the N(1) and/or N(3) atoms. In addition, it is worth mentioning that 1,3-dibenzyl-4,5-dimethylimidazolium chloride as well as its 2-methyl-substituted analogue are well-known imidazole alkaloids, which have found wide application in some regions as foodstuffs and medical supplies [33]. The method described in the present study opens straightforward access to their benzyloxy analogues, potentially bioactive compounds, which are not yet known. General procedure for the preparation of imidazole 3-oxides 7 (procedure A): A solution of 5 mmol of 6 and 4.2 mmol of the corresponding α-hydroxyiminoketone 2 in 15 mL of glacial acetic acid was stirred magnetically overnight at rt. Then, 4.5 mL of conc. hydrochloric acid was added in small portions. Stirring was continued for 15 min, and after this time the acetic acid was evaporated.
The semi-solid residue was triturated with a portion of diethyl ether, and the crystalline material was separated by filtration. Next, the colorless crystals were dissolved in a portion of MeOH (10 mL) and, with magnetic stirring, sodium hydrogen carbonate was added in small portions until the evolution of carbon dioxide ceased. The mixture was stirred overnight and filtered, and the filtrate was evaporated to dryness. The crude product was triturated again with diethyl ether and, after a few minutes, filtered off. In some instances, products obtained thereby were dried over freshly prepared molecular sieves (4 Å) to give hydrate-free samples. Preparation of non-symmetric 1,3-dialkoxyimidazolium bromides 13a-g (general procedure): The corresponding imidazole N-oxide 7 (1 mmol), dissolved in 2-3 mL of chloroform, was treated with the alkylating reagent 12a (227 mg, 1.50 mmol) or 12b (262 mg, 1.05 mmol), added dropwise at rt. The reaction mixtures obtained thereby were stirred magnetically at rt, and the progress of the reaction was monitored by TLC (SiO2, CH2Cl2). When the starting 7 was completely consumed, the solvent was evaporated and the residue was triturated with diethyl ether. In most cases, solid, colorless or beige-colored products were formed. In the case of 13e, a semi-solid product was formed; no crystallization was observed even after several days at rt. Similar properties were displayed by product 13d, which did not crystallize at all and was analyzed as an oily material. Preparation of the symmetric imidazolium salt 15: To a solution containing 145 mg of glyoxal (40% aqueous solution), adamantyloxyamine (4, 334 mg, 2 mmol), and paraformaldehyde (30 mg, 1 mmol) in 4.5 mL of glacial acetic acid, hydrobromic acid (2 mmol, 0.36 mL of a 32% solution in AcOH) was added in one portion, and the mixture obtained thereby was stirred magnetically overnight at rt. The next day, the solution was evaporated to dryness, and the residue, obtained as a thick oil, was purified chromatographically on a short silica gel column using a CH2Cl2/MeOH (97:3) mixture as the eluent. The required product was isolated as a crystalline material. Additional crystallization from a diisopropyl ether/CH2Cl2 mixture afforded analytically pure imidazolium salt 15 (a worked stoichiometry and yield check for this preparation is sketched below). Supporting Information Supporting Information File 1: Experimental and analytical data and copies of NMR spectra.
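As a quick plausibility check on the scale of the preparation of 15, the following worked calculation relates the stated reagent quantities to the 1:2:1 (glyoxal/4/formaldehyde) stoichiometry and the reported 41% yield. The molar mass of 15 used below assumes the molecular formula C23H33BrN2O2 for the bromide salt; this formula is not stated explicitly in the text and should be treated as an assumption.

```latex
% Glyoxal is the limiting reagent: 145 mg of a 40% (w/w) aqueous solution
% contains 0.40 x 145 mg = 58 mg of C2H2O2 (M = 58.04 g/mol).
% M(15) assumes C23H33BrN2O2 (cation C23H33N2O2+ plus Br-).
\begin{aligned}
n_{\text{glyoxal}} &= \frac{0.40 \times 145\ \text{mg}}{58.04\ \text{g/mol}} \approx 1.0\ \text{mmol} \quad (\text{limiting reagent})\\
n_{\mathbf{4}} &= \frac{334\ \text{mg}}{167.25\ \text{g/mol}} \approx 2.0\ \text{mmol}, \qquad
n_{\text{CH}_2\text{O}} = \frac{30\ \text{mg}}{30.03\ \text{g/mol}} \approx 1.0\ \text{mmol}\\
m_{\mathbf{15}}^{41\%} &\approx 0.41 \times 1.0\ \text{mmol} \times 449.4\ \text{g/mol} \approx 184\ \text{mg}
\end{aligned}
```

On these assumptions, the reagent amounts indeed correspond to a 1:2:1 molar ratio, and roughly 180 mg of 15 would be expected from a run at this scale.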
Statistical Optimization of Culture Conditions for Protein Production by a Newly Isolated Morchella fluvialis
Morchella fungi are considered a good source of protein. The ITS region was used to identify a Morchella isolate from the northern region of Iran. The isolated fungus was very similar to Morchella fluvialis; this is the first isolation of M. fluvialis in Iran. The dried biomass of M. fluvialis contained 9% lipids and 50% polysaccharides. The fatty acid profile of M. fluvialis lipids is mainly made up of linoleic acid (C18:2) (62%), followed by palmitic acid (C16:0) (12%). Testosterone (TS) was also detected (0.732 ng/dry weight biomass (DWB)) in the hormone profile of this newly isolated species. Various protein and carbon sources were then applied as variable factors to identify the key substrates stimulating protein production, using the one-factor-at-a-time method. The key substrates (glucose and soybean) were statistically analyzed to determine the optimum protein content and DWB accumulation using response surface methods. The highest protein content (38% DWB) was obtained in the medium containing 80 g/l glucose and 40 g/l soybean powder. Total nutritionally indispensable amino acids and conditionally indispensable amino acids constitute 55.7% of the crude protein. These adequate quantities of essential amino acids in the protein of M. fluvialis make it a good and promising source of essential amino acids for the human diet. Introduction Morchella are edible fungi belonging to the class Ascomycetes and are closely related to the simpler cup fungi in the order Pezizales. The ridges and pits of the cap, which create a honeycomb structure, are the features most frequently used to recognize these distinctive fungi. These prized and delicious fungi have been found in China, India, Turkey, the Himalayas, and Pakistan. For years, the number of species was the subject of taxonomic controversy, while current phylogenetic analyses indicate that seventy species of Morchella have been recognized all over the world. Various research studies have been conducted on the phylogeny, biogeography, taxonomy, and nomenclature of this genus to identify new species all over the world. In spite of the fact that the primary trait of Morchella species is high continental endemism and provincialism [1,2], transcontinental species were also discovered [3-6]. In addition, the varied ecological potential of Morchella species encompasses symbiotic, endophytic, and saprotrophic abilities [7-11]. The Morchellaceae family features a wide diversity of bioactive components with curative properties [12]. High protein content, along with unique flavor and medicinal properties [13], is the main characteristic for which this species is considered a famous edible mushroom [14]. These fungi were found to have antiviral, antioxidative, and anticancer properties [15-17]. Submerged fermentation is advantageous due to its cost-effectiveness, low temperature requirement, effective contamination control, and shorter fermentation time. With these promising factors, submerged fermentation is frequently used to enhance the vital components of Morchella [14,18,19]. That is to say, various research studies on Morchella were conducted in submerged fermentation (SMF) to produce vital components such as antioxidants and polysaccharides [14,20,21]; however, SMF has rarely been applied to optimize protein production.
After the fungi were isolated, they were identified by PCR, and the fatty acid profile, hormone profile, total protein content, and polysaccharide content of the isolated fungus were analyzed. SMF was used to produce protein; the protein content was optimized using the one-factor-at-a-time method and RSM. Finally, the amino acid profile of Morchella was analyzed under the optimal condition, which gave the highest protein content. Materials and Methods 2.1. Fungi Culture. The mushroom was found at the lower elevations of mountainous areas in the northern region of Iran, in Gorgan, Golestan Province. The fruiting body was split using a sterilized surgical blade, and a patch of the fruit body with the least contact with its surroundings was removed and placed on an enriched PDA medium containing mineral elements. Molecular Identification. To identify the fungal species, the sample was first incubated in a seed culture medium. Then, DNA was extracted using a DNA extraction kit (K721, Thermo, USA). The quantity and quality of the extracted DNA were determined using a spectrophotometer and agarose gel, respectively. The ITS region (ITS1, 5.8S, ITS2) was amplified with the universal primers ITS1F (5′-GCATATCAATAAGCGGAGGAAAAG-3′) [22] and ITS4 (5′-TCCTCCGCTTATTGATATGC-3′) [23], with an initial denaturation for 5 min at 94°C, then 30 cycles consisting of denaturation at 94°C for 30 sec, annealing at 55°C for 30 sec, and extension at 72°C for 1 min, followed by a final extension at 72°C for 5 min. Sterilized distilled water was used as a negative control. PCR products were electrophoresed in a 1.2% (w/v) agarose gel. In order to ensure accurate identification, the elongation factor EF1-α gene was also amplified using the 1577F (5′-CARGAYGTBTACAAGATYGGTGGG-3′), 1567RintB (5′-ACHGTRCCRATACCACCRAT-3′), and 2212R (5′-CCRAACRGCRACRGTYYGTCTCAT-3′) primers [1,24]. PCR conditions were identical to the method described above, except that the annealing temperature was 62°C. Sequencing was done by Takapouzist Company (http://www.takapouzist.com). BLAST searching of the ITS sequences was performed, and the sequences were aligned using Mega 6.0 software. The UPGMA clustering method was used to construct the phylogeny in Mega 6.0 with 1000 bootstrap replicates, and the consensus tree was also drawn in Mega 6.0 (an illustrative sketch of this UPGMA step is given at the end of this section). Analytical Methods. To extract polysaccharides, mashed dry mycelia were immersed in hot water (1:20, w/v ratio) at 60°C for 3 h. The polysaccharides were analyzed by the phenol-sulfuric acid method [25]. Fatty acids were analyzed by gas chromatography (Unicam 4600, England) with a flame ionization detector (FID) [26]. Amino acids were analyzed and determined by ion-exchange chromatography with postcolumn ninhydrin derivatization. Amino acids were oxidized with performic acid, which was neutralized with Na metabisulfite. Amino acids were liberated from the protein by hydrolysis with 6 N HCl for 24 h at 110°C and quantified against an internal standard by measuring the absorption of the ninhydrin reaction products at 570 nm [27,28]. Freeze-dried biomass of the fresh fungal tissue was used to extract hormones, and a fully automated immunochemical analyzer (Cobas e 411) was used to determine the quality and quantity of the fungal hormone profile.
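The UPGMA tree-building step described above can be illustrated with a minimal Biopython sketch. It assumes an existing multiple sequence alignment of the ITS sequences; the file name its_alignment.aln and the simple identity distance model are illustrative assumptions, and the 1000 bootstrap replicates used in the study are omitted here for brevity.

```python
# Minimal sketch: UPGMA tree from an ITS alignment using Biopython.
# Assumes "its_alignment.aln" (Clustal format) already exists; the file
# name and the 'identity' distance model are illustrative choices only.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("its_alignment.aln", "clustal")

# Pairwise distance matrix from the alignment (simple identity model).
calculator = DistanceCalculator("identity")
distance_matrix = calculator.get_distance(alignment)

# UPGMA clustering, as used for the phylogeny in this study.
constructor = DistanceTreeConstructor()
tree = constructor.upgma(distance_matrix)

Phylo.draw_ascii(tree)  # quick text rendering of the resulting tree
```

In practice, bootstrap support values (as reported in the study) would be obtained by resampling the alignment columns, for example with the consensus utilities in Bio.Phylo.Consensus.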
Results The low growth rate and, particularly, the thick mycelium structure were the main characteristics of Morchella used to differentiate it from fungal contaminants. EF1-α and ITS region genes were used for species identification, and the results indicated that the isolated fungus had the highest similarity (99%) to M. fluvialis (Figure 1). The ITS sequence of the isolate was deposited in the NCBI database under accession number MK011022; this is the first report of M. fluvialis from Iran. Hormone Analysis of M. fluvialis Tissue. Testosterone (TS) can be chemically synthesized from androst-4-ene-3,17-dione (AD) [30]. Some varieties of microorganisms, including yeasts [31-34] and filamentous fungi [35], are able to enzymatically convert AD to TS. Among the various microbial resources for TS production, fungal species are able to produce a wide variety of enzymes that act on the sterane skeleton in high yield [35]. Table 1 indicates that the fruiting body of this fungus is potentially a good source of various hormones. TS was observed in the M. fluvialis hormone profile, so this species could be used as a good and reliable source of this vital component. Nitrogen Sources. Various protein sources were used as variable factors. The highest DWB content was obtained in the medium containing soybean protein, while the lowest amount of DWB was obtained in the media containing inorganic nitrogen sources such as ammonium nitrate and urea (Figure 3). Various researchers have shown that the protein source has a significant effect on DWB accumulation in fungal species [36,37]. Soybean protein had a greater impact on DWB than the yeast extract media (Figure 4). The results were in agreement with those of Park et al. [36], who reported that soy protein was ranked as an appropriate medium for secondary metabolite production. Park et al. [36] reported that gradual consumption of the poorly soluble soybean powder protein was the main factor stimulating secondary metabolite production. Carbon Sources. Carbon sources were examined as variable factors, and 20 g/l soybean, the best protein source for inducing protein production, was added to each medium. The starch substrate supported the highest DWB accumulation. Zhang et al. [38] reported that Morchella esculenta was a good source of enzymes that, under optimal conditions, could assimilate starch, reducing it from 64.5% to 23.5%. The least DWB was observed in the medium containing glucose (Figure 5). However, a high amount of protein accumulation was observed in this medium. It could be concluded that the glucose substrate stimulated M. fluvialis to produce a high protein content rather than DWB accumulation (Figures 5 and 6). The results of the one-factor-at-a-time method revealed that soybean protein and the glucose substrate were of vital importance for inducing the highest protein production in M. fluvialis. Optimization by RSM. Optimization aims to attain a mathematical model for predicting the relationship between responses and independent variable factors. The statistical RSM approach requires fewer experiments than a complete factorial design [39]. The face-centered central composite design (FCCCD) of the RSM served to determine the appropriate quantity of each aforementioned factor and to analyze their interactions with respect to protein and DWB accumulation. A wide range of these two key factors (glucose and soybean powder) was applied, according to previous studies, to investigate the effect of each factor and their interactions on the responses.
The central composite design of the response surface method is provided in Table 2. The results indicated that soybean protein had a greater impact on DWB and protein accumulation than glucose. Thus, with a slight increase in the protein source, biomass increased sharply (runs 6 and 9). Any increase in glucose content at a given constant level of soybean also produced a rise in DWB and in the protein content of DWB (runs 2 and 9). The results obtained by FCCCD were then analyzed by analysis of variance (ANOVA) and used to fit a second-order polynomial equation. As shown in Tables 3 and 4, the linear effects of soybean powder and glucose on the amounts of DWB and protein were significant (P < 0.01). The interaction of glucose and soybean powder was not significant for either response (P > 0.01). The quadratic effect of the carbon source on the quantity of protein production was significant (P < 0.05) (Table 4). The coefficient of determination (R2) for both protein and DWB was 0.98, indicating how well the regression models matched the data. It could be concluded that the regression models were able to predict the relationships between culture conditions (glucose and soybean powder) and responses (protein and DWB content) well. Also, the lack of fit of the final model was nonsignificant, which indicated a good fit of the model. The fitted equations for DWB (Y1) and protein content (Y2) as functions of the glucose (X1) and soybean powder (X2) levels take the general second-order form Yi = b0 + b1X1 + b2X2 + b12X1X2 + b11X1^2 + b22X2^2 (1); an illustrative numerical sketch of this fitting and optimization step is given at the end of this section. Figure 7 shows the effect of glucose and soybean powder on the quantities of DWB. The results showed that increases in glucose and soybean powder stimulated DWB production. The highest amount of DWB was obtained at high quantities of glucose (60-70 g/l) and soybean powder (40 g/l). Jin et al. [37] reported that the protein substrate had a positive effect on DWB production. Figure 8 shows the effect of different levels of glucose and soybean powder on protein accumulation. Increases in soybean powder (to 40 g/l) and glucose content (to 68 g/l) produced a high accumulation of protein in DWB. Research conducted by Reihani and Khosravi-Darani [40] showed that the nitrogen source had a significant effect on protein production in single-cell protein fungi. The optimal predicted values of the variable factors for producing the highest protein content in DWB (36.9%) were 68 g/l glucose and 40 g/l soybean powder. Verification of Optimal Conditions. In order to verify the model, the optimum values of DWB and protein production predicted by RSM were experimentally verified. RSM predicted that the appropriate culture condition for protein production was 68 g/l glucose and 40 g/l soybean powder, and that the appropriate culture condition for DWB production was 69.58 and 40 g/l glucose and soybean powder, respectively. After 5 days of fermentation, the actual protein content in the mentioned media was 38% and the DWB content was 2.2%. Compared with the predicted values, the error rates were 2% and 1% for protein and DWB production, respectively. Anupama and Ravindra [41] indicated that the best proportion for maximum protein production was 1.38 parts carbon to 1 part nitrogen; the ratio was 1.75/1 in the present study.
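The fitting and optimization step behind equations of form (1) can be illustrated with a short numerical sketch. The design points and responses below are placeholders chosen only to demonstrate the mechanics of a two-factor second-order fit over a face-centered CCD; they are not the measurements from Table 2.

```python
# Minimal sketch of the RSM step: fit a second-order model in two factors
# (X1 = glucose g/l, X2 = soybean powder g/l) and locate the optimum.
# The design points and responses below are illustrative placeholders,
# NOT the measurements reported in this study.
import numpy as np
from scipy.optimize import minimize

# Face-centered CCD for two factors: 4 corner, 4 face, 1 center point.
X = np.array([[40, 20], [80, 20], [40, 40], [80, 40],
              [40, 30], [80, 30], [60, 20], [60, 40], [60, 30]], float)
y = np.array([25, 30, 33, 37, 29, 34, 28, 36, 32], float)  # protein % DWB (placeholder)

def design_matrix(X):
    x1, x2 = X[:, 0], X[:, 1]
    # Columns: intercept, x1, x2, x1*x2, x1^2, x2^2 (matches equation (1))
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)  # least-squares fit

def predicted(x):
    x1, x2 = x
    return beta @ np.array([1.0, x1, x2, x1 * x2, x1**2, x2**2])

# Maximize the predicted response within the experimental region.
res = minimize(lambda x: -predicted(x), x0=[60, 30],
               bounds=[(40, 80), (20, 40)])
print("optimum (glucose, soybean):", res.x, "predicted response:", -res.fun)
```

With the study's actual responses in place of the placeholders, the maximizer found within these bounds would correspond to the reported optimum near 68 g/l glucose and 40 g/l soybean powder.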
Discussion The phylogenetic tree revealed that the isolated fungus was M. fluvialis, belonging to the Morchellaceae family, which was first described by Clowez et al. [42]. This fungus, which was first isolated in Spain, is similar to M. esculenta [42]. In spite of the similarity between M. fluvialis and M. esculenta, research studies on protein production using this fungus have rarely been done. Various research studies indicated that M. esculenta had a high potential for protein production [43]. In line with the present research, LeDuy et al. [44] showed that the amount of protein in this fungus reached 32.7% of the DWB. The digestibility of edible fungal protein ranged between 72% and 84%, which has been a main feature of these edible fungal species [45]. Substrate components, fermentation conditions, and the fungal species are among the vital factors impacting the amino acid profile and quantity of the protein [46]. Roy and Samajpati [47] reported that the amount of protein in M. esculenta and M. deliciosa was 34.7% and 29.16%, respectively. The crude protein of the fungus was lower than that of meat, while it was higher than that of most foods, including milk [48]. The protein content of M. esculenta was typically between 19% and 35%, compared with rice (7.3%), wheat (12.7%), corn (9.4%), and soybeans (38.1%) [49,50]. Protein accumulation reached 39% of DWB under optimal conditions, which constituted a significant proportion of M. fluvialis biomass. In edible fungi, lipid content is generally lower than carbohydrate and protein content [51]. Research has shown that lipids obtained from edible fungi contain more structural unsaturated fatty acids [29], with linoleic acid, a structural fatty acid [52], being predominant. Yilmaz et al. [53] reported that unsaturated fatty acids are predominant in edible fungi. Heleno et al. [29] reported that unsaturated fatty acids were higher than saturated fatty acids in M. esculenta. Linoleic, oleic, and palmitic were the predominant fatty acids in the lipid content of M. esculenta. In comparison with the research studies done, the fatty acid profile of M. fluvialis was highly similar to that of M. esculenta. Furthermore, linoleic acid was the predominant fatty acid in both fungi. Hasan [54] and Fernández Cabezón et al. [55] reported that various fungal species, such as Aspergillus flavus, A. ochraceus, Gibberella zeae, Cladosporium cladosporioides, Penicillium funiculosum, and P. rubrum, were capable of producing high amounts of hormones. Many research studies indicated that gut microflora plays a great role in estrogen and phytoestrogen production [56-58]. This research indicated that the fresh tissue of M. fluvialis was a good source of TS. Verma et al. [59] reported that edible fungi have a high quantity of essential amino acids, similar to meat protein. In 1976, Hayes and Haddad [51] reported that the essential amino acids explored in fungal species were of vital importance for use as dietary supplements. Under optimal conditions, the DWB of M. fluvialis was made up of 38% protein. Considering that 77.38% of the total protein includes NH3, the total nutritionally indispensable amino acids and conditionally indispensable amino acids comprised 43.1% of the crude protein, which makes M. fluvialis a good source of essential amino acids. Conclusion The appreciable content of hormones, polysaccharides, and valuable protein was the main feature of this newly isolated fungus. The phylogenetic tree revealed that this species is M. fluvialis, isolated for the first time in Iran. The protein content of M. fluvialis was 36% DWB.
The nutritionally indispensable amino acids and conditionally indispensable amino acids made up 28.7% and 14.4% of the total protein, respectively, making this fungus a vital source of essential amino acids. Data Availability The original data used to support the findings of this study are included within the article. Conflicts of Interest The authors declare that they have no conflicts of interest.
Association between Lameness and Indicators of Dairy Cow Welfare Based on Locomotion Scoring, Body and Hock Condition, Leg Hygiene and Lying Behavior Simple Summary Lameness is a major welfare issue in dairy cows. Locomotion scoring (LS) is mostly used in identifying lame cows based on gait and postural changes. However, lameness shares some important associations with body condition, hock condition, leg hygiene and behavioral changes such as lying behavior. These measures are considered animal-based indicators in assessing welfare in dairy cows. This review discusses lameness as a welfare problem, the use of LS, and the relationship with the aforementioned welfare assessment protocols. Such information could be useful in depicting the impact on cow welfare as well as in reducing the occurrence of lameness in dairy herds. Abstract Dairy cow welfare is an important consideration for optimal production in the dairy industry. Lameness affects the welfare of dairy herds by limiting productivity. Whilst the application of LS systems helps in identifying lame cows, the technique meets with certain constraints, ranging from the detection of mild gait changes to on-farm practical applications. Recent studies have shown that certain animal-based measures considered in welfare assessment, such as body condition, hock condition and leg hygiene, are associated with lameness in dairy cows. Furthermore, behavioural changes inherent in lame cows, especially the comfort in resting and lying down, have been shown to be vital indicators of cow welfare. Highlighting the relationship between lameness and these welfare indicators could assist in better understanding their role, either as risk factors or as consequences of lameness. Nevertheless, since the conditions predisposing a cow to lameness are multifaceted, it is vital to cite the factors that could influence the on-farm practical application of such welfare indicators in lameness studies. This review begins with the welfare consequences of lameness by comparing normal and abnormal gait as well as the use of LS system in detecting lame cows. Animal-based measures related to cow welfare and links with changes in locomotion as employed in lameness research are discussed. Finally, alterations in lying behaviour are also presented as indicators of lameness with the corresponding welfare implication in lame cows. Introduction Intensive farming systems are now common practice to meet the increasing demand for milk in different parts of the world. This has led to the introduction of dairy cows to an environment arbitrarily different from the cows' natural habitat, thereby triggering a range of welfare consequences. An animal is said to be in good welfare when it is able to express its innate behavior, free from distress and fear, in the absence of pain, and in good health [1]. However, these fundamentals of optimal welfare are often lacking with the advent of confining cows and persistent demands for high milk yield. As a result of these practices, outcomes such as chronic pain, discomfort, increased susceptibility to infectious disease and metabolic or physical fatigue are now common in dairy cows within intensive farming systems [2]. Lameness is a multifactorial condition and the most important welfare problem in dairy cows. Lameness is also regarded as a cause of economic loss owing to a reduction in milk yields, lowered reproductive performance and an increased risk of culling [3,4]. 
Farmers are often reported to underestimate the prevalence of lameness, which fosters a low perception of its impact on cow welfare, health and production [5,6]. With the rising occurrence of lameness in dairy herds globally, attempts to reduce the impact on welfare and production are needed. Locomotion scoring (LS) is widely used in detecting lame cows; gait properties are described to classify the severity on a numerical scale [7]. Factors such as the limited time farmers have to observe lame cows, inadequate knowledge and inconsistencies in the application of LS have encouraged the exploration of automated systems for lameness detection [8,9]. Nevertheless, the practical applications of the LS system on farms are limited. However, certain animal-based measures such as body condition scoring (BCS), hock condition and leg hygiene have been employed in assessing cow welfare, with recent findings suggesting vital associations with lameness. For instance, thin cows with a low BCS (defined as BCS < 2 on a 5-point scale) and poor hock condition have been reported to have a higher likelihood of becoming lame [10,11]. In some other studies, infectious causes of lameness and claw horn lesions were related to poor leg hygiene [12,13]. Amongst the behavioral alterations used in assessing cow welfare, resting or lying down activities have been described as potential indicators of lameness in dairy cows [14]. This review gives a brief introduction to lameness, the gait changes that are used in detecting lame cows and the application of LS. The association between lameness and the aforementioned animal-based measures is discussed in relation to cow welfare. Lying behavioral changes and their potential role as indicators of lameness are also highlighted. In each section, factors that could influence the practical application of these measures as indicators of lameness and welfare are discussed. Lameness in Dairy Cows Lameness is a production-limiting disease and is regarded as the third most likely cause of the culling of dairy cows after mastitis and infertility [15]. Accordingly, lameness is an essential welfare problem, as studies have reported symptoms of distress and pain in affected dairy cows [16,17] as well as a negative impact on intrinsic behaviors such as lying down [18]. In lame cows, economic losses accrue with respect to reproductive performance. Milk yields might also be affected, but such losses remain undetected except where farm records are effectively monitored. Moreover, concerns such as under-diagnosis and effects on high-producing cows further complicate the problem of detecting ongoing loss [19,20]. The welfare implication is the likelihood of lame cows being in pain, stress and unhealthy conditions in the herd without being detected. In addition, farmers' awareness of the welfare implications of lameness problems has generally been reported to be low [6]. Even where farmers perceive lameness as a problem, another factor contributing to its underestimation is the adaptive behavior of cows, which conceal signs of pain by restricting gait changes until the condition becomes severe [21]. In this regard, the search for techniques and indicators for a timely diagnosis of subclinical to clinical lameness in dairy herds becomes important. However, there is a need to present the definition of lameness and to understand normal locomotion performance in sound cows in order to appreciate any alteration.
Definition of Lameness Lameness is defined as the clinical presentation of impaired locomotion [22]. Olechnowicz and Jaskowski [23] described lameness as any condition characterized by alteration of gait caused by injury to the hoof or limb. However, a more elaborate definition combines the aforementioned features: the clinical manifestation of painful disorders, presenting either as impaired mobility or as abnormal gait and posture connected to problems in the locomotor system [21]. The degree of severity varies based on the type and location of the injury. Possible outcomes of the injury range from stiff or asymmetrical limb movement to a non-weight-bearing presentation. Nevertheless, severe cases could result in lateral recumbency and increased lying duration in the affected cow [18]. Hence, gait changes arising from pain, together with behavioral changes, are important manifestations of lameness. Description of Sound Locomotion In order to identify lame cows, it is pertinent to understand the parameters that define a normal gait. Measures involving the association between limbs, stride movement in footfall patterns and limb-to-claw movement have been used in describing animal gait. A stride incorporates three major features: walking, decisive steps and a specific direction [21]. Hence, a stride can be seen as a vector quantity based on its distance and directional components. In cows, a stride results in shortening of the limb and flexion of the joints as the hip, knee, hock and digital flexors lift the limb above the ground. Philips et al. [24] divided the strides seen in cows' locomotion into swing, support and suspension phases. The first entails the lifting of the limb above the ground, leading to gradual extension of the joints. In the support phase, the limb makes contact with the ground and exerts further force on the solar area before the next swing phase. The suspension phase is the moment in which all the limbs make contact with the ground; hence, for a cow to walk, there cannot be a suspension phase with support provided by only two or three limbs. In addition, normal locomotion, according to Hildebrand [25], should display a shorter duration of support compared with the swing time. Additional descriptions of gait include the duration of the stance and swing phases during one stride [25] as well as the time intervals between successive movements of the rear or fore limbs [26]. Telezhenko et al. [27] described the spatial association between the limbs in the form of track-way diagrams. The system incorporated measures of movement rate such as stride length and tracking, coordination of the limbs and maintenance of balance (an illustrative computation of such track-way measures is sketched below). However, even in sound cows, certain cow-level factors have been suggested to affect locomotion, such as lactation stage, age and cow height [21,27]. These factors need to be considered when applying any system of scoring to assess gait properties.
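To make the track-way measures above concrete, the following sketch computes stride length, tracking distance and lateral abduction from two-dimensional footfall coordinates. The coordinates are invented for demonstration only; real systems would obtain them from pressure mats or video tracking.

```python
# Illustrative sketch: track-way style gait measures from 2-D footfall
# coordinates (x = direction of travel, y = lateral offset), in metres.
# The coordinates below are invented for demonstration purposes.
import numpy as np

# Successive imprints of one side: each fore imprint paired with the
# hind imprint that should land near the same spot in sound gait.
fore = np.array([[0.00, 0.10], [1.55, 0.11], [3.10, 0.09]])
hind = np.array([[0.05, 0.16], [1.50, 0.18], [3.02, 0.15]])

# Stride length: distance between successive imprints of the same hoof.
stride_lengths = np.linalg.norm(np.diff(fore, axis=0), axis=1)

# Tracking distance: how far the hind hoof lands beyond (positive) or
# short of (negative) the preceding fore imprint along the travel axis.
tracking = hind[:, 0] - fore[:, 0]

# Abduction: lateral offset between hind and corresponding fore imprints.
abduction = np.abs(hind[:, 1] - fore[:, 1])

print("stride lengths (m):", stride_lengths.round(2))
print("tracking distance (m):", tracking.round(2))  # persistently negative values suggest under-tracking
print("lateral abduction (m):", abduction.round(2))
```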
In another study, lame cows displayed step overlap and negative tracking distance [29]. Step overlap is either a reduced or increased extension of stride between the limbs, in which the hind limbs fail to be placed in the same position as the fore limbs immediately after the previous stride. In the same vein, lameness was indicated by increased abduction, seen in the lateral distance between the fore-claw imprint and the corresponding placement of the rear claw [28,29]. Changes in gait features such as asymmetry in step length, width and timing between the right and left limbs during locomotion have also been reported in lame cows. In this context, Van Nuffel et al. [21] found that gait inconsistency manifests as an initial alternation between short and normal strides, progressing to persistently shorter strides as the severity of lameness increases.

Alterations in Posture and Presentation of Body Movements

Several postural changes are common in lame cows, including the presentation of the limbs when standing, the presentation of the back and the position of specific parts of the body during locomotion. As shown in Figure 1, the two cows display the typical stance of a non-lame (right) and a lame cow (left). The hocked posture of the cow on the left is suggestive of lameness, as such a stance is adopted to relieve pain in the lateral claw [30]. Van der Tol et al. [31] showed that the lateral claw bears the majority of the body weight compared to the medial part during locomotion. Nevertheless, the hocked posture may be absent if the fore limbs or the medial claw are affected, or if multiple claw lesions are present. The presentation of an arched back, either when standing still or during locomotion, has been associated with lameness in dairy cows [6]. Such a posture has been linked to an attempt to counteract uneven weight distribution, depending on the limbs affected. Head bobbing - in the form of either nodding or vertical movement - in time with the moment the claws touch the ground has also been reported as a typical feature of lameness [28].

Alteration in Weight Bearing

Non-lame cows normally display even weight distribution as a result of the balance between the claws and the ground reaction force [32].
However, lame cows, in an attempt to reduce pain, redirect their body weight to the unaffected limbs [33]. Hence, the measurement of ground reaction force and weight bearing while standing could be crucial in the assessment of lameness. According to Pastell et al. [8], more weight is often transferred to the healthy hind limbs if lameness occurs symmetrically in the front limbs; conversely, weight is rarely directed to the front limbs when the cause of lameness is present in the hind limbs [8]. This was further established in later studies quantifying weight distributions and leg-weight ratios between sound and lame limbs [34]. Another important aspect is that weight bearing on unaffected limbs can also be induced when cows kick: for a cow to kick, support must be provided by one rear limb, which bears most of the weight in the process. Chapinal et al. [34] and Chapinal and Tucker [35] found that lame cows exhibited increased stepping and kicking behavior during milking compared to non-lame cows. However, there have been conflicting reports on the inclusion of increased kicking frequency as an indicator of lameness, as a similar behavior could be linked to the presence of teat or udder injuries.

Locomotion Scoring in Dairy Cows

Locomotion scoring (LS) is a useful assessment tool in the study, monitoring and prevention of lameness in dairy herds [36]. It entails the observation of well-described gait and postural features as a cow walks on a flat surface. The five-point LS method developed by Sprecher [7] is one of the most frequently employed methods in lameness research; the presence or absence of an arched back is an essential feature assessed in this system [37]. Generally, the fundamental and consistent signs used to detect lameness when applying LS include stride length, asymmetrical steps, back presentation (the presence of an arched back) and the transfer of weight to the unaffected limbs, depending on the severity of lameness in dairy cows [38]. A minimal sketch of how such a scoring logic can be encoded is given below.
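The sketch below maps the gait and posture indicators just listed onto a five-point score. The indicator set, the mapping and the score >= 3 "clinically lame" cut-off are simplifying assumptions for illustration, loosely inspired by the five-point logic rather than the published Sprecher protocol.

```python
# A rough sketch of mapping gait/posture indicators onto a five-point
# locomotion score. Indicators, mapping and the >= 3 cut-off are
# simplifying assumptions, not the published protocol.

from dataclasses import dataclass

@dataclass
class GaitObservation:
    arched_back_standing: bool
    arched_back_walking: bool
    head_bob: bool
    short_asymmetric_strides: bool
    reluctant_weight_bearing: bool   # a limb favoured or not borne at all

def locomotion_score(obs: GaitObservation) -> int:
    if obs.reluctant_weight_bearing:
        return 5   # severe: inability or refusal to bear weight on a limb
    if obs.head_bob and obs.short_asymmetric_strides:
        return 4   # obvious lameness: deliberate, asymmetric steps
    if obs.arched_back_standing and obs.arched_back_walking:
        return 3   # arched back both standing and walking
    if obs.arched_back_walking:
        return 2   # back arches only during walking
    return 1       # normal gait

def is_clinically_lame(obs: GaitObservation, threshold: int = 3) -> bool:
    return locomotion_score(obs) >= threshold

cow = GaitObservation(arched_back_standing=True, arched_back_walking=True,
                      head_bob=False, short_asymmetric_strides=False,
                      reluctant_weight_bearing=False)
print(locomotion_score(cow), is_clinically_lame(cow))   # 3 True
```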
The first detailed LS system in cattle was described by Manson and Leaver [39], using a nine-point scale with specific features including tenderness, abduction and ease of walking or rising. Subsequently, LS in cows was categorized into five classes focusing on features such as gait asymmetry and locomotion difficulty [40]. Head bobs were included as a gait indicator of lameness by Breuer et al. [41], before Flower and Weary [42] introduced tracking up and joint flexion as additional measures. The LS system developed by the Welfare Quality® Assessment Protocol for Cattle [43] entails the observation of irregular steps, the rhythm between successive claw placements and the different times for which weight is borne on each of the four feet. These gait indicators were recently employed by Van Nuffel et al. [44] to categorize cows as non-lame, mildly lame or severely lame. However, LS methods are mostly applied in free-stalls and are limited to the assessment of gait changes in response to pain; ultimately, the diagnosis of the lesions causing lameness requires proper examination of the limb. In tie-stall herds, stance features such as the rotation of the feet away from the body midline, foot resting, repeated shifting of weight between limbs and uneven weight bearing during sideways movement are mostly assessed for lameness detection. According to Leach et al. [45], two or more of these indicators need to be present for a cow to be considered lame.

Reliability of LS Systems

There have been reports of certain weaknesses inherent in the use of LS, such as the difficulty of identifying cows at early stages of lameness and the lack of detailed description of the specific gait changes in affected animals [46,47]. A vital limitation is the subjectivity of the technique, owing to issues of intra-observer and inter-observer agreement and reliability [37]. Reliability depends on the quality and homogeneity of the sampled population in a herd and on the capability of observers to distinguish between lame and non-lame cows during LS [48]. Agreement, on the other hand, is the capability of observers to assign similar locomotion scores to sampled cows [49]. According to Schlageter-Tello et al. [37], the lack of a gold-standard test and the degree of training among observers are major factors influencing agreement in the application of manual LS. The most widely used statistical measure of agreement in manual LS is the proportion of agreement (PA), for which the acceptance threshold for a good estimate is 75% [44]. Improvements in PA estimates were reported when LS scales were reduced from five levels to two, comprising lame and non-lame [37,43]; a worked example of this effect is sketched below. According to Channon et al. [50], one reason for variability is the unspecific description of the criteria in LS systems; observers may find it difficult to differentiate between moderately lame and mildly lame cows, as defined in some LS systems. Horseman et al. [5] suggested a similar reason for the variability in lameness prevalence estimates between farmers and veterinarians applying LS. To improve the reliability of LS in lameness studies, factors with the potential to influence locomotion performance need to be considered, including parity [51], walking surface [27], anatomical conformation [52], claw trimming [53], and the degree of udder distension and lactation stage [9].
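The sketch promised above illustrates the proportion-of-agreement calculation and the effect of collapsing a five-point scale to a binary one. The observer scores are invented, and the score >= 3 "lame" cut-off is an assumption.

```python
# Minimal sketch: proportion of agreement (PA) between two observers
# scoring the same ten cows, on the full five-point scale and after
# collapsing to binary lame / non-lame classes. Scores are invented;
# the score >= 3 "lame" cut-off is an assumption.

def proportion_of_agreement(scores_a, scores_b):
    matches = sum(a == b for a, b in zip(scores_a, scores_b))
    return matches / len(scores_a)

def collapse_to_binary(scores, lame_from=3):
    return ["lame" if s >= lame_from else "non-lame" for s in scores]

observer_1 = [1, 2, 3, 2, 4, 1, 3, 5, 2, 1]
observer_2 = [1, 3, 3, 2, 3, 1, 2, 5, 2, 2]

pa_five = proportion_of_agreement(observer_1, observer_2)
pa_binary = proportion_of_agreement(collapse_to_binary(observer_1),
                                    collapse_to_binary(observer_2))
print(f"PA, five-point: {pa_five:.0%}")   # 60%
print(f"PA, binary:     {pa_binary:.0%}") # 80%
# agreement rises when the scale is collapsed, mirroring the reported
# improvement in PA when LS levels are reduced from five to two
```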
Other approaches include periodic retraining in order to reach acceptable levels of inter-observer reliability [46]. Authors have also reported improvements in the sensitivity of LS methods through the addition of further gait indicators, such as stride length, asymmetrical steps and tracking up [27], or head bobbing, tracking up and joint flexion [42]. In addition to manual LS systems, automated systems involving computerized kinematic techniques, sensors and accelerometers [9,34,54] have been developed to detect lame cows and the presence of specific claw lesions. As reviewed by Schlageter-Tello et al. [37], these systems have been reported to have high diagnostic values, with sensitivity (Se) of 39-90% and specificity (Sp) of ≥80%. However, the use of automated systems for detecting lame cows is limited, as their validation relies on the LS system as the gold standard. Given diagnostic values of 39-90% Se and around 80% Sp in the few available studies, the Sp values indicate that automated systems are mostly accurate in detecting non-lame cows, in contrast to truly affected cows. Nevertheless, a higher diagnostic value (Sp of 91.7%) was reported in a recent study in which accelerometers and sensors were employed to detect slight lameness (LS of 2.5) by assessing standing bouts and walking speed [55]. Overall, there are limited studies on the agreement and reliability of most automated lameness-detection systems, as only a few studies report their diagnostic properties.

Behavioral Features and On-Farm Practical Applications

Over time, certain behavioral features of dairy cows have been shown to influence the reliability of LS systems. Cows are known to hide notable signs of lameness in the presence of an observer, a behavior that evolved to evade predators [46]; gait changes might therefore only be presented when lameness is at an advanced stage. Also, individual cows may adapt differently to potential causes of lameness, and gait changes might therefore reflect the animal's capacity to withstand ongoing pain. Flower et al. [9] showed that cows walked more soundly, with longer strides, after milking than before it; the social behavior of the cows or the reduced distension of the udder were suggested as reasons. For practical application of LS, the aim is to assign scores to cows as they move undisturbed on a flat, non-slippery surface for long enough to assess multiple strides. On most farms, however, provisions to accommodate these criteria are lacking. The presence of manure and floor design might also influence the frictional and compressional forces that mediate mobility [31,44], and stall designs might limit the opportunity to observe multiple strides. Accordingly, Flower et al. [28] reported high variability (76%) in outcomes when only short strides were captured in assessing lame cows. Another factor influencing the practical use of either visual LS or automated systems is farmer preference: Van De Gucht et al. [56] reported that farmers who attach more importance to lameness are more willing to accept automated systems, while visual LS was preferred by all farmers.

Association between Walking Surface Types and Locomotion Performance

Housing design is vital for the maintenance of good welfare in dairy cows. The influence of floor type on locomotion performance in dairy cattle was first suggested by Albright in 1997.
Subsequently, floor features such as abrasiveness and hardness leading to insufficient friction and traction - as found in concrete floors (CF) - were suggested to have a negative impact on the claw health and locomotion of dairy cows [31,57]. In this context, cushioning floor surfaces, such as rubber flooring (RF), have been reported to improve gait properties, including reduced muscular activity in the hind limbs [58] and stride lengths similar to those seen on pasture [59]. Additionally, the influence of floor type on the occurrence of lameness has been reported extensively; it has been linked to prolonged standing and walking on hard or abrasive surfaces leading to sub-optimal claw health, including claw horn lesions (CHL). Fjeldaas et al. [60] reported that the risk of a higher LS was three times greater for cows on CF than on RF. In a study evaluating locomotion performance in dairy herds on CF and straw yards, 46% of all observed gaits in the cubicle-housed group were scored as lame, compared with 1% in cows on straw yards [61]. Similarly, comparing locomotion in lame and non-lame cows on RF and CF, Telezhenko et al. [27] showed that moderately lame cows walked with a significantly wider posture on CF than on RF, and that the same group on CF had a smaller step angle than their non-lame counterparts, whereas there was no significant difference between non-lame and lame cows on RF [27]. Specifically, cows affected with digital dermatitis (DD) walked significantly better on a straw yard than the same group on CF: about 81% and 1% of all gaits observed on the straw yard were scored as normal and clinically lame, respectively, whereas 46% and 27% of gaits on CF were scored as normal and lame [62]. This suggests that differences in locomotion performance exist between lame and non-lame cows, and within lame cows, on various flooring systems. Rubber flooring offers better comfort to a cow's hoof; by improving mobility, however, it might mask the presence of claw lesions, especially at the subclinical stage when detectable changes in locomotion are absent.

Body Condition Scoring and Association with Lameness

Body condition scoring (BCS) has been described as a technique for assessing the condition of livestock at particular periods, to achieve an equilibrium between economic feeding, yield and adequate welfare [63]. BCS is a manual assessment whose outcome is recorded on a numerical scale ranging from thin through good to grossly fat. Leach et al. [64] explained that body condition is included in welfare evaluation in order to identify animals that are too thin or too fat, since body reserves in both cases are linked to an increased likelihood of disease. The association between body condition and lameness has been studied extensively. Lame cows are believed to lose body condition over time owing to changes in feeding habits or to pain affecting feed conversion [65]. Recent findings have shown that cows in low body condition are more likely to become lame [11,66]. In relation to cow welfare, the association between BCS and lameness has been studied by considering its effect on measures of productivity.
In one study of BCS changes between cows with and without claw horn lesions (CHL) and their corresponding conception rates, cows in good BCS without CHL produced more milk and were more likely to conceive than those in low (thin) or high (fat) BCS, with or without CHL [67]. Similarly, on the widely used 1-5 BCS scale, cows with BCS < 2.5 (thin) were associated with an increased risk of lameness in the subsequent zero to two months for all causes of lameness, and at two to four months for claw horn lesions (sole ulcer and white line disease) [68]. An important structure within the claw capsule that has been established to play a crucial role in the development of CHL is the digital cushion (DC), or fatty pad [69]. The DC serves as a shock absorber for the pedal bone (third phalanx), which bears most of the weight of the cow at the claw-floor interface. However, the pedal bone becomes unstable in the peri-parturient period owing to hormonal changes, predisposing the internal structures of the claw capsule to displacement injuries [69]. The DC is also not well developed in first-lactation animals until the second and third lactations and is often depleted in thin cows with low BCS [70]. Lame cows affected with CHL have been characterized by a thin DC, suggesting that these cows might have been in low body condition prior to the onset of lameness, the protective function of the DC for the sole and white line having been compromised [11]. Randall et al. [11] reported that low BCS 8-16 weeks beforehand was associated with an increased risk of repeated lameness, and low BCS three weeks beforehand with the first occurrence of lameness. Similarly, findings from another study highlighted the importance of maintaining cows in good BCS to minimize the risk of developing CHL [67]. However, recent findings suggest that thinness of the sole tissue does not necessarily arise from the depletion of body fat and the DC, but may also be due to other factors such as the integrity of the suspensory apparatus, calving, herd and lesion presence [66]. In contrast, one study reported increased odds of lameness in cows with high BCS (≥4.25) [71]; the basis for this association needs to be investigated more thoroughly. Some authors have related it to increased weight in the pelvic region being transferred to the hind limbs, causing overload. Another likely pathway is the nutritional change in fat cows as they approach calving: reduced appetite and low fibre intake increase susceptibility to ruminal acidosis and the onset of laminitis [72]. In one study, European dairy cows affected with subclinical ketosis were reported to have increased odds of lameness [73].

Hock Condition and Lameness Occurrence

Although claw lesions remain among the major causes of lameness in dairy cows [74], hock lesions and injuries are becoming a persistent problem on intensively managed dairy farms [75]. The term "hock lesion" describes various anomalies such as hair loss, visible wounds, broken skin, and localized or general swelling of the hock [76]. In dairy cows, the absence of fatty tissue and muscle around the hock makes the region prone to trauma and skin damage. Consequently, the development of hock lesions is directly influenced by the nature of the lying surface, particularly where it is hard and abrasive [77]. In welfare assessment, the lateral aspect of the hock is often examined and is suggested to be the most affected area.
Poor hock condition often manifests as hair loss, swelling or ulceration [78]. The hock condition score (HCS) measures the severity of hock lesions on various scoring scales, based on features ranging from normal to substantial injury. The assessment is important in free-stalls and loose cubicle housing, as such systems encourage movement and interaction with stall fittings. One of the simplest hock scoring systems was described by Rutherford et al. [79], using a two-point scale divided into (1) no skin damage and (2) damaged skin at various levels. The advantages of such an HCS system are its repeatability and the reduced inter- and intra-observer variability of the results, as found with the use of lower scoring scales in LS; however, the system lacks the ability to capture the several distinct manifestations of hock injury. Selected HCS methods employed in the assessment of cow welfare, and their clinical descriptions, are presented in Table 1.

Several studies have demonstrated the inter-relationship between the occurrence of hock lesions and lameness in dairy cows. An earlier study in the United States (USA) by Whay [47] suggested that 80% of the 53 dairy farms investigated needed to reduce hock lesions in order to minimize the incidence of lameness. In an investigation of the factors associated with hock lesions, a higher incidence was reported in inorganic herds (49.7%) and free-stalls (46.0%) than in organic herds (37.2%) and straw yards (25%) [79]. Housing cows in free-stalls with less access to pasture grazing has previously been reported to increase the incidence of the claw lesions that cause lameness [2,85]; such housing conditions might also favour the occurrence of hock injuries and lameness. Chapinal et al. [34] found a positive correlation between lameness and hock injuries and suggested that reporting and monitoring the prevalence of both conditions could help improve cow welfare in dairy herds. Several authors have reported similar findings, showing that hock lesions ranging from hair loss to severe ulcers are associated with higher locomotion scores (LS > 3) and lameness occurrence (Table 2). As highlighted for the other welfare assessment systems, certain environmental factors could influence the association between hock condition and lameness occurrence. For instance, the level of comfort provided by the lying surface might influence the severity of hock lesions as well as the risk of lameness [86]. Hence, the pathogenesis of hock lesions and the direction of the relationship with lameness need to be investigated: severe hock lesions could initiate painful sensations leading to lameness, while prolonged lying on hard, abrasive surfaces by lame cows might precipitate hock injuries. Another factor that might contribute to the occurrence of severe hock injury is floor slipperiness. A notable technique for assessing the slipperiness of floors in dairy housing was developed by Grandin [87], based on the frequency of slips and falls within a specific period. A recent study reported higher odds of cows being lame (odds ratio, OR = 2.0) and of having hock lesions (OR = 1.4) when reared on slippery floors compared with non-slippery floors [13]. Telezhenko et al. [88], in a recent study of gait analysis and skid resistance on different flooring systems in dairy housing, showed that rubber mats had the highest coefficient of friction and skid resistance compared with concrete and mastic asphalt floors.
This further indicates a lower tendency to slip in cows housed on rubber mats or floors. Overall, these findings show that preventive measures for hock lesions have the potential to reduce the incidence of lameness, contributing to a general improvement in cow welfare.

Table 2. Selected studies and findings involving the association between lameness and hock lesions in dairy cows.

Reference | Housing Type | Association between Hock Lesions and Lameness | Other Findings
Zurbrigg et al. [12] | Tie-stall herds, 317 farms in Ontario, Canada | Prevalence of hock lesions and of lameness based on arched back and rotation of the hind claw were 44%, 3.2% and 23%, respectively | Faulty design of stall dimensions
Nash et al. [89] | Tie-stall herds, 100 farms in Ontario (n = 60) and Quebec (n = 40), Canada | |

Leg Hygiene Score and Lameness Occurrence

Cleanliness is a significant aspect of animal welfare through its links with lameness and mastitis. In the assessment of cow welfare, Napolitano et al. [93] included the genital area, the back of the udder and the lower part of the hind limbs in scoring cow cleanliness, also known as the cow hygiene score. Cook [94] described a leg cleanliness scoring system based on the level of manure contamination of the lateral aspect of the lower hind legs. Recent findings by Solano et al. [13] indicated that the assessment of leg cleanliness could enhance understanding of the association between lameness and herd cleanliness. However, since most of these studies were cross-sectional, the role of leg hygiene, either as a risk factor for lameness or as a consequence of lameness, needs further investigation. Rodriquez-Lainz et al. [95] first pointed out that infectious causes of lameness, such as digital dermatitis (DD), could be attributed to unhygienic environments that promote the growth of pathogenic organisms capable of invading the digital skin. DD is of greater significance in confined cows, especially in free-stalls where exposure to manure slurry is persistent, predisposing cows to poor leg hygiene scores [96]. DD has also been described as a lameness condition potentiated by unhygienic environments, dirtier herds and persistent exposure of the hooves to contagious agents [97,98]. Generally, the pathogenesis of DD and the role of leg hygiene are still being investigated through changes in claw traits, either as risk factors for the development of DD or as the outcome of an ongoing problem [53]. However, authors have reported a positive relationship between leg cleanliness and the prevalence of DD: cows with predominantly dirtier legs showed a higher risk (OR = 2.44) of developing DD [80], and Solano et al. [99] found that poor leg cleanliness at the cow level was associated with a higher prevalence of active DD lesions. A sketch of how such an odds ratio is calculated is given below. In agreement with the multifactorial nature of lameness, environmental factors such as floor design can influence hygiene and the risk of infectious claw lesions. In this context, grooved CF is likely to retain more manure slurry after scraping than textured CF, and grooved CF was accordingly identified as a risk factor for a high prevalence of DD in some UK dairy herds [100]. Similarly, cows on slatted floors with a manure scraper had lower odds of developing interdigital horn erosion than those on standard slatted floors. In addition to manure slurry, damp conditions that expose the cows' feet to moisture also increase the risk of DD [101].
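The odds ratios quoted in this and the preceding section (OR = 2.44 for dirty legs and DD, OR = 2.0 for slippery floors and lameness) are derived from two-by-two tables of exposure and outcome; the sketch below shows the calculation. The counts are hypothetical, chosen only so that the result matches one reported value; they are not the data of the cited studies.

```python
# Sketch of how an odds ratio such as OR = 2.44 (dirty legs and DD) is
# derived from a two-by-two exposure/outcome table. Counts are
# hypothetical, not the data of the cited studies.

def odds_ratio(exposed_cases, exposed_noncases,
               unexposed_cases, unexposed_noncases):
    odds_exposed = exposed_cases / exposed_noncases
    odds_unexposed = unexposed_cases / unexposed_noncases
    return odds_exposed / odds_unexposed

# hypothetical herd: cows with dirty vs clean legs, DD present or absent
or_dd = odds_ratio(exposed_cases=44, exposed_noncases=90,
                   unexposed_cases=20, unexposed_noncases=100)
print(f"OR for DD given dirty legs: {or_dd:.2f}")   # 2.44
```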
Importance of Lying Behavior and Lameness Occurrence

The ability of an animal, whether in its natural or an artificial habitat, to exhibit its natural behavior is of great welfare importance [1]. Lying down is a behavioral need of the dairy cow: ideally, a cow lies down for about 12-14 h per day and sleeps for around 30 min within that time. The importance of lying down ranges from adequate rest and efficient rumination to greater space for the movement of other cows and the maintenance of claw health through drying off [102]. Additionally, one study reported a 30% increase in blood flow to the mammary gland when cows lie down, leading to higher milk yields [103]. The amount of time allocated to resting (12-14 h/day) gives an insight into the significance of this natural behavior for the well-being of the cow. Lying and resting are indicators of welfare, and studies have suggested several ways of quantifying these behaviors, including the ease of performing the activity [104], total lying time, the number of lying bouts, and the duration of lying-down and getting-up sequences [105]. Deviations from the budgeted lying time affect the time allocated to feeding and standing, in the form of a compensatory reaction [106]. A notable outcome is longer standing time, which could contribute to the development of claw lesions, especially on hard, wet and abrasive surfaces [107]. Studies have revealed variations in lying time between lame and non-lame cows: lame cows lie down for roughly 38 min to 0.6 h/day longer, and with longer bouts [14,108], and a high LS has been reported to be associated with increased lying time and more frequent bouts [109]. A few studies have demonstrated the impact of specific claw lesions on lying behavior: lame cows with severe DD were observed to spend more time lying down on CF than on straw yards [61], and lying time was reported to be highest in cows affected with DD, followed by sole ulcers [110]. In addition to being an indicator of lameness, lying behavioral changes could also be applied in assessing the risk of lameness. Consequently, automated systems for measuring lying time, with the ability to detect mild changes in lying behavior, have been employed in lameness studies [109]; a minimal sketch of how such measures are derived is given below. In a recent study, cows presenting longer lying times and longer bout durations had 3.7 and 1.7 times increased odds, respectively, of being lame compared with non-lame cows [105]. Necharitzky et al. [18] also reported that lame cows affected with claw horn lesions lay down significantly longer than healthy cows.

Practical Applications and Limitations of Lying Behavior Assessment

Well-known environmental factors that have been evaluated in association with lying behavior and lameness are bedding and stall design. Dairy cows have been reported to prefer lying down on softer surfaces, irrespective of the condition of their limbs [111]. One study pointed to a higher incidence of clinical lameness in cows on rubber mats (24%) compared with those on sand (11.7%) in a confined dairy herd; the same study reported longer standing times in non-lame cows on rubber mats, indicating discomfort in lying down compared with sand [105]. Similarly, in non-lame cows, lying duration was greatest on rubber mats compared with sand and concrete [112].
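As an illustration of what the automated lying-time systems mentioned above compute, the sketch below derives daily lying time and bout statistics from a minute-by-minute posture classification. The one-minute sampling interval, the labels and the example day are assumptions for illustration.

```python
# Illustrative sketch: deriving daily lying time and bout statistics
# from a minute-by-minute posture classification, as leg-mounted
# accelerometer systems do. Sampling interval, labels and the example
# day are assumptions.

from itertools import groupby

# one label per minute over 24 h: "L" = lying, "S" = standing/walking
minutes = ["L"] * 420 + ["S"] * 180 + ["L"] * 300 + ["S"] * 240 + ["L"] * 300

bout_lengths = [sum(1 for _ in run)
                for label, run in groupby(minutes) if label == "L"]

total_lying_h = sum(bout_lengths) / 60
mean_bout_min = sum(bout_lengths) / len(bout_lengths)

print(f"lying time: {total_lying_h:.1f} h/day in {len(bout_lengths)} bouts, "
      f"mean bout {mean_bout_min:.0f} min")
# 17.0 h/day here - above the typical 12-14 h budget, the kind of
# deviation flagged as a possible sign of lameness in the studies above
```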
Therefore, applications of lying behavior in lameness studies need to take into consideration factors such as comfort, stall design, milking patterns and feeding management, as these could influence normal locomotion. The direction of the relationship between lying behavior and lameness also needs further elucidation: there are indications that lameness might induce changes in lying behavior, or the other way around [44]. In addition, conditions other than lameness can induce longer lying times and more bouts in cows. Hence, the assessment of lying behavior might best serve as a trigger for further examination of the cow, in a similar manner to the LS system; this might entail routine claw assessment for the presence of ongoing lesions or injuries. There also appears to be a complex relationship when assessing the welfare implications of lameness on lying behavior in dairy cows: a longer duration of lying down in lame cows might predispose the hock area to infection, depending on the hygiene of the lying surface and overall herd cleanliness, and the proximity of the udder to a lying surface with persistent exposure to manure might contribute to secondary infections such as mastitis [113]. However, more research is needed to establish the nature and direction of these relationships.

Conclusions

The application of LS for the identification of lame cows requires well-defined criteria for gait features in order to improve the reproducibility of results, and its practical application is limited by the availability of adequate farm facilities to ensure accurate outcomes. The other welfare assessment protocols discussed herein are associated with lameness as a potential risk factor at the cow level, and lying behavioral changes are a potential indicator of lameness in the free-stall system. A better understanding and demonstration of the relationship between lameness and these assessment scoring systems could enhance farmers' awareness of the welfare implications of lame cows and promote the provision of good welfare.

Conflicts of Interest: The authors declare no conflict of interest.
Translating Coetzee: a panel discussion

It was my privilege to be asked to chair a panel on translating J.M. Coetzee at the very end of the 'Reading Coetzee's Women' Conference held at the Monash Campus in Prato, Italy, from 27 to 29 September 2016, and now to present an account of that occasion containing versions either of the papers presented at the meeting itself by three of the four invited participants who were able to attend, or submitted to me by two others who could not.1 The text of the contributions has been very lightly edited by me on the few occasions where it seemed this might help to convey the writer's meaning with maximum clarity in idiomatic contemporary English, and then submitted to the author for approval. In the same way, a draft of the following essay as a whole has been submitted to all the participants for comment or suggested amendment. The result, I hope, is that this essay can be considered in large measure as collaborative work - or, indeed, a conversation. Inevitably, the range of languages covered here is very partial and, indeed, a matter of chance. Before the conference began we were worried that, as a result of participants accepting and then finding themselves forced to withdraw and send in messages to be read on their behalf, real live translators would be rather thin on the ground, reduced to stalwart representatives from Holland and Italy. But we were in luck, discovering as conference delegates arrived that among them were two more Coetzee translators. Furthermore, these were of special interest: representing China and South America respectively, they would enable us to move out beyond the Eurocentric focus originally envisaged. Since both of them kindly agreed to take part at very short notice, we could go a little further than we had hoped, offering a glimpse of the world reception of a writer who, unquestionably, already belongs in his lifetime to Goethean Weltliteratur.

Peter Bergsma

Sentence by sentence, my prose is generally lucid, in the sense that the syntactic relations among words, and the logical force of constructions, are as clear as I can make them. On the other hand, I sometimes use words with the full freight of their history behind them, and that freight is not easily carried across to another language. My English does not happen to be embedded in any particular sociolinguistic landscape, which relieves the translator of one vexatious burden; on the other hand, I do tend to be allusive, and not always to signal the presence of allusion. Dialogue comes with its own set of problems, particularly when it is very informal and incorporates regional usages, contemporary fashions and allusions, or slang. My dialogue is rarely of this kind. For the most part its character is formal, even if its rhythms are more abrupt than the rhythms of narrative prose. So hitting the right register ought not to be a problem for the translator.2

After having translated 18 books by John Coetzee - the first, Waiting for the Barbarians, in 1983 - I can only concur with his analysis. But I must add that I am fortunate in that Coetzee speaks Dutch. From that very first book he has read my translations prior to publication and I have been able to ask him questions. So if I turn out to be insufficiently aware of the 'full freight of history', or overlook or misinterpret certain 'allusions', he is the first to point it out. Having said this, you might start to think that translating Coetzee is easy, at least for a Dutch translator. That is not the case at all.
A novel consists, after all, not just of the 'full freight of history' and 'allusions'; there is also such a thing as style. And there is - as you well know - something special about Coetzee's style. Coetzee's English is rather like his mathematics; or, in another exaggerated comparison, it is something like what French became to Samuel Beckett. The peculiarity in Coetzee's case is that although he was born into English (unlike Beckett, whose French was acquired), its naturalness was gradually lost. Coetzee's is therefore an English shorn of the identity markers of Englishness.3 Indeed, Attwell writes, 'Coetzee has speculated that he might not have a mother tongue.'4 And that's not something you can just ignore as a translator. What David Attwell calls rather 'mathematical English' is so restrained and frugal that a translator has to do his or her utmost to maintain its peerless efficiency. What it comes down to is that you mustn't use a single superfluous word in the translation. In general, people assume that Dutch translations will use around ten per cent more words than the English original, because of the structures of the two languages. But in the case of John Coetzee, I think that kind of expansion would be verging on criminal. For a number of years now his books have first appeared in Dutch, six weeks before being released in English, and, as a result, I have had the privilege of receiving his manuscripts in Microsoft Word, so I can see exactly how many words there are in the original English. In general I try not to exceed that number by more than two or three per cent. At first that required a process of endless cutting and reconstruction; nowadays I translate much more with that conciseness in mind and have - I hope - developed a variety of Dutch that does justice to Coetzee's English.

But besides the style there is another problem that I encounter while translating Coetzee - and many other English-language authors as well - the 'formal and informal you'. Although the English forms 'thou' and 'thee' have long fallen into disuse, translators into languages that do still make a distinction between formal and informal pronouns - French, German, Spanish, and Dutch too - are constantly required to choose. When making this choice, two things are crucial. First, the character of the author. Is he or she someone who tends towards formality or informality? I am quite confident in my belief that John Coetzee has a tendency to formality. In fact, I have already quoted him on his own dialogue: 'For the most part its character is formal.' The second consideration is the rules of your own language. Dutch, for instance, is less formal than French or German, but more formal than Spanish. In Dutch you wouldn't start out by addressing a stranger over the age of 25 as 'jij', the informal 'you', but that can change in the space of ten minutes. In French and German, on the other hand, one sticks to formal terms of address for much longer. I'll give two examples that caused me some problems. The first is from the opening chapter of The Childhood of Jesus. When Simón arrives at the 'Centro de Reubicación de Novilla' with little David, he finds 'a young woman, who greets him with a smile' behind the desk. In Dutch it is to be expected that Simón, in this situation, will address the young woman, who is called Ana, formally as 'u'.
Simón and Ana are in regular contact in Chapters 2 and 3, for instance when Ana arranges temporary accommodation for Simón and David, and in these circumstances too it is only natural for Ana and Simón to address each other as 'u'. But then, in Chapter 4, Ana invites Simón and David to join her for a picnic in the park, an event that soon leads to an argumentative atmosphere with sexual undertones in which Simón, who clearly finds Ana attractive - despite realising that she is unattainable for him - addresses Ana by her first name for the first time: 'Are you one of those nuns, Ana, who have left the convent behind to live in the world? To take on jobs that no one else wants to do - in jails and orphanages and asylums? In refugee reception centres?'5 As it would be completely incongruous to retain the formal 'u' here in Dutch, I had no choice but to switch - more or less abruptly - to the informal 'jij'.

My second example is from the recently published Schooldays of Jesus and involves the relationship between Simón and the museum attendant Dmitri. Both begin by addressing each other with the formal 'u', but then, in Chapter 11, an enraged Simón confronts Dmitri in the museum and snaps at him: 'David tells me that you have been inviting children from the Academy into your room. He also tells me you have been showing him pictures of naked women. If this is true, I want you to put a stop to it at once. Otherwise there will be serious consequences for you, which I don't need to spell out. Do you understand me?'6 Even though Simón and Dmitri have systematically addressed each other formally in the preceding chapters, it would be ridiculous in Dutch to maintain that in this conversation, so here too I needed to change abruptly from the formal 'u' to the informal 'jij'. Again, I had the great advantage that John Coetzee knows Dutch and always reads my final version, so that in both of these cases - and in many other similar instances in his other books - I was able to ask his advice. Fortunately he agreed with my decisions and - equally fortunately - no Dutch reviewers have taken exception.

To conclude, I would like to thank the organisers for inviting me to this highly instructive congress and giving me the opportunity to speak here today, even though I have barely touched upon the subject of the conference, 'gender'. And I would like especially to thank John Coetzee and his Dutch publisher, Cossee, for the faith they have shown in me as a translator for the last 33 and 15 years respectively.

Reinhild Böhnke

Reinhild Böhnke is another seasoned Coetzee translator enjoying close personal links with the author. She and her family are prominent members of the Leipzig intelligentsia, with a special link to Coetzee, as she explains, through a mutual love of Bach, which has now and again drawn the novelist to the annual Bach festival in Leipzig (no one who has read The Schooldays of Jesus in particular can doubt that Bach is a major presence in Coetzee's work). Her husband Gunter is a noted writer and performer in Leipzig and Saxony: he and their son Dietmar have themselves rendered valuable service to literature in English through the discovery and publication of a cache of letters from Dickens to his Leipzig publisher Tauchnitz.
The extent of Coetzee's faith in her work is shown by his unbroken insistence that she be entrusted with the task of translating him, at a time when, by and large, most translators trained in the GDR have found themselves neglected in favour of their West German counterparts, who benefit from closer links to the major publishing houses. Having initially hoped to attend the conference with her husband, Reinhild eventually found herself compelled to withdraw because of the pressure of other work. But she sent in the following text, which I read on her behalf:

In the first letter John Coetzee wrote to me (in 1998 it was not unusual to write letters) he mentioned J.S. Bach in connection with my home town of Leipzig. And I responded with my opinion: 'B-A-C-H ist Anfang und Ende aller Musik' ('Bach is the beginning and end of all music'). My father having been a vicar at St. Thomas' Church in Leipzig, I may say that I grew up with Bach's music. And later the connection became even stronger, because my younger son was a member of the famous 'Thomanerchor'. Thus the genius of Johann Sebastian hovered over our cooperation from the beginning. The 'cooperation' was of course rather one-sided: John offered generously to read my translations before publication and give useful advice. And he answered all my queries promptly, which has been invaluable. It was a lucky coincidence that I got to know some of Coetzee's novels before I got the chance to translate his texts. Up to 1989 his books were not available in my part of Germany, but one of our English friends brought Life & Times of Michael K, In the Heart of the Country and Waiting for the Barbarians with him, and I was fascinated by this literary voice from South Africa. Coetzee's spare prose and steely intelligence, his 'pregnant dialogue and analytical brilliance' (Swedish Academy citation for the Nobel Prize), have often been praised. I think it was an advantage that my first Coetzee translation was Boyhood - 'a text without ado but complex below the surface' (as I wrote to him). I have often been asked if it is difficult to translate Coetzee's texts - and I must say, it has always been a pleasure to translate such lucid texts, to probe for things below the surface and to discover allusions. The quest for the precise word or phrase has often been a challenge, and a cause of anguish when it couldn't be found. The complexity of 'disgrace', for example, is not rendered by 'Schande', since 'grace' - 'Gnade' - is not incorporated in it. It has also been a challenge to find the exact tone to convey the lightly ironic flavour of the later texts (beginning with Youth). Some texts which came as a surprise to me because of their unusual expressive style ('Letter of Elizabeth, Lady Chandos', 'He and His Man') provoked the translator to become even more creative. Some of the heroes in Coetzee's literary cosmos (Cervantes, Wordsworth, Dostoevsky) belong to familiar terrain. But time and again, Coetzee's texts led me to discover authors hitherto unknown to me (e.g. W.G. Sebald). Coetzee's familiarity with philosophical concepts induced me to study philosophers from Plato to Thomas Nagel, which has been quite a study project. I also want to mention that I felt privileged to experience at close hand experiments with the genre of the novel, such as essayistic forms penetrating into the novel and, in Diary of a Bad Year, the narrative element fighting its way back into the novel from below.
Some of Coetzee's characters had the potential to irritate the reader/translator - I am thinking of Elizabeth Costello above all others. The first thought was often: not her again ... But in the end this irritation leads to prolonged reflection on certain topics and has a productive outcome. And on the whole I have to confess that Coetzee's texts tend to linger and keep growing in my thoughts, which is quite an achievement.

It will be seen that there are a number of echoes of Peter Bergsma here. But Böhnke points to some other issues that are of importance for the Coetzee translator, and that crop up in the remarks of other participants. The first of these is the near-impossibility of finding exact equivalents in the target language for puns and the like, words in the original that convey two or more meanings at once. Her word for not being able to find an equivalent in German for 'Disgrace' is 'anguish': both Franca Cavagnoli and Jinghui Wang refer to the translator's 'trauma', the latter giving a fascinating account of her attempts to find a way of conveying a degree of double meaning in Chinese. She also raises the question of irony in Coetzee's later work, and the difficulties it presents for the translator. There are obvious pitfalls on every side - the translator may render it in a heavy-handed way, for instance, when the original is subtle and unobtrusive, or fail to render it altogether (the relatively lukewarm reception of Jane Austen outside the English-speaking world, for instance, may in part have to do with the problem her essential ironic indirection poses for translators). But Böhnke puts her aptitude for the task on display in her witty remarks on Elizabeth Costello. I know of at least two devotees of Coetzee, besides Böhnke and myself, whose estimate of this particular character in his work lies well this side of idolatry.

Miguel Temprano García

I come next to Miguel Temprano García, a Spanish translator who was also unable to attend the conference. Although he has translated only one novel by Coetzee into Spanish, The Childhood of Jesus, he too is a thoroughly prolific and experienced translator who has produced excellent Spanish versions of very varied literary classics in English. I first met him in Uruguay at a conference on W.H. Hudson's masterpiece The Purple Land, which he was then busy translating. His literary pedigree is impeccable (his father was a Professor of Literature and an expert on the important Spanish Modernist Ramón Gómez de la Serna), his linguistic prowess hardly less remarkable. Quite how he manages to combine teaching at a school in Majorca with producing a steady stream of translations of demanding works (Martin Chuzzlewit and Whitman's recently discovered The Adventures of Jack Engle, for instance, both within the past year) I don't know. As you will see in what follows, Temprano García takes off from the same perception of surface clarity and hidden difficulty noted by both Bergsma and Böhnke, taking it a little further, perhaps, in the direction of paradox by suggesting that translating supposedly 'easy' writing is harder than translating difficulty.

First of all, I would like to apologise for not having been able to attend this conference as I should have liked. Reinhild Böhnke, the German translator of several novels by John Coetzee, has described the style of one of his books as 'without ado but complex below the surface'.
That is exactly the way I felt when I started translating The Childhood of Jesus: apparently it was going to be an 'easy' translation, the elegant sentences flowed and offered no apparent problems, and the story, although it might look a bit strange at times, seemed quite clear. Of course, there is nothing I am more afraid of than an 'easy' translation. Experience has taught me that in the end it is much easier to translate an elaborate, even intricate style than the work of those authors with a so-called 'simple' style. I was very soon to find that this was precisely the case. One of the obvious difficulties of this particular work was that it is written in English and takes place in a Spanish-speaking country, but the characters speak neither English nor Spanish, so it could sometimes be tricky to make it clear which language the characters were speaking in a way that didn't sound strange to the Spanish reader. But I had encountered similar problems in other novels before, and had, more or less, coped with them. What I wasn't so prepared for were the subtle nuances hidden in an apparently crystal-clear text. I was lucky because Professor Coetzee, despite his reputation as a reclusive writer, is not one of those authors who hide behind their literary agent or refuse to cooperate with the translator; he also speaks Spanish, and that proved to be really helpful. When I wrote to ask him a couple of things that had come to my attention, he answered by pointing out several things that were not as evident or as easy to translate as I had thought. For instance, some of the characters speak of 'washing clean of old ties', and as there is no exact translation for that in Spanish, I had translated it as 'desembarazarse de los viejos vínculos', which means something like 'getting rid of old ties'. But when I asked Professor Coetzee, he insisted on my finding a verb with a 'watery content', to convey the idea that the two protagonists had crossed a sea or a river of forgetting. Finally, after some fruitful discussion, we decided to change 'ties' to 'memories', for in Spanish you can be washed clean of your memories but not of your ties! This sort of nuance lay hidden everywhere in the text, making the translation I had thought so easy at the beginning resemble a minefield, although, with Professor Coetzee's help, I hope I was able to deactivate most of the mines. Böhnke also refers to the 'challenge and anguish of the quest for the precise word or phrase', and again her remarks carry echoes for the Spanish translator (the translator of Disgrace into Spanish, for instance, had a very similar problem, because 'Desgracia' is a synonym of 'misfortune' and doesn't carry the meaning of 'dishonour' or 'shame'). I also found it difficult to find le mot juste in The Childhood of Jesus, a book that is so enigmatic and so full of linguistic and philosophical questions. It was something of a relief, however, to find that I shared this problem with the author: when I pointed out a very slight inconsistency in the text, the English version had already been published, and he told me to translate the passage as it stood, adding that he 'would have to live with the contradiction for the rest of his days'. I have the feeling that many critics, in Spain as elsewhere, took the view that this novel is 'minor' Coetzee.
The reason for this, I think, is that most of them did not notice the subtlety of nuance to which I have referred, which, in my opinion, makes The Childhood of Jesus as Coetzeean a novel as any of his other books. Sorry again for not having been able to attend the conference, which I am sure has been a great success.

What particularly interests me here is how Temprano García describes the process of translation, with the aid of the author's own comments, as a process of interpretation and discovery involving a thoroughly active critical awareness. He is saying something simple but vital: that the translator can determine the way a writer is seen by the audience for whom he translates. To be alive to the riches of the text he works on is to enhance its capacity to reach new readers; to be insensitive to them is to pass up that opportunity.

* * * * *

Cristobal Perez Barra

It is of course logical to follow consideration of Miguel Temprano García's work with discussion of that of Cristobal Perez Barra, the first of our serendipitously unearthed translators.

English and Spanish have in common the fact of being both global and transcontinental languages (although this has to be immediately qualified, for after 1973 Spanish ceased to be an official language in the Philippines, meaning that today the language is still transcontinental but, unlike English, only transatlantic). But then again, this transatlantic nature also has to be qualified, for the Spanish language is governed by the Asociación de Academias de la Lengua Española, of which the Real Academia de la Lengua Española and the 22 national academies are members. Together, they publish the Diccionario, the Gramática and the Ortografía. That is to say, if we allowed ourselves to play a Borgesian game and do away with colloquialisms, a short story by a writer in Tierra del Fuego and one by a writer in Catalonia could be written with identical words placed in identical order - which would not be the case if we took writers from, say, Maryland and Suffolk, and the English language, as an example. John Coetzee has said in interviews that he aims for a linguistically neutral English: his writing is not distinctively from South Africa or from England, from the United States or from Australia. Aiming to replicate this peculiarity, I try to choose wherever possible words and constructions that would be instantly recognisable to Spanish speakers both in Spain and in America (and by 'America' I mean the word used to express the cultural and linguistic community that existed between the Río Grande and Tierra del Fuego at the time of Imperial Spain). As I say, the institutional standardisation of the Spanish language enables me to do this. On the other hand, so that the narrative discourse shall not be interrupted, I keep footnotes to a minimum, even if the profusion of French expressions and references to other writers might well justify the inclusion of many more; I therefore insert only those I deem absolutely necessary for an adequate understanding of the text. There is one aspect of my translation that I knew from the beginning was bound to remain unsolved. When John Coetzee inserts expressions in Spanish into his English original, the reader is filled with a feeling of eeriness, of otherness, of a time out of mind, of which the regrettable but inevitable footnote reading 'En español en el original' can only give a pale reflection.
In this, as well as in many other matters, the translator is bound to betray the author (the traduttore is of course a traditore) and also to fail. However, this should not be disheartening to the practitioners of this difficult, often invisible but noble task. As Samuel Beckett said in Worstward Ho: 'Ever tried. Ever failed. No matter. Try again. Fail again. Fail better'. Once more, Perez Barra explores the translator's necessity of 'making strange', this time in a particularly interesting context. Instead of using the idiomatic colloquialisms of everyday Latin American Spanish - the language that would come naturally to him and his readers - he employs a kind of Spanish equivalent of BBC English, an international standard version of the language. I am impressed, not only by the penetrating grasp here of the cardinal issue of selecting the right linguistic register into which to translate a given text (particularly important, perhaps, in the case of languages where some version of the French Academy works to establish and police correct usage), but by the memorable Beckett quotation with which Barra concludes, providing us with what is perhaps the best brief description of the 'splendours and miseries' of the translator's role in the texts assembled here. And Perez Barra can presumably be enlisted among Elisabeth Costello's admirers, for his Coetzee translations thus far consist of versions of her lectures. * * * * * Jinghui Wang Our second miraculous find at the conference itself was Jinghui Wang of Tsinghua University in Beijing, China. At very short notice she provided us with the following precious contribution, delivered with inimitable humour and verve: Comparatively speaking, Coetzee's works are easy to translate, as his diction in the novels is always concise and clear, but this doesn't mean that there is no challenge in translating them. The particular challenge I would like to present to you might even be called a trauma, even if it can be seen to have positive as well as negative aspects. In Foe, the title itself functions as a pun. On the one hand, Foe, as a proper name, is associated with the writer of Robinson Crusoe: Daniel Defoe. History has it that the writer's original name was Daniel Foe, probably born in Fore Street in the parish of St. Giles Cripplegate, London. He later added the aristocratic-sounding 'De' to his name, claiming to be a descendant of the family of De Beau Faux. So, seen from this perspective, Coetzee makes use of 'Foe' to deconstruct 'Defoe', and thus announces his intention of probing into the past in a new, more faithful way, a strategy that echoes his giving voice to the female character Susan, the 'real' teller of Robinson Crusoe's story. On the other hand, 'foe', as a general noun, depicts the adversarial relationship between any number of binary pairs: writer and reader, master and servant, male and female, or even truth and fiction. Well, in English, the one word 'foe' can contain all these meanings. However, in Chinese the two sets of meanings of 'Foe' are expressed in two entirely different words. One is '福', the character left after the 'De' of 'Defoe' is taken off; and the other is '敌人', which means 'enemy'. So to translate the title into Chinese, I had to choose one or the other. I had to admit that this involved a kind of trauma. To choose one meaning means that the other meaning is lost forever.
It is perhaps not too far-fetched to speak of trauma here. Faced with such a trauma, I thought of drawing on a particular aspect of Chinese culture to add something more to the translation of this title - a kind of subliminal element that would enrich the title and make up for the perpetual loss of its original meaning in English. It turns out that, besides serving as a name, the character '福' chosen for the title also means 'luck'. And in Chinese tradition, every new year we stick a piece of red paper outside the door, on which the Chinese character for 'luck' (福) is placed upside down, symbolising 'the coming of good luck' (upside down it is pronounced 'dao', a phonogram of 'coming'). When visiting Australia, I was able to discuss my idea with J.M. Coetzee himself, and he also thought it was a good idea. So I suggested to the Chinese publisher of the book Foe that if the Chinese title were printed in this way, it would provide a parallel, in its particular way, to the polysemous nature of the original title. The Chinese publisher of this book liked my idea but didn't think it workable, for it would apparently fall afoul of the official regulations of the Chinese publishing bureau, making the book difficult or perhaps impossible to catalogue. Her response was totally understandable: it was her job to ensure, not only that the translation was truthful to the original version, but that it could in fact be published. So I suggested that they might consider using that upside-down character of 'luck' on the book cover as a background to the title. She said they would consider this, but in fact the final version of the Chinese Foe does not have the 'luck upside down' character, because the book belongs to a series with a standard cover. Anyway, this process - the trauma I experienced and have shared with you - will be familiar to translators all over the world. Translation is a losing game: the moment one starts to translate, one starts to lose meaning, as I did with the title of Foe. But perhaps at the very moment when double meaning has to become single meaning, some new aspect of the meaning of the text is revealed. Here, of course, we encounter difficulties of a quite different kind and on a quite different scale from any mentioned heretofore. But in many ways the principle illuminated in Wang's wonderfully vivid, hyperbolic account is the same as that faced by translators into European languages. That is to say, the translator cannot hope to preserve every feature of the original text, but must constantly attempt in some way to compensate for what is lost by insinuating it elsewhere in a different form. Here Wang had recourse (or attempted to have recourse, since her brainwave ultimately foundered on issues of publishing standardisation in China) to the powerful visual significances that the Chinese written character may convey. Translators using other scripts may lack this option, but they too perpetually seek to atone for the inevitable loss of meaning over which the translator presides. The most extended and ambitious of the contributions to this conversation will be published separately. Franca Cavagnoli reflects not only on the particular question of how to translate Coetzee but also on many of the general issues concerning the translator's impossible task that have arisen here. And, at the same time, this is the only contribution that gives detailed thought to gender issues in translated texts.
Cavagnoli is a very experienced and established practitioner of the craft, who also teaches translation at the University of Milan. Her searching and probing meditation on the art of translation in general, and on the problems of rendering Coetzee's female narrators in particular, can be read in a forthcoming issue of Australian Literary Studies.
Integrative Analysis of miRNA-mRNA in Ovarian Granulosa Cells Treated with Kisspeptin in Tan Sheep

Simple Summary Neurons produce kisspeptin, a peptide hormone that stimulates the pituitary gland to produce gonadotropin and regulate reproductive development. Granulosa cells exist in the ovaries of female animals, expressing hormone receptors and regulating follicular maturation and hormonal balance. The mechanism by which kisspeptin regulates the function of granulosa cells is still unclear. miRNA-mRNA sequencing was performed on ovarian granulosa cells treated with kisspeptin in Tan sheep to determine the molecular pathways involved. The sequencing results revealed that eight miRNAs significantly differed between the experimental and control groups. The results also indicated that several miRNAs and their target genes regulate steroid production and cell proliferation. This study's findings will help further explore the molecular mechanism of kisspeptin in the regulation of the function of ovarian granulosa cells in Tan sheep.

Abstract Kisspeptin is a peptide hormone encoded by the kiss-1 gene that regulates animal reproduction. Our studies revealed that kisspeptin can regulate steroid hormone production and promote cell proliferation in ovarian granulosa cells of Tan sheep, but the mechanism has not yet been fully understood. We speculated that kisspeptin might promote steroid hormone production and cell proliferation by mediating the expression of specific miRNAs and mRNAs in granulosa cells. Accordingly, after granulosa cells were treated with kisspeptin, the RNA of the cells was extracted to construct a cDNA library, and miRNA-mRNA sequencing was performed. Results showed that 1303 expressed genes and 605 expressed miRNAs were identified. Furthermore, eight differentially expressed miRNAs were found, and their target genes were significantly enriched in progesterone synthesis/metabolism, hormone biosynthesis, the ovulation cycle, and steroid metabolism regulation. Meanwhile, mRNA was significantly enriched in steroid biosynthesis, the IL-17 signaling pathway, and the GnRH signaling pathway. Integrative analysis of miRNA-mRNA revealed that the significantly different oar-let-7b targets eight genes, of which EGR1 (early growth response-1) might play a significant role in regulating the function of granulosa cells, and miR-10a regulates lipid metabolism and steroid hormone synthesis by targeting HNRNPD. Additionally, PPI analysis revealed genes that are not miRNA targets but crucial to other biological processes in granulosa cells, implying that kisspeptin may also indirectly regulate granulosa cell function by these pathways. The findings of this work may help understand the molecular mechanism of kisspeptin regulating steroid hormone secretion, cell proliferation, and other physiological functions in ovarian granulosa cells of Tan sheep.

Introduction Kisspeptin, a protein encoded by the kiss-1 gene, activates the hypothalamic-pituitary axis to advance puberty and trigger the release of gonadotropin-releasing hormone (GnRH) and luteinizing hormone (LH), promoting follicular development. Kiss-1 and kiss-1 receptors (kiss-1R) are not only expressed in the central nervous system, but also in the placenta [1], liver [2], pancreas [2], testis [3-5], uterus [2], and ovarian granulosa cells [6]. Ovary-derived kisspeptin regulates follicular development, oocyte maturation, and ovulation by either autocrine or paracrine signaling [7]. Ovarian granulosa cells are closely related to follicular development.
Previous studies from this laboratory have shown that kisspeptin could promote progesterone and estrogen secretion and ovarian granulosa cell proliferation in Tan sheep. However, the specific mechanism has not yet been reported. Studies have shown that miRNA plays a key role in various biological processes and can regulate ovarian granulosa cell proliferation, apoptosis, and steroid hormone secretion [8,9]. The expression of miRNA in tissues is significantly affected by hormones or cytokines [10]. It has been reported that FGF9 treatment increased the expression of miR-221, inhibiting steroid production of ovarian granulosa cells in bovines [11]. Further, follicle-stimulating hormone (FSH) treatment of cultured rat granulosa cells has been reported to affect progesterone synthesis by downregulating the expression of miR-29a and miR-30d and upregulating the expression of miR-23b [12]. It was, therefore, speculated that kisspeptin might affect steroid hormone synthesis and proliferation by altering the expression of key miRNAs in ovarian granulosa cells of Tan sheep. In recent years, several researchers have identified the expression of miRNA and mRNA in ovarian tissue under various conditions to reveal the molecular regulatory mechanisms of ovarian function using RNA sequencing [13,14]. In this study, we hypothesize that kisspeptin may regulate steroid hormone synthesis and proliferation of ovarian granulosa cells through the kisspeptin-miRNA-mRNA pathway. Thus, miRNA-seq and mRNA-seq were performed on ovarian granulosa cells of Tan sheep treated with kisspeptin. The sequencing results were verified by qPCR, followed by the miRNA-mRNA integration analysis, and the miRNA-mRNA interaction relationships were determined. Kisspeptin-mediated miRNAs and their targeted mRNAs were screened. The signaling pathways regulating steroid production of granulosa cells were further analyzed to determine the molecular mechanism of kisspeptin in the regulation of ovarian granulosa cell function and provide a reference for future studies on the breeding performance of Tan sheep. Collection of Ovarian Samples Ovarian samples of Tan sheep were obtained from the Yongning Hongxiang slaughterhouse, Yinchuan, Ningxia. The ovarian tissue was collected from slaughtered female Tan sheep and immediately stored in normal saline containing 1% double antibody (penicillin-streptomycin) at 37 °C. The ovaries were brought back to the laboratory within 2 h. Culture and Treatment of Primary Ovarian Granulosa Cells of Tan Sheep The collected ovaries were poured into a large beaker containing follicle-washing solution. The connective tissue around the ovaries was cut away with scissors. The ovaries were washed with 75% ethanol solution for 45 s and then rinsed three times, for 3 min each, in preheated follicle-washing solution. The ovaries were clamped with forceps, and 2 mL of DMEM/F12 culture solution was aspirated into a 10 mL syringe. The follicular fluid was extracted from selected follicles 3-6 mm in diameter, placed in a DMEM/F12 culture medium tube, and centrifuged at 1000 rpm for 8 min at room temperature. The supernatant was discarded, and centrifugation was performed again with pre-warmed DMEM/F12 culture medium. Then, the second supernatant was discarded, and the cells were resuspended in 3 mL of DMEM/F12 culture medium. The cells were evenly distributed in all the wells of a six-well plate. When 70% confluent, the cells were divided into two groups with four replicates in each group.
The first group, i.e., the control group (MC group), was cultured in DMEM/F12 medium, with sample numbers MC1, MC2, MC3, and MC4. The test group (MT group) was treated with DMEM/F12 containing 500 nM kisspeptin, with sample numbers MT1, MT2, MT3, and MT4. The control and test groups were cultured in a 37 °C, 5% CO2 incubator. Extraction and Detection of Total RNA from Granulosa Cells The MC and MT groups were incubated for 24 h as described in the last section. Post-incubation, the six-well plate was removed from the incubator and the culture medium was discarded. The total RNA of the granulosa cells in the MC group (n = 4) and the MT group (n = 4) was extracted by the TRIzol method. The integrity of the obtained RNA was assayed by 1% gel electrophoresis, and RNA concentration and purity were detected using the NanoDrop 2000 and Agilent 2100 RNA 6000 Nano kits. Samples with concentrations higher than 500 ng/µL and an RNA integrity score ≥ 7 were selected. Library Building and Sequencing The mRNA libraries were constructed according to the characteristics of mRNA with a poly(A) tail: total RNA was hybridized with poly(T) probe beads to adsorb the mRNA with the poly(A) tail. The magnetic beads were recycled to elute the poly(A) mRNA from the beads. The eluted mRNA was treated with a magnesium ion solution, and then random primers (dNTP) were employed for reverse transcription of the interrupted mRNA fragments to form cDNA. Finally, a 'Y'-type adapter was ligated to both ends of the double-stranded cDNA, making a library ready for on-machine sequencing after PCR amplification. The small RNA library was then constructed with a Small RNA Sample Pre Kit (E7300L, NEB, Ipswich, MA, USA). The 5' end of a small RNA has a phosphate group, while the 3' end has a hydroxyl group; the adapters are added directly to both ends of the small RNA, using the total RNA as the starting sample. Next, the eight cDNA libraries of the treatment group and the control group were constructed using reverse transcription kits (RR047A, Takara Bio, Dalian, China). cDNA fragments of 350-400 bp were screened, and an effective library concentration of 2 nmol/L was needed for precise measurement. The built libraries were tested for quality and yield using the Agilent 2100 (Agilent Technologies, Palo Alto, CA, USA) and the ABI StepOnePlus Real-Time PCR System (ABI, CA, USA). Finally, the Illumina NovaSeq 6000 system (RNA Nano 6000 Assay Kit of the Bioanalyzer 2100; Illumina, San Diego, CA, USA) was used to complete the on-machine sequencing of the libraries according to a standardized process. Processing and Validation of Sequencing Data After sequencing, the raw reads were processed to ensure the quality of the information analysis by removing adapter-containing and low-quality reads in the following steps: reads in which bases with a quality value ≤ 20 accounted for more than 30% of the entire read were removed; reads with N > 10%, with 5' adapter contamination, or lacking a 3' adapter sequence or an insert fragment were removed; the 3' adapter sequence was trimmed, and polyA/T/G/C reads were removed. DESeq2 [15] was used to analyze the expression levels of miRNA in the test and control groups. The fold-change threshold for screening differentially expressed miRNAs and mRNAs was ≥1.5 (|log2FoldChange| ≥ 0.585), with p < 0.05.
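These screening thresholds can be applied mechanically to a differential-expression table. Below is a minimal Python sketch, assuming a hypothetical CSV export of the DESeq2 results with columns named gene, log2FoldChange, and pvalue (the file and column names are illustrative, not from the paper):

```python
import pandas as pd

# Hypothetical DESeq2 results export: one row per gene/miRNA, with
# assumed column names "log2FoldChange" and "pvalue".
res = pd.read_csv("deseq2_results.csv")

# Fold change >= 1.5 in either direction, i.e. |log2FC| >= log2(1.5) ~ 0.585,
# combined with p < 0.05, matching the thresholds stated in the text.
de = res[(res["log2FoldChange"].abs() >= 0.585) & (res["pvalue"] < 0.05)]

up = de[de["log2FoldChange"] > 0]    # upregulated after kisspeptin treatment
down = de[de["log2FoldChange"] < 0]  # downregulated
print(len(de), len(up), len(down))
```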
Known and unknown miRNAs were identified and annotated using the miREvo (Version: miREvo_v1.1, Ming Wen, Guangzhou, China) [16], ViennaRNA (Version: ViennaRNA-2.1.1, Andreas R. Gruber, Wien, Austria) [17], and Mirdeep2 (Version: mirdeep2_0_0_5, Marc R. Friedländer, Berlin-Buch, Germany) [18] databases. Six differentially expressed genes and seven differentially expressed miRNAs were randomly selected for qPCR validation to ensure the accuracy and reliability of the transcriptome sequencing data. The fluorescent dye TB Green Premix Ex Taq II (RR820A, Takara Bio, Dalian, China) was used. The primer information is shown in Table 1. Primer Premier 5 (Version: V5.5.0, Premier, Canada) was used to design the gene primers. Sheep U6 was used as the miRNA internal reference and β-actin as the gene reference. Novel_621 is a predicted miRNA, so its accession number cannot be found in GenBank. Enrichment analysis of the differential genes was performed based on the hypergeometric distribution and the Benjamini-Hochberg (BH) test principles. GOseq (version: Release 2.12, Matthew D. Young, Parkville, Australia) [19] software was used for GO functional enrichment of the differential genes. The enrichment method was Wallenius. GO analysis included biological processes (BP), cellular components (CC), and molecular function (MF). KOBAS (version: v2.0, Chen Xie, Beijing, China) [20] software was used to compare sequences against the KEGG (Kyoto Encyclopedia of Genes and Genomes) database and perform KEGG pathway enrichment analysis. Enrichment of significant pathways identified the major biochemical and metabolic pathways in which the genes were involved. Kisspeptin-Mediated Integration Analysis of Granulosa Cell mRNA and miRNA The differential miRNAs were filtered and compared with the reference genome. The known miRNAs and the novel miRNAs were annotated and subjected to differential expression analysis. GO and KEGG enrichment analyses were performed on the results to reveal the enrichment of relevant genes in significantly different pathways. The miRanda (version: miRanda-3.3a, Bino John, Massachusetts, USA) [21] and RNAhybrid (version: RNAhybrid v2.0, Jan Krüger, Bielefeld, Germany) [22] online prediction websites were used to predict the target genes of the miRNAs. The predicted target genes were then intersected with the differential mRNA data to analyze miRNA-mRNA pairs with negative regulatory relationships. In order to observe the regulatory relationship between each miRNA and its target genes in the treated samples more intuitively, Cytoscape (version: Cytoscape_v3.8.2, Paul Shannon, California, USA) [23] software was employed to draw the miRNA-mRNA targeting relationship network diagram. Cell Culture and Statistical Analysis of Transcriptome Data of Kisspeptin-Treated Granulosa Cells Tan sheep granulosa cells cultured in vitro for 0 h, 24 h, 48 h, and 72 h are shown in Figure 1. After low-quality sequences were filtered out of the control and test group data and the 3' and 5' adapters were removed, 12,694,192 and 12,995,552 clean reads were obtained, accounting for 92.17% and 98.29%, respectively, of the total reads. More than 97% of the sequencing data had error rates lower than the Q20 threshold (0.01), and more than 91% had error rates lower than Q30 (0.001). The sequencing results were, therefore, further analyzed. The sequencing results demonstrated that the GC base content in each sample was equal, and the base composition was stable and balanced.
An average of 93.81% of clean reads could be mapped to the reference genome sequence of sheep (https://www.ensembl.org/index.html, Oar_v3.1, accessed on 25 September 2020). Approximately 72.38% of reads matched a unique location (Table 2). The MC and MT sequences were annotated against the Rfam database for comparison. In the MC group, rRNA accounted for 0.75%, snRNA for 0.00%, snoRNA for 0.02%, and tRNA for 0.10%. In the MT group, rRNA accounted for 0.11%, snRNA for 0.00%, snoRNA for 0.04%, and tRNA for 0.12% (Table 3). The rRNA, snRNA, snoRNA, and tRNA that might exist in the samples were thus found and removed as much as possible. Identification and Classification Annotation of miRNA The distribution of the lengths of the small RNA sequences was analyzed. According to the statistical results, most sequences were concentrated in the 18-28 nt range, with 21-24 nt sequences having a high frequency; the peak at 23 nt had the highest frequency (Figure 2).
According to the sequence abundance statistics of the different classifications in the MC and MT groups, miRNA accounted for 73.57% and 79.51% in the MC and MT groups, respectively (Figure 3). About 75% of the sequences were 22-24 nt in length, the results were consistent between the two groups, and they could be further analyzed. Differentially Expressed mRNA and miRNA Analysis There were 1303 differentially expressed genes between the MC and MT groups of ovarian granulosa cells, among which 613 genes were downregulated and 692 were upregulated. Seven differentially expressed miRNAs were downregulated, and one was found to be upregulated (Figure 4). Further analysis revealed that NPM1 (nucleophosmin 1) and ERH (ERH mRNA splicing and mitosis factor) were significantly downregulated (p < 0.05). However, HSBP1 (heat shock factor binding protein 1), RPL35A (ribosomal protein L35a), and other genes were significantly upregulated (p < 0.05). Moreover, miR-148a was significantly upregulated (p < 0.05), while let-7a, let-7b, let-7c, and miR-10a were significantly downregulated (p < 0.05). Cluster analysis heat maps were drawn based on the transcription levels in the MC and MT groups (Figure 5). The maps showed that these mRNAs and miRNAs were well clustered and significantly upregulated or downregulated in the samples.
GO Enrichment Annotation of Differentially Expressed mRNA and miRNA GO function annotation of the miRNA negatively related target genes in the kisspeptin-treated test and control groups was prepared. According to the results of the GO database, a total of 622 GO terms were annotated. The cellular component terms were annotated to the endoplasmic reticulum quality control compartment and the MHC class I protein complex. The biological process terms were annotated to the regulation of the progesterone biosynthetic process and the estrous cycle, and the molecular function terms were annotated to GTPase activity and zinc ion binding. The most significantly enriched GO entries for mRNA and miRNA are shown in Table 4. In Figure 6 (GO analysis of miRNA), the size and color of each dot represent the number of enriched genes and the magnitude of significance. KEGG Pathway Analysis of Differential Genes and miRNA Target Genes KEGG enrichment annotation was performed for the differential target genes predicted by mRNA and miRNA. The enrichment of the mRNA results focused on steroid biosynthesis, the IL-17 signaling pathway, the GnRH signaling pathway, the oxytocin signaling pathway, the MAPK signaling pathway, ovarian steroidogenesis, and other target pathways (Figure 7). The miRNA results were enriched in the NF-kappa B signaling pathway, the chemokine signaling pathway, the Ras signaling pathway, and the PI3K-Akt signaling pathway (Figure 8).
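The enrichment testing described in the methods (a hypergeometric test per term with BH correction) can be sketched as follows. This is a plain hypergeometric version with placeholder gene sets; it does not reproduce GOseq's Wallenius correction for length bias, and all identifiers here are illustrative:

```python
from scipy.stats import hypergeom
from statsmodels.stats.multitest import multipletests

def enrich(de_genes, background, term2genes):
    """One-sided hypergeometric enrichment test per term, BH-adjusted."""
    bg = set(background)
    de = set(de_genes) & bg
    N, n = len(bg), len(de)
    terms, counts, pvals = [], [], []
    for term, members in term2genes.items():
        K = len(set(members) & bg)  # term genes present in the background
        k = len(set(members) & de)  # term genes among the DE genes
        # P(X >= k) where X ~ Hypergeometric(N, K, n)
        pvals.append(hypergeom.sf(k - 1, N, K, n))
        terms.append(term)
        counts.append(k)
    adj = multipletests(pvals, method="fdr_bh")[1]  # Benjamini-Hochberg
    return sorted(zip(terms, counts, pvals, adj), key=lambda t: t[2])

# Illustrative call with placeholder identifiers:
# enrich(["EGR1", "CYP1B1"], all_genes, {"steroid biosynthesis": term_genes})
```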
Kisspeptin-Mediated Key miRNAs for Granulosa Cell Steroid Production and the Corresponding Target Genes In animals and eukaryotes, miRNAs primarily regulate their predicted target genes in a negatively correlated manner. In this study, we predicted the target genes of the miRNAs. Analysis of the associations between miRNAs and mRNAs revealed 16 miRNA-mRNA pairs with negative regulation, forming 22 pairs of targeting relationships. Moreover, the co-expression network of miRNA-mRNA was drawn, and it was found that eight miRNAs were at the center of the network, regulating 22 genes (Figure 9), while one miRNA could regulate multiple target genes at the same time. Oar-let-7b could simultaneously regulate the TRAF1, MUL1, PTPN23, EGR1, LVRN, ZFHX2, EDEM1, and TUBB2A genes. The EDEM1 gene was in turn simultaneously regulated by oar-let-7a, oar-let-7b, oar-let-7c, and oar-let-7d. The gene EGR1, which regulates steroid hormone production, was targeted by oar-let-7b, while oar-miR-10a targeted the HNRNPD gene. Figure 10 shows the mRNA protein interaction analysis. The interaction diagram between the genes was drawn by selecting the genes with the top 100 confidence levels from the STRING database.
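The integration step itself (intersecting predicted targets with the differentially expressed mRNAs and keeping only oppositely regulated pairs) reduces to simple set logic. A sketch with a hypothetical subset of the molecules named above; the target lists and directions shown are assumptions for illustration:

```python
# Hypothetical inputs: regulation direction of each DE miRNA/mRNA,
# plus predicted target sets (illustrative, not the paper's full lists).
mirna_dir = {"oar-let-7b": "down", "oar-miR-10a": "down", "oar-miR-148a": "up"}
mrna_dir = {"EGR1": "up", "HNRNPD": "up", "EDEM1": "up", "NPM1": "down"}
targets = {"oar-let-7b": {"EGR1", "EDEM1", "TRAF1"}, "oar-miR-10a": {"HNRNPD"}}

pairs = []
for mi, genes in targets.items():
    for g in genes & set(mrna_dir):
        # keep only negatively regulated pairs: miRNA and target move oppositely
        if mirna_dir.get(mi) != mrna_dir[g]:
            pairs.append((mi, g))
print(pairs)  # e.g. ('oar-let-7b', 'EGR1'), ('oar-miR-10a', 'HNRNPD'), ...
```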
Validation of Differentially Expressed miRNA and mRNA by qPCR qPCR analysis of the six differential genes and seven miRNAs revealed that the melting curves of all miRNAs and genes were single-peaked and that the primer design was reasonable. The relative expression trends of the miRNAs and genes in the MT and MC groups were consistent with the transcriptome sequencing results (Figure 11, qPCR and RNA-seq quantitative analysis of differentially expressed genes and miRNA), indicating that the sequencing results were accurate and reliable and could be used for subsequent functional verification. As shown in Figure 11, the upregulation and downregulation trends verified by qPCR were precisely the same as those obtained from RNA-seq. Discussion Kisspeptin can regulate animal reproduction by regulating follicular development through the paracrine or autocrine pathway in the gonads. Kisspeptin treatment promotes progesterone secretion in cultured bovine granulosa cells [24], while transfection of the kiss1 gene into porcine ovarian granulosa cells promotes steroid secretion [25]. Our research group also showed that kisspeptin could promote the secretion of steroid hormones and cell proliferation in ovarian granulosa cells of Tan sheep in vitro (unpublished data), but the specific molecular mechanism remains unclear. Recent studies have indicated that kisspeptin-10 could significantly change the expression of circulating miRNAs (let-7e, miR-100-5p, and others) obtained from the plasma of the gonads of Senegalese sole, affecting their reproduction [26]. Kisspeptin-10 has been reported to induce progesterone synthesis in bovine granulosa cells by regulating the expression of miR-146-targeted StAR genes [24]. Therefore, we speculate that kisspeptin affects ovarian granulosa cell function by modulating miRNA and mRNA expression. This study identified some essential mRNAs, miRNAs, and signaling pathways by integrated analysis after combined sequencing. GO and KEGG enrichment analysis for the miRNAs revealed that 8 of the 31 significant pathways were related to the regulation of steroid hormones, estrus, cell proliferation, and ovulation. Furthermore, GO and KEGG enrichment analysis for the mRNAs also showed that EGR1, HSD17B12, CYP1B1, etc., were enriched in the regulation of progesterone biosynthesis, the estrous cycle, progesterone metabolism, C21-steroid hormone biosynthesis, steroid biosynthesis, the ovulation cycle, and steroid metabolism. Meanwhile, twelve upregulated and twelve downregulated genes were also enriched in the PI3K-Akt signaling pathway. It is well known that ovarian granulosa cells primarily express hormone receptors, including the follicle-stimulating hormone receptor (FSHR) and the luteinizing hormone receptor (LHR), which can regulate ovarian granulosa cell function by binding their ligands. Previous studies have shown that EGR1 regulates embryo implantation [27] and LHR expression [28]. These results suggest that the EGR1 gene may play a specific role in regulating the reproduction of Tan sheep.
HSD17B12 belongs to the hydroxysteroid (17β) dehydrogenase family and plays a vital role in female fertility through its role in arachidonic acid (AA) metabolism [29]. CYP1B1, a member of the cytochrome P450 1 subfamily, is mainly responsible for the metabolism of halogenated and polycyclic aromatic hydrocarbons in body tissues. CYP1B1 has been reported to regulate the expression of progesterone, estrogen, and steroid hormone receptors [30-32]. These results indicated that these genes might play significant roles in the pathways regulating the function of ovarian granulosa cells. Integrative analysis of miRNA-mRNA revealed that the significantly different oar-let-7b targets eight genes, and miR-10a regulates lipid metabolism and steroid hormone synthesis by binding its target gene. Previous studies have discovered that miRNAs can regulate follicular development and cell proliferation by binding their target genes [33]. The results of an in vivo ovarian oxidative stress model revealed that downregulation of miR-181a expression inhibited the apoptosis of granulosa cells by upregulating the transcriptional activity of the SIRT1-inhibited pro-apoptotic factors [34]. miR-335-5p has been reported to be involved in granulosa cell proliferation by decreasing SGK3 expression in patients with polycystic ovary syndrome (PCOS) [35]. miR-210 has been reported to regulate granulosa cell function in the pre-ovulation period through HRA and EFNA3 [36]. All of these findings suggest that miRNAs play an essential role in regulating granulosa cell function. The let-7 family is the most abundant miRNA family detected in the ovary and plays a crucial role in follicle development. Further, let-7c has been reported to be mainly present in granulosa cells [37]. In this study, four members of the let-7 family (let-7a, let-7b, let-7c, and let-7d) were downregulated. One study demonstrated that let-7b might bind to the activin receptor I and Smad2/3 genes, affecting follicular development and estrogen secretion through the TGF-β signaling pathway [37]. let-7a was inversely regulated by estrogen and progesterone in the endometrium of estrus mice [38]. Moreover, transfection of let-7b and let-7c can significantly inhibit the release of progesterone from human ovarian granulosa cells [39]. In addition, overexpressed hsa-let-7b could significantly inhibit the proliferation ability of A375 and A2058 cells [40]. Moreover, let-7b might inhibit the proliferation of hepatocellular carcinoma cells [41]. Based on these results, it was speculated that an increase in let-7b levels in vivo might inhibit the expression of target genes associated with steroid hormone secretion, cell proliferation, and follicle development. In addition, the miR-10 family, including miR-10a and miR-10b, is highly conserved, and its members have similar roles in most species, i.e., inhibiting proliferation and inducing apoptosis of granulosa cells in humans, rats, and mice [39,42]. miR-10a and miR-10b are reported to inhibit proliferation and induce apoptosis of granulosa cells during follicular development by inhibiting the BDNF and TGF-β pathways [43]. miR-10a plays an essential role in male germ cell development and spermatogenesis by regulating meiotic processes in male humans and mice [43]. miR-10a-5p overexpression is also reported to reduce the proliferation of porcine ovarian granulosa cells and increase apoptosis. Correspondingly, transfection of a miR-10a-5p inhibitor showed the opposite results [44].
Furthermore, miR-10a reduced the proliferation of granulosa cells in humans [45]. Recent studies have also revealed that the expression of oar-miR-10a in the dominant follicles of sheep was significantly downregulated compared with that before follicular recruitment, suggesting that miR-10a affects follicular development by regulating ovarian granulosa cell proliferation [46]. The sequencing results showed that kisspeptin treatment significantly reduced the expression of miR-10a. Therefore, it was speculated that kisspeptin might regulate granulosa cell proliferation and steroid hormone production by downregulating miR-10a in Tan sheep. Integrative analysis of miRNA-mRNA revealed that HNRNPD, a low-density lipoprotein (LDL) receptor mRNA degradation factor, is a target mRNA of miR-10a. HNRNPD is involved in the cholesterol-mediated inhibition of liver LDL receptor expression [47]. Cholesterol is the primary substrate for steroid hormone biosynthesis. Cholesterol uptake from blood lipoprotein particles via LDL receptors is utilized during progesterone and estrogen biosynthesis in follicle development [48]. Hence, HNRNPD may indirectly impact steroid hormone synthesis by regulating LDL receptors. Meanwhile, knockdown of HNRNPD can inhibit the proliferation of lung cancer cells and glioma cells [49,50]. Therefore, kisspeptin may regulate steroid hormone production and the proliferation of ovarian granulosa cells, probably through downregulating miR-10a and upregulating HNRNPD. Additionally, EGR1 was identified as a target mRNA of let-7b by the integrative analysis of miRNA-mRNA. EGR1 can regulate follicle development in the ovary, and knockout of this gene significantly reduces the number of corpora lutea and the levels of progesterone and LHβ, resulting in infertility [51]. Furthermore, one study has shown that EGR1 genes influence cell proliferation and apoptosis [52]. Another study has demonstrated that increased expression of EGR1 could induce cell proliferation by activating the transforming growth factor β1/Smad signaling pathway [53]. As a result, we supposed that kisspeptin might modulate steroid hormone production and the proliferation of ovarian granulosa cells by increasing the expression of let-7b and decreasing the expression of EGR1 in Tan sheep. In addition, in the inter-gene interaction diagram, many genes are not miRNA targets but are crucial to other biological processes in granulosa cells, such as the centrally located STC2, TRA2B, and UBE211. Of these, STC2, known as an estrogen response gene, is expressed in the ovary as a paracrine regulator and is involved in follicular development. STC2 was also reported to promote human follicular development by affecting the proteolytic activity of pregnancy-associated plasma protein A [54]. Accordingly, kisspeptin may also indirectly regulate granulosa cell function through these pathways. In summary, we identified some key miRNAs, mRNAs, and important signaling pathways by which kisspeptin regulates the function of ovarian granulosa cells, using combined sequencing. Integrative analysis of miRNA-mRNA revealed that the significantly different oar-let-7b targets eight genes, of which EGR1 might play a significant role in regulating granulosa cell function, and that miR-10a regulates lipid metabolism and steroid hormone synthesis by targeting HNRNPD. However, the precise molecular mechanism by which kisspeptin regulates the function of ovarian granulosa cells remains to be investigated in the future.
Conclusions This study used RNA-seq techniques to characterize the transcriptomes of kisspeptin-treated and untreated ovarian granulosa cells of Tan sheep. Further, key miRNAs (miR-10a, let-7b, and let-7c) and genes (EGR1, HNRNPD) regulating steroid hormone production and cell proliferation were screened, and the direct involvement of the targeting relationships (let-7b-EGR1, miR-10a-HNRNPD) in regulating the function of granulosa cells was discovered by integrative analysis of miRNA-mRNA. PPI analysis revealed genes that are not miRNA targets but are crucial to other biological processes in granulosa cells. The findings of this work may help in understanding the molecular mechanism by which kisspeptin regulates steroid hormone secretion, cell proliferation, and other physiological functions in ovarian granulosa cells of Tan sheep. Informed Consent Statement: Not applicable, as this research did not involve any humans. Data Availability Statement: All raw data from the current study are available in the NCBI BioProject (https://submit.ncbi.nlm.nih.gov/subs/bioproject) with accession number PRJNA895403 (accessed on 30 September 2022).
Determination of Varying Group Sizes for Pooling Procedure Pooling is an attractive strategy in screening infected specimens, especially for rare diseases. An essential step in performing the pooled test is to determine the group size. Sometimes, an equal group size is not appropriate due to population heterogeneity. In this case, varying group sizes are preferred and can be determined when individual information is available. In this study, we propose a sequential procedure to determine varying group sizes through fully utilizing the available information. This procedure is data driven. Simulations show that it has good performance in estimating parameters.

Introduction Routine monitoring or large-scale screening usually occurs in biomedical research to identify infected specimens [1-4]. However, some test kits, e.g., the nucleic acid amplification test (NAAT), are expensive [2,5]. Therefore, the expense of a large-scale monitoring process is usually a financial burden if resources are limited [6-8]. The strategy of pooling biospecimens is attractive to address this issue [9-11]; it was first used during World War II to screen for syphilis [12]. This strategy first pools specimens into groups and then screens these groups. If a group tests negative, all specimens in this group will be declared negative; otherwise, individual tests are performed. When the prevalence is low, the total number of tests using pooling will be far less than that using individual testing. Due to its efficiency and cost saving, pooling is now applied in many fields, such as agriculture [13], genetics [14,15], HIV/AIDS [16,17] and blood screening [18], and environmental epidemiology [19,20]. The gain of pooling mainly depends on the pooling algorithm. Assuming homogeneity of the population, dozens of papers have investigated the problem of how to design an efficient algorithm [21-25]. However, this assumption might be violated in practical applications [26-28]. When individual information is available, it is of interest to estimate individual-level prevalence through incorporating such information. Note that only the group-level status is observed, e.g., positive or negative. This problem has been studied in a parametric context through the framework of binary regression models [29-31], and also in semiparametric [32,33] or nonparametric contexts [34,35]. However, the aforementioned work mostly uses a single group size that is determined in advance. A set of pool sizes might be more appropriate when considering population heterogeneity. For example, varying pool sizes were used to estimate the infection prevalence of Myxobolus cerebralis, which causes whirling disease, among free-ranging salmonid fish collected from the Truckee River in Nevada and California [36]. In a study estimating the prevalence of several viruses in carnations grown in nursery glasshouses in Victoria, sequential pooled testing involving several pool sizes was adopted [37]. Using a single group size might be optimal for some estimates but far from optimal for others, especially when we have little information ahead of the experiment [37,38]. Further work on this issue is worthwhile, since the benefit of a pooling algorithm mainly depends on the choice of pool size [38-40]. In this study, we propose a pooling strategy with varying pool sizes that takes advantage of individual information. Our procedure is a data-driven pooling algorithm, where groups are formed sequentially. Its performance is extensively investigated by simulations and a real data set.
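To see why pooling saves tests when prevalence is low, consider the classic two-stage calculation: a group of size k costs one test if it is negative and 1 + k tests otherwise, so with a perfect test and independent specimens the expected number of tests per specimen is 1/k + 1 - (1 - p)^k. A small illustrative sketch (the numbers are examples, not from this study):

```python
def tests_per_specimen(p, k):
    """Expected tests per specimen for two-stage (Dorfman) pooling,
    assuming a perfect test and independent specimens."""
    return 1.0 / k + 1.0 - (1.0 - p) ** k

# At 1% prevalence, pooling 10 specimens needs ~0.20 tests per specimen,
# i.e. roughly a five-fold saving over individual testing.
for k in (5, 10, 20):
    print(k, round(tests_per_specimen(0.01, k), 3))
```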
Notations and Background. Suppose N specimens are assigned into m groups, each with size k_i for i = 1, 2, ..., m. Let z_i denote the observed status of the i-th group, and X_ij denote the covariates of the j-th specimen in the i-th group for j = 1, ..., k_i and i = 1, ..., m. The observations are {z_i, X_ij, j = 1, ..., k_i, i = 1, ..., m}, where X_ij = (1, x_{1,ij}, ..., x_{d-1,ij})^T. Here, the notation A^T represents the transpose of the matrix A. The sensitivity and specificity of the screening tool are denoted by S_e and S_p, respectively. The full likelihood function is

L(β; z, X) = ∏_{i=1}^{m} P_i(β)^{z_i} (1 - P_i(β))^{1-z_i}, with P_i(β) = S_e - r ∏_{j=1}^{k_i} (1 - g(X_ij^T β)),

where r = S_e + S_p - 1. The parameter β is defined by β = (β_0, β_1, ..., β_{d-1})^T, and the function g^{-1}(·) is a known, monotone, and differentiable link function. Sometimes there might be a maximum admissible group size k_max; e.g., a large group size might bring a dilution effect. Therefore, we should carefully choose an appropriate group size that does not exceed k_max. Define a set K = {1, 2, ..., k_max}, and denote the group sizes by k = (k_1, ..., k_m), k_i ∈ K, i = 1, ..., m. Once the group sizes k are determined, we can obtain the estimator of β by maximizing the likelihood function L(β; z, X). The Fisher information matrix of the parameter β could be rewritten as I(β, k) in (2); the calculation of the Fisher information I(β, k) is presented in the Supplemental Material. To obtain a better estimator of β, we try to find the k that maximizes the Fisher information I(β, k). However, individual-level measurements make it difficult to achieve this goal directly. When the specimens within each pool are close to each other in risk, the Fisher information I(β, k) defined in (2) approximately reduces to a form built from group-wise terms C_i(β, k_i). Then, we propose to determine the group sizes through minimizing each C_i(β, k_i) with respect to k_i for i = 1, ..., m. Note that the aforementioned approximate approach requires that the pools be homogeneous. There are two methods to obtain homogeneous pools: reorder the specimens according to the similarity of their covariates, or according to their individual risk probabilities. The latter is adopted in this study. Following the method in McMahan et al. [42], the procedure of forming homogeneous pools is as follows. Firstly, use training data or prior knowledge to obtain an initial estimator β^(0) [42]. Secondly, sort the specimens by their risk probabilities. Let G denote the set which contains the covariates of all enrolled specimens, G = {x_1, ..., x_N}, where N is the number of specimens and x_i is the covariate vector of the i-th specimen. Sort G by risk probability p_i = g(x_i^T β^(0)) in descending order, and obtain a sorted set G_s = {x_{s_1}, ..., x_{s_N}}. The remaining procedure is performed directly on this sorted set. Sequential Adaptive Pooling Algorithm. Our strategy is an adaptive design, which is often adopted in biological experiments and also in pooled testing [22]. Before stating the algorithm, we need the following result. Suppose the specimens are assigned to the first l - 1 groups with the corresponding group sizes k_1, ..., k_{l-1}. Let n_l = Σ_{j=1}^{l} k_j for l ≥ 1 and n_0 = 0. Denote W_l(β) = -log(1 - g(x_{s_{n_{l-1}+1}}^T β)). Then the group size for the next group, k_l, equals k_max if k_max ≤ ϕ_0 / W_l(β^(0)). Here, ϕ_0 is the root of the equation 2S_e(1 - S_e)(ϕ - 1)e^{2ϕ} + r(2S_e - 1)(ϕ - 2)e^ϕ + 2r^2 = 0 and is approximately 1.8414. The proof of this result is presented in the Supplemental Material.
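The constant ϕ_0 and the resulting size rule can be computed numerically from the equation quoted above. The sketch below solves for ϕ_0 with a bracketing root-finder; the bracket [1.0, 2.5] and the fallback of flooring ϕ_0/W_l when k_max exceeds the bound are assumptions for illustration rather than the paper's exact minimization of C_l:

```python
import numpy as np
from scipy.optimize import brentq

def phi0(se, sp):
    """Root of 2*Se*(1-Se)*(phi-1)*e^(2*phi) + r*(2*Se-1)*(phi-2)*e^phi + 2*r^2 = 0,
    the equation stated in the text, with r = Se + Sp - 1."""
    r = se + sp - 1.0
    f = lambda t: (2.0 * se * (1.0 - se) * (t - 1.0) * np.exp(2.0 * t)
                   + r * (2.0 * se - 1.0) * (t - 2.0) * np.exp(t)
                   + 2.0 * r ** 2)
    return brentq(f, 1.0, 2.5)  # bracket chosen by inspection; an assumption

def next_group_size(w_l, se, sp, k_max):
    """Stated rule: take k_max whenever k_max <= phi0/W_l. The floor in the
    other branch is an illustrative fallback, not the paper's exact minimizer."""
    bound = phi0(se, sp) / w_l
    return k_max if k_max <= bound else max(1, int(bound))
```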
Our pooling strategy is described as follows. Step 1. Label the specimens according to the ordering of G_s; for example, label the specimen with covariates x_{s_1} by the number 1. Assign specimens with labels up to k_max into the l-th group. Step 2. Determine the group size k_l according to the result above and form the l-th group G_l with the first k_l specimens remaining in G_s. Step 3. Let G_s = G_s \ G_l and l = l + 1. Repeat Step 2 to form the next group in the same way until all specimens are assigned. Step 4. Screen the groups and obtain the maximum likelihood estimator of β. Note that this is a data-driven pooling strategy. Additionally, the above procedure does not strictly require that all specimens be enrolled before screening, since the set G_s is dynamic and can be renewed by newly enrolled specimens. Numerical Results. In this section, we proceed to evaluate the performance of our proposed procedure, named PSV (pooling strategy with varying group sizes). For comparison, we also present the results of the pooling strategy with a single group size k, named PSS(k). The group size k for PSS(k) is given in advance, e.g., k = 5 or 10, or can be determined by the average prevalence of the enrolled samples. For the latter, we determine the optimal single group size k* by minimizing the variance of the prevalence estimator. To investigate the performance of these methods, define the link function g(·) as the logistic function g(u) = 1/(1 + exp(-u)). Then, individual prevalence is obtained through the following model: p_ij = g(X_ij^T β) = 1/(1 + exp(-X_ij^T β)). We first consider a single covariate (d = 2), following the normal distribution N(2, 1.5) or the gamma distribution Γ(2.5, 0.8). The corresponding parameters are set to β_0 = -3 and β_1 = 0.4. The samples are generated under these settings, and the procedures are repeated M = 5000 times. We report the estimators of β_0 and β_1, along with their mean square errors (MSE), in Table 1 under different settings of sensitivity, specificity, and the number of groups. In Figure 1, we further report the relative bias of the parameters. Table 1 shows that all procedures have similar performance except PSS(5). While using the procedure PSS, we have to choose a group size in advance. This is crucial for a group testing algorithm, since the precision of the estimators severely depends on the group size. In our setting, the average individual prevalence is about 0.0997, and the corresponding optimal single group size is mostly k* = 13, 12, 11 for (S_e, S_p) = (0.99, 0.99), (0.95, 0.95), and (0.9, 0.9), respectively. Consequently, the procedure PSS(10) has better performance than PSS(5), since the latter procedure uses too small a group size. Figure 1 further shows the relative bias of the parameters β_0 and β_1. Our procedure with varying group sizes, PSV, has very good performance under different scenarios. The procedure PSS(5) still has the poorest performance in terms of relative bias. As data-driven pooling strategies, PSV and PSS(k*) both show good performance, but PSV has smaller bias, which is a desired characteristic. The overall relative bias of these estimators reported in Figure 3 also confirms this property. It also reveals that pooling procedures using a single group size are not desirable for a heterogeneous population, even if the group size is carefully chosen, e.g., k*.
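Putting the pieces together, Steps 1-3 can be sketched end to end: sort specimens by fitted risk, then walk down the sorted list, sizing each group from its first (highest-risk) specimen. This relies on the next_group_size helper sketched above; the design matrix X and the initial estimate beta0 are assumed inputs:

```python
import numpy as np

def form_groups(X, beta0, k_max, se, sp):
    """Sequential adaptive pooling: sort specimens by fitted risk (descending),
    then size each group from the first specimen it will contain."""
    g = lambda u: 1.0 / (1.0 + np.exp(-u))   # logistic link, as in the simulations
    risk = g(X @ beta0)
    order = np.argsort(-risk)                # descending risk
    groups, i = [], 0
    while i < len(order):
        head = order[i]
        w = -np.log(1.0 - risk[head])        # W_l at the group's first specimen
        k = next_group_size(w, se, sp, k_max)
        groups.append(order[i:i + k].tolist())
        i += k
    return groups

# With the paper's simulation design, X would be the N x 2 matrix [1, x],
# x ~ N(2, 1.5), and beta0 = np.array([-3.0, 0.4]) as the initial estimate.
```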
Numerical Results. In this section we evaluate the performance of the proposed procedure, which we name PSV (pooling strategy with varied group sizes). For comparison, we also present the results of the pooling strategy with a single fixed group size k, named PSF(k). The group size k for PSF(k) is either given in advance, e.g., k = 5 or 10, or determined from the average prevalence of the enrolled samples; for the latter, we determine the optimal single group size k* by minimizing the variance of the estimator of the prevalence. To investigate the performance of these methods, define the link function g(·) as the logistic function g(u) = 1/(1 + exp(−u)); the individual prevalence is then obtained through the model p_ij = g(X_ij^T β). We first consider a single covariate (d = 2), following the normal distribution N(2, 1.5) or the gamma distribution Γ(2.5, 0.8). The corresponding parameters are set to β_0 = −3 and β_1 = 0.4. The samples are generated under these settings, and the procedure is repeated M = 5000 times. We report the estimators of β_0 and β_1, along with their mean square errors (MSE), in Table 1 under different settings of sensitivity, specificity, and number of groups. In Figure 1, we further report the relative bias of the parameters.

Table 1 shows that all procedures have similar performance except PSF(5). When using the procedure PSF, we have to choose a group size in advance; this choice is crucial for a group-testing algorithm, since the precision of the estimators depends heavily on the group size. In our setting, the average individual prevalence is about 0.0997, and the corresponding optimal single group size is mostly k* = 13, 12, 11 for (S_e, S_p) = (0.99, 0.99), (0.95, 0.95), and (0.9, 0.9), respectively. Consequently, the procedure PSF(10) performs better than PSF(5), since the latter uses too small a group size. Figure 1 further shows the relative bias of the parameters β_0 and β_1. Our procedure with varying group sizes, PSV, performs very well under the different scenarios, while PSF(5) again has the poorest performance in terms of relative bias. As data-driven pooling strategies, PSV and PSF(k*) both show good performance, but PSV has smaller bias, which is a desirable characteristic. The overall relative bias of these estimators reported in Figure 3 also confirms this property. It also reveals that pooling procedures using a single group size are not desirable for a heterogeneous population, even when the group size is carefully chosen, e.g., k*.

An Illustrative Application. Verstraeten et al. conducted a surveillance study in Kenya to monitor the trend in HIV risk over time [43]. The samples were collected from pregnant women, along with potential risk covariates such as age, parity, and education level. They used a common group size of 10 to estimate the seroprevalence of HIV. However, the individual prevalence of HIV is related to those risk covariates; e.g., the risk of HIV might tend to increase with age. For this data set, Vansteelandt et al. reported a set of group sizes varying between 5 and 12 under a cost-precision trade-off [40].

Discussion

In biological and epidemiological studies, there is growing interest in developing methods that yield more accurate results at lower cost. Group testing is such a cost-saving strategy. In this study, we developed a pooling strategy that uses varying group sizes when individual information is available. This strategy is attractive because it depends only on the information of the enrolled specimens and does not require a group size chosen in advance. Owing to its data-driven character and theoretical justification, the procedure PSV proposed in this study has robust performance under different settings, and it is convenient for practical application since we do not have to worry about how to choose an appropriate group size.

Varying group sizes are reasonable when the target population is diverse. For example, a sequential testing procedure using several group sizes was adopted to estimate virus infection levels of carnation populations grown in glasshouses, since different carnation populations were expected to have a wide range of infection levels [45]. We can pool more specimens into one group when the probability of testing positive is small; balancing the probability of testing positive across groups mimics the situation in which all enrolled specimens are homogeneous. In this study, we also propose a procedure using a single group size k* determined by minimizing the variance of the estimator of the prevalence. One may prefer this procedure when simplicity is desired or when the diversity among the specimens to be screened is negligible. Finally, we did not consider the cost of collecting specimens. If a test is much more expensive than collecting a specimen, then the cost of the tests is the main consideration in a large-scale screening project; otherwise, it is necessary to take the overall cost of collection and testing into account when using a pooling strategy.

Data Availability. The Kenya data supporting this study are from previously reported studies and datasets, which have been cited. The data are available at https://cran.r-project.org/package=binGroup.

Conflicts of Interest. The authors declare no conflicts of interest.
Non-Thermal Plasma Installation as a Pre-Treatment Method of Barley Seeds (Hordeum vulgare L.)

Microbial organisms are key pathogens for plants, animals, and humans, and the elimination of pathogenic microbes is an essential topic for researchers. Non-thermal atmospheric plasma (NTAP) can potentially inactivate pathogenic microorganisms. This paper presents the results of a study of the biocidal effect of NTAP pre-treatment of barley seeds. The effect was observed in 7-day-old seedlings. A reduction of the total microbial number (by 36.7% after 5 minutes of exposure) was obtained. Thus, treatment with non-thermal plasma can reduce the microbiological contamination of agricultural plant seeds.

Introduction

In the agro-industrial complex, an essential task is to increase yields and improve the storability of grown raw materials and plant foods. According to the Food and Agriculture Organization of the United Nations (FAO), grain losses during storage are at least 10% and increase when the storage temperature and humidity deviate from optimal conditions. Losses are mainly associated with pest damage and microbiological spoilage. In addition, with long-term storage of grain under adverse conditions, the ability to germinate is impaired and quality decreases. One of the currently developing physical methods of pre-sowing treatment of agricultural products is non-thermal atmospheric-pressure plasma (NTAP) [1]. Pre-sowing seed treatment with NTAP can reduce microbiological contamination [2]. Previous studies have shown that the treatment of barley seeds with NTAP does not significantly affect germination capacity or the length of sprouts and roots, while it reduces infection with phytopathogenic fungi [3]. In addition, domestic studies [1,4,5] demonstrated the bactericidal effect of NTAP on several microorganisms: yeasts, gram-negative and gram-positive bacteria, and their spore forms. It is also worth noting the formation of free radicals due to plasma-chemical reactions, which are strong oxidants [6,7]; they cause membrane degradation and DNA disruption [1,8]. This work aims to assess the biocidal effect of NTAP on the surface microbiota of barley seeds and on the morphometric parameters of 7-day-old seedlings.

Materials and Methods

In the experiments we used barley seeds (Hordeum vulgare L.) of the Vladimir variety, harvested in 2018. Each investigated case of the experiment included 150 seeds. The seeds were pre-treated with NTAP generated by a microwave discharge of coaxial configuration, with argon as the plasma-forming gas [3,9]. The scheme of the experimental installation is shown in Figure 1. Seeds were pre-treated with NTAP for three exposure times: 1, 5, and 10 minutes. The distance from the nozzle to the seeds was 13 cm, the argon consumption was 5 l/min, and the power of the microwave generator was 1.2 kW. The gas flow temperature at the seed placement level, measured with a thermal imager, was about 37 °C. The control and treated seed groups were then germinated on wet filter paper in a thermostat at 20-21 °C in accordance with Russian National Standard 12038-84. Each group contained 50 seeds. On the third day, the seed germination energy was determined. On the seventh day, laboratory germination, growth strength, sprout and root length, and wet and dry mass were determined. A phytosanitary examination of the barley seeds was then carried out.
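For clarity, the two germination indicators reduce to simple percentages of the 50 seeds planted per group. The R sketch below illustrates the computation with made-up counts; the numbers are hypothetical and are not the study's data.

# Germination indicators per treatment group (counts are hypothetical).
seeds_planted <- 50
germinated_day3 <- c(control = 46, min1 = 45, min5 = 41, min10 = 40)
germinated_day7 <- c(control = 48, min1 = 47, min5 = 46, min10 = 45)
germination_energy <- 100 * germinated_day3 / seeds_planted  # day-3 indicator, %
lab_germination   <- 100 * germinated_day7 / seeds_planted   # day-7 indicator, %
round(rbind(germination_energy, lab_germination), 1)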
The degree of damage and the prevalence of diseases caused by Helminthosporium, Fusarium, and Penicillium were determined in accordance with Russian National Standard 12044-93.

Results and Discussion

The spring barley of the Vladimir variety has high adaptability to various cultivation conditions and is resistant to drought and soil acidity. The variety is moderately resistant to loose smut, highly susceptible to helminthosporiasis, and moderately resistant to leaf rust and powdery mildew. Germination energy characterizes the ability of seeds to give uniform and even seedlings in the field, which guarantees good evenness and survival of plants; this indicator is the percentage of germinated seeds out of the total number of seeds planted in the experiment. Our experiments studied the effectiveness of pre-sowing treatment of spring barley seeds with NTAP, which was assessed by changes in morphological parameters and by phytosanitary examination. Following exposure to NTAP, the length of the sprout and root on the 7th day of germination did not change statistically significantly (Table 1). In our experiments, a decrease in the germination energy of barley seeds was observed after 5 and 10 minutes of exposure to plasma. Next, the percentage of seeds that gave seedlings under standard conditions, i.e., laboratory germination, was determined; following exposure to NTAP, laboratory germination did not change statistically significantly. It was also found that pre-sowing plasma treatment of seeds before germination does not affect the growing strength of 7-day-old seedlings. Wet and dry weights were determined (Table 2) for a more detailed study of the effect of plasma on the initial growth processes of spring barley. It was found that the percentage of water content in 7-day-old barley seedlings after exposure of the seeds to NTAP before germination does not change statistically significantly. The study results showed that the fresh weight of seedlings after 10 minutes of treatment decreased, but the dry weight did not change.

Table 2. Wet and dry weight of 7-day-old barley seedlings after exposure to NTAP on seeds before germination.

Damage degree and prevalence of helminthosporiasis after exposure of seeds to NTAP:

Exposure time | Damage degree by helminthosporiasis, % | Prevalence of the disease, %
Control | 25.00 ± 5.00 | 75.00 ± 12.00
1 min | 24.00 ± 11.00 | 68.00 ± 19.00
5 min | 25.00 ± 9.00 | 73.00 ± 24.00
10 min | 22.00 ± 5.00 | 70.00 ± 7.00

Thus, the obtained results showed no significant impairment of the sowing qualities of barley seeds after pre-sowing treatment with NTAP. The studied seed samples had a low initial infection with diseases (except for helminthosporiasis); therefore, no pronounced disinfecting effect was observed. However, a biocidal effect of the argon NTAP on barley seeds was demonstrated, determined by a decrease in the total microbial number of the surface microbiota (by 36.7% after 5 minutes of exposure) (Table 4). As a result of studies of the impact of NTAP on barley seeds for 1 to 10 minutes, there was no change in their main morphometric parameters; however, exposure for 10 minutes reduced the fresh weight of 7-day-old barley seedlings. A decrease in seed germination energy was observed after 5 and 10 minutes of plasma exposure, but laboratory germination did not change significantly. The damage degree and the prevalence of seed diseases caused by various fungi were similar to those of the control samples.
A biocidal effect was shown, expressed in a decrease in the total microbial number of the surface microbiota of seeds.
Time-frequency analysis associated with the Laguerre wavelet transform

We define the localization operators associated with Laguerre wavelet transforms. Next, we prove the boundedness and compactness of these operators, which depend on a symbol and two admissible wavelets, on L^p_α(K), 1 ≤ p ≤ ∞.

The theory of harmonic analysis on L^p_rad(H^d) was exploited by many authors (see [23, 27, 32]). When one considers problems for radial functions on the Heisenberg group H^d, the underlying manifold can be regarded as the Laguerre hypergroup K := [0, ∞) × R. Stempak [33] introduced a generalized translation operator on K and established the theory of harmonic analysis on L^2(K, dν_α), where the weighted Lebesgue measure ν_α on K is given by

dν_α(x, t) := x^{2α+1} dx dt / (π Γ(α + 1)),  α ≥ 0.

In this paper we are interested in the Laguerre hypergroup K. We recall that (K, *_α) is a commutative hypergroup [29], on which the involution and the Haar measure are given, respectively, by the homeomorphism (x, t) → (x, t)^− = (x, −t) and the positive Radon measure dν_α(x, t). The unit element of (K, *_α) is e = (0, 0).

In the classical setting, the notion of wavelets was first introduced by Morlet, a French petroleum engineer at Elf Aquitaine, in connection with his study of seismic traces. The mathematical foundations were given by Grossmann and Morlet in [18]. The harmonic analyst Meyer and many other mathematicians became aware of this theory and recognized many classical results inside it (see [6, 21, 26]). Classical wavelets have wide applications, ranging from signal analysis in geophysics and acoustics to quantum theory and pure mathematics (see [8, 16] and the references therein). Subsequently, the theory of wavelets and the continuous wavelet transform was extended to hypergroups, in particular to Laguerre hypergroups (see [29, 34]).

One of the aims of wavelet theory is the study of localization operators for the continuous wavelet transform. Time-frequency localization operators are a mathematical tool to define a restriction of functions to a region in the time-frequency plane that is compatible with the uncertainty principle and to extract time-frequency features. In this sense, these operators were introduced and studied by Daubechies [9, 10, 11] and Ramanathan and Topiwala [30], and they are now extensively investigated as an important mathematical tool in signal analysis and other applications [17, 12, 13, 35, 7].

As harmonic analysis on the Laguerre hypergroup has seen remarkable development, it is natural to ask whether there exists an analogue of the theory of localization operators for the continuous wavelet transform related to this harmonic analysis. Using the properties of the generalized Fourier transform on the Laguerre hypergroup K, our main aim in this paper is to introduce and study the two-wavelet localization operators on the Laguerre hypergroup. The reason for the extension from one wavelet to two wavelets comes from the extra degree of flexibility in signal analysis and imaging when localization operators are used as time-varying filters. It turns out that localization operators with two admissible wavelets have a richer mathematical structure than their one-wavelet analogues.

The remainder of this paper is arranged as follows. Section 2 contains some basic facts about the Laguerre hypergroup, its dual, and the Schatten-von Neumann classes.
In Section 3 we introduce and study the two-wavelet localization operators in the setting of the Laguerre hypergroup. More precisely, the Schatten-von Neumann properties of these localization operators are established, and for trace-class Laguerre two-wavelet localization operators the traces and the trace-norm inequalities are presented. Section 4 is devoted to proving that, under suitable conditions on the symbols and the two admissible wavelets, the L^p boundedness and compactness of these two-wavelet localization operators hold.

2. Preliminaries

In this section we set some notation and recall some basic results in harmonic analysis related to Laguerre hypergroups and Schatten-von Neumann classes. The main references are [29, 35].

• C_*(K) is the space of continuous functions on R^2, even with respect to the first variable.
• C_{*,c}(K) is the subspace of C_*(K) formed by the functions with compact support.
• K^ := R × N is the dual of K, equipped with a weighted Lebesgue measure γ_α whose weight is expressed in terms of L^α_m, the Laguerre polynomial of degree m and order α.

It is well known (see [29]) that, for all (λ, m) ∈ K^, the associated system of characters consists of joint eigenfunctions of two singular partial differential operators D_1 and D_2. The harmonic analysis on the Laguerre hypergroup K is generated by a singular operator on K, while its dual K^ is generated by a differential-difference operator built from the operators Λ_1 and Λ_2 and from the difference operators Δ_+ and Δ_−, all defined for suitable functions g on K^. These operators satisfy basic properties which can be found in [29, 2].

Definition 2.1. Let f ∈ C_{*,c}(K). For all (x, t) and (y, s) in K, the generalized translation operators τ^α_{(x,t)} are defined by formula (2.1), in which ⟨x, y⟩_{r,θ} := x^2 + y^2 + 2xyr cos θ.

Notation:
• S_*(K) is the space of functions f : R^2 → C, even with respect to the first variable, C^∞ on R^2, and rapidly decreasing together with all their derivatives; i.e., the semi-norms N_{k,p,q} are finite for all k, p, q ∈ N. Equipped with the topology defined by the semi-norms N_{k,p,q}, S_*(K) is a Fréchet space.
• S(K^) is the space of functions g : K^ → C such that, for all m, p, q, r, s ∈ N, the corresponding derived function is bounded and continuous on R, C^∞ on R^*, and such that the left and right derivatives at zero exist. Equipped with the topology defined by the semi-norms ν_{k,p,q}, S(K^) is a Fréchet space.

The generalized Fourier transform F_α extends to an isometric isomorphism from L^2_α(K) onto L^2_{γ_α}(K^).

Corollary 2.11. For all f and g in L^2_α(K) we have the following Parseval formula for the generalized Fourier transform F_α:

∫_K f(x,t) conj(g(x,t)) dν_α(x,t) = ∫_{K^} F_α(f)(λ,m) conj(F_α(g)(λ,m)) dγ_α(λ,m).

2.2. Schatten-von Neumann classes.

Notation:
• ℓ^p(N), 1 ≤ p < ∞, is the set of all infinite sequences of real (or complex) numbers u := (u_j)_{j∈N} such that ‖u‖_p := (Σ_{j∈N} |u_j|^p)^{1/p} < ∞. For p = 2, we equip ℓ^2(N) with the scalar product ⟨u, v⟩ := Σ_{j∈N} u_j conj(v_j).

For 1 ≤ p < ∞, the Schatten class S_p is the space of all compact operators whose singular values lie in ℓ^p(N); S_p is equipped with the norm ‖A‖_{S_p} := (Σ_j s_j(A)^p)^{1/p}, where the s_j(A) are the singular values of A.

Remark 2.14. We note that S_2 is the space of Hilbert-Schmidt operators, and S_1 is the space of trace-class operators.

Definition 2.15. The trace of an operator A in S_1 is defined by tr(A) := Σ_n ⟨A v_n, v_n⟩ for any orthonormal basis (v_n)_n of L^2_α(K).

Definition 2.17. We define S_∞ := B(L^2_α(K)), equipped with the operator norm.

2.3. Basic Laguerre wavelet theory. In this subsection we recall some results introduced in [29]; they involve a measure μ_α on R × K.

Definition 2.18. A Laguerre wavelet on K is a measurable function h on K satisfying, for almost all (λ, m) ∈ K^\{(0, 0)}, an admissibility condition (finiteness and positivity of the admissibility constant).

For a ∈ R\{0} and a measurable function h, we consider the dilated function h_a. The Laguerre continuous wavelet transform Φ^h_α f(a, x, t) is then built from h_a and the generalized translation operators given by (2.1); it can also be written in the form of a generalized convolution, Φ^h_α f(a, ·) = f~ *_α h_a up to normalization, where f~(x, t) = f(x, −t) and *_α is the generalized convolution product given by (2.2).

3. Laguerre two-wavelet localization operators

In this section we derive a host of sufficient conditions for the boundedness and the Schatten-class membership of the Laguerre two-wavelet localization operators in terms of properties of the symbol σ and the windows h and k.

3.1. Preliminaries.

Definition 3.1. Let h, k be measurable functions on K, and let σ be a measurable function on R × K. We define L_{h,k}(σ), the Laguerre two-wavelet localization operator on L^p_α(K), 1 ≤ p ≤ ∞, by formula (3.1). According to the different choices of the symbol σ and the different continuities required, we need to impose different conditions on h and k in order to obtain an operator on L^p_α(K). It is often more convenient to interpret the definition of L_{h,k}(σ) in a weak sense: for f in L^p_α(K), p ∈ [1, ∞], and g in L^{p'}_α(K), one uses the sesquilinear identity (3.2). In what follows, such an operator L_{h,k}(σ) will simply be called a localization operator.
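For orientation, the weak-form identity (3.2) can plausibly be written as follows, by analogy with the classical two-wavelet (Wong-type) localization operators; the normalization is taken here as an assumption:

% Sketch of the weak form (3.2), assuming the classical normalization.
\[
  \langle L_{h,k}(\sigma)f,\, g \rangle
  = \int_{\mathbb{R}\times\mathbb{K}} \sigma(a,x,t)\,
    \Phi^{h}_{\alpha}f(a,x,t)\,
    \overline{\Phi^{k}_{\alpha}g(a,x,t)}\; d\mu_{\alpha}(a,x,t),
  \qquad f \in L^{p}_{\alpha}(\mathbb{K}),\ g \in L^{p'}_{\alpha}(\mathbb{K}).
\]

Read this way, the adjoint relation below is immediate: conjugating σ and swapping the roles of h and k interchanges the two wavelet transforms.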
Here p ∈ [1, ∞), and formally we assume that the defining integral converges.

Proposition 3.2. The adjoint of L_{h,k}(σ) is the linear operator L_{k,h}(σ~), where σ~ denotes the conjugate symbol.

Proof. For all f in L^p_α(K) and g in L^{p'}_α(K), this follows immediately from (3.2).

In the rest of this section, h and k will be two Laguerre wavelets on K, normalized in L^2_α(K). The main result of this subsection is that the linear operators L_{h,k}(σ) are bounded on L^2_α(K). We first consider this problem for σ in L^1_{μ_α}(R × K), next for σ in L^∞_{μ_α}(R × K), and we then conclude by using interpolation theory.

Proof (the case σ ∈ L^1_{μ_α}(R × K)). For all functions f and g in L^2_α(K), the claimed bound follows from relations (3.2) and (2.10).

Proof (the case σ ∈ L^∞_{μ_α}(R × K)). For all functions f and g in L^2_α(K), the claimed bound follows from Hölder's inequality together with Plancherel's formula for Φ^h_α and Φ^k_α, given by relation (2.9).

We can now associate a localization operator L_{h,k}(σ) with every symbol σ in L^p_{μ_α}(R × K), 1 ≤ p ≤ ∞. The precise result is the following theorem.

Theorem 3.5. Proof. We consider an interpolating family of operators. Then, by Proposition 3.3 and Proposition 3.4, the two endpoint bounds hold; since (3.7) is true for arbitrary functions f in L^2_α(K), we obtain the desired result.

3.2. Schatten-von Neumann properties for L_{h,k}(σ). The main result of this subsection is that the localization operator L_{h,k}(σ) belongs to the Schatten class.

Proof. Let σ be in L^p_{μ_α}(R × K) and let (σ_n)_{n∈N} be a sequence of functions in L^1_{μ_α}(R × K) ∩ L^∞_{μ_α}(R × K) such that σ_n → σ in L^p_{μ_α}(R × K) as n → ∞. Then, by Theorem 3.5, L_{h,k}(σ_n) converges to L_{h,k}(σ) in the operator norm. On the other hand, since by Proposition 3.6 each L_{h,k}(σ_n) is in S_2 and hence compact, it follows that L_{h,k}(σ) is compact.

For trace-class symbols one obtains the two-sided estimate (3.9), in which an auxiliary function σ~ is expressed in terms of the positive singular values s_j, j = 1, 2, ..., of L_{h,k}(σ) and the corresponding orthonormal functions φ_j. By Fubini's theorem, the Cauchy-Schwarz inequality, Bessel's inequality, and relations (2.8) and (2.6), one obtains the second inequality of (3.9). We now prove that L_{h,k}(σ) satisfies the first inequality of (3.9). It is easy to see that the auxiliary function belongs to L^1_α(K), and using formula (3.10), Fubini's theorem, and Plancherel's formula for Φ^h_α and Φ^k_α, we obtain the first inequality. The proof is complete.

In the following we give the main result of this subsection. We then state a result concerning the trace of products of localization operators.

Corollary 3.12. Let σ_1 and σ_2 be any real-valued and non-negative functions in L^1_{μ_α}(R × K).
We assume that h = k and that h is a function in L^2_α(K) such that ‖h‖_{L^2_α(K)} = 1. Then the localization operators L_{h,k}(σ_1) and L_{h,k}(σ_2) are positive trace-class operators and, for any natural number n, the trace of (L_{h,k}(σ_1) L_{h,k}(σ_2))^n is controlled by the corresponding S_1 norms.

Proof. By Theorem 1 in Liu's paper [22] we know that if A and B are positive operators in the trace class S_1, then, for all n ∈ N, tr((AB)^n) ≤ (tr A)^n (tr B)^n. Taking A = L_{h,k}(σ_1) and B = L_{h,k}(σ_2) and invoking the previous remark, the desired result is obtained and the proof is complete.

4. L^p_α boundedness and compactness of L_{h,k}(σ)

In this section we derive a host of sufficient conditions for the boundedness and compactness of the localization operators L_{h,k}(σ) on L^p_α(K), 1 ≤ p ≤ ∞, in terms of properties of the symbol σ and the windows h and k.

4.1. Boundedness of L_{h,k}(σ). Assume h ∈ L^p_α(K), together with the standing hypotheses on σ and k. We are going to show that L_{h,k}(σ) is a bounded operator on L^p_α(K). Let us start with the following propositions.

Proposition 4.1. For every function f in L^1_α(K), the required bound follows from Fubini's theorem and the relations (3.1), (2.11), and (2.7).

Proof (the L^∞ case). Let f be in L^∞_α(K). As above, the bound follows from Fubini's theorem and the relations (3.1), (2.11), and (2.7).

With a Schur technique we can obtain an L^p_α-boundedness result as in the previous theorem, but the estimate for the norm ‖L_{h,k}(σ)‖_{B(L^p_α(K))} is cruder: there exists a unique bounded linear operator satisfying (4.1). By simple calculations it is easy to verify the two Schur conditions, so by Schur's lemma (see [15]) we conclude that L_{h,k}(σ) : L^p_α(K) → L^p_α(K) is a bounded linear operator for 1 ≤ p ≤ ∞, with the corresponding norm bound.

Proof. For any f ∈ L^p_α(K), consider the linear functional I_f induced by L_{h,k}(σ)f. Using Fubini's theorem and relation (2.11), we find that I_f is a continuous linear functional on L^{p'}_α(K) whose norm satisfies the stated bound, which establishes the proposition.

Combining Proposition 4.1 and Proposition 4.7, we have the following theorem.

Theorem 4.9. Let σ be in L^r_{μ_α}(R × K), r ∈ [1, 2], and h, k ∈ L^1_α(K).

Proof. Consider the associated linear functional. By Proposition 4.1 and Theorem 3.5, the two endpoint estimates (4.2) and (4.3) hold. Therefore, by (4.2), (4.3), and multilinear interpolation theory (see [5, Section 10.1] for reference), we obtain a unique bounded linear operator, which by the definition of I coincides with L_{h,k}(σ). As the adjoint of L_{h,k}(σ) is L_{k,h}(σ~), L_{h,k}(σ) is a bounded linear map on L^{r'}_α(K) with its operator norm satisfying the dual bound. Using an interpolation of (4.4) and (4.5), we obtain the corresponding operator-norm bound for any p in [r, r'], and in fact for any p in the interval [2r/(r+1), 2r/(r−1)].

In order to prove this theorem we need the following lemmas.

Lemma. There exists a unique bounded linear operator with the stated mapping property. Proof. Consider the associated linear functional; then, by Proposition 4.1 and Theorem 3.5, the two estimates hold, where ‖·‖_{B(L^p_{μ_α}(R×K), B(L^q_α(K)))} denotes the norm in the Banach space of bounded linear operators from L^p_{μ_α}(R × K) into B(L^q_α(K)), 1 ≤ p, q ≤ ∞. Using an interpolation of (4.7) and (4.8), we get the result.

Proof. As the adjoint of the operator in the previous lemma is again a bounded linear operator, the result follows from duality and the previous lemma.

Proof. The proof follows from Theorem 4.8 and Theorem 3.5 with p = 1, with q in place of p, and interpolation theory.

In the following we give two results on the compactness of localization operators.

Proof. The result is an immediate consequence of an interpolation of Corollary 3.10 and Proposition 4.14; see again [4, pp. 202-203] for the interpolation used.
Hydrometeorological conditions drive long-term changes in the spatial distribution of Potamogeton crispus in a subtropical lake

Globally, anthropogenic disturbance and climate change have caused a rapid decline of submerged macrophytes in lake ecosystems. Potamogeton crispus (P. crispus), a species that germinates in winter, has expanded explosively throughout many Chinese lakes, yet the underlying mechanism remained unclear. Here, this study examined the long-term changes in the distribution patterns of P. crispus in Lake Gaoyou by combining remote sensing images and hydrometeorological data from 1984 to 2022 and water quality data from 2009 to 2022. It aims to unravel the relationships between the distribution patterns of P. crispus and hydrometeorological and water quality factors. The results showed that the area of P. crispus in Lake Gaoyou increased slightly from 1984 to 2009, increased markedly from 2010 to 2019, and declined after 2020. Spatially, P. crispus was primarily distributed in the western and northern parts of Lake Gaoyou, with less distribution in the central and southeastern parts of the lake. Wind speed (WS), temperature (Temp), water level (WL), ammonia nitrogen (NH3-N), and Secchi depth (SD) were identified as the key factors regulating the variation in the P. crispus area in Lake Gaoyou. We found that the P. crispus area showed an increasing trend with increasing Temp, WL, and SD and decreasing WS and NH3-N. The influence of environmental factors on the area of P. crispus in Lake Gaoyou varied among seasons. The results indicated that hydrometeorology (WS, Temp, and WL) may override water quality (NH3-N and SD) in driving the succession of P. crispus distribution. The findings of this study offer valuable insights into the recent widespread expansion of P. crispus in shallow lakes across Eastern China.

Introduction

As crucial producers in aquatic ecosystems, macrophytes not only prevent sediment resuspension, reduce nutrient release, and inhibit the growth of phytoplankton, but also provide habitats for various aquatic organisms (Søndergaard et al., 2010). As one of the important types of macrophyte, submerged macrophytes play an important role in ecological restoration; water bodies in which they are present typically have low nutrient levels and phytoplankton biomass (Sayer et al., 2010). Generally, submerged macrophytes absorb nutrients from the water body during their growth period, significantly improving water quality and increasing transparency (Wu and Hua, 2014). However, extensive decomposition of submerged macrophytes following excessive growth can have negative impacts on aquatic ecosystems by depleting dissolved oxygen (DO) in the water, which is an important cause of water quality degradation in macrophyte-dominated eutrophic lakes (Roman et al., 2001; Wang et al., 2018). Therefore, maintaining an appropriate level of submerged macrophyte coverage is essential for maintaining water quality and aquatic ecosystem stability, as well as for ecological restoration (Temmink et al., 2021).
Over the past few decades, many lake ecosystems in China have undergone rapid degradation, characterized mainly by reduced macrophyte coverage, dominance of single species, and overgrowth of specific species (Cao et al., 2015; Dong et al., 2022). These phenomena were influenced by various factors, including global climate change and anthropogenic activities. Previous studies have indicated that an increase in temperature promotes the germination of macrophytes, affecting their reproductive strategies, interactions, and species richness (Zhang YL et al., 2016; Li et al., 2017; Fares et al., 2020; Kim and Nishihiro, 2020). Changes in hydrological conditions such as water level and flow velocity are important factors contributing to the significant decrease in submerged macrophyte coverage and diversity (Breugnot et al., 2008; Luo et al., 2015). Anthropogenic activities may cause eutrophication in lakes and trigger a shift from macrophyte-dominated to phytoplankton-dominated states; a regime-shift theory has been developed to describe such abrupt changes (Scheffer et al., 1993; Akasaka et al., 2010; Zhang PY et al., 2016). Previous studies have explored the impact of environmental factors on submerged macrophyte decline, investigating the spatiotemporal variability of submerged macrophyte distribution in eastern lakes such as Lake Taihu in China through field investigation, controlled experiments, remote sensing, and ecological modeling (Zhang YL et al., 2016; Dong et al., 2022). These studies mainly focused on the decline of macrophyte coverage. However, studies on the explosive growth of single species are also necessary; in recent years, Potamogeton crispus (P. crispus) has expanded explosively in many shallow lakes in China and has disrupted water quality and the stability of aquatic ecosystems (Cao et al., 2015; Huang et al., 2022).

P. crispus is a submerged macrophyte that requires lower temperatures during its growth period; it usually germinates in the winter, grows in the spring, and then degrades in late spring and early summer (Jian et al., 2003; Woolf and Madsen, 2003). Because of this unique phenological character, P. crispus has emerged as a predominant submerged macrophyte species in most shallow lakes of Eastern China in spring (Cao et al., 2015; Chen et al., 2017). Notably, there are few submerged macrophyte species in Lake Gaoyou, and P. crispus has become the dominant species in recent years, spreading across the entire lake during spring (Tian et al., 2019; Xia et al., 2022). Following its bloom period, P. crispus decomposes quickly, releasing nutrients that have a substantial negative influence on water quality and the stability of aquatic ecosystems, potentially endangering local water supply security (Wang et al., 2018; Huang et al., 2022). Therefore, elucidating the factors influencing the growth of P. crispus is quite important. Since the 1990s, there have been field investigations of macrophyte communities in some lakes, such as Lake Nansi and Lake Dongping (Yu et al., 2017; Xia et al., 2022). In recent years, some studies have used remote sensing technologies to interpret and identify wetlands and macrophyte distribution (Wang et al., 2019; Huang et al., 2021). However, there have been few studies on the long-term spatiotemporal variability of the distribution of P. crispus and its driving factors.
Here, this study combined remote sensing images and hydrometeorological factors of Lake Gaoyou from 1984 to 2022 and water quality factors from 2009 to 2022 to achieve the following research objectives: (1) clarify the long-term changes in the distribution characteristics of P. crispus in Lake Gaoyou, and (2) disentangle the relative importance of hydrometeorological and water quality factors in regulating the area of P. crispus in Lake Gaoyou. We hypothesized that long-term changes in the distribution patterns of P. crispus were more strongly related to hydrometeorological conditions than to water quality because of climate change over the past decades (Wu et al., 2021; Xia et al., 2022). This study could provide insights into the mechanisms behind the recent large-scale blooms of P. crispus in shallow lakes of Eastern China.

Study area

Lake Gaoyou (32°30′-33°05′N, 119°06′-119°25′E) is located in the central part of Jiangsu Province, China, in the downstream area of the Huai River; it mainly receives water from the Huai River (Figure 1). The total area of the water body is 728 km². Lake Gaoyou is situated in the subtropical monsoon climate zone, with an average annual precipitation of 1,029 mm and an average annual evaporation of 890 mm (Chen et al., 2017). The prevailing wind direction is southeast. The main rivers along the lake include the Linong River, Baita River, and Qinlan River. Lake Gaoyou is a typical overflow lake, primarily playing a crucial role in flood control and water supply. More importantly, it serves as a water source for the Eastern Route of the South-to-North Water Diversion Project (ER-SNWDP), thus contributing to water diversion benefits and drinking water safety (Qu et al., 2020). However, Lake Gaoyou is undergoing drastic changes in its hydrological regime and is strongly impacted by anthropogenic activities such as reclamation and enclosed aquaculture, leading to eutrophication (Guo et al., 2023).

Remote sensing data collection and analysis

The remote sensing data for this study are Landsat 5 and Landsat 8 satellite data, with a spatial resolution of 30 m. The Landsat series images of Lake Gaoyou are atmospherically corrected surface reflectance data. The obtained images are near-cloudless (cloud cover < 20%) during the growth period of submerged macrophytes (especially P. crispus) in April and May.
This study employed a remote sensing-based automatic classification algorithm for the extraction of macrophytes, which is able to distinguish algal blooms, emergent/floating-leaved macrophytes, and submerged macrophytes in eutrophic lakes. The decision tree is composed of two vegetation indices and their respective thresholds (Supplementary Figure 1): the Aquatic Vegetation Index (AVI) (Luo et al., 2023) and the Normalized Difference Vegetation Index (NDVI). Here, AVI (Equation 1) was calculated from the wetness coefficients k(λ_i) of the Landsat tasseled-cap transformation, the surface reflectances R(λ_i) of the corresponding spectral bands (R_NIR, R_Red, and R_SWIR1 for the near-infrared, red, and short-wave infrared bands), and the corresponding central wavelengths λ_NIR, λ_Red, and λ_SWIR1; it was used to extract the macrophyte area. NDVI (Equation 2) was used to extract the floating-leaved vegetation and emergent macrophyte (FEM) area:

NDVI = (R_NIR − R_Red) / (R_NIR + R_Red).   (2)

The specific process is as follows. (1) Using a threshold value a, the study area was divided into macrophyte and non-macrophyte areas: pixels with AVI > a were identified as macrophyte, and the remaining pixels as non-macrophyte. (2) Within the macrophyte area, further classification was carried out using a threshold value b: pixels with NDVI > b were classified as FEM, and the remaining pixels as submerged macrophytes. The threshold a for AVI is dynamic and varies among images acquired on different dates; it was obtained through a mixed linear spectral model (Equation 3),

S_m,i(λ) = p · S_W,i(λ) + (1 − p) · S_V(λ),   (3)

where S_m,i(λ), S_W,i(λ), and S_V(λ) represent the spectra of mixed materials, pure water, and pure vegetation, respectively, and p represents the proportion of the pure water spectrum in the spectral mixture. The universal NDVI threshold b = 0.2 was acquired from the threshold statistical graph obtained by the maximum gradient method, based on extensive empirical data training aimed at extracting FEM in Lake Gaoyou. Accuracy evaluation results of the classification confusion matrix are displayed in Supplementary Table 1.

Owing to the typical spectral characteristics of vegetation exhibited by FEM growing in the lakeshore zone, whose spectral signals are stronger than those of submerged macrophytes, the two classes are easily distinguishable from each other. P. crispus dominates the submerged macrophyte population in Lake Gaoyou, and the distribution of other species is quite limited (Tian et al., 2019; Xia et al., 2022). Additionally, other submerged macrophytes are in their germination phase in April and May, while P. crispus is in its rapid growth phase with its stems closest to the water surface, exhibiting relatively strong spectral signals at this time. Therefore, we selected satellite images from April and May to identify the area of P. crispus in Lake Gaoyou. The remote sensing images from 2011 to 2013 and 2015 have high cloud content or contain stripes; for these years, Landsat 7 ETM+ and Landsat 8 OLI satellite data were downloaded, and visual interpretation was conducted to extract the area of P. crispus.
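A compact sketch of this two-step decision tree is given below in R; the per-pixel inputs, the dynamic AVI threshold a, and the function names are illustrative stand-ins for the paper's implementation.

# Two-threshold decision tree: macrophyte vs. non-macrophyte via AVI,
# then FEM vs. submerged macrophyte via NDVI (universal threshold 0.2).
classify_pixel <- function(avi, ndvi, a, b = 0.2) {
  if (avi <= a) return("non-macrophyte")
  if (ndvi > b) "floating-leaved/emergent" else "submerged"
}

ndvi <- function(r_nir, r_red) (r_nir - r_red) / (r_nir + r_red)

# Example: a pixel with a weak vegetation signal above the water background.
classify_pixel(avi = 0.15, ndvi = ndvi(0.18, 0.12), a = 0.10)  # "submerged"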
Meteorological, hydrological, and water quality data

The meteorological factors, air temperature (Temp), wind speed (WS), and precipitation (PP), from 1984 to 2022 were obtained from the National Weather Science Data Center (https://data.cma.cn/). The hydrological factor, water level (WL), for 1984 to 2022 was obtained from the Jiangsu Provincial Hydrology and Water Resources Investigation Bureau. The water quality data from 2009 to 2022 were obtained from field surveys and sampling analysis.

The water quality survey conducted between 2009 and 2022 involved seven sampling sites (Figure 1): two seasonal sampling sites across the lake (February, May, August, and November/December) and five monthly sampling sites. After initial processing, the collected water samples were further analyzed in the laboratory, ultimately yielding data on seven water quality parameters. Secchi depth (SD) was measured with a Secchi disk, and DO was measured with a portable multi-parameter water quality meter (YSI Professional Plus, USA). Surface, middle, and bottom water samples taken with a Plexiglas sampler were pooled, kept cool in a 1-L refrigerated container (at 4 °C), and transported to the laboratory within 24 h. Total nitrogen (TN) was determined by potassium persulfate oxidation and UV spectrophotometry, and total phosphorus (TP) was determined by ammonium molybdate spectrophotometry. Ammonia nitrogen (NH3-N) was determined using Nessler's reagent photometric method, and the permanganate index (COD_Mn) was determined using the potassium permanganate method. Chlorophyll a (Chl-a) was determined using a spectrophotometer (UV-2450, Shimadzu Co., Ltd., Japan) after filtering known amounts of water through GF/F filters (Whatman International Ltd., Maidstone, England).

The flowchart of the grouping of environmental factors used to analyze the area of P. crispus is shown in Supplementary Figure 2.

Data analysis

To understand the trend in the P. crispus area, the "lm" function was used to analyze the long-term temporal changes of environmental factors in Lake Gaoyou. Subsequently, the "segmented" package was employed to fit segmented models to the time series of the P. crispus area over the years. As P. crispus germinates in winter and blooms in spring, we examined the effects of environmental factors on the P. crispus area based on data for winter (December of the previous year to March), spring (April to May), and the entire year (January to December). To identify the key environmental factors explaining the area of P. crispus, the "cor.test" function was employed to analyze the correlations between the area of P. crispus and meteorological, hydrological, and water quality factors. The "step" function was used to perform stepwise regression analysis and select the main driving factors, and the "lm" function was then employed to fit the area of P. crispus with the selected factors. The "vegan" package was used for variation partitioning analysis to assess the importance of the two groups of factors (hydrometeorological and water quality factors) in influencing the area of P. crispus. The "ggplot2" package was used to visualize the results. In all analyses, p < 0.05 was considered statistically significant. Data analysis was performed using the relevant packages in R version 4.3.2, and the plots were generated on both the Origin and R platforms.
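The described workflow maps almost one-to-one onto a few lines of R. The sketch below assumes a data frame dat with one row per year, whose column names (area, year, WS, Temp, WL, NH3N, SD) are placeholders for the study's variables; npsi = 2 corresponds to the two breakpoints that separate the three time periods reported below.

library(segmented)  # breakpoint (segmented) regression
library(vegan)      # variation partitioning

fit <- lm(area ~ year, data = dat)
seg <- segmented(fit, seg.Z = ~year, npsi = 2)    # two breakpoints in the trend

cor.test(dat$area, dat$WS)                        # pairwise screening, one factor
full <- lm(area ~ WS + Temp + WL + NH3N + SD, data = dat)
best <- step(full, direction = "both")            # stepwise factor selection

# Hydrometeorology vs. water quality fractions of explained variation
vp <- varpart(dat$area, ~ WS + Temp + WL, ~ NH3N + SD, data = dat)
plot(vp)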
Temporal changes in the environmental factors

The hydrometeorological factors during the bloom period of P. crispus showed significant changes from 1984 to 2022 (Figure 2; Supplementary Figures 3, 4). For example, in spring, the Temp (18.22 ± 1.08 °C) increased markedly over time and reached its maximum of 20.53 °C in 2017, followed by a decreasing trend between 2017 and 2020. Conversely, WS (2.65 ± 0.35 m/s) declined significantly from 2013 to 2019 and reached its minimum of 1.92 m/s, with a noticeable rebound from 2019 to 2020, followed by a subsequent decline. The annual average WL (5.79 ± 0.25 m) increased significantly over time and reached its maximum of 6.21 m in 2018. From 2009 to 2022, the main water quality factors of Lake Gaoyou also showed significant temporal variations. For example, in spring, Chl-a (0.012 ± 0.004 mg/L) increased markedly and reached its maximum of 0.019 mg/L in 2018. DO (8.25 ± 0.39 mg/L) and SD (0.34 ± 0.04 m) showed fluctuating trends. TP (0.06 ± 0.01 mg/L) increased from 2009 to 2013, peaked at 0.08 mg/L in 2013, and then fluctuated. TN (1.06 ± 0.31 mg/L) showed considerable fluctuation, mostly remaining at approximately 1.0 mg/L. COD_Mn (4.17 ± 0.30 mg/L) showed an overall slight increasing trend, reached its maximum of 4.79 mg/L in 2020, and then declined.

In addition, some water quality factors showed significant seasonal differences (Figure 3). In spring, the lake had a median TP concentration of 0.059 (0.036-0.076) mg/L and a median COD_Mn concentration of 4.17 (3.68-4.79) mg/L, whereas the TP and COD_Mn concentrations for the entire year were 0.070 (0.054-0.080) mg/L and 4.41 (3.95-4.88) mg/L, respectively; TP and COD_Mn in spring were thus significantly lower than the concentrations for the entire year (p < 0.001 for TP and p < 0.05 for COD_Mn). DO also showed pronounced differences among the three periods, with a significantly higher median concentration of 10.92 (10.27-11.63) mg/L in winter than in the other two (p < 0.001).

Long-term trends and the spatial distribution of P. crispus

The segmented linear fitting results indicated that the variation in the area of P. crispus between 1984 and 2022 can be divided into three time periods (Figure 4). From 1984 to 2009, the area of P. crispus in Lake Gaoyou slightly increased from approximately 37.86 km² to approximately 100 km², with a relatively low distribution area. From 2010 to 2019, the area of P. crispus began to increase markedly (p < 0.001), exhibiting a large-scale bloom trend with an average area of 291.31 km² and a maximum of 395.51 km². After 2019, the area of P. crispus decreased rapidly (p < 0.05), falling to 136.78 km² in 2022.

From a spatial perspective, P. crispus was primarily distributed in the western and northern parts of Lake Gaoyou, with less distribution in the central and southeastern parts of the lake (Figure 5; Supplementary Figure 5). From 1984 to 2010, P. crispus in Lake Gaoyou transitioned from a scattered distribution in the northern part of the lake to extending towards the western part, gradually increasing in area. From 2010 to 2019, P. crispus continued to extend towards the southeastern part of the lake and spread from the surrounding areas towards the central part of the lake, rapidly expanding in area. From 2019 to 2022, the area covered by P. crispus decreased rapidly as it receded from the central part of the lake towards the surrounding areas.

Correlation analysis

The results of the Pearson correlation analysis showed that the area of P.
crispus in Lake Gaoyou showed a significantly positive correlation with Temp and WL (p < 0.001) and a significantly negative correlation with WS (p < 0.001). The correlations between the water quality factors of different periods and the area of P. crispus varied (Figure 6A): in the analysis of winter environmental factors, the area of P. crispus was positively correlated with SD (p < 0.05), whereas in the analysis of spring environmental factors, the area of P. crispus was negatively correlated with NH3-N (p < 0.05).

Stepwise regression analysis

The stepwise regression analysis showed that the five most important factors influencing the change in the P. crispus area in Lake Gaoyou were WS, Temp, WL, NH3-N, and SD (Table 1). In particular, the area of P. crispus was found to correlate closely with hydrometeorology, and the contribution of WS was particularly significant (p < 0.001). The area of P. crispus consistently showed a negative association with WS and a positive relationship with SD in the analyses of all three periods (p < 0.05). In the analyses of entire-year and winter environmental factors, Temp and WL (p < 0.05) were two important factors, both positively correlated with the area of P. crispus (Supplementary Figures 7, 8). In the analyses of entire-year and spring environmental factors, the area of P. crispus showed a significantly negative correlation with NH3-N (p < 0.05) (Figure 6B), and the relative contributions of the variables to the total explained variation were NH3-N > SD. The regression models selected with significance (p < 0.001) between hydrometeorology and the area of P. crispus for the different periods had total explanations of 72.6%, 75.2%, and 64.0%, respectively. The regression models selected with significance between water quality and the area of P. crispus for the different periods had total explanations of 36.4%, 34.0% (p < 0.05), and 44.9% (p < 0.05), respectively.

Variation partitioning analysis

The results of the variation partitioning analysis showed that the drivers of the P. crispus area differed among the periods. In the analysis of entire-year environmental factors, WS and Temp accounted for 60.2% of the variation in the P. crispus area, and their interaction with NH3-N and SD explained 15.0% (Figure 7A). For winter environmental factors, WS and WL explained 39.7% of the variation in the P. crispus area, and the fraction shared with SD explained 30.5% (Figure 7B). For spring environmental factors, WS explained 35.4% of the variation in the P. crispus area, and its interaction with NH3-N and SD explained 21.0% (Figure 7C).

The variation partitioning analyses for all three periods indicated that hydrometeorological factors (Temp, WS, and WL) accounted for a larger share of the variation in the P. crispus area in Lake Gaoyou than water quality factors (NH3-N and SD), playing a dominant role in driving the changes in the area of P. crispus. Additionally, the interaction between hydrometeorology and water quality explained a higher proportion of the variation in the area of P. crispus in winter, whereas its explained proportion was lower for the entire year.

Main driving factors affecting the distribution of P. crispus

The five most important factors influencing the change in the P. crispus area in Lake Gaoyou were WS, Temp, WL, NH3-N, and SD. Among them, WS played a crucial role in the variation of the P. crispus area (Table 1). On one hand, WS can affect the growth of P.
crispus by influencing the hydrodynamic conditions of the lake. Macrophytes in lakes are often subject to water flow resistance, which is frequently more than 25 times that experienced by terrestrial vegetation at comparable wind speeds (Denny and Gaylord, 2002). Submerged macrophytes also show different biomechanical characteristics depending on the wind and wave conditions (Zhu et al., 2015). Consequently, most submerged macrophytes float to the water's surface when strong winds and waves cause stems to break or entire plants to be uprooted. This causes a significant decrease in submerged macrophytes in shallow lakes, or even their disappearance, which reduces macrophyte biomass and coverage (Riis and Biggs, 2003; Yang et al., 2004; Breugnot et al., 2008; Angradi et al., 2013). Consistent with previous findings, in this study WS was an important variable that contributed negatively to the P. crispus area (Table 1); the P. crispus area showed an increasing trend with declining WS from 2010 to 2019. On the other hand, wind speed is often significantly negatively correlated with transparency in shallow lakes (Soria et al., 2021), which is similar to the results of this study (Supplementary Figure 9). Increased wind speeds can intensify hydrodynamic disturbances in lakes, causing resuspension of lake sediments and reducing transparency (Soria et al., 2021). A decline in transparency inhibits plant photosynthesis, which was unfavorable for the growth of P. crispus in Lake Gaoyou. According to the spatial distribution of P. crispus in Lake Gaoyou, the area of P. crispus in the southeastern part of the lake was perennially small. This may be because this area serves as a flood channel where the southeast monsoon prevails, making it prone to stronger hydrodynamic disturbances and lower water transparency, which inhibit the growth of P. crispus.

Generally, water depth has been considered a significant factor influencing the growth of submerged macrophytes, and typically there is a negative correlation between water depth and the biomass of submerged macrophytes (Li et al., 2021). However, in shallow lakes, increasing the water depth can create a favorable growing habitat for submerged macrophytes by changing the temperature, DO content, and underwater light intensity and by expanding the growth space for these plants (Fu et al., 2014, 2018; Su et al., 2018). On the other hand, changes in water depth or level often interact with factors such as wind speed to influence submerged macrophytes (Van Zuidam and Peeters, 2015). An increase in water depth often leads to a decrease in the intensity of wind-wave disturbance (Søndergaard et al., 2003). This, to some extent, mitigated the negative effects of wave disturbance on SD, thereby promoting the expansion of the P. crispus area.

The stepwise linear regression model conducted in our study indicated that the NH3-N concentration and SD were the two important water quality variables contributing to the area of P. crispus (Table 1). An improvement in water transparency helps to increase the underwater light intensity, thereby improving photosynthetic efficiency and consequently promoting the growth of P. crispus (Wang et al., 2017). Previous studies have likewise reported declining macrophytes under a deteriorating underwater light environment. In our study, NH3-N contributed negatively to the area of P.
crispus. High concentrations of ammonium ions can interfere with the nitrogen metabolism of P. crispus (Zhang YL et al., 2016; Dong et al., 2022), and the physiological stress caused by high concentrations of ammonia nitrogen in the water can also lead to a decrease in the biomass of P. crispus (Cao et al., 2015). Several laboratory and field studies have demonstrated that macrophytes in lakes can be damaged and even disappear when there is a high concentration of nitrogen, particularly ammonia nitrogen (Moss et al., 2012; Olsen et al., 2015; Yu et al., 2015; Zhang YL et al., 2016). This conclusion has also been confirmed by site-specific observation and remote sensing mapping (Zhang YL et al., 2016). These findings imply that the increasing SD and decreasing NH3-N concentration in Lake Gaoyou may have enhanced the area of P. crispus from 2010 to 2019. Additionally, P. crispus interacts with water quality. Submerged macrophytes improve local water transparency by reducing sediment resuspension and absorbing nutrients from the water; thus, there is a positive feedback relationship between submerged macrophytes and local water transparency in shallow lake ecosystems (Su et al., 2019). Similarly, P. crispus needs to absorb nutrients such as ammonia nitrogen from the sediment and water through its roots and stems to support growth (Li et al., 2015).

Seasonal characteristics of driving factors

The influence of environmental factors in different seasons on the area of P. crispus in Lake Gaoyou varied (Table 1). In this study, NH3-N significantly contributed to the fluctuation of the P. crispus area in spring, while no statistical difference was observed for NH3-N in winter. Conversely, WL in winter significantly contributed to the variation in the P. crispus area, while no statistical difference was observed for water level in spring.

Water level itself generally does not directly affect macrophytes but often influences plant growth by altering other environmental factors (Van Zuidam and Peeters, 2015). An increasing fluctuation frequency increases disturbance to plants, leading to increased nutrient loss and tissue damage (Bornette et al., 2008). High-frequency fluctuations in the water level can also lead to the resuspension of sediments, increasing water turbidity and thereby inhibiting the growth of macrophytes (Coops and Hosper, 2002; Luo et al., 2015). In May, Lake Gaoyou experiences a period of rising water levels, with significant fluctuations (Jiang et al., 2023), and the flow velocity may rise simultaneously with the water level during this period (Zhang et al., 2017). The rising flow velocity intensifies water disturbance, which is unfavorable for the growth of P. crispus. Compared with spring, water level fluctuations were smaller in winter, which may be why the positive effect of WL changes on the expansion of the P. crispus area was more significant in winter.

The interactions between NH3-N and P. crispus showed a seasonal difference. On one hand, in a shallow eutrophic water body, macrophytes can assimilate a large amount of nitrogen from sediments via their roots during the growing season (Xie et al., 2013), and the efficiency of assimilating and utilizing nitrogen increases when P. crispus blooms in spring. On the other hand, toxic effects cause the various growth indices of P.
crispus, such as leaf length, leaf mass, and root length, to decline with increasing nitrogen and ammonia concentrations (Yu et al., 2015). Therefore, in Lake Gaoyou, the relatively higher concentration of NH3-N in spring compared with winter may explain the more noticeable inhibitory effect of NH3-N on the growth of P. crispus during spring.

Temperature was screened as a significant influencing factor only in the factor analysis at the annual scale. This was primarily because the multivariate stepwise regression algorithm aims to balance model stability and simplicity. Because some factors were significantly correlated (such as Temp and WS, p < 0.05), with WS covarying with Temp, there was some redundancy in the information (Supplementary Figure 6); once WS entered the model, Temp could not enter to the same extent. However, the correlation coefficients show that Temp was significantly correlated with the area of P. crispus. Research on the influence of temperature on P. crispus has indicated that warming treatments significantly increase plant height and total biomass (Yan et al., 2021). As a submerged macrophyte that grows in winter and spring, P. crispus will enter the growing season earlier and occupy more spatial ecological niches as winters get warmer (Li et al., 2018). Consistent with previous studies, the results of this study demonstrate that an increase in Temp had a positive effect on the growth of P. crispus in Lake Gaoyou (Table 1; Figures 2, 4).

Relative importance of hydrometeorology and water quality

The primary influencing factors for the growth of P. crispus in Lake Gaoyou were the hydrometeorological factors (Temp, WS, and WL), which fits our hypothesis. Moreover, the relative importance of the interaction between hydrometeorology and water quality varied across periods: the interaction showed a higher explained proportion in winter, whereas it was lower over the entire year (Figure 7).

Against the backdrop of global climate change, increasing water temperatures, storm events, and the associated long-term flooding have had a significant impact on the health of macrophyte-dominated aquatic ecosystems worldwide, thereby affecting the growth of macrophytes in lakes (Zhang et al., 2017). Research on large-scale vegetation patterns in global lakes indicates that climate variables have a greater impact on species selection at large spatial scales (García-Girón et al., 2020). Similar to previous studies, at the large spatial scale of Lake Gaoyou, climate had a more significant impact on the growth of P. crispus than anthropogenic activities. Furthermore, temperature changes have a significant impact on the growth of submerged macrophytes; for example, Elodea canadensis biomass increases directly with warmer temperatures rather than with nutrient enrichment (Wu et al., 2021). Similarly, the drastic fluctuations in Temp in the Lake Gaoyou area have significantly influenced the changes in the area covered by P. crispus.
Climate change is not a uniform warming process; its impact on winter is particularly noticeable (Franssen and Scherrer, 2008). Macrophytes can overwinter in an aboveground form under warmer winters (Havens et al., 2015). Warmer winters increase the number of branches and the total biomass of macrophytes, thereby enhancing overwinter survival rates (Liu et al., 2016). Additionally, this study revealed that the interaction between winter WS and SD was more significant (p < 0.05) (Supplementary Figure 9). It is possible that winter WS affected the growth of P. crispus by influencing SD, making the impact of winter WS on the expansion of P. crispus more significant.

Conclusion

Ensuring appropriate coverage of submerged macrophyte growth is vital for maintaining water quality and ecosystem stability. The area of P. crispus in Lake Gaoyou showed a slight increase from 1984 to 2009, followed by a marked increase from 2010 to 2019, and then a decline after 2020. We found that the variation in the P. crispus area was highly influenced by WS, Temp, WL, NH₃-N, and SD in Lake Gaoyou and showed seasonality in its response to hydrometeorology and water quality parameters. Hydrometeorological factors appeared to exert a more substantial influence on the area covered by P. crispus than water quality parameters. The significantly decreasing WS and the increasing Temp and WL resulted in explosive trends in the area of P. crispus. Overall, our study revealed the long-term distribution pattern of P. crispus in Lake Gaoyou and identified key factors regulating its distribution. The proliferation of specific species would disrupt water quality and aquatic ecosystem stability. Therefore, effective lake management should include enhanced macrophyte monitoring and timely intervention measures to counteract the excessive growth of specific species, thereby safeguarding water resources for the China ER-SNWDP.

FIGURE 1 Map showing Lake Gaoyou and the distribution of sampling sites. Red circles indicate the monthly sampling sites and green circles indicate the seasonal sampling sites.

From 2010 to 2019, the area of P. crispus began to markedly increase (p < 0.001), exhibiting a large-scale bloom trend with an average area of 291.31 km², reaching a maximum of 395.51 km². After 2019, the area of P. crispus showed a rapid decrease (p < 0.05) and reached 136.78 km² in 2022.

(A) Relationships between the area of P. crispus and environmental factors. The numbers in each grid represent the correlation coefficients between the area of P. crispus and environmental factors. (B) Linear fitting of the area of P. crispus and environmental factors in spring. * p < 0.05; ** p < 0.01; *** p < 0.001.

TABLE 1 The multiple regression model results for the area of P. crispus and environmental factors.
AGE STRUCTURE OF YELLOW-NECKED MOUSE (APODEMUS FLAVICOLLIS MELCHIOR, 1834) IN TWO SAMPLES OBTAINED FROM LIVE TRAPS AND OWL PELLETS

Abstract The age structure of yellow-necked mouse (Apodemus flavicollis Melchior, 1834) has been analyzed in individuals obtained by two methods: trapping with Sherman live traps and obtainment of skulls from long-eared owl (Asio otus Linnaeus, 1758) pellets (predator diet analysis). One hundred and forty-four mice were analyzed for the degree of wear of the surface of molar crowns, and an additional 74 measurements were performed on captive-born Apodemus flavicollis individuals. We used a refined model of comparison that included seven classes of mouse age, rather than the four classes suggested by other authors.

Trapped yellow-necked mouse individuals were obtained during research on Apodemus flavicollis and Apodemus agrarius population ecology in an Orno-Quercetum petraeae (Bor. 1955) Mišić 1972 forest community on Mt. Avala, performed as the main task of a PhD thesis in the period November 1996 - October 1999 (Vukićević, 2002). Baits used in Sherman traps were made with a mixture of fried bacon, onion, and bread; the study plot was 1 ha, with 100 traps, one in each 10-m square. In addition, 12 pairs of yellow-necked mouse were successfully bred in captivity, producing one to three broods with three to five cubs each (Vukićević et al. 2004). At intervals of 20 days, one animal was sacrificed in order to compare it with animals taken from their habitat for determination of the age structure of the population.

The pellets that provided Apodemus flavicollis skulls for this investigation were collected in March of 2003 at the long-eared owl communal roosting site at the Čukarica locality in Belgrade (Jovanović et al. 2003). Identification of prey remains from pellets was carried out following criteria established by Schmidt (1967), Niethammer and Krapp (1978, 1980), März (1987), and Turni (1999). The most valid were cranial elements such as maxillary tooth rows and mandibles. Most of the prey items were identified to the species level. Problems were encountered in identifying remains of species of the subgenus Sylvaemus: Apodemus sylvaticus (Linnaeus, 1758) and A. flavicollis (Melchior, 1834) remains were difficult to distinguish in some cases (Ruprecht, 1979). None of those remains had a preserved intact skull, making it impossible to perform measurements of the neurocranium that could be useful for identification and further analysis. Rodent species undergo different damage during digestion in the owl's stomach. Among the rather large number of identified prey items (36,970; Jovanović, 2002), mice are rarely preserved with intact skulls, and this is true even in the case of insufficiently digested prey. For the present work, we used 66 Apodemus flavicollis maxillaries that were determined with certainty as such. The most precise method for estimation of a rodent's age is the one based on changes in lens weight (Kataranovski et al. 1999). However, this method is impossible to use on prey remains derived from owl pellets, so only the molar wear method was used for age estimation of individuals.
Yellow-necked mouse age was determined from the degree of wear of the surface of molar crowns according to an improved version of the method suggested by Adamczewska-Andrzejewska (1967). The original method suggested by her included four classes of molar age: up to one month (I), 2-5 months (II), 5-9 months (III), and more than 9 months of age (class IV). This method allows for biases resulting from individual feeding habits and other environmental factors (Adamczewska, 1959). Since we had captive individuals for comparison, we used seven classes to provide a more refined picture of age structure. Table 1 presents the ranges of values for body length, tail length, and body mass for the seven age classes of measured individuals. Those classes had distinctive molar wear patterns that we used for further determination of age classes in material obtained from pellets. However, in this comparison, we have to take into account that tooth wear depends on what kind of food was available to the mice we examined. It has been suggested that food in the natural habitat causes rather more extensive tooth wear than food supplied in the form of briquettes to captive animals in the laboratory (Andrzejewski and Liro, 1977). The yellow-necked mice we bred in laboratory conditions were fed ad libitum with food as similar as possible to what would be found in their natural habitat, which included: oak acorns; wheat, oat, barley and sunflower seeds; and Carabus insects occasionally (Vukićević et al. 2004).

As Fig. 1 indicates, age classes I and II were not found in pellet residues, whereas they were present among captured specimens. One of the reasons for their absence in pellets is certainly the rather tender bone structure of mouse skulls in general, at least compared to other rodents such as voles (Andrews, 1990; Jovanović, 2002). Thus it often happens that after a subadult mouse has been eaten, it is impossible to recover its residues from the pellets, even in experimental laboratory conditions. On the other hand, traps were spread out rather densely around active holes of yellow-necked mouse on the studied plot, making probable the capture of a wide age range of individuals.

The most abundant age class in pellets is V, i.e., with estimated age in the range of 164-194 days. Skulls extracted from pellet residues reflect the most active age of Apodemus flavicollis in wild populations (Vukićević, 2002).

Table 1. Measurements performed on captive yellow-necked mouse individuals: age in days, HBL - body length (mm), TL - tail length (mm), BW - body weight (g).

The difference in the percentage of each class in trapped animals is also a result of the fact that those animals were not taken at random (Jensen, 1975), as they would be by a predator (Jedrzejewski et al.
1993), and some were found dead in traps for unknown reasons. Most dead mice were found in 1997 (54 individuals) throughout the year, with two peaks: in March and September (17 and 10, respectively). In 1996, 18 mice were found dead in traps, while in 1998 there were only six (Vukićević, 2002). In the light of these facts, we can conclude that the two methods of obtaining insight into yellow-necked mouse populations are consistent with each other. We therefore suggest that pellet analysis be used as a screening method to obtain data on the possible presence of rodents, in this case Apodemus flavicollis, before any serious population study is set up in an area needing to be examined. Pellet studies may also provide valuable resource data that can reveal new records of scientific interest (Petrov, 1992; Jovanović et al. 2001).

Fig. 1. Distribution of Apodemus flavicollis individuals in two samples sorted according to molar wear-defined age classes.
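As an illustration of how the two age-class distributions in Fig. 1 could be compared formally, the sketch below runs a chi-square contingency test. The per-class counts are hypothetical apportionments of the real sample totals (144 trapped mice, 66 pellet maxillaries), not the paper's data; classes I-III are pooled because classes I and II were absent from the pellet sample.

```python
# Chi-square comparison of two age-class distributions (hypothetical counts).
from scipy.stats import chi2_contingency

#                 I+II+III  IV   V   VI  VII   <- molar-wear age classes
trapped_counts = [      30, 35, 40,  25, 14]   # hypothetical, sums to 144
pellet_counts  = [       8, 14, 22,  14,  8]   # hypothetical, sums to 66

chi2, p, dof, expected = chi2_contingency([trapped_counts, pellet_counts])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```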
The Effect of New Yogyakarta International Airport (NYIA) Development Against Development Infrastructure in Kulonprogo

The development of the national transportation network includes land, sea and air transportation. This research aims to analyze the influence of the new airport development in Kulonprogo, named New Yogyakarta International Airport (NYIA), on infrastructure development in Kulonprogo. The influence of NYIA on Kulonprogo infrastructure is extensive, namely the extension of the national road to Kulonprogo Regency located in Milir Hamlet, Kedungsari Village, Pengasih Subdistrict, and bridge construction in Sentolo Subdistrict that connects the Bantul, Sleman and Kulonprogo districts. In addition to the construction of roads and bridges, the construction of infrastructure connecting to Borobudur temple in Magelang Regency, under the term of Menoreh Surgery, involves the land acquisition of Kulonprogo residents; investors and housing developers invest in land in Kulonprogo to build hotels and housing near the new airport plan. The obstacles in the relocation of residents affected by this airport are the construction of public facilities and social facilities, such as the availability of clean water, the electricity network and household sanitation at the beginning of development, which are important to support basic needs. Public facilities and social facilities development is conducted by the Ministry of Public Works and Public Housing. The affected residents of Glagah Village, Palihan, Temon, Kulonprogo District refused to be relocated in case social facilities and public facilities were not available. The local government approaches the affected villagers personally, accelerating the relocation process and working with various stakeholders.

Introduction

Kulonprogo is one of the regencies located in the west of DIY Province, bordering Purworejo Regency, Central Java. Kulonprogo suddenly became a highly reputable district after the Central Government chose it as the district where the largest international airport in Indonesia, and the third largest in ASEAN, would be built. Kulonprogo became the target of investors setting up hotels, flight school colleges, and even tourist areas. PT. Angkasa Pura I is mandated by the Government of Indonesia to socialize the design of the new airport in Kulonprogo, named New Yogyakarta International Airport (NYIA). One part of the development of the national air transportation network is the establishment of the international airport in Kulonprogo; the first stone of the construction of New Yogyakarta International Airport (NYIA) was laid by the President of Indonesia, Joko Widodo, in Glagah village, Temon, Kulonprogo on Friday, 21 January 2017, as in Figure 1. The main obstacle faced in the construction of this new airport is the access road connecting to Kulonprogo Airport. The Assistant Deputy of Infrastructure for Connectivity and Logistic System of the Coordinating Ministry for Maritime Affairs, Rusli Rahim, explained that the coordination activities of Kulonprogo airport development are land procurement, planning and construction, such as design details, fencing, airside, terminal, tower, supporting buildings and supporting infrastructure.
In addition, there is coordination of the Extra High Voltage Air Channel (SUTET) related to security and flight procedures; accessibility, with the construction of the airport railway and access roads or tolls; tsunami-related risk mitigation; and the Daendels road (as in Figure 17), with the removal of roads from airport land. Related to the potential impact of a tsunami on Kulonprogo Airport, the results of the study concluded that a tsunami event could cause water levels to reach the coast near the airport. The BMKG is still conducting further studies and will deliver the results.

Problem
a) How will the new airport affect the development of infrastructure in Kulonprogo?
b) How is the synergy between PT. Angkasa Pura I, the Kulonprogo Local Government and the affected villagers?
c) What are the constraints in relocating residents affected by the new airport candidate in Kulonprogo?

Purposes
a) To know the influence of the new airport on infrastructure development in Kulonprogo.
b) To know the synergy between PT. Angkasa Pura I, the Kulonprogo Local Government and the affected villagers.
c) To know the obstacles in the relocation of residents affected by the new airport candidate in Kulonprogo.

Theory

The construction of the Kulonprogo airport candidate has pros and cons; until now, in the process of implementation, there has still been rejection from the people directly affected by the airport, some of whom work as farmers and refuse because their livelihood will disappear. Over time, however, the affected citizens came to agree with the new airport because it will bring a positive economic impact for the people around it; many hotels, malls and inns will also be very beneficial for affected residents (Figure 2: New airport candidate area of New Yogyakarta International Airport in Kulonprogo; Sabandar, 2014). The Coordinating Ministry for Maritime Affairs continues to monitor the development of Kulonprogo Airport, Yogyakarta Special Region. This administrative process is almost complete. Constraints on Kulonprogo Airport development, such as land acquisition, construction, residential relocation, and the construction of public facilities and social facilities such as the availability of clean water, the electricity network and household sanitation at the beginning of development, are essential to support the daily needs of citizens. Meanwhile, 58% of the land, or 340 ha, has been paid for to affected people, while 6% of the land, or about 35 hectares, belonging to government agencies in the form of public facilities and social facilities, is still in the process of payment. The remaining 9% is residents' land whose owners reject the project or which is still in an inheritance dispute, processed through a waiver replacement at the Wates District Court. The land owned by government agencies for public facilities and social facilities, covering 6%, is currently in settlement between the Kulon Progo Local Government, the Special Region of Yogyakarta Province Government (DIY), and the National Land Agency (BPN) DIY Regional Office, which is planned to be completed before the inauguration of New Yogyakarta International Airport by President Joko Widodo (Kemenhub, 2017). Some cooperation between the Kulonprogo Local Government and PT. Angkasa Pura I includes the recruitment of security scholarships, provided that the participants are villagers affected by the airport, to be trained as security staff. In line with local identity, the five affected villages at the location of the airport will provide the gate names in the terminal.
The chairman of Angkasa Pura I conveyed that this is a form of appreciation for Kulonprogo residents who are willing to leave their villages for this airport. The proposed gate names are Gate Glagah, Gate Anchors, Gate Palihan, Gate Kebon Rejo, and Gate Sindutan. Through the Corporate Social Responsibility (CSR) program, PT Angkasa Pura I (Persero) provided security unit training to residents affected by the New Yogyakarta International Airport (NYIA) at the Dharmais Foundation Complex, Pengasih, on Thursday (13/07/2017); the training is free of charge for the residents. In the implementation, PT Angkasa Pura I cooperated with a security services business from Kulon Progo, namely PT Tri Dhaya Prima Karya (Sorot Kulonprogo, 2017).

The local government of Kulonprogo continues to support the construction of the new airport in Kulonprogo. Public perception indicates that 76% of respondents have positive perceptions of the airport development plan; zones 1 and 2 have high interest compared to zone 3 in starting new businesses. The Kulonprogo community among the residents affected by the new airport has an interest in business types such as stalls and grocery stores, and the government should accommodate the needs of society in the framework of business development (Afwan, 2016). The strategy of Angkasa Pura I in facing community resistance to the international airport development plan in Kulonprogo is as follows: 1) socialization with public figures and ordinary citizens; 2) inviting the mass media to comparative studies of the development of new airports in West Sumatra; 3) conducting talk shows on television and developing a CSR (Corporate Social Responsibility) program (Yunita, 2016). In 2014, the socialization of the new international airport plan mapped the community development program for residents of Temon Sub-district, Kulonprogo Regency, who are particularly vulnerable. There are three recommendations: the need for a grand design of airport development that considers the sustainability of vulnerable groups, the initiation of community development programs to empower vulnerable groups, and the need for affirmative policy formulation for vulnerable groups based on the principle of social justice (Wahyu, 2014). Kulonprogo Regency hosts the construction of mega projects such as the construction of an international airport, the development of a fishing port, iron sand mining and a steel industry area. The infrastructure development policy is expected to increase the economic growth of the people. Economic development in the form of infrastructure will bring positive and negative impacts. The positive impact is rapid economic growth. The negative impact is that infrastructure development will cause modernity in the public sphere, because there is no policy to protect the natives of Kulonprogo (Insan, 2017). Wahana Tri Tunggal is a group of people who reject the new airport in Kulonprogo; in this research there is a process of identifying threatening acts, and the form of rejection is citizens' refusal of the establishment of the new airport due to the use of their farmland (Naimul, 2015). The development plan is in Kulonprogo Regency.
The new airport is expected to have a positive impact on the development of Kulonprogo Regency, due to the carrying capacity of the road network (national/regency/city), especially the southern road development and the existence of the railway transportation network. Moreover, the development of the Special Economic Zone in Kulonprogo will also provide feedback to the increasing growth of the region in Kulonprogo (Bappeda, 2017).

Result and Discussion

Residents affected by the construction of the new airport in Temon are shown in the following figures: Figure 5, Housing Relocation Type 65 for residents affected by the Kulonprogo new airport; Figure 6, relocation of affected residents in Kulonprogo under construction; Figure 7, Housing Relocation Type 45 for residents affected by the Kulonprogo new airport; Figure 8, resident housing affected in Glagah Village, Temon Sub-district, Kulonprogo District; and Figure 9, Housing Relocation Type 100 for residents affected by the Kulonprogo new airport. One example of the concern of the Kulonprogo Local Government for affected residents is the provision of Ground Handling education scholarships to Kulonprogo residents at the High School of Aerospace Technology (STTKD) Yogyakarta, to prepare them to become human resources in the field of aviation at Kulonprogo airport. STTKD Yogyakarta has cooperated with the Kulonprogo Local Government by giving scholarships to 20 students affected by the New Yogyakarta International Airport (NYIA), located in Kulon Progo Regency, Special Province of Yogyakarta, whose construction will begin soon. The masterplan is pictured in Figures 11 and 12. The Local Government of Kulonprogo has a Regional Spatial Plan as shown in Figure 13.

The influence of the new airport on infrastructure in Kulonprogo is extensive, including, among others, the extension of the national road to Kulonprogo Regency located in Dusun Milir, Kedungsari Village, Pengasih District, as shown in Figure 14; the development of the park in the Wates city center, as shown in Figure 15; and the bridge in Sentolo Subdistrict that connects the Bantul, Sleman and Kulonprogo districts, as shown in Figure 16. The southern Daendels road towards the new airport candidate in Kulonprogo is shown in Figure 17, and the new road to the Sermo Reservoir and Kalibiru in Figure 18. Figure 19 shows the good synergy between PT. Angkasa Pura I, the Kulonprogo Local Government and the affected residents, evidenced by the acceleration of NYIA development, which is expected to draw its human resources from the local people of Kulonprogo. The residents of the five affected villages received various facilities from Pemda Kulonprogo and PT. Angkasa Pura I, among others the provision of Ground Handling education scholarships to Kulonprogo residents at Sekolah Tinggi Teknologi Kedirgantaraan, who will be placed at the new airport when it begins operation in 2020, and security training scholarships to fill the security job vacancies around the new airport in Kulonprogo.

The obstacles in the relocation of residents affected by this airport are the construction of public facilities and social facilities, such as the availability of clean water, the electricity network and household sanitation at the beginning of development, which are important to support basic needs. The affected residents of Glagah Village, Palihan, Temon, Kulonprogo District refused to be relocated in case social facilities and public facilities were not available. The local government approaches the affected villagers personally, accelerating the relocation process and working with various stakeholders.
The study of the effect of fault transmissibility on the reservoir production using reservoir simulation—Cornea Field, Western Australia

The focused area in this study is the Cornea Field, located in the Yampi Shelf, north-eastern Browse Basin, Australia. The field was stated to be an elongated unfaulted drape anticline over highly eroded basement. From the literature and seismic data, faults die out at the basement in the Cornea Field; therefore, no faults were considered previously. Tectonic activity was not apparent in the area, with only deformation by gravitational movements and compaction in the basement zone. However, faults might be present at reservoir and seal depth as time passes. Therefore, the aim of this study is to simulate the Cornea Field with faults, to determine the effect of fault transmissibility on oil production. The study shows that the fault permeability and the fault displacement-to-thickness ratio have a close relationship with fault transmissibility. The fault transmissibility increases when the fault permeability and the fault displacement-to-thickness ratio increase. The transmissibility multiplier was also considered in this study. The fault transmissibility increases with the increase in transmissibility multiplier, and thus so does the oil production. This study contributes to the gap in the research on the Cornea Field with fault structure, where it is important to consider fault existence during exploration and production.

Introduction

The Cornea Field is located offshore Western Australia, in the Yampi Shelf of the north-eastern Browse Basin, with an area of approximately 1755 km². The Browse Basin was an extensional half-graben, with upper crustal faulting resulting in half-graben geometry and large-scale normal faults compartmentalizing the basin into sub-basins. Extensional faulting was concentrated on the north-eastern part of the Caswell Sub-Basin and the western margin of the Prudhoe Terrace, and this formed the Heywood Graben (Australia and Australia Geoscience 2011a, b; Australia 2012a; Michele 1999; Tuohy 2009a; Poidevin et al. 2015). The Yampi Shelf is located at the transition zone between two major compartments. The boundary zone acted as the fault relay zone (Fig. 1). However, fault displacement on the Yampi Shelf is depleted towards the northeast (Obriena et al. 2005). Fault and trap reactivation was stated to be minimal to absent across the Yampi Shelf and considered not an important feature; seal integrity issues, however, were more likely to occur. Gas chimneys and hydrocarbon-related diagenetic zones (HRDZs) spread over the accumulation and extend to where the regional seal onlaps the basement highs to the east. Alkaif (2015) also showed, from the literature and seismic data (Ishak et al. 2018), that faults die out at the basement in the Cornea Field. Ingram et al. (2000) also stated that tectonic activity was not apparent in the area, with only deformation by gravitational movements and compaction in the basement zone. Therefore, no faults were considered. The Cornea structure is a simple trap configuration consisting of a large, elongated, four-way dip closure formed by an unfaulted drape anticline of the Albian sandstone (comprising zones A, B, C, D and E) of the upper Heywood Formation over an eroded basement high (Fig. 2). The hydrocarbon accumulation consists of an expanded gas cap over a thin discontinuous oil rim (Poidevin et al. 2015). Most studies showed that faults were unlikely to be present in the Cornea Field.
However, faults existed around the Yampi Shelf zone where the Cornea Field is located, so there is a possibility for faults to have developed in the reservoir. Ingram et al. (2000) stated that faults may not be visible in the seismic; however, faults might be present at reservoir and seal depth. The sea floor was seen to demonstrate some linear features in the seismic, and there is a possibility of a strike-slip fault due to near-surface stress disturbance. A significant amount of hydrocarbon has been discovered in the Cornea Field, and a commercial development of the Cornea Oil Field is possible. However, there were challenges involved in the evaluation (Naser et al. 2007) of the productivity of the wells and the recovery of the oil production (Ltd 2014). Therefore, the aim of this study is to consider fault existence in the Cornea Field and to foresee the effect of fault transmissibility on the Cornea Field, which might impact reservoir production. There are three areas in the Cornea Field: Cornea South, Cornea Central and Cornea North. This paper only focuses on Cornea South and Cornea Central.

Research done in 1999 by Michele G. Bishop stated that Cornea 1 was reported to have encountered from 600 MBBL (95 × 10⁶ sm³) of oil to 2.6 BBBL (413 × 10⁶ sm³) of oil in place. This discovery was considered to be the first commercially producible oil in the Browse Basin. The Cornea discovery proved that a large volume of oil had been generated in the mature central portion of the basin; however, no production tests were attempted, and it was confirmed that migration and charge have occurred (Australia and Australia Geoscience 2011b; Australia 2012b; Bishop 1999). In 2010, exploration activities on the Cornea Field were completed by Cornea Resources Pty Ltd, which indicated that the Cornea Field was one of the undeveloped potential oil fields in Australia. A large number of exploration and appraisal wells were drilled into the accumulation. A significant amount of hydrocarbon was discovered from the samples obtained, and the quantum of contingent resources in Cornea was reasonably expected to be economic as long as production flow rates could be achieved. The Cornea Field was estimated to have a P50 of 411 MBBL (65 × 10⁶ m³). However, the productivity of the reservoir has not been proved (Ltd 2014). An appraisal project was also done by Octanex in order to invest in the Cornea oil field, which had the potential of future feasible development upon appraisal. The Cornea 1 well resulted in the discovery of a gas cap, which showed in the seismic, and an oil leg within the upper Heywood Formation. A drill stem test (DST) was conducted at this well, with 14.4 BFPD of fluid and 0.3 MMCFPD of gas obtained (Khan et al. 2006). However, even with the amount of oil and gas encountered (Ingram et al. 2000), the appraisal project did not come up to the expectations for the Cornea Field. Therefore, by reprocessing the Cornea seismic, Shell indicated that a great amount of oil resources may exist within sands B, C and E that could be further developed by using multilateral horizontal wells (Tuohy 2009a; Limited 2010). On the other hand, RPS Energy Pty Ltd reviewed the Cornea Field seismic, well data and other data and estimated the in-place volumes and recoverable volumes of the field. They stated that, with only one DST flow conducted by Shell at Cornea South 2ST1, there were insufficient data to conclude that Cornea well production would match the production levels in real life.
However, the volume between the Top B gross reservoir map and the Base C gross reservoir was calculated. The calculated oil in place was P50 159 MBBL (25 × 10⁶ sm³). RPS also estimated the recovery factor at 15% at lowest, 25% at best and 35% at highest (Tuohy 2009b).

The presence of a fault can impact the production of the reservoir. Costa et al. (2016) stated that a fault within a petroleum reservoir acts as a barrier or conduit for fluid flow. Therefore, it is important to know the fault properties in order to optimize (Khan et al. 2012) the recovery factor. This has dramatically helped the industry to predict the impact of faults on fluid flow and has also decreased the risk of exploration in faulted zones. Fault transmissibility is one of the factors that need to be considered in oil recovery. Fault transmissibility in a reservoir simulation model depends on the grid block geometry, as shown in Zhalehrajabi et al. (2014) and Rashid et al. (2014a); permeability and a transmissibility multiplier are applied to the faces of the grid blocks (Fig. 3). To determine the fault transmissibility multiplier, fault properties such as fault thickness and fault permeability are required. Manzocchi et al. (2008) studied the performance of faulted and unfaulted shallow marine reservoir models. There were nine different cases tested in the reservoir model. The results showed that the oil production rate was highly correlated with the fault permeability case, while the recovery factor was highest in the intermediate fault case. However, when the fault becomes less permeable, the production and the recovery factor decrease rapidly (Manzocchi et al. 2008; Houwers et al. 2015; Flodin and Durlofsky 2001; Kimura et al. 2015; Wenning et al. 2018). Rotevatn and Fossen (2011), meanwhile, focused on the evaluation of subseismic fault elements in hydrocarbon exploration and production. The production and pressure data set obtained was simulated. The focused area was the Colorado Plateau of South East Utah. Rotevatn and Fossen (2011) stated that the dimensions of the fault system are important to understand for better exploration. A low-permeability fault resulted in aquifer support in the reservoir and thus enhanced production. The flow simulation proved that lowering the fault permeability led to an increase in sweep efficiency and recovery by increasing the injection fluid flow (Rashid et al. 2014b) and delayed water breakthrough (Rotevatn and Fossen 2011; Fisher and Jolley 2007; Paul et al. 2007). This result was the opposite of that of Manzocchi et al. (2008), since there was aquifer support in this reservoir and water was injected into the reservoir to boost recovery. Byberg (2009) aimed to investigate the dynamic behavior of the reservoir by applying transmissibility multipliers to the fault to achieve a history match and to determine its effect on field production. The history-matched A-Lunde reservoir simulation model was used as reference for the study. After running the simulation, it showed that there was an increase in oil production of 1.4 M SM³ with the fault model. It also showed that the higher the value of the fault transmissibility multiplier, the higher the oil production. Thus, varying the fault transmissibility had a significant impact on field performance. Toft et al. (2012) estimated the potential recovery of segment H1 in Gullfaks. Segment H1 was injected with a chemical called Abio Gel because residual oil stayed in the low-permeability zone.
Six Eclipse simulation scenarios were run by applying different transmissibility multipliers to the reservoir volume. The result of the simulation showed a significant increase in total oil production with increasing transmissibility multipliers. In addition, Frischbutter et al. (2017) conducted a fault analysis at core and seismic scale (Sern et al. 2012) to assess the effect of faults on production and recoverable volumes in an Upper Jurassic reservoir in the Norwegian offshore sector. The reservoir consisted of many compartments created by depositional faults. The fault permeability was examined at high confining pressures using formation-compatible brines. It was used to calculate the transmissibility multiplier that was integrated into the reservoir model to measure the impact of faults on fluid flow. The dynamic reservoir simulations showed differences of more than 20% in recoverable volumes depending on the fault properties inserted in the simulation. Therefore, it was proved that fault existence can impact the cumulative recoverable oil volumes and the recovery efficiency (Frischbutter et al. 2017; Manzocchi et al. 1999; Ahmed 2013). The fault architecture, meaning the fault shape, size, orientation and connectivity, is important to consider; the fault properties are therefore central to reservoir simulation.

In conclusion, many studies have been conducted on faults in relation to reservoir production. Despite that, the uniqueness of this study is to simulate faults in the Cornea Field, which initially did not have a fault structure in the reservoir. The objectives of this study are:

• To focus on the Cornea South and Cornea Central Field (data available for this part only)
• To develop a dynamic model of the Cornea Field
• To simulate fault structures in the Cornea Field reservoir
• To evaluate the effects of fault permeability and fault displacement-to-thickness ratio on transmissibility
• To test the effects of the transmissibility multiplier on transmissibility and oil production.

Methodology

Data were collected from different available sources; several types of data were needed to construct the 3D model. The 3D model was constructed in PETREL using the data extracted from the literature review, Geoscience Australia and the Occam Technology Company (Australia and Australia Geoscience 1998a, b; Geoscience Australia 1997). It was established that faults do not exist in the Cornea Field and that no fault data are available. Therefore, the fault characteristics and displacement were assumed, in order to fit the objective of this paper. As mentioned in the literature review, extensional faulting exists within the area of the Cornea Field; therefore, normal faults were considered in this case. The locations of the faults were randomly picked. Two faults were assumed and constructed in the Cornea Field. In order to construct the faults, fault polygons were generated in the mapping application. The fault polygon data were imported to the model (Khan et al. 2003) and processed by fault modeling. Pillar gridding has a close relationship with the fault model. The concept of pillar gridding is to construct the skeleton framework for the top, mid- and base skeletons that connect the top, mid- and base key pillars to generate the 3D grid. The last step was to insert the horizons by inputting the surfaces, generating zones and layering to construct the fine-scale layered 3D model (Zene et al. 2019; Witt et al. 2007). The unfaulted model was calibrated by modifying the porosity (Saeid et al.
2018) using the ECLIPSE keyword MULTIPLY, to match the oil in place reported in the literature and thus obtain a more realistic model. The faults were constructed after model (Khan et al. 2001) calibration. See Table 5 for the original oil in place data. A sensitivity analysis was done by varying the fault permeability and the fault displacement-to-thickness ratio to test the uncertainty in transmissibility. The transmissibility multiplier keyword MULTFLT in ECLIPSE was used to test its effects on the fault transmissibility and oil production. Two algorithms were considered for calculating the fault permeability: the Manzocchi and Sperrevik algorithms, Eqs. (1) and (2). The fault displacement-to-thickness ratio formulae are shown in Eqs. (3) and (4) (Hull 1988; Walsh 1998). To capture the fault transmissibility effectively (Fig. 4), the fault transmissibility multiplier needs to be calculated; it is defined as the ratio of the faulted to the unfaulted inter-block transmissibility, TM = T_fault,ij / T_ij. The volumetric recoverable oil production (STOOIP) was calculated (Table 1) from the volume calculation in Petrel, as shown in Eq. (8). The details of the computer hardware are attached in the "Appendix."

Figure 5 shows the porosity map of the unfaulted model of the Cornea Field with dimensions of 134 m × 160 m × 50 m. The production wells are Cornea 1, Cornea 1B, Cornea South 1 and Cornea South 2 ST1. The illustrated model shows Unit B, Unit C, the oil-water contact and the producible oil-water contact layers. The active units in the Cornea Field are Unit B and Unit C. Porosities are higher in the Cornea South 1 and Cornea South 2ST1 well region. This indicates that there is more hydrocarbon accumulation in this region. Table 2 shows the initial conditions and fluid conditions extracted from the well completion report. Table 3 states the porosity data for the Cornea 1, Cornea 1B, Cornea South 1 and Cornea South 2ST1 wells. Table 4 shows the permeability data for the same wells.

Results

Figure 6 shows the Cornea Field with the fault polygons displayed, while Fig. 7 shows the Cornea Field after the fault polygons were converted into faults. Table 5 shows the nature and position of the faults. In this case, the faults are placed between the well locations. The Manzocchi algorithm gave higher fault permeability than the Sperrevik algorithm, and thus higher transmissibility values; therefore, the Manzocchi algorithm was used in this model. See Figs. 8 and 9 for the comparison of the fault permeabilities. Figures 10, 11, 12, 13 and 14 show the effect of varying the fault permeability (0.5, 1.0, 1.5, 10 and 100 mD) on the fault transmissibility. The trends show that as the fault permeability increases, the fault transmissibility also increases. On the other hand, fault displacement-to-thickness ratios of 66, 100 and 170 were also tested to determine the fault transmissibility; see Figs. 15, 16, 17 and 18. The fault transmissibility also increases with increasing fault displacement-to-thickness ratio. Figure 14 (permeability 100 mD) shows higher transmissibility compared with the other permeabilities; therefore, these data were used to further determine the oil production using the transmissibility multiplier. The transmissibility multiplier was varied over 0.2, 0.5, 1.0, 1.5 and 3.0 to determine its effects on transmissibility and oil production; see Figs. 18 and 19. Figure 17 (fault displacement-to-thickness ratio 170) also shows higher transmissibility; therefore, these data were used to further determine the oil production using the transmissibility multiplier.
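Since the paper's Eqs. (1)-(4) and (8) are not reproduced in the text above, the sketch below uses commonly cited stand-in forms: the Manzocchi et al. (1999) fault-permeability relation, fault thickness obtained from a displacement-to-thickness ratio, a series-flow transmissibility multiplier for equal-sized grid cells, and the textbook stock-tank oil-in-place formula. These are illustrative assumptions rather than the study's exact equations, and all numeric inputs are examples.

```python
# Hedged stand-in forms for the fault-property calculations discussed above.
import math

def fault_perm_manzocchi(sgr, displacement_m):
    """Fault permeability in mD (Manzocchi et al. 1999 relation):
    log10(kf) = -4*SGR - 0.25*log10(D)*(1 - SGR)^5, D in meters."""
    log_kf = -4.0 * sgr - 0.25 * math.log10(displacement_m) * (1.0 - sgr) ** 5
    return 10.0 ** log_kf

def fault_thickness(displacement_m, ratio):
    """Fault thickness from a displacement-to-thickness ratio
    (the study tested ratios of 66, 100 and 170)."""
    return displacement_m / ratio

def trans_multiplier(tf_m, cell_len_m, k_matrix_md, k_fault_md):
    """TM for series flow across equal-sized cells: a fault of thickness
    tf and permeability kf replaces part of the matrix flow path, so
    TM -> 1 as kf -> k_matrix and TM -> 0 as kf -> 0."""
    return 1.0 / (1.0 - tf_m / cell_len_m
                  + (tf_m * k_matrix_md) / (cell_len_m * k_fault_md))

def stoiip_sm3(grv_m3, ntg, phi, sw, bo):
    """Stock tank oil initially in place: GRV * NTG * phi * (1 - Sw) / Bo."""
    return grv_m3 * ntg * phi * (1.0 - sw) / bo

# Example values only (not the Cornea Field inputs):
tf = fault_thickness(20.0, 66.0)                 # 20 m displacement, ratio 66
tm = trans_multiplier(tf, 100.0, 500.0, 100.0)   # 100 mD fault, 500 mD matrix
print(f"tf = {tf:.3f} m, TM = {tm:.3f}")
print(f"kf = {fault_perm_manzocchi(0.4, 20.0):.3f} mD")
print(f"STOIIP = {stoiip_sm3(5e8, 0.8, 0.25, 0.3, 1.1):.3e} sm3")
```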
The transmissibility multiplier was also varied over 0.2, 0.5, 1.0, 1.5 and 3.0 to determine its effects on transmissibility and oil production (Table 6); see Figs. 20 and 21. Figures 18, 19, 20 and 21 show that oil and water production increase with time, and that the higher the transmissibility multiplier, the more oil is produced.

Conclusions

1. The report published by Tuohy (2009a) was based on the real appraisal and exploration of the Cornea Field: Shell conducted appraisal drilling on Cornea Central, Cornea South and Cornea North. This research, by contrast, is based on simulating the reservoir to resemble the real reservoir and focuses only on Cornea South and Cornea Central.
2. The exploration included nine wells drilled around the area; however, this research focused only on Cornea 1, Cornea 1B, Cornea South 1 and Cornea South 2 ST1. The data from these wells are reliable because the wireline logging and conventional core samples obtained by Shell from Cornea 2 ST2 and Cornea South 1 show the best reservoir properties. The wells used in this research are those with the strongest evidence of production potential in this field.
3. The transmissibility multiplier is also important to consider for its effects on oil production: the transmissibility increases with the increase in the transmissibility multiplier, and thus oil production increases.
4. Fault analysis is important to take into account for successful exploration and production. Hence, this paper contributes to the gap in the Cornea Field research related to the existence of fault structure.

From the results, it is confirmed that production has a direct relation to the reservoir structure and properties. Therefore, it is important to consider possible faults existing in the reservoir during exploration and production. From the literature review, a gas chimney was detected in the Cornea Field, which can also have a significant impact on production. Therefore, a future study can consider modeling the gas chimney in the Cornea Field, to study its geological structure and its impact on reservoir production, for a further understanding of the field framework.
Computer aided quantification of intratumoral stroma yields an independent prognosticator in rectal cancer

Tumor-stroma ratio (TSR) serves as an independent prognostic factor in colorectal cancer and other solid malignancies. The recent introduction of digital pathology in routine tissue diagnostics holds opportunities for automated TSR analysis. We investigated the potential of computer-aided quantification of intratumoral stroma in rectal cancer whole-slide images. Histological slides from 129 rectal adenocarcinoma patients were analyzed by two experts who selected a suitable stroma hot-spot and visually assessed TSR. A semi-automatic method based on deep learning was trained to segment all relevant tissue types in rectal cancer histology and subsequently applied to the hot-spots provided by the experts. Patients were assigned to a 'stroma-high' or 'stroma-low' group by both TSR methods (visual and automated). This allowed for prognostic comparison between the two methods in terms of disease-specific and disease-free survival times. With stroma-low as baseline, automated TSR was found to be prognostic independent of age, gender, pT-stage, lymph node status, tumor grade, and whether adjuvant therapy was given, both for disease-specific survival (hazard ratio = 2.48 (95% confidence interval 1.29-4.78)) and for disease-free survival (hazard ratio = 2.05 (95% confidence interval 1.11-3.78)). Visually assessed TSR did not serve as an independent prognostic factor in multivariate analysis. This work shows that TSR is an independent prognosticator in rectal cancer when assessed automatically in user-provided stroma hot-spots. The deep learning-based technology presented here may be a significant aid to pathologists in routine diagnostics.

Authors Oscar G. F. Geessink and Alexi Baidoshvili contributed equally to this work.

Introduction

In most solid malignancies, therapeutic decision making is primarily based on pathological staging of tumors. The traditional tumor, (lymph) node, metastasis (TNM) staging system [1] is routinely used to estimate patient prognosis and guide treatment worldwide. For certain tumor types, however, the TNM system lacks accuracy in assessing the metastatic potential of a tumor. For instance, TNM stage II colorectal cancer (CRC) comprises a heterogeneous group with a diverse outcome [2]. As a result, the TNM stage is not informative for therapy planning of these patients, leading to both under- and over-treatment. Reliable new biomarkers are needed to guide personalized adjuvant treatment for these groups of patients. A widely studied prognostic factor is the tumor-stroma ratio (TSR), expressing the relative amounts of tumor and intratumoral stroma. TSR is a straightforward measure which can be assessed by microscopic inspection of hematoxylin and eosin (H&E) stained tissue sections. TSR has been shown to yield prognostic information in a range of solid malignancies, including breast cancer [3-5] and lung cancer [6, 7]. Generally, TSR is an independent prognostic factor, where a high content of intratumoral stroma is associated with a poor prognosis. A number of previous studies showed promising results on the prognostic relevance of TSR in CRC [8-12]. Despite this evidence, there is no implementation in routine pathology reporting. This may be attributed to the variety in methodology and the lack of a standardized procedure for TSR assessment.
Published studies propose visual assessment ('eyeballing'), systematic point counting, and the use of scanned (digitized) tissue sections (whole slide images; WSI). Although good inter-observer agreement was found in earlier studies [9, 11, 13], visual assessment of pathological quantitative features in general may suffer from reproducibility issues. To facilitate an objective and standardized TSR assessment, image analysis and machine learning algorithms have been applied to H&E-stained sections of CRC before; however, these algorithms were applied to image regions extracted from WSI. Computer-aided tumor and stroma quantification has been proposed based on automated tissue segmentation in H&E-stained sections using a combination of hand-crafted features and machine learning [14]. Furthermore, TSR has been computed via automated point counting in H&E-stained images [15]. Similar image analysis techniques based on classical machine learning have been applied to tissue microarrays for epidermal growth factor receptor (EGFR) detection by immunohistochemistry [16, 17]. A new branch of machine learning algorithms, so-called deep learning algorithms, has recently entered the field of computational pathology and shown promise for automating certain tasks in histopathology. Detection of sentinel lymph node metastases [18] and of cancer in prostate biopsies [19] could successfully be performed using convolutional neural networks (CNN), a specific type of deep learning. We recently showed [20] that a deep learning-based algorithm can distinguish between 9 different types of tissue in CRC WSI with an overall accuracy of 93.8%. The present study aims to leverage our previously developed CNN for automated TSR assessment in the CRC subclass of rectal adenocarcinomas. Only a limited number of studies have been published on TSR for rectal cancers, and in a sub-analysis (n = 43) by West et al. [12] its prognostic value could not be confirmed. Work by Scheer et al. [8] recently showed that TSR has potential as a prognostic factor for survival in surgically treated rectal cancer patients; however, TSR was only found to be an independent prognosticator in lymph node metastasis-negative cases. The performance of the automated TSR system described here will be compared with data from human experts and its prognostic value will be evaluated in terms of disease-specific and disease-free survival times.

Patients

An existing cohort of 154 patients [8] with rectal adenocarcinoma stages I-III was used. All patients received curative surgery in the period 1996-2006 at the Medisch Spectrum Twente hospital (The Netherlands). No patient was neoadjuvantly treated with radiotherapy and/or chemotherapy or died within 30 days after surgery. At the time of surgery, none of the patients had known distant metastases, inflammatory bowel disease, hereditary nonpolyposis colorectal cancer (HNPCC) or other/earlier cancers. Histopathological data were obtained from the Laboratory for Pathology Eastern Netherlands (LabPON). Clinical data were obtained from the Medisch Spectrum Twente hospital and the Netherlands Comprehensive Cancer Organization (IKNL). Collected clinicopathological data included tumor grade (differentiation), depth of invasion (pT) and lymph node involvement (pN) according to the Union Internationale Contre le Cancer/American Joint Cancer Committee (UICC/AJCC) TNM staging system [1]. Data regarding adjuvant therapy and local or distant recurrence were also available.
Tissue slide preparation and scanning

According to standard procedures at LabPON, formalin-fixed and paraffin-embedded tissue sections were cut at 2 μm and stained in an automatic stainer with hematoxylin and eosin (H&E) for routine diagnostic purposes. For the present study, a single slide per patient was selected which contained the most invasive part of the tumor and was used in diagnostics to assess the tumor pT-status. Slides were scanned at ×200 total magnification (tissue-level pixel size ~0.455 μm/pixel) using a Hamamatsu NanoZoomer 2.0-HT (C9600-13) scanner (Herrsching, Germany).

Visual estimation of intratumoral stroma

Two observers (GvP, WM; both > 10 years of experience with TSR scoring) independently scored the slides using a conventional light microscope according to a previously published protocol for TSR assessment [9, 10]. Briefly, the procedure consisted of 1) coarse localization of the tissue area with the highest intratumoral stroma content at low microscope magnification, and 2) selection of one field of view at ×100 total magnification and visual estimation of the tumor-stroma ratio (TSR-visual) in the selected circular region. Ideally, the selected region should meet the following criteria: high intratumoral stroma content (predominantly found at the invasive margin of a tumor); presence of tumor cells at all borders of the field of view; no large quantities of muscle, mucus, necrosis or large vessels; and no tears or tissue retraction artefacts. As much as possible, the region with the highest stroma content (stroma hot-spot) was selected that met all the above requirements. TSR-visual was estimated by both observers independently, using 10% increments. As a result of the specific microscope and lenses used, the specimen-level diameter of the circular region was 1.8 mm at ×100 magnification. There is a lot of variation among published studies concerning the TSR procedures used (e.g. major differences in the location and size of the assessed tissue regions, as well as in what was actually measured: relative tumor or stroma content). For clarity, in this study the tumor-stroma ratio was defined as TSR = 100% × [intratumoral stroma area] / [tumor area + intratumoral stroma area]. Lumen, tears and other tissue types in the selected circular region were excluded during visual estimation. Lastly, the tissue region considered most suitable for TSR assessment was identified during a consensus meeting between the two observers in which 1) a binary TSR consensus score was determined: 'stroma-low' or 'stroma-high', and 2) the center of the stroma hot-spot was marked on the glass slide.

Automated computation of intratumoral stroma

To study the value of applying a deep learning algorithm for automated TSR assessment (TSR-auto), a CNN was developed similar to a previously published algorithm [20]. The CNN performs tissue segmentation (i.e. subdivision of tissue areas) of H&E-stained rectal cancer WSI into nine different classes: tumor, intratumoral stroma, necrosis, muscle, healthy epithelium, fatty tissue, lymphocytes, mucus and erythrocytes. The CNN was trained using manually annotated regions in 74 WSI taken from the cohort used in this study. Regions to annotate were selected to cover tissue variety across WSI, rather than producing exhaustive annotations on a small number of WSI. Annotations were produced by a pathology researcher (OG) and a medical student, and were checked and corrected when deemed necessary by an experienced pathologist (AB).
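The sketch below is an illustrative stand-in for a patch-based tissue classifier of this kind, not the authors' architecture (which followed Ciompi et al. [20]): a small convolutional network mapping 256 × 256 H&E patches to nine class logits.

```python
# Minimal PyTorch sketch of a 9-class patch classifier (illustrative only).
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self, n_classes=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):          # x: (batch, 3, 256, 256) H&E patches
        h = self.features(x).flatten(1)
        return self.classifier(h)  # per-class logits

model = PatchCNN()
logits = model(torch.randn(4, 3, 256, 256))
print(logits.shape)  # torch.Size([4, 9])
```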
A digital staining normalization method [21] was applied to all WSI as a pre-processing step to accommodate typical differences in tissue staining intensities caused by variations in slide preparation. Unlike Ciompi et al. [20], here we used patches of 256 × 256 pixels for classification, which experimentally showed to improve performance and produce a smoother segmentation map (data not shown). Performance of the system was assessed by segmenting all WSI in the dataset in a five-fold cross-validation fashion (at WSI level) and evaluating accuracy in all annotated regions. To enable comparison, the CNN-based TSR-auto was computed in the same circular region (with 1.8 mm diameter) that was selected by the observers at the consensus meeting, where TSR-visual was assessed. The corresponding image data were extracted from each WSI as circles with a diameter of ~4000 pixels and processed further by the CNN described above (Fig. 1). Segmentation of a WSI into nine different tissue classes enabled in- and exclusion of specific tissue types comparable to the visual assessment procedure. The definition of TSR-auto used is similar to that of TSR-visual, expressing the area consisting of stroma as a percentage of the area occupied by both tumor and stroma.

Statistical analyses

In this study, TSR-visual and TSR-auto were compared as prognostic factors in rectal cancer. Statistical analyses were performed using IBM SPSS software v24.0 (Armonk, NY, USA). The intraclass correlation coefficient (ICC) was used to determine the correlation between TSR assessed by the two observers and by the automated method. To investigate a possible relationship between clinicopathological variables and the numerical values of TSR-visual and TSR-auto, Mann-Whitney U and Kruskal-Wallis tests were performed for two- and multi-class variables, respectively. For further statistical analysis, TSR-visual and TSR-auto were dichotomized, subdividing patients into two groups: 'stroma-low' and 'stroma-high'. Dichotomization of TSR-visual was performed based on a cut-off value previously established [10] on 63 colon cancer cases: stroma-high = TSR-visual > 50% and stroma-low = TSR-visual ≤ 50%. In this study, we analyzed results for two different cut-off values for TSR-auto, since the optimal cut-off value for the automated approach is not yet established. One method of dichotomization used the '50% stroma cut-off', similar to TSR-visual, referred to as TSR-auto(50%); the other dichotomization method was based on the median of all measured TSR-auto values, referred to as TSR-auto(median), yielding equal numbers of patients in the stroma-low and stroma-high groups. Inter-observer agreements were calculated using Cohen's Kappa (κ) on the dichotomized TSR values. Kaplan-Meier survival analyses were performed and log-rank statistics were used to test differences in both disease-specific survival (DSS) and disease-free survival (DFS) distributions. DSS was defined as the time between the date of surgery and the date of death attributable to rectal adenocarcinoma. For DFS, the date of the first event of cancer recurrence was used, which could be loco-regional or a distant metastasis. In case no event occurred, the time period until the last date of follow-up was used in the survival analyses. Finally, both uni- and multivariate analyses were performed for TSR-visual and TSR-auto using the Cox proportional hazards model. Probability values < 0.05 (2-sided) were considered statistically significant.
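A minimal sketch of the TSR-auto computation defined above: count stroma and tumor pixels of the segmentation output inside the circular hot-spot and take stroma as a percentage of tumor plus stroma, excluding all other tissue classes. The class indices and the toy mask are assumptions for illustration; the 65.47% median cut-off is the value reported in this study.

```python
# TSR-auto from a 9-class segmentation mask inside a circular hot-spot.
import numpy as np

TUMOR, STROMA = 1, 2  # assumed label indices among the nine classes

def tsr_auto(label_mask: np.ndarray, diameter_px: int = 4000) -> float:
    """TSR (%) within a centered circular region of the given diameter."""
    h, w = label_mask.shape
    yy, xx = np.ogrid[:h, :w]
    cy, cx, r = h / 2, w / 2, diameter_px / 2
    circle = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
    tumor = np.count_nonzero(circle & (label_mask == TUMOR))
    stroma = np.count_nonzero(circle & (label_mask == STROMA))
    return 100.0 * stroma / (tumor + stroma)

mask = np.random.randint(0, 9, size=(4000, 4000))  # toy stand-in for CNN output
tsr = tsr_auto(mask)
group = "stroma-high" if tsr > 65.47 else "stroma-low"  # study's median cut-off
print(f"TSR-auto = {tsr:.1f}% -> {group}")
```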
Clinicopathological data

Of 154 cases projected for inclusion in this study, twelve cases with mucinous carcinoma were excluded, as these tumors exhibit largely different TSR values. Twelve other cases were excluded because, at the time of writing, the required slides or data were unavailable. One case was excluded because the corresponding tissue slide did not contain invasive carcinoma. The median follow-up time for the remaining 129 patients used in the present study was 5.6 years (interquartile range 2.3-8.3). The median age of the patients at the time of surgery was 67 years (interquartile range 59-74). Further clinicopathological data can be found in Table 1. There was no significant correlation between the clinicopathological variables and the assessed values of TSR-visual or TSR-auto (p > 0.05).

Performance of the deep learning system

Measures of sensitivity and specificity per tissue type, as well as overall accuracy, were assessed for the automatic method by pixel-wise comparison of predicted labels with ground truth labels in manually annotated regions. We found that the overall accuracy was 94.6%, an improvement on what was reported by Ciompi et al. [20]. Values of per-class sensitivity and specificity are reported in Table 2. Examples of tissue segmentation by the CNN in four circular regions selected by the observers are shown in Fig. 1. In line with the high classification accuracy, good segmentation of tumor, stroma and other tissue types was observed. Further qualitative inspection of the circular regions revealed some minor segmentation errors. Directly at the stroma-tumor interface, a very thin band of stroma pixels is often misclassified as tumor. Conversely, small groups of tumor cells (e.g., tumor buds, or thin tumor structures) were sometimes misclassified as stroma.

Inter-observer and computer-observer agreement

The ICC between the two observers for the assessment of TSR was 0.736 (95% confidence interval (95% CI) 0.646-0.806). The co-occurrence of TSR scores assessed by the two observers is depicted in Fig. 2. The ICCs between TSR-auto and TSR-visual were 0.475 (95% CI 0.330-0.598) and 0.411 (95% CI 0.257-0.545) for observers 1 and 2, respectively. A moderate agreement between the two observers (κ = 0.578) was found after dichotomizing TSR-visual on the basis of the 50% cut-off as described in section 2.5. Using the identical cut-off for TSR-auto, we observed only a fair agreement between TSR-visual and TSR-auto (κ = 0.239). Agreement improved considerably (κ = 0.521) when the median was used as cut-off for TSR-auto, resulting in: stroma-low = TSR-auto ≤ 65.47% and stroma-high = TSR-auto > 65.47%. Patients assigned to stroma-low or stroma-high groups by the observers and the automatic method are detailed in Tables 3, 4 and 5.

Survival analyses

Survival analysis generally showed a worse outcome for stroma-high patients compared to stroma-low patients. For TSR-visual, a significantly lower DSS was seen in the stroma-high group compared to the stroma-low group (p = 0.042), but not for DFS (p = 0.182). Similarly, for TSR-auto(50%) this difference was significant for DSS (p = 0.018), but not for DFS (p = 0.066). For TSR-auto(median), both DSS and DFS were found to be significantly lower in the stroma-high group compared to the stroma-low group (p = 0.007 and p = 0.021, respectively).
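The survival comparison follows a standard Kaplan-Meier and log-rank workflow. The paper used SPSS; the sketch below shows an equivalent analysis in Python with the `lifelines` package, with hypothetical file and column names.

```python
# Illustrative Kaplan-Meier / log-rank workflow (column names are assumed;
# dss_years = follow-up time, dss_event = 1 if death attributable to cancer).
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("cohort.csv")  # hypothetical cohort table, one row per patient
low = df[df.tsr_group == "stroma-low"]
high = df[df.tsr_group == "stroma-high"]

kmf = KaplanMeierFitter()
for name, grp in [("stroma-low", low), ("stroma-high", high)]:
    kmf.fit(grp.dss_years, event_observed=grp.dss_event, label=name)
    kmf.plot_survival_function()

result = logrank_test(low.dss_years, high.dss_years,
                      event_observed_A=low.dss_event,
                      event_observed_B=high.dss_event)
print(result.p_value)  # e.g., p = 0.042 for TSR-visual DSS in this study
```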
After stratification for TNM stage, stroma-high was also found to be associated with worse survival in stage II rectal cancer patients (n = 45), but this result was only significant for TSR-auto(median) (DSS p = 0.003 and DFS p = 0.015).

Discussion

For different cancer types, TSR has been shown to yield prognostic information. Visual assessment of TSR requires training, and may be difficult for cases close to the decision threshold of 50%. The present study shows that, specifically for rectal adenocarcinoma, the observer agreement is only moderate. Recent advances in slide scanning technology and machine learning have opened up new possibilities for computerized assessment of TSR. To the best of our knowledge, the present study shows for the first time that TSR can reliably be assessed by an automatic deep learning algorithm. The agreement of the automated system (using the median cut-off) with the observer consensus (kappa = 0.521) was comparable to the inter-observer agreement (kappa = 0.578). The TSR assessed in this manner appeared to be a strong independent prognostic factor both for DSS and DFS in rectal adenocarcinoma. The prognostic value of the automated TSR was comparable to that assessed in consensus by two experienced observers for DSS in univariate analysis, but not in multivariate analysis. For DFS, only the automatically assessed TSR was significantly associated with outcome, both in univariate and multivariate analysis. Interestingly, automated TSR (using the median as cut-off) showed prognostic value for TNM stage II patients. Clinically, this is a subgroup of patients for which post-operative treatment is still under debate and more research is needed [22,23]. TSR can potentially help to direct this discussion and add information for a more personalized treatment of this patient category. In a recent study, Scheer et al. [8] analyzed TSR on the same cohort of patients as used in the present study. However, rather than a hot-spot measure, the authors applied a scoring procedure in which an average TSR was assessed based on the entire tumor area in a slide. Also, they defined TSR as the carcinoma percentage (CP), and the estimated percentages were grouped into three categories (low-CP, intermediate-CP and high-CP). In univariate survival analysis, CP was found to be prognostic for DSS and DFS. With CP-high as baseline and after correction for age, grading, pathological T-stage, and adjuvant treatment, CP-intermediate was found to be correlated with worse DSS and DFS; however, this result was obtained only in the subset of lymph node metastasis negative cases (n = 94). In the present study, the prognostic value of TSR remained intact for the entire cohort of patients after correction for clinicopathological variables, including lymph node status. The most probable cause for this difference is the TSR scoring method. In the present study we decided to follow a more widely accepted scoring system, which appears to outperform methods where the overall tumor area is scored by averaging. The results of our observer study indicate that TSR obtained by visual estimation serves as a prognostic factor of DSS (although not reaching statistical significance when correcting for clinicopathological variables), while the inter-observer agreement was lower than reported in previous studies [9,10,13] on TSR assessment on colon cancer. This discrepancy may be explained by the fact that, compared to the colon, the rectum bowel wall has a thicker muscle layer and, in some cases, it may be difficult to distinguish between stromal tissue and smooth muscle cells, especially with darker H&E-stained slides.
Muscle tissue, which should be excluded from scoring, may therefore be interpreted as stromal tissue by one observer and not by the other. Furthermore, as shown in Fig. 2, most discrepancies (15/24 cases) are found around the cut-off point of 50%. Especially for these cases, computer-aided TSR assessment may be very useful. For the automated method, two different stroma cut-off values were investigated in this study: the value used for the visual estimation (50%), and the median of the measured TSR-auto values. We found comparable results for the two cut-offs, with a slightly higher hazard ratio for the 50% cut-off at the cost of a wider 95% confidence interval. However, since in general automated assessment of TSR yields higher stroma percentages than visual assessment, the use of a 50% cut-off for TSR-auto corresponded much less to TSR-visual than the use of the median cut-off (as is reflected in the kappa values). The optimal cut-off value for TSR-auto should be further investigated and validated in an independent cohort. It is worth noting that one of the patient inclusion criteria for the cohort used in this study was the absence of neoadjuvant treatment. The reason for this design choice, originally made by Scheer et al. [8], was that both chemotherapy and radiotherapy modify the tissue architecture and, as such, may hamper the assessment of TSR or its prognostic value. The proposed method can, therefore, aid clinicians in selecting the right treatment options for rectal cancer patients who did not receive preoperative (chemo)radiotherapy. Furthermore, given the fact that the colon and the rectum are parts of the same continuous organ and have a similar histological appearance, the presented deep learning algorithm has the potential to be successfully applied to the analysis of colon cancer as well. The deep learning-based approach proposed in this work needs the position of a user-provided stroma hot-spot as input in order to assess TSR. After this manual input is provided, the proposed method can process the hot-spot area in the whole-slide image automatically. As such, human input is still required, making the method only semi-automatic. It is worth noting that in Ciompi et al. [20] a computer model similar to the one used in this work has shown a high performance at segmenting several tissue types in rectal cancer at the whole-slide image level, i.e., beyond the limited area of the selected hot-spot. As a consequence, this method has the potential to be used to assess TSR both at whole-tumor level and at whole-slide image level. Such an approach would overcome the need for a user-provided stroma hot-spot and, therefore, allow investigating TSR at very large scale via fully-automatic computation. Future work will be directed towards further automation of TSR assessment and validation in a large independent cohort.
Although, to the best of our knowledge, TSR assessment (visual or automated) has not yet been implemented in routine pathology diagnostics, it was recently reported [24] that the TNM Evaluation Committee (UICC) and the College of American Pathologists (CAP) have discussed TSR and acknowledged its potential for integration with the TNM staging system. To achieve this for colon cancers, we are currently investigating the reproducibility of (visual) TSR assessment in a large European multicenter study [25]. The results of the present study suggest that automated TSR can potentially be of significant aid to pathologists in routine diagnostics. However, validation of the proposed technology on a larger and independent data set is essential and, therefore, among our future research goals. The objectivity of a deep learning-based method, which enables accurate and reproducible quantification of TSR, has the potential to pave the way to implementation of TSR in clinical practice.
Use of Extra-Narrow-Diameter Implants in Reduced Alveolar Ridge: A Case Report

Background: Narrow-diameter implants (3.0 - 3.5 mm range) have been introduced for the replacement of teeth with insufficient bone structure and/or limited mesiodistal or interimplant spaces, and appear to offer clinical results similar to those obtained with implants of greater diameter. Studies using extra-narrow-diameter implants (2.8 mm) are scarce. Case Presentation: A 59-year-old male patient received two extra-narrow-diameter implants, 2.8 × 11 mm, in the region between elements 11 and 14. Together, two 3.5 × 8.5 mm SYSTHEX® platform 4.1 implants were installed in the region of elements 15 and 16 to provide greater stability in the occlusion. Of four previous implants on the maxillary left side, one in the region between elements 23 and 24, which was located in a very apical position and vestibularized, was removed. The provisional was already installed on elements 11, 21, and 22, with the metal cores already prepared and with the Globteck® implants in the region of elements 23, 24, and 27. The functional and esthetic results were satisfactory. Conclusions: Insertion of extra-narrow-diameter implants of 2.8 mm in the maxillary anterior region is a reliable treatment option.

Narrow-diameter implants are indicated for sites with insufficient bone structure and/or limited mesiodistal or interimplant spaces, such as the upper lateral or lower incisor areas, and are claimed to be a reasonable alternative to bone augmentation procedures [1] [2] [3]. These implants are associated with high survival rates, favorable marginal bone loss, and increased satisfaction and quality of life of patients [4]. Narrow implants are generally subdivided into implants with a diameter of less than 3.0 mm, classified as extra-narrow-diameter implants, and implants with a diameter equal to or greater than 3.0 mm and less than 3.75 mm, classified as narrow-diameter implants. In a study of narrow-diameter (3.5 mm) implants replacing either a central or a lateral incisor in the maxilla, follow-up examinations up to 3 years after loading showed successful results, with marginal bone loss following the same pattern as in standard-diameter (3.75 mm) implants [5]. The diameter of a dental implant is selected based on the type of edentulousness, the amount of residual bone, the space available for the prosthetic rehabilitation, the emergence profile, and the type of occlusion [6]. On the other hand, given the challenges of rehabilitating edentulous patients with implants in regions of bone insufficiency, the use of narrow-diameter implants without the need for complementary bone augmentation surgery makes treatment less costly and less traumatic in the preparatory period [7]. It is also important to consider the mesiodistal dimension of the prosthetic space, since an adequate distance between teeth and implants is necessary to reduce subsequent bone resorption in the papilla region: an external hexagon interface requires 3 mm between implants, and a morse taper interface requires 1.5 mm between implants. Therefore, in order to preserve these characteristics, in many cases it is necessary to use extra-narrow-diameter implants such as those of 2.8 mm, which allow for oral rehabilitation while respecting the functional spaces [8] [9]. The purpose of this study is to describe a case report as an example of how rehabilitation is possible in a patient with absence of elements 12 and 13 using extra-narrow-diameter implants of 2.8 mm diameter.
Because clinical experience with extra-narrow-diameter implants is limited, we also intended to present this case to show the applicability and success of using extra-narrow-diameter implants in daily oral implantology practice.

Case Presentation

A 59-year-old male patient attended the Implant Dentistry specialization clinic at the Faculty of Dentistry of the State University of Rio de Janeiro, Rio de Janeiro (Brazil), seeking to restore his masticatory function and aesthetics in the upper arch. He reported having undergone various treatments over the previous months, which had failed to achieve satisfactory implant rehabilitation. In the clinical and radiographic examination, four implants (Globteck, Bethaville SP, Brazil) were observed on the maxillary left side, with the implant in the area of element 23 placed protuberantly and in a more apical position compared to the remaining implants (Figure 1). All possibilities of rehabilitation were considered based on the results of panoramic radiographs, periapical radiographs, and computed tomography (CT) studies, as well as analysis in semi-adjustable articulators to check the occlusion of the arches. Once a correct and comfortable occlusion for the patient was determined, diagnostic waxing was done to simulate the position of the implant-supported teeth to achieve the desired esthetics and function. The implant in the region between elements 23 and 24, which was completely outside the ideal positioning for the placement of a functional crown, located in a very apical position and vestibularized, was removed. As shown in the CT studies, type III bone of little thickness prevented the placement of a 3.5 mm implant. Therefore, two extra-narrow-diameter implants, 2.8 × 11 mm (SYSTHEX Implantes Dentários, Curitiba, Brazil), were placed in the region between elements 11 and 14 (Figure 2) instead of a four-element fixed bridge in the edentulous space between elements 11 and 14, as the implants would be better biomechanically and, besides esthetics, the available space was insufficient for the placement of two lateral incisor and canine crowns (Figure 3). Together, two 3.5 × 8.5 mm SYSTHEX® platform 4.1 implants were installed in the region of elements 15 and 16 to provide greater stability in the occlusion, in addition to restoring the canine and group guidance. The conduits of elements 21, 22, and 11 were properly prepared and received metal pins and cores (Figure 4). Figure 5 shows the provisional already installed on elements 11, 21, and 22, with the metal cores already prepared and with the Globteck® implants in the region of elements 23, 24, and 27. The provisional restoration provided an immediate esthetic improvement, which pleased the patient, who previously did not smile much, and he began to chew more harmoniously. These implants had long been without mechanical load and needed to be put into function.

Discussion

The present clinical case illustrates that extra-narrow-diameter implants can be successfully used to restore the esthetics and function of missing maxillary teeth. The replacement of lost natural teeth by means of tissue-integrated implants represents a major advance in clinical treatment. The genesis of osseointegration, a concept introduced by Brånemark in the 1960s [10], has widened the scope of treatment options for edentulous patients, making treatment feasible and extending it to areas of partial edentulism [11].
Generally, the use of standard-diameter or large-diameter implants is recommended in order to ensure adequate bone-implant contact, with a space of 2-3 mm between the implant surface and the adjacent natural root surface. However, the smaller mesiodistal diameter of certain anterior teeth or the thinness of the bony ridge does not always allow such implants to be placed. In these circumstances, narrow-diameter implants have been shown to be an effective option. Delgado et al. [12] evaluated the insertion torque, the amount of deformation, and the characteristic pattern of distortion experienced by narrow dental implants (3.3 mm and 3.5 mm) with different internal connection designs after their insertion in artificial Type II bone, concluding that correct implant handling and proper implant bed preparation are essential to reducing deformation and the release of titanium particles in the implant index during the insertion of narrow implants. With the introduction of grade 5 titanium implants, it became possible to reduce deformation and to achieve more predictability in the long-term results of oral rehabilitation using narrow implants [13]. In a study of 40 patients requiring a single-implant crown in the anterior or premolar regions, titanium-zirconium 3.3 mm diameter implants did not differ from titanium 4.1 mm diameter implants regarding clinical performance over a 3-year period [14]. Moreover, in a comparison of two commercially available narrow-diameter implants, 3.5 and 2.9 mm, for their performance under axial and oblique loading in simulated all-on-4 treatment situations, both implants showed a similar biomechanical behavior [15]. However, despite a lower risk of peri-implant bone loss, the 3.5-mm model had higher peak stress on implants and abutments than the 2.9-mm model [15]. Thus, narrow implants down to 2.8 mm in diameter are not associated with any structural risk compared to standard-diameter implants [16]. A number of studies published in the literature have provided evidence of the usefulness of narrow-diameter implants in different clinical indications compared to standard-diameter implants [6] [17] [18] [19] [20]. Also, narrow-diameter implants can be considered for use with fixed restorations and mandibular overdentures, since their success rate appears to be comparable to that of regular-diameter implants [21]. Arisan et al. [22] performed a study to evaluate the clinical outcome, success and survival rates, changes in bone level, and mechanical and prosthetic complications after loading of narrow implants through clinical follow-up for 5-10 years. They concluded that narrow implants can be used with confidence where a regular-diameter implant is not convenient. The loss of marginal bone in narrow implants occurred predominantly during the first two years of loading, and after that period the loss was minimal. Both implants with a reduced diameter (between 3 and 3.5 mm) and implants with an extra-small diameter (<3 mm) have been used in definitive rehabilitation treatments in the anterior maxilla and mandible; they are available in one-piece and two-piece (implant and prosthetic component) designs and can be loaded conventionally or immediately. In our patient, two extra-narrow-diameter implants of 2.8 mm diameter were placed between elements 11 and 14 because type III bone of little thickness prevented the placement of a 3.5 mm implant. Narrow-diameter implants have been used to restore limited spaces in the anterior esthetic zones.
In a consecutive case series of 19 narrow-diameter implants placed in 14 patients, with 3 to 14 years of follow-up, no implant failures or prosthetic complications were reported, yielding a 100% survival rate and an 84.2% success rate, with all patients reporting that they were very satisfied with the esthetic results [23]. In a clinical and radiological follow-up of narrow-diameter implants (3.0 - 3.3 mm) replacing maxillary lateral and mandibular incisors, good function and an implant survival rate of 97.2% were reported, but the two main patient concerns were discoloration and recession of the buccal gingiva [24]. In a meta-analysis of 29 studies with 3048 narrow-diameter implants (3.0 and 3.25 mm), satisfactory survival rates of around 95% and little marginal bone resorption (around 0.5 mm) after a mean follow-up of 3 years were found [25]. In addition, a meta-analysis of 892 narrow-diameter implants placed in the anterior region in 736 patients, with a mean follow-up of 40 months, showed a mean success rate of 95.2% [26]. On the other hand, surface characteristics (TiUnite) have been identified as a risk factor for failure in narrow-diameter implants in the maxillary anterior region [27].

Conclusion

The findings of the present case indicate that the insertion of extra-narrow-diameter implants of 2.8 mm in the maxillary anterior region is a reliable option in a patient with absence of elements 12 and 13, restoring masticatory function and aesthetics in the upper arch.
Employees' professional situation and the abuse of sick leave absence in Poland

Abstract

The propensity to abuse sickness leave is a complex issue conditioned by various individual and contextual factors. The aim of the article is to assess the effect of various work-related factors on the abuse of sick leave in Poland. Three categories of sick leave abuse were distinguished: compulsion, escape and recreation. The data were gathered using a CAWI survey. Statistical analyses incorporated multivariable linear regression and structural equation modelling. The research sample consisted of 1067 respondents (full-time employees). Some work-related factors have a significant impact on the abuse of sick leave. These factors are: (1) motivational working conditions, (2) social working conditions, (3) qualifications and (4) form of ownership. The main conclusion is that the assessment of specific aspects of working conditions has a different impact on the declared abuse of sickness absence. A high assessment of the 'social' (interpersonal) aspect is associated with a low tendency to engage in unethical behavior, whereas a high assessment of the 'motivational' aspect is associated with a high tendency in this respect. Moreover, it was found that a low tendency to abuse is also expressed by people who highly assess their professional qualifications. Finally, abuses are committed relatively often by public administration employees.

Introduction

The level of sickness absence has long been regarded as a reliable indicator of the health status of workers (Marmot et al., 1995). Nowadays, this approach is increasingly being abandoned and the important role of other (non-health-related) factors is being emphasised. As a result, different perspectives in the study of sickness absence began to emerge. The medical approach focuses primarily on the health aspects of being absent. The psychosocial approach treats absence as a function of psychological and cultural factors. The economic approach, on the other hand, sees absence as the result of a rational decision, taking into account the potential costs and benefits of not working.

In theory, employees use sick leave according to their incapacity to work due to illness. In practice, however, there are 'deviations' from this optimal state, both upwards when sick leave is overused (absenteeism) and downwards when it is underused (presenteeism) (Hensing, 2004). The subject of interest in this article is the former 'deviation', i.e., when sickness absence is excessive and misused. Various terms have been used in the literature to describe this phenomenon: shirking, voluntary absenteeism, absenteeism without illness, chosen absenteeism. In this study, we use the terms 'sickness absence misuse' and 'sickness absence abuse'. They refer to the situation where sick leave (sickness absence) is used contrary to its intended purpose, i.e., treatment or convalescence.

Abusing sickness absence is a growing problem in many countries. Taylor et al. (2010, p.
270), in their analysis of the situation in the United Kingdom, noted that a new moral panic certainly appears to be upon us. Popular discourse insists that malingering is endemic in 'sick note Britain', with workers 'swinging the lead' or, to use the currently fashionable term, taking 'duvet days'. In recent years this problem has been the subject of a number of studies. These studies have mainly focused on comparing the level of absenteeism over different time periods. Eventual inconsistencies have been linked to 'circumstances' that (by their very nature) limit the motivation to work. So far, such circumstances have been shown to include: nice weather (Shi & Skuterud, 2015), important sporting events (Skogman Thoursie, 2004), birthdays (Thoursie, 2007), or simply Wednesday (Vahtera et al., 2001). Presumably, this type of abuse may also occur during long weekends (so-called 'bridging days'), but there is no empirical evidence to support this assumption (Böheim & Leoni, 2020a). Furthermore, absenteeism was also found to be positively associated with bar opening hours, suggesting that evening and (especially) night-time alcohol consumption contributes to the misuse of sick leave on the following day (Green & Navarro Paniagua, 2016).

In general, absenteeism is determined by a number of individual and contextual factors (Alexanderson, 1998; Miraglia & Johns, 2021). Individual factors include demographic variables (age, gender, ethnic group), personality traits and personal values, family situation, and health status (Alexanderson, 1998; Geurts, 1994). According to Ł. Jurek (2023), age is particularly important in this regard, as the propensity to abuse sickness absence is highest among the youngest employees and gradually decreases in older age groups. Regarding the context, in turn, absenteeism is related to numerous social, economic and institutional settings. First of all, workers, as rational beings, are sensitive to economic incentives. They tend to overuse sick leave as long as the perceived marginal benefits of absence exceed the perceived marginal costs (Kaiser, 1998). Thus, absenteeism is a function of the generosity of sickness benefits (Halima & Koubi, 2022; Johansson & Palme, 2002; Ziebarth & Karlsson, 2014). This applies also to employers. Their informal consent to the misuse of sick leave depends on its cost, in terms of both work disorganization (Heywood et al., 2008) and sickness payments (Böheim & Leoni, 2020b; Pertold & Westergaard-Nielsen, 2018). Moreover, rational workers consider not only the financial but also the non-financial consequences of their absence. The most important non-financial consequence is the risk of being fired. In general, the fear of dismissal acts as a disciplinary tool: it reduces shirking (Shapiro & Stiglitz, 2011). Such a situation takes place, for example, in the case of a deterioration in the labour market (Bratberg & Monstad, 2015) or a change in employment protection (Bradley et al., 2014). Secondly, the rationality of behaviour is modified by social norms (Elster, 1989). This applies also to absence behaviour. The propensity to abuse sick leave (as well as any other welfare benefit) is deeply rooted in the cultural background, both national (Miraglia & Johns, 2021; Pfau-Effinger, 2005) and local (Virtanen et al., 2000). It derives from a low level of so-called 'benefit morale', which is defined as the individual reluctance to exploit the welfare state via benefit fraud (Halla et al., 2010, p.
36). Finally, in accordance with social psychology, decisions (including those regarding absence) made by an individual employee are not independent, but to some extent depend on the decisions of other employees in the workplace (organization) (Bazerman et al., 1983; Geurts, 1994). Due to the mutual obligations between coworkers, their decisions are interdependent. As such, the social context of work is an important determinant of absence behavior.

An attempt to integrate both aspects (individual and contextual) was made by R. Steers and S. Rhodes (Steers & Rhodes, 1978) in their process theory. It states that attendance is a function of two variables: (1) ability to work, and (2) motivation to work. Ability is the involuntary element of absence, whereas motivation is the voluntary one. Motivation is shaped by two different forces: push and pull. Push forces are negative factors that discourage work and limit its attractiveness. Pull forces, in turn, are positive factors that encourage out-of-work activity. Moreover, these forces operate at three different levels: (1) macro (e.g., culture, welfare regime, labour market), (2) meso (e.g., working environment), and (3) micro (e.g., personal characteristics).

The subject of interest in this article is the first (micro) level, associated with professional skills, and the second (meso) level, associated with working conditions. The research aim is to assess the effect of various work-related factors on the declared abuse of sickness absence in Poland.

The data used for the statistical analysis come from a CAWI survey conducted in 2021. The survey distinguished eleven 'circumstances' that could potentially lead to abuse of sick leave, such as renovation, overwork or important administrative matters. Respondents indicated their propensity to abuse in each of these circumstances. Then work-related factors, both micro-level (level of job skills) and meso-level (characteristics of the work environment), were associated with declared abuse. Structural equation modelling (confirmatory factor analysis) was used to reduce the number of dependent variables (the circumstances of sickness absence abuse) and independent variables (micro- and meso-level factors). Finally, linear regression was used to estimate the influence of certain predictors on the three basic categories of absence abuse: (1) compulsion, (2) escape and (3) recreation.

Although the relationship between the work-related situation and sickness absence has long been of interest to researchers, in some respects the topic is still under-recognised. First, abuses can be of different kinds: they can be forced by objective (but not health-related) factors, can result from a willingness to 'escape from' undesirable occupational activities, and can also result from a willingness to 'escape to' desirable non-occupational activities. It can be presumed that workers in different professional situations commit different types of abuse. Previous research has not considered this aspect. Secondly, to the best of our knowledge, many potentially important predictors of sick leave abuse have not been taken into consideration so far, such as the level of professional qualifications, professional experience, interpersonal skills or familiarity with new technologies.
The paper is divided into five sections. The first section discusses the findings of previous research on the links between professional situation and sickness absence abuse. The second section discusses the source of the data and the characteristics of the research sample. The third section presents the author's research approach. The fourth section presents the estimation results of models showing the impact of the analysed factors on the different categories of abuse (compulsion, recreation, escape). The fifth (final) section contains conclusions and discussion.

Abuse of sick leave and work-related situation: literature review

The links between the work environment and absenteeism have been repeatedly confirmed. The findings so far show that particular measures of sickness absence (prevalence, length) vary substantially between workplaces, even within the same industry and within the same country (Ichino & Maggi, 2000). According to Szubert and Sobala (2003), the level of sickness absence is strictly dependent on working conditions (material, psychosocial, organisational). Using the example of a large privatised company in Poland, they showed that absenteeism changes with the modernisation of the organisation.

Of course, in some industries, the working environment and/or the nature of the job demands have a natural impact on the level of sickness absence. The most frequent users of sick leave are people who are overburdened with physical work (Andersen et al., 2018, 2021; Kowalczuk et al., 2020), especially in conditions involving monotonous movements or constant exposure to a harmful factor (Voss et al., 2001). Absence is additionally affected by material and organisational working conditions (Sundstrup et al., 2018; Thorsen et al., 2021), including in particular the ergonomics of workstations (Böckerman & Laukkanen, 2010) and shift work (Bernstrøm & Houkes, 2020). All these factors tend to cause various adverse health effects and may therefore lead to high absence. However, there can also be another, indirect influence, through an impact on job satisfaction and, subsequently, absenteeism (Miraglia & Johns, 2016).

There are also a number of non-health-related factors that shape absence behaviour. Based on their literature review, Miraglia and Johns (2021) distinguished two domains of work-related factors: (1) organisational and (2) occupational. In the organisational domain they included: absence culture, group cohesion, psychological and demographic similarity between the individual and the work group or manager, workplace support and conflict, and organisational ethical climate. In the occupational domain, they included social expectations of attendance, social job demands, gender composition and trade unions.

Although the above list is very comprehensive, it is certainly not exhaustive. Several important work-related factors reported in the literature are missing. Four additional determinants are worth mentioning.
First, the size of the company. In general, the higher the number of employees, the higher the level of absence (CIPD, 2020). It can be assumed that there are more opportunities to abuse sickness absence in larger companies. Firstly, in a large team, it is relatively easy to arrange a replacement during illness. Secondly, in large companies, people tend to be anonymous to each other and are not connected by strong personal ties, and it is therefore easy for them to burden others with the additional tasks resulting from their own absence (Edwards & Ram, 2019).

Second, the form of ownership. In the public sector, the level of absence is higher than in the private sector (Løkke & Krøtel, 2020). In the UK it is almost twice as high (CIPD, 2020). J. R. Hansen et al. (2019) presented two possible explanations for this phenomenon. First, it is associated with job security, which is higher in the public sector. Second, it is associated with the pressure on performance and profit, which is higher in the private sector.

Third, leadership. Absenteeism is largely dependent on the behaviour, style and attitude of leaders (Buzeti, 2022; Dietz et al., 2020; Løkke, 2022; Løkke & Krøtel, 2020; Schmid et al., 2017; Sørensen et al., 2020; Stengård et al., 2021). Leaders have many tools of influence, such as general health and absence management, social modelling, etc. However, this factor is very complex and includes many other components that have been treated separately in other research, such as social capital and social norms (Clausen et al., 2020; A. S. K. Hansen et al., 2018; Løset et al., 2018), and the quality of interpersonal relationships and conflicts at work (Lakiša et al., 2021; Sterud et al., 2022).

Fourth, job satisfaction. The results of previous research indicate that absenteeism increases with a lack of support from colleagues and supervisors (North et al., 1996; Väänänen et al., 2003); an increase in stress and tension at work (Kristensen, 1991; Szubert et al., 2009); limited opportunities for career development and lower levels of participation (Melchior et al., 2003); and a decrease in overall job satisfaction (Marmot et al., 1995). This suggests that sickness absence is a result of frustration and helplessness. It becomes a form of escape from a toxic workplace, people and/or tasks.

However, there are a number of work-related factors that could potentially influence absence behaviour but have for some reason not yet been the subject of research. One such factor is the level of professional qualifications. Are employees who rate their knowledge and skills positively more likely to abuse sick leave than those who rate them negatively? Another factor is the type of work. Who is more likely to abuse sickness absence: white-collar workers or blue-collar workers? The remaining factors are: position held, method of remuneration and work experience.

In summary, work-related factors affect sickness absence in two ways. First, they affect the health of employees. Second, they affect employees' absence behaviour, including their tendency to abuse sick leave. However, it is still unclear under which circumstances abuse is most likely to occur and which work-related factors are associated with it.
Data source and sample characteristics

The abuse of sick leave is a problem that is still relatively poorly understood. Above all, there is a lack of empirical research on the issue. Research in this area is difficult to conduct due to the blurred distinction between justified and unjustified use of sick leave. Researchers are forced to exercise a high level of caution in interpreting the available data, as it is never fully clear whether an absence is forced by an actual illness or is the effect of other, non-health-related causes.

In Poland, the abuse of sick leave has not as yet been the subject of scientific research, and as a result the scale of the phenomenon is not known. The one available source of information on the topic is the results of spot checks carried out by the welfare authorities (ZUS). Unfortunately, the possibility of drawing conclusions on the basis of these data is severely limited, as the spot checks are both selective and cover only a narrow group of sick leave referrals (long-term sickness absence).

The lack of reliable and complete data from public sources requires sourcing information in another way. One potential solution is to use a survey-based study. Of course, the information gathered in this way does not reflect the actual state of affairs, and merely contains the declarations of respondents, which can to a lesser or greater degree diverge from reality, especially if difficult and/or morally questionable topics are covered (Bostyn et al., 2018). Nevertheless, this is also a valuable source of information which, while it may not present the problem studied as it actually is, does reveal the way in which it is perceived by respondents.

The source material used in this article comes from a survey study conducted in December 2021 by the research agency BBiAS. The information was gathered using the CAWI method, that is, via an internet survey. The territory covered by the research encompassed the whole of Poland, and the participants were full-time employees covered by national health insurance. The research sample totalled 1067 respondents. The structure of the sample according to the character and place of employment is presented in Table 1, while Table 2 presents the sample structure according to declared assessments of: employment conditions, own professional qualifications and the characteristics of the employer.

The random sample was drawn from national panels of respondents. It can be assumed that the randomised character of the sample provides grounds for generalisation of the results. The maximum measurement error was ±3% with a reliability level of 95%.
Abuse of sick leave in the light of declarations by respondents

Based on the results of prior research, eleven circumstances were isolated that particularly encourage the abuse of sick leave referrals. These are situations in which employees may feel a particular temptation to partake in unethical behaviour. These circumstances are:

CIR1: extending the period away from work, e.g., during public holidays or long weekends,
CIR2: overtiredness and/or overwork (sick leave as additional rest),
CIR3: refusal to grant regular leave (sick leave as a form of retaliation),
CIR4: demonstrating dissatisfaction with working conditions (sick leave as a form of strike),
CIR5: escape from problematic work tasks and/or from cooperation with unliked people,
CIR6: a spontaneous escapade (e.g., fishing, mushroom picking, a favourite team's match),
CIR7: a situation of higher necessity (e.g., an important family occasion),
CIR8: renovation work or other important work on the home,
CIR9: carrying out other paid work (e.g., an urgent task),
CIR10: the need to arrange an important administrative matter,
CIR11: caring for a loved one or an animal.

The respondents were asked to respond to each of these eleven cases and declare whether they had ever taken sick leave in such circumstances. The results are presented in Figure 1. Employees use sick leave the least often (8.43%) to demonstrate dissatisfaction with working conditions, and the most often (22.87%) in situations of higher necessity. Detailed results showing the distribution of responses in relation to individual variables are included in the Appendix.

Method and research procedure

The aim of the research was to assess the influence of various factors related to professional situation on the abuse of sick leave in Poland. For structural equation modelling, the MLR algorithm was used (maximum likelihood estimation with robust (Huber-White) standard errors), which is recommended when the assumption of a multivariate normal distribution is not met (Lai, 2018). Next, a series of multivariable linear regression analyses was conducted to assess the influence of the predictors on the individual categories of abuse. The statistical analysis was conducted using R software with the 'Lavaan' and 'car' packages.
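The declaration rates reported above (Figure 1) are simple proportions of positive responses per circumstance. A minimal sketch of this tabulation in Python/pandas is shown below; the file and column names are hypothetical, with each CIR column coded 1 if the respondent ever took sick leave in that situation.

```python
# Hypothetical tabulation of declared abuse per circumstance (names assumed).
import pandas as pd

df = pd.read_csv("cawi_survey_2021.csv")  # assumed file with 1067 respondent rows
cir_cols = [f"CIR{i}" for i in range(1, 12)]
rates = df[cir_cols].mean().mul(100).round(2)
print(rates.sort_values())  # e.g., CIR4 ~8.43% ... CIR7 ~22.87% in this study
```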
Dependent variable

Based on the classic Fraud Triangle concept (Cressey, 1953), the individual circumstances of abuse were grouped according to the type of motivational element into three inter-correlated subfactors (according to the division presented in Table 3), which were defined as 'categories' of abuse: COMPULSION, ESCAPE and RECREATION. At the stage of initial calculations, it was found that circumstance CIR4, that is, sick leave as a form of demonstrating dissatisfaction with working conditions, was not correlated with the other circumstances, and as a result it was excluded from further analysis. The obtained structural model estimates are presented in Table 4. In the COMPULSION category, the motivation for absence is the pressure related to the need to deal with an important and/or unpredicted matter that is in conflict with working hours. Such pressure is related to an important administrative matter or another situation of higher necessity, renovation work, or the need to provide personal care for a loved one or an animal. In the ESCAPE category, the motivation is the desire to 'escape from' unwanted work tasks, or to 'escape to' desired activities which collide with working hours. This desire is related to various factors that either push away from work (push factors), such as avoiding unpleasant events and/or people, or attract towards absence (pull factors), such as the wish to take part in a spontaneous escapade (fishing, mushroom picking, a favourite team's match). In the RECREATION category, the motivation to abuse sick leave is rest and recuperation. These circumstances arise in situations such as extending one's free time away from work (e.g., a long weekend), or as a reaction to weariness, overtiredness and/or overwork.

Independent variables

The independent variables used in the analysis were various factors related to the respondents' professional situation. These factors can be divided into four groups. The first group comprises variables related to the character of work and the place of employment. The second group comprises variables related to the assessment of employment conditions and the atmosphere at work. The third group comprises variables related to the assessment of one's own professional qualifications. The fourth (final) group comprises variables related to the assessment of the employer's characteristics.

In order to reduce the number of variables in groups two, three and four, confirmatory factor analysis was conducted and latent variables were created.

Variables in the second group, related to the assessment of employment conditions and the atmosphere at work, were grouped into two inter-correlated subfactors: MOTIVATIONAL COND and SOCIAL COND, according to the division presented in Table 5. The SOCIAL COND subfactor refers to the 'social' aspects of working conditions, related to interpersonal relations and the feeling of job security. Meanwhile, the MOTIVATIONAL COND subfactor refers to the 'motivational' aspects of working conditions, related to the feeling of satisfaction, prestige and opportunities for development. At the stage of initial calculations, it was found that the ECON4 assessment, that is, satisfaction with non-material working conditions, was not correlated with the other assessments in the group, and as a result it was excluded from further analyses. The results of the obtained structural model estimates are presented in Table 6.
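The confirmatory factor analyses described above (for the abuse categories and, analogously, for the working-condition subfactors) were run in lavaan; the sketch below shows an equivalent lavaan-style specification in Python's `semopy`. The assignment of CIR3 and CIR9 to ESCAPE is an assumption made for illustration, since Table 3 is not reproduced here.

```python
# Lavaan-style CFA sketch in Python's `semopy` (the authors used lavaan in R).
# CIR-to-factor mapping is inferred from the category descriptions; the
# placement of CIR3 and CIR9 under ESCAPE is an assumption (Table 3 not shown).
import pandas as pd
from semopy import Model

desc = """
COMPULSION =~ CIR7 + CIR8 + CIR10 + CIR11
ESCAPE     =~ CIR3 + CIR5 + CIR6 + CIR9
RECREATION =~ CIR1 + CIR2
COMPULSION ~~ ESCAPE
COMPULSION ~~ RECREATION
ESCAPE     ~~ RECREATION
"""

df = pd.read_csv("cawi_survey_2021.csv")  # assumed item-level data, CIR4 dropped
model = Model(desc)
model.fit(df)
print(model.inspect())  # factor loadings and inter-factor covariances
```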
The variables in the third group, related to the assessment of one's own professional qualifications, were grouped into a single factor, QUALIFICATIONS; the results of the obtained structural model estimates are presented in Table 7. The variables in the fourth group, related to the assessment of the employer's characteristics, were grouped into a single factor, EMP CHAR; the results of the obtained structural model estimates are presented in Table 8. Finally, the following independent variables were taken into consideration in the further part of the research: the main variables (MOTIVATIONAL COND, SOCIAL COND, EMP CHAR, QUALIFICATIONS) and the confounding variables related to the character of work and place of employment.

In order to assess the robustness of the regression estimates B for the main variables, two tested models were compared: one with all variables (model 1) and one with only the main variables (model 2). This procedure was conducted for all three categories of abuse (COMPULSION, ESCAPE and RECREATION). The results are presented graphically in Figures 2-4. Putting both tested models together enables a comparison of the regression estimates for the main variables alone with the regression estimates for the same variables nested with the confounding variables. In these figures, the error whisker bars present the 95% confidence interval for estimate B; overlapping whiskers for the two models indicate no difference between the estimates with and without the confounding variables.

Estimate of the predictive model for the COMPULSION abuse category

In order to estimate the effect of the previously defined work-related factors on abuse in the COMPULSION category, multivariable linear regression analysis was conducted. The obtained model proved to be statistically significant, F(24, 1042) = 2.55; p < 0.001. It explains around 6% (3% after correction) of the variability of the tested variable (R² = 0.06, adj. R² = 0.03). The results of the model estimation are presented in Figure 2.

In the model, four predictors were shown to be statistically significant: MOTIVATIONAL COND, SOCIAL COND, Work experience 5-9 years and Form of ownership Private. In addition, one factor turned out to be on the borderline of statistical significance: QUALIFICATIONS. An increase in results for the variable MOTIVATIONAL COND was linked to an increase in results for COMPULSION. This means that a better assessment of motivational working conditions increases the degree of abuse in this category. In turn, an increase in results for the variables SOCIAL COND and QUALIFICATIONS had the opposite effect, i.e., it was linked to a decrease in the results for COMPULSION. This means that a better assessment of the social aspects of working conditions, as well as of one's own professional qualifications, reduces the degree of abuse.

An increase in results for the variable Work experience 5-9 years was linked to an increase in results for COMPULSION, while an increase in results for the variable
Form of ownership Private was linked to a drop in the results for COMPULSION. This means that employees who have worked for a given employer for a period of between 5 and 9 years more often declare a propensity to abuse sick leave than employees who have worked for under 1 year. Employees of private enterprises, in turn, declare the abuse of sick leave less frequently than employees in the civil service. In the remaining cases, the effect of the variables was shown not to be statistically significant.

Estimate of the predictive model for the ESCAPE abuse category

As for the previous category, multivariable linear regression analysis was conducted. The obtained model was shown to be statistically significant, F(24, 1042) = 2.54; p < 0.001. It explains around 6% (3% after correction) of the variability of the tested variable (R² = 0.06, adj. R² = 0.03). The results of the model estimation are presented in Figure 3.

In the model, six predictors were found to be statistically significant: MOTIVATIONAL COND, SOCIAL COND, Reward Fixed, Position Industrial worker, Work experience 1-2 years and Work experience 5-9 years. In addition, one factor turned out to be on the borderline of statistical significance: Form of ownership Private.

As in the previous category, an increase in the value of MOTIVATIONAL COND was linked to an increase in the results for ESCAPE, and an increase in the value of SOCIAL COND had the opposite effect, i.e., it was linked to a decrease. This means that a higher assessment of motivational working conditions increases the degree of abuse in this category, while a higher assessment of the social aspects of working conditions reduces the amount of abuse.

An increase in the values of the variables Reward Fixed and Form of ownership Private was linked to a drop in the value for ESCAPE, which means that people who receive stable remuneration and those employed in private firms declare abuse less frequently than people with an undefined form of remuneration and civil servants.

Increases in the values of the variables Position Industrial worker, Work experience 1-2 years and Work experience 5-9 years were linked to a rise in ESCAPE, which means that employees who have worked for a given employer for 1-2 years or 5-9 years declare abuse more frequently than people who have worked for up to 1 year, while industrial workers declare abuse more often than employees of undetermined sectors.

In the remaining cases, the effect of the variables was shown not to be statistically significant.

Estimate of the predictive model for the RECREATION abuse category

As in the previous cases, multivariable linear regression analysis was conducted. The obtained model was not shown to be statistically significant, F(24, 1042) = 1.40; p > 0.05. It explains around 3% (1% after correction) of the variability of the tested variable (R² = 0.03, adj. R² = 0.01). The results of the model estimation are presented in Figure 4.

In the model, only one predictor was found to be statistically significant: SOCIAL COND. An increase in the value of this factor was linked to a drop in the value for RECREATION, which means that a higher assessment of the social aspects of working conditions decreases the declared propensity to abuse sick leave.
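The three category models share the same structure: a latent-factor score regressed on the main variables plus dummy-coded controls. The sketch below shows one such model in Python's statsmodels (the authors used R); the dataset and variable names are assumed.

```python
# Sketch of one of the three category models (variable names are assumed).
# Categorical controls are dummy-coded via C() in the formula interface.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("analysis_dataset.csv")  # assumed: factor scores + covariates
model = smf.ols(
    "COMPULSION ~ MOTIVATIONAL_COND + SOCIAL_COND + EMP_CHAR + QUALIFICATIONS"
    " + C(work_experience) + C(form_of_ownership) + C(position) + C(company_size)",
    data=df,
).fit()
print(model.summary())  # F-statistic, R-squared, and estimates B per predictor
```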
Summary of estimation results for all three models

The study confirms that certain work-related factors influence the abuse of sickness absence. Table 9 summarises the results, including the direction of the effect of certain predictors on each category of abuse.

The study analysed the impact of ten factors: (1) motivational working conditions, (2) social working conditions, (3) characteristics of the employer, (4) level of professional qualifications, (5) character of work, (6) remuneration method, (7) position held, (8) work experience, (9) form of ownership, and (10) company size (number of employees).

Motivational working conditions have a positive effect on all categories of sickness absence abuse. This impact is partially statistically significant (in one of the three categories). It means that an increase in job satisfaction, commitment and career opportunities leads to an increase in the propensity to abuse sickness absence. Such a result strongly contradicts previous theoretical findings (Steers & Rhodes, 1978) and empirical evidence (Böckerman & Ilmakunnas, 2008; Luz & Green, 1997). It means that the relationship between work motivation and unethical organisational behaviour is complex and ambiguous. An increase in motivation does not lead to a reduction in abuse, as is commonly believed. It is difficult to provide a clear explanation for this phenomenon. Presumably it is an effect of the specific cultural background in Poland, where the abuse of social benefits (such as sickness absence) is perceived as an action against the welfare state rather than against the employer. It may also be a manifestation of hypocrisy, consisting in the belief that a high level of work motivation can be compensated (if necessary) by additional absence. Time off work may be perceived as a form of additional reward for commitment and dedication to the company. Further in-depth research is certainly needed to unravel this issue.

Social working conditions have the opposite effect to motivational ones. An increase in the quality of interpersonal relationships (both with colleagues and with superiors) reduces the propensity to abuse. This effect is statistically significant in all three cases. It is consistent with previous findings that greater group cohesion reduces shirking (Miraglia & Johns, 2021).
The assessment of employer characteristics has a positive effect on sickness absence misuse. The more positive the employees' opinion of the organisation they work for, the greater the tendency to engage in unethical absenteeism in the COMPULSION and RECREATION categories. Although these effects are not statistically significant, they are contrary to intuitive expectations and the findings of previous research (Bekker et al., 2009; Kangas et al., 2017). A positive evaluation of characteristics such as concern for employee-related issues, social responsibility, management style and organisational culture should lead to a restriction of unethical practices. This paradox, as in the case of motivational working conditions, is difficult to explain and requires further in-depth research.

Self-assessed qualifications have a negative effect on all categories of sickness absence abuse. This effect is partially statistically significant (in one of the three categories). It means that the better employees feel about their knowledge and skills, the less likely they are (at least as far as reporting is concerned) to engage in unethical behaviour with regard to absenteeism. It can be concluded that people who are better educated and more familiar with their professional duties are more aware of the harmful effects of excessive absenteeism. However, it cannot be excluded that to some extent this is the effect of a psychological phenomenon (coherence bias), i.e., people who have a positive view of their own qualifications will also have a positive view of their own attitude to sickness absence.

The character of work has an inconsistent effect on excessive absenteeism. Blue-collar workers are more likely to commit abuses from the COMPULSION and ESCAPE categories, while white-collar workers are more likely to commit abuses from the RECREATION category. However, none of these effects are statistically significant.

The remuneration method has a mixed effect on sickness absence abuse. Employees paid on a fixed and variable basis are less prone to abuse in the COMPULSION and ESCAPE categories, while employees who find it difficult to determine the method of payment are more prone to abuse in the RECREATION category. This effect is only partly statistically significant.

In terms of the position held, it is difficult to see any regularity in the abuse of sickness absence. Moreover, the effect was not shown to be statistically significant.

Work experience has mixed effects on abusing sick leave. Employees with short work experience (up to 1 year) are less prone to abuse in the COMPULSION and ESCAPE categories than those with longer work experience. In the RECREATION category, this effect is diverse and it is difficult to indicate a logical relationship. In most cases these effects are not statistically significant.

Public sector employees are much more likely to abuse sick leave than private sector employees. This is true for all categories of abuse. This effect is partly statistically significant (in two of the three categories). Such a result is consistent with previous research (J. R.
Hansen et al., 2019; Løkke & Krøtel, 2020) and confirms that excessive absenteeism is the domain of public administration. Moreover, it can now be clearly confirmed that this high level of absenteeism is not due to poorer health, but to unethical behaviour. It can be assumed that public organisations are less successful in attendance management: managers are not properly trained and management tools are not used effectively. It is also possible that unethical behaviour in terms of absenteeism is a reaction by employees to poor employment and working conditions.

The final factor is the size of the organisation (number of employees). In general (with one exception), employees in larger organisations are less likely to abuse sickness absence than those in the smallest organisations (up to ten employees). This effect is not statistically significant. It suggests, however, that the tendency towards unethical behaviour is influenced not by the size of the organisation, but rather by its structure, the management style, the organisational culture and, above all, the quality of relations between employees (cohesion of teams within the organisation). Moreover, large companies are more likely to implement formalised attendance management programmes, which (at least to some degree) solve the problem of sickness absence abuse.

Discussion and conclusions

According to R. Gardiner (1992, p. 290), absenteeism is probably the most common and often the most frustrating problem with which a supervisor deals. It causes a number of negative consequences, both financial (payment of benefits) and non-financial (work disorganisation) (Grinza & Rycx, 2020).

The negative consequences of excessive absenteeism have led to increased interest in the problem. Identifying its determinants is therefore an increasingly important area of research. Employers use this theoretical knowledge to develop practical solutions that increase their effectiveness in limiting the abuse of sick leave.

Absenteeism is a complex issue that depends on a number of different individual and contextual factors relating to personal characteristics (micro factors), the work environment (meso factors) and the wider environment (macro factors). The ability of employers to influence these factors is, of course, limited. They can only modify the working environment and, to some extent, the qualifications of the employees (micro factors). These work-related factors are the focus of this article.

The results of the survey suggest that the abuse of sick leave is quite a common practice in Poland. Almost one in four respondents reported abusing sick leave in situations of higher necessity, e.g., to attend an important family celebration. Almost one in five respondents reported abusing sickness absence to take care of a loved one or an animal, or to deal with important administrative (non-work) matters. Each of these circumstances is of course important and urgent, but they are not excuses for using sick leave, which is intended for treatment and rehabilitation. Sickness absence in such cases is unethical behaviour. It constitutes welfare abuse.
The statistical analysis concerned the impact of ten work-related factors on three specific categories of abuse ('compulsion', 'escape', and 'recreation'). The potential predictors covered different aspects of the work situation: social and motivational conditions, characteristics of the employer (including number of employees and form of ownership), professional qualifications, methods of remuneration, professional experience, position held and type of work performed. The study showed the influence of many of these on sickness absence abuse, but the most consistent and statistically significant results were obtained in four cases: (1) motivational working conditions, (2) social working conditions, (3) qualifications and (4) form of ownership.

The conclusions obtained provide many practical implications. First and foremost, companies that want to limit excessive absenteeism should focus on "interpersonal" issues. The quality of the relationships between employees and between employees and their superiors is crucial in combating unethical behaviour. Therefore, human resources policies should expand the range of activities aimed at integration, cohesion, trust and mutual responsibility.

The second key issue is qualifications. Employees are less likely to abuse sickness absence if they consider themselves qualified and highly skilled. It is therefore worth investing in people's development and building their self-esteem so that they believe in their own abilities.

Another conclusion is that abuse of sickness absence is widespread in public administration. Policy-makers should try to resolve this difficult situation. It will not be easy because it is a structural and deep-rooted problem. It results from low employment standards, underpaid working conditions, and (often) inappropriate management behaviour. Such a situation has persisted over a long period and has led to the development of a harmful 'culture of absence', where social norms provide widespread consent to abuse. It cannot be improved by a single measure, but requires a radical and multidimensional reorganisation of the way the public sphere functions.

Finally, the case of work motivation and its impact on the tendency to abuse sickness absence is the most intriguing result of this study. It turns out that highly motivated employees commit fraud more often than those who are less motivated. This contradicts the common belief that absenteeism can be reduced by increasing job satisfaction: high satisfaction not only does not reduce abuse, but may even increase it. Explaining this paradox requires additional research. It should be clarified whether the effect applies only to Poland and results from specific cultural or institutional conditions, or whether it is a general phenomenon.
As for the research limitations, the source material used in the statistical analysis was the results of a survey. Therefore, the collected data do not reflect actual behaviour, but only the declarations of the respondents. It should be noted that respondents do not present facts as they really are, but as they perceive them and wish to present them. As the subject of the research was morally reprehensible, it can be assumed that respondents were reluctant to reveal their true behaviour. It therefore seems necessary to develop this type of research in the future using other methods (e.g., experiments) that allow for a better recognition of the respondents' real inclinations.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Figure 1. Abuse of sick leave absence according to circumstances. Source: own elaboration.

Figure 2. Results of the predictive model estimates for the abuse category COMPULSION (model 1: all variables; model 2: only the main variables). Note: The error whisker bars present the 95% confidence interval for estimate B. Lines that cross one another represent the lack of differences between the predictors in the effect on the level of Compulsion, whereas lines that do not cross represent important differences. Overlapping orange and blue whiskers mean that there were no differences between the estimates in the two models (with [blue] and without [orange] confounding variables). Source: own elaboration.

Figure 3. Results of the predictive model estimates for the abuse category ESCAPE (model 1: all variables; model 2: only the main variables). Note: as in Figure 2, for the Escape variable. Source: own elaboration.

Figure 4. Results of the predictive model estimates for the abuse category RECREATION (model 1: all variables; model 2: only the main variables). Note: as in Figure 2, for the Recreation variable. Source: own elaboration.

Table 1. Sample characteristics according to character of work and workplace.

Table 2. Respondents according to subjective assessment of employment conditions, own professional qualifications and characteristics of the employer.

Table 3. Categories of sick leave absence abuse.

Table 4. Results of structural model estimates for the dependent variable according to the three categories of abuse (RECREATION, ESCAPE and COMPULSION).

Table 5. Subfactors related to assessment of employment conditions and work atmosphere.
ECON9. opportunity to fulfil one's passion and professional interests; ECON10. motivation to work and level of engagement in delegated tasks; ECON11. a sense of agency and an effect on events within the firm; ECON12. accordance between work and personal interests. Source: own elaboration.

Table 6. Results of structural model estimates for the independent variables from the second group: assessment of employment conditions and work atmosphere (with the two subfactors MOTIVATIONAL COND and SOCIAL COND).

Table 7. Results of structural model estimates for the independent variable from the third group: assessment of own professional qualifications (with one factor QUALIFICATIONS).

Table 9. Direction of effect of work-related factors on each category of sickness absence abuse. Description: ↑ an increase in the factor value represents an increase in abuse in a given category; ↓ an increase in the factor value represents a decrease in abuse in a given category; – relation close to zero for the level of abuse in a given category; * statistically significant effect. Source: own elaboration.

Table A2. Abuse of sick leave absence and assessments of employment conditions and work atmosphere (in percent).

Table A3. Abuse of sick leave absence and assessment of own professional qualifications (in percent).

Table A4. Abuse of sick leave absence and assessment of characteristics of the employer (in percent).
Impacts of COVID-19 on Nutritional Intake in Rural China: Panel Data Evidence

The COVID-19 pandemic introduced risks and challenges to global food and nutrition security. In this paper, we examine the impact of the COVID-19 pandemic on the nutritional intake of China's rural residents using panel data and a fixed effects model. The data were collected in 2019 and 2020 and covered nine provinces and 2631 households in rural China. The results reveal that an increase of 100 confirmed cases in a county resulted in a 1.30% (p < 0.01), 1.42% (p < 0.01), 1.65% (p < 0.01), and 1.15% (p < 0.01) decrease in per capita intake of dietary energy, carbohydrates, fats, and proteins, respectively. Moreover, the COVID-19 pandemic had a significant and negative effect on dietary macronutrient intake in the low-income group at the 5% level of significance. Our study indicates that the potential for insufficient nutrition, nutritional imbalance, and dietary imbalance among low-income rural residents should be addressed appropriately.

Introduction

The COVID-19 pandemic has been ongoing since January 2020, owing to its rapid, widespread transmission and the difficulty of preventing and controlling it [1][2][3]. As of 13 May 2022, there were 517,648,631 confirmed cases, including 6,261,708 deaths worldwide [4]. The epidemic has had a profound impact on the global economy and welfare, through business shutdowns, job losses, disrupted supply chains, commodity price volatility, etc. [5][6][7][8]. Moreover, the pandemic introduced risks and challenges to global food and nutrition security [9,10] and made the pathway towards SDG2 even steeper [11], especially in rural areas of the developing world [12][13][14][15].

The channels through which the pandemic affects food and nutrition security span all four pillars of food security. Food availability and stability are affected by a lack of workers [16], delays in agricultural work [17], increases in the prices of food and materials [18][19][20][21], and trade restrictions [22]. Moreover, major threats to food access and utilization posed by COVID-19 are the loss of household income, reduced purchasing power [23][24][25], and supply chain disruptions caused by lockdown measures [26][27][28].

In this paper, we examine the impact of the COVID-19 pandemic on nutritional intake, a key aspect of food utilization and SDG2 [29]. Recent literature shows that the pandemic has had a significant but heterogeneous impact on nutritional intake. The consumption of nutrient-dense foods, such as vegetables, fruit, and animal-source food, has been reduced, while the consumption of carbohydrate-containing foods, such as bread, has increased [30][31][32]. However, the lockdown policy led to an increase in fruit, vegetable, and fat consumption in some developed countries [33]. In terms of specific populations, the pandemic's impact on Dutch older adults was negative [34], while the impact on Australian university students was positive [35]. Furthermore, evidence shows that the COVID-19 pandemic might affect dietary structure and consumer behavior [36,37]. For example, consumers may prefer healthy diets [38,39], the demand for online food delivery may increase [40,41], panic buying may occur [42], and sustainable food consumption may be promoted [43][44][45].

The COVID-19 pandemic seriously affected rural China [32]. About 27% of the agrifood system's workers (about 46 million) lost their jobs due to COVID-19 during the lockdown phase (January 2020-March 2020) [46].
According to a survey in mid-February 2020, 23% of households who have been out of poverty since 2013 believed they might return to poverty [47]. However, only a few studies have evaluated the impacts of the COVID-19 pandemic on the dietary diversity [48,49] and food consumption of China's rural residents [50]. Tian et al. (2022) found that COVID-19 positively affected rural households' consumption of vegetables, aquaculture products, and legumes, but significantly reduced rural households' dietary diversity [50]. To the best of our knowledge, the pandemic's impact on the nutritional intake of China's rural residents is still unknown.

To fill the research gaps and enable a better understanding of how the COVID-19 pandemic affects nutritional intake, the specific objectives of the study are to: (i) investigate the COVID-19 pandemic's impact on the nutritional intake of China's rural residents; and (ii) identify the heterogeneity of the pandemic's impact among different income groups, in addition to considering the different impacts of the pandemic on countries with different income levels. Moreover, since most similar studies use cross-sectional data [34,35] or non-national, small panel data [50], we use nationwide panel data covering nine provinces and 2631 rural households and a fixed effects model following Amare et al. (2021) [23] to control for unobserved factors, such as dietary preferences. Given China's food security concerns in the future [51,52], this study can provide policy recommendations for securing the basic nutritional needs of rural residents.

Study Design

We empirically evaluated the impact of the COVID-19 pandemic on Chinese rural residents' nutritional intake using a multiple fixed effects (FE) model. The baseline regression is as follows:

Nutrition_hcpt = β0 + β1·COVID_ct + β2·X_hcpt + α_h + ε_hcpt    (1)

where the outcome variable Nutrition_hcpt indicates the quantity of the nutritional intake of household h in county c, province p, and time t. In this paper, the outcome variable includes dietary energy, carbohydrates, fat, and protein. COVID_ct is the key explanatory variable, indicating the number of confirmed COVID-19 cases. X_hcpt is a matrix of control variables, including the price of nutrients, expenditure, number of days spent performing non-farm work, presence of heavy workers, sports facilities in villages, total retail sales of consumer goods in counties, Internet access, and family size. α_h is the household fixed effect, and ε_hcpt is the error term. β1 is the key parameter indicating the impact of COVID-19 on nutritional intake: one more confirmed case in a county would result in a 100·β1 % change in nutrient intake, ceteris paribus.

To control for unobservable factors that stay constant within the county, province, and time, we add three more parameters to Equation (1):

Nutrition_hcpt = β0 + β1·COVID_ct + β2·X_hcpt + α_h + δ_c + θ_p + γ_t + ε_hcpt    (2)

where δ_c is the county fixed effect, which controls for all time-invariant county-level characteristics, and θ_p and γ_t indicate the province and time fixed effects, respectively.

Data Collection

We used the 2019-2020 Survey for Agriculture and Village Economy (SAVE) data collected by the Institute of Agricultural Economics and Development, Chinese Academy of Agricultural Sciences [53][54][55]. The 2019-2020 SAVE data record the annual production, consumption, expenditure, and income of rural households and cover 5818 observations in the Hebei, Jilin, Heilongjiang, Anhui, Fujian, Henan, Hunan, Sichuan, and Yunnan provinces of China (Figure 1).
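As a concrete illustration of Equation (2), the sketch below estimates the dietary-energy regression with statsmodels. The variable names are hypothetical (not the SAVE codebook), the outcome is assumed to be in logs (consistent with the 100·β1 % interpretation above), and the household dummies already absorb the time-invariant county and province effects for non-moving households, so those are not entered separately.

```python
# Minimal sketch of the fixed-effects regression in Equation (2),
# assuming a long-format DataFrame `df` with one row per household-year.
import numpy as np
import statsmodels.formula.api as smf

df["ln_energy"] = np.log(df["energy_kcal"])
df["ln_price"] = np.log(df["energy_price"])
df["ln_exp"] = np.log(df["pc_expenditure"])

fe = smf.ols(
    "ln_energy ~ covid_cases + ln_price + ln_exp + I(ln_exp ** 2)"
    " + nonfarm_days + heavy_worker + sports_facility"
    " + retail_sales + internet + family_size"
    # household dummies absorb county and province effects here
    " + C(household) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["county"]})

# 100 * beta1 = percentage change in intake per additional case
print(100 * fe.params["covid_cases"])
```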
Moreover, the number of accumulated confirmed COVID-19 cases in each county by the end of December 2020 was collected from Wind Info. We also used consumer price index (CPI) data from the National Bureau of Statistics of China (NBSC).

Outcome Variables

Since the SAVE data only contain at-home consumption information for 18 food items, we first divided household food consumption by family size to obtain per capita food consumption (kg/year), and then converted the per capita food consumption into per capita intake of dietary energy (kcal/day), carbohydrates (g/day), fat (g/day), and protein (g/day), based on the China Food Composition [56]. However, this method may have underestimated nutritional intake because it ignores other food (not included in the 18 categories) consumed at home and all food consumed away from home. We assumed that the nutritional content of other food consumed at home and of all food consumed away from home was proportional to the 18 categories of food consumed at home as a function of expenditure [57]. Meanwhile, we assumed that 50% of food expenditure away from home pertained to food quantities consumed [58]. Thus, the proportion φ of the 18 categories of food expenditure in total food expenditure can be expressed as follows:

φ = Σᵢ xᵢ / (Σᵢ xᵢ + X_OT + 0.5·X_FAFH)    (3)

where i = 1, ..., 18; xᵢ represents the expenditure on food item i; X_OT indicates the expenditure on other food (not included in the 18 categories) consumed at home; and X_FAFH indicates the food expenditure away from home. Thus, the per capita daily intake of nutrient k is expressed as:

Nutrition_k = (1/φ) · Σᵢ N_ik    (4)

where Nutrition_k represents the total intake of nutrient k from all food items (Table 1); N_ik is the intake of nutrient k obtained from food item i; q_i represents the per capita consumption of food item i; and γ_i represents the proportion of the edible parts of food item i.

COVID-19

According to the Law on the Prevention and Control of Infectious Diseases of the People's Republic of China, a county government can take measures such as stopping work, restricting activities, or lockdown as necessary for public safety. Further, there were differences in prevention and control policies among counties in China during the COVID-19 pandemic. Thus, we use cumulative cases at the county level to measure the impact of COVID-19 (Table 1).
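A compact way to express Equations (3) and (4) in code is sketched below. The column names and the division of annual quantities by 365 are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of the intake construction in Equations (3)-(4); columns and
# the food-composition values are illustrative, not the SAVE codebook.
import pandas as pd

def nutrient_intake(foods: pd.DataFrame, x_other: float, x_fafh: float,
                    nutrient_col: str) -> float:
    """Per capita daily intake of one nutrient for one household.

    foods: one row per food item with columns
      q       - per capita consumption (kg/year)
      gamma   - edible proportion of the item
      expend  - expenditure on the item (CNY/year)
      <nutrient_col> - nutrient content per kg of edible food
    """
    # Eq. (3): share of the 18 recorded items in total food spending,
    # counting 50% of food-away-from-home expenditure.
    x18 = foods["expend"].sum()
    share = x18 / (x18 + x_other + 0.5 * x_fafh)

    # Eq. (4): scale the recorded intake up by 1/share, assuming the
    # unrecorded food has the same nutrient mix as the recorded food.
    recorded = (foods["q"] * foods["gamma"] * foods[nutrient_col]).sum() / 365
    return recorded / share
```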
Weighted Price of Nutrients

Price is one of the major determinants of consumer behavior [59][60][61]. As a consequence of the lockdown policies implemented during COVID-19, the food purchases and nutrient intake of rural residents were strongly influenced by price fluctuations [18,62]. However, it was only possible to collect food prices (unit values), not nutrient prices, during the data collection process. Thus, a weighted nutrition price P_k is introduced in this paper to describe the price of nutrient k:

P_k = Σᵢ (P_i / N_ik) · (N_ik / Nutrition_k)    (5)

where P_i = E_i/Q_i is the price of food item i (Table 1), and E_i and Q_i indicate the expenditure on and consumed quantity of food item i, respectively. Further, P_i/N_ik indicates the price (or unit value) of nutrient k in food item i, and N_ik/Nutrition_k indicates the proportion of nutrient k obtained from food item i in the total intake of nutrient k from all food items.

Other Control Variables

Income, expenditure, and family size are also important determinants of food consumption (Table 1) [59,63][64][65]. In a single-equation model of food consumption, either income or expenditure can be used. In this paper, per capita annual expenditure was used, since respondents usually do not report their actual incomes. We also used an instrumental-variable estimation of the fixed effects model, with expenditure as the instrumental variable for income. Moreover, activity level and food accessibility are also variables that could affect nutritional intake [26,41]. Though these variables are not available in the SAVE data, we selected some proxy variables (Table 1). As proxies for activity level, we used the existence of sports facilities in villages, the number of days spent performing non-farm work by household laborers, and the presence of heavy workers in industry, construction, and mining. As proxies for food accessibility, we chose the total retail sales of consumer goods in counties and whether households had access to the Internet.

Data Processing and Cleaning

First, we deleted some samples to construct balanced panel data. Second, we excluded samples with extreme values by winsorizing at the 2% quantile. Third, prices, incomes, and expenditures were deflated by China's annual CPI. After data processing and cleaning, we kept 2631 rural households, for a total of 5262 observations (Figure 1).

Statistical Analysis

As shown in Table 2, the average per capita daily intakes of carbohydrates, fat, protein, and dietary energy in 2019 were 252.88 g, 96.72 g, 48.56 g, and 2059.43 kcal, respectively. In 2020, average carbohydrate intake decreased by 5.43 g, while fat and protein intakes increased by 2.01 g and 0.17 g, respectively. However, the differences in macronutrient intakes were not significant. In terms of data quality, the per capita daily intake of dietary energy was similar to that in the Report on the Nutrition and Chronic Disease Status of Chinese Residents (2020) [66]. However, the fat intake from the SAVE data was higher than that of the Report, while the carbohydrate and protein intakes from the SAVE data were lower.

The average unit prices of macronutrients increased from 2019 to 2020. The average unit prices of fat, protein, and dietary energy significantly increased in 2020, by 0.48 CNY, 0.98 CNY, and 0.03 CNY, respectively, whereas the average unit price of carbohydrates did not change significantly.
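Equation (5) can be computed per household and per nutrient in a few lines; the sketch below uses the same illustrative columns as above and mirrors the two ratios named in the text, so it is a reading of the formula rather than the authors' exact code.

```python
# Sketch of the weighted nutrient price in Equation (5):
# each food's nutrient price P_i / n_ik is weighted by the share of
# nutrient k that the household obtains from that food.
def weighted_nutrient_price(foods, nutrient_col):
    p_i = foods["expend"] / foods["quantity"]      # unit value P_i = E_i / Q_i
    n_ik = foods[nutrient_col]                     # nutrient k per kg of food i
    intake_i = foods["q"] * foods["gamma"] * n_ik  # nutrient k from food i
    weights = intake_i / intake_i.sum()            # share of nutrient k from i
    return ((p_i / n_ik) * weights).sum()
```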
Additionally, while per capita income in 2020 was essentially the same as in 2019, per capita expenditure was significantly higher. Compared with 2019, the proxy variables for activity level in 2020 were stable. There were on average 3.94 family members, 105 days of non-farm work per year, 30% of household laborers engaged in heavy work in industry, construction, and extraction, and sports facilities were found in about 53% of villages. On the other hand, total retail sales of consumer goods in counties decreased significantly, by approximately 741 million CNY compared with 2019, due to lockdown restrictions and the closure of some retail businesses. Meanwhile, a significant increase of 10 percentage points was observed in the proportion of farmers with Internet access. (Table 2 notes: ** p < 0.05, *** p < 0.01.)

Results

In this section, we first present the estimation results of the COVID-19 impact on dietary energy, carbohydrate, fat, and protein intakes. Then, we demonstrate the robustness of the estimation results. Finally, we identify the heterogeneity in the pandemic's impact across different income groups.

COVID-19 Impact on Dietary Energy Intake

Table 3 sheds light on the impacts of COVID-19 on dietary energy intake. To explore the nonlinear relationship between dietary energy intake and expenditure, we added the square of the expenditure term to Equation (2). As shown in Table 3, the negative and significant coefficient of COVID indicates that an increase in COVID-19 cases in a county significantly reduces the per capita dietary energy intake of rural residents. Specifically, an increase of 100 confirmed cases in a county results in a 1.30% (p < 0.01) decrease in per capita dietary energy intake (Table 3).

In addition, our results demonstrate that an increase in the weighted energy price led to a decrease in dietary energy intake: dietary energy intake decreases by approximately 0.48% (p < 0.01) for every 1% increase in price (Table 3). Moreover, the coefficient on the square of the expenditure term was significantly negative, which indicates that the impact of expenditure on dietary energy intake has an inverted U-shape. Furthermore, the results indicate that larger families tended to have lower per capita dietary energy intake, in line with previous research. (Tables 3-6 notes: standard errors in parentheses; * p < 0.10, ** p < 0.05, *** p < 0.01.)

COVID-19 Impact on Carbohydrate, Fat, and Protein Intakes

From Tables 4-6, the most important highlight is that an increased number of confirmed COVID-19 cases in a county caused a significant reduction in per capita carbohydrate, fat, and protein intake. For every 100 additional cases of COVID-19 in a county, the intake of carbohydrates, fats, and proteins declined by 1.42% (p < 0.01), 1.65% (p < 0.01), and 0.81% (p < 0.01), respectively. Thus, among the three macronutrients, COVID-19 had the largest relative effect on fat intake in rural China. In addition, the own-price elasticities of the three macronutrients were negative, and the cross-price elasticities were positive (Tables 4-6). The own-price elasticities of carbohydrate, fat, and protein were −0.87 (p < 0.01), −0.76 (p < 0.01), and −0.69 (p < 0.01), respectively. These results indicate that Chinese rural residents were most sensitive to the price of carbohydrates, and that the macronutrients had a significant substitution relationship.
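To make the headline coefficients concrete: in a log-linear specification, k additional cases change intake by (e^(k·β1) − 1) × 100 percent, which for small k·β1 is approximately 100·k·β1. A quick numerical check using an illustrative β1 implied by the reported dietary-energy effect (this value is back-solved for illustration, not taken from the tables):

```python
# Back-of-envelope reading of a log-linear COVID coefficient.
import numpy as np

beta1 = -1.30e-4  # per confirmed case; illustrative value consistent
                  # with the reported -1.30% per 100 cases
for cases in (100, 500, 1000):
    pct = (np.exp(cases * beta1) - 1) * 100
    print(f"{cases:5d} cases -> {pct:+.2f}% dietary energy intake")
```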
An increase in the price of one nutrient results in consumers switching to another nutrient to ensure adequate overall calorie intake.

Robustness Test

First, we assessed the robustness of the estimation results using fixed effects models of various dimensions (Tables 3-6). In Columns (1) and (2), we only controlled for time and province fixed effects, and the standard errors in Column (1) were not cluster-robust. Then, in Column (3), we added the county fixed effect. The results showed that the coefficients of the variables were similar across all columns. Moreover, the estimation results using a fixed effects model (Table A1) were similar to those in Tables 3-6. Therefore, the estimated results are robust. Additionally, we replaced expenditure with income in Equation (2). Due to the endogeneity associated with income measurement error, an instrumental-variable estimation of the fixed effects model was constructed using expenditure as an instrumental variable. The results in Table A2 are generally consistent with those in Tables 3-6, supporting the robustness of our study.

Heterogeneity Effect across Income Strata

To identify the heterogeneity in the pandemic's impact, we examined how the pandemic affected rural residents at different income levels. The entire sample was divided into four categories based on percentiles of per capita income: low-, middle-low-, middle-high-, and high-income groups. Specifically, the low-income group consisted of households in the lowest 25% of the income distribution, the middle-low-income group consisted of households between the 25th and 50th percentiles, the middle-high-income group consisted of households between the 50th and 75th percentiles, and the high-income group comprised the remainder.

The estimation results for the different income groups are shown in Table 7. They show that the COVID-19 pandemic had a significant and negative effect on dietary macronutrient intake in the low-income group at the 5% level of significance. An increase of 100 confirmed cases in a county resulted in a 2.58% (p < 0.05), 2.18% (p < 0.05), 2.92% (p < 0.05), and 2.28% (p < 0.05) decrease in the per capita dietary energy, carbohydrate, fat, and protein intake, respectively, of low-income rural residents. Furthermore, the fat intake of high-income rural residents decreased by 1.24% (p < 0.05) for every 100 confirmed cases in a county. (Table 7 notes: full results in Tables S1-S4; standard errors in parentheses; * p < 0.10, ** p < 0.05.)
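The income split behind Table 7 amounts to quartiles of per capita income, with the same model re-estimated within each group. A sketch with the same illustrative variable names as above:

```python
# Sketch of the income-quartile split used in Table 7.
import pandas as pd
import statsmodels.formula.api as smf

df["income_group"] = pd.qcut(
    df["pc_income"], q=[0, 0.25, 0.5, 0.75, 1.0],
    labels=["low", "middle-low", "middle-high", "high"],
)

for g, sub in df.groupby("income_group"):
    res = smf.ols(
        "ln_energy ~ covid_cases + ln_price + ln_exp + C(household) + C(year)",
        data=sub,
    ).fit(cov_type="cluster", cov_kwds={"groups": sub["county"]})
    print(g, 100 * res.params["covid_cases"])  # % change per case, by group
```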
Discussion

To the best of our knowledge, this paper is one of the first studies to investigate the COVID-19 pandemic's impact on the nutritional intake of China's rural residents. In order to prevent the spread of the virus, governments throughout China implemented a range of lockdown policies, including traffic control, production shutdowns, and restrictions on movement [46,50]. On the one hand, these measures disrupted agricultural production and food supply chains and increased the cost of food storage and transportation [67,68]. On the other hand, disruptions in the supply of agricultural products, restrictions on human movement, and the suspension of transportation and passenger transport reduced the agricultural and non-farm incomes of rural residents [69].

Additionally, rural residents' expected income decreased when faced with epidemic-induced uncertainty, and they were consequently more likely to strengthen their precautionary saving motives [70]. As a result of these factors, there was a decrease in food availability, a decrease in farmers' willingness to consume, and consequently a decrease in dietary energy intake. This provides an explanation for the main findings of this paper.

Further, since the negative effects of the COVID-19 pandemic on the intake of macronutrients differed, it is likely that the structure of macronutrient intake was altered as a result of the pandemic. The main reason the nutritional structure changed was the changing structure of the foods consumed. Thus, there was a relatively small decline in the consumption of carbohydrate-rich cereals, as residents maintained their basic dietary needs, while fat-rich foods such as pork were consumed less frequently. Note that in 2020, China was also affected by the African swine fever outbreak, which contributed to a significant rise in pork prices and, to some extent, to a reduction in meat consumption. Through the time fixed effect, the impact of the African swine fever epidemic was controlled for in this study and thus did not affect our conclusions.

The study also found that low-income groups suffered significant and negative consequences in dietary intake from the COVID-19 pandemic. Since low-income groups face strong budget constraints and have a high Engel coefficient, it was difficult for them to adjust their consumption structure when affected by the COVID-19 pandemic. Consequently, low-income households were less able to substitute across consumption types and food types; under the influence of the pandemic, they could only reduce demand. Moreover, the impacts of the COVID-19 pandemic on the fat intake of high-income groups were also significant. However, their overall proportion of dietary energy from fat reached 42.9% in 2020, exceeding the maximum recommended value (30%) of the Dietary Guidelines for Chinese Residents (2016). Thus, the COVID-19 pandemic might have improved the dietary balance of China's high-income rural residents.

Several important policy implications are derived from the study. First, since the COVID-19 pandemic could exacerbate undernutrition among rural residents, particularly those with lower incomes, it is the government's responsibility to ensure that low-income rural residents have access to sufficient nutrition. Second, attention needs to be given to nutritional balance and dietary balance in light of COVID-19. Third, it is important for the government to introduce supply-side policies to stabilize production, as well as policies to promote consumption and price stability, to make the food system more resilient [71]. Finally, under the influence of COVID-19, China's food security policy should focus on macro measures while attending more closely to resident groups, families, and individuals.

It should be noted that our study has several limitations. First, the SAVE data contained only data regarding food consumption at home, which, despite being processed using Equation (3), does not accurately reflect food consumption away from home. Second, because of the limitations of the SAVE data, the effect of COVID-19 can only be evaluated with regard to macronutrients, and its effect on micronutrients such as vitamins and minerals cannot be assessed.
Third, studies have shown that farmers can increase production diversity in order to enrich dietary diversity [72], but this factor was not taken into consideration in this study. Accordingly, we suggest that future studies concentrate on the effects of COVID-19 on food consumption away from home, micronutrients, and production diversity in rural China.

Conclusions

In summary, based on nationwide panel data and a fixed effects model, this paper provides insights into the nutritional intake of China's rural residents during the COVID-19 pandemic in 2020. We found that the COVID-19 pandemic negatively impacted the intake of dietary energy, carbohydrates, fats, and proteins. Furthermore, there was heterogeneity in nutritional intake among different income groups, and the dietary intake of the low-income group was significantly affected by the COVID-19 pandemic. Therefore, the government should assist low-income groups in accessing sufficient nutrition during the COVID-19 epidemic.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/nu14132704/s1. Table S1: Estimation results of the COVID-19 impact on dietary energy intake by income groups. Table S2: Estimation results of the COVID-19 impact on carbohydrate intake by income groups. Table S3: Estimation results of the COVID-19 impact on fat intake by income groups. Table S4: Estimation results of the COVID-19 impact on protein intake by income groups.

Data Availability Statement: The data presented in this study are available upon request from the corresponding author.
Structural Disorder and Properties of the Stuffed Pyrochlore Ho2TiO5

We report a structural and thermodynamic study of the "stuffed spin ice" material Ho2TiO5 (i.e., Ho2(Ti1.33Ho0.67)O6.67), comparing samples synthesized through two different routes. Neutron powder diffraction and electron diffraction reveal that the previously reported defect fluorite phase has short-range pyrochlore ordering, in that there are domains in which the Ho and Ho/Ti sublattices are distinct. By contrast, a sample prepared through a floating zone method has long range ordering of these sublattices. Despite the differences in crystal structures, the two versions of Ho2TiO5 display similar magnetic susceptibilities. Field dependent magnetization and measured recovered entropies, however, show a difference between the two forms, suggesting that the magnetic properties of the stuffed pyrochlores depend on the local structure.

Introduction

Pyrochlore compounds, with general formula A2B2O7, represent an important family of materials that display geometrically frustrated magnetism. The frustrated geometry arises from sublattices of corner sharing tetrahedra, which are present for both the A and B cations. Spin ice pyrochlores (Ln2M2O7, where Ln = Dy, Ho and M = Ti, Sn) have been of considerable interest and studied extensively [1][2][3][4][5][6][7][8][9] as unique examples of magnetic frustration where the spins have effective ferromagnetic interactions. The geometry of the rare earth sublattice, combined with crystal field effects that restrict the moments to be Ising-like, generates spin frustration that mimics the positional frustration of hydrogen atoms in water ice: [8][9][10] spins or hydrogen atoms sitting on the corners of tetrahedra seek a minimum energy configuration in the short range by freezing into a "two-in, two-out" arrangement. 1 The large degeneracy that arises from energetically equivalent arrangements of two-in and two-out on a single tetrahedron leads to overall long range disorder. The same measurable zero-point entropy for both spin ice 9 and water ice 11,12 exists according to the "ice rules," 13,14 and is directly attributable to this degeneracy.

Dilution studies on spin ice materials, [15][16][17] where magnetic rare earth moments are replaced with a nonmagnetic species, reveal that decreasing the spin interactions does not destroy the ice-like state, but does suppress the magnitude of the freezing signature. It was recently shown that "stuffed spin ice" (the opposite case, where additional magnetic atoms are stuffed into the nonmagnetic Ti sites, creating, for example, the series Ho2(Ti2-xHox)O7-x/2, 0 ≤ x ≤ 0.67) retains the same zero-point entropy as undoped spin ice and may possess accelerated spin dynamics. 18 Ho2(Ti2-xHox)O7-x/2 represents a continuous solid solution from Ho2Ti2O7 (x = 0) to Ho2(Ti1.33Ho0.67)O6.67 (x = 0.67), or equivalently, Ho2TiO5. [19][20][21][22][23][24] The extra Ho in stuffed spin ice replaces Ti for small x, i.e., the excess Ho atoms are located on the B-site sublattice of the pyrochlore structure. The B-sites form a sublattice of corner-sharing tetrahedra equal in size and atomic distances to the A-site sublattice but offset by (1/2, 1/2, 1/2). At lower doping levels, 0 ≤ x ≤ 0.3, the structure retains this pyrochlore ordering with a clear distinction between the A and B sublattices. The extra Ho is confined primarily to the B-site, and the A sublattice remains largely undisturbed.
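The zero-point entropy referred to above follows from Pauling's counting argument, which is worth recalling since its value reappears in the heat capacity analysis later in this paper. Each of the N Ising spins has 2 states, while each of the N/2 tetrahedra accepts only 6 of its 16 possible configurations, so

    W ≈ 2^N (6/16)^(N/2),    S0/R = ln 2 + (1/2) ln(3/8) = (1/2) ln(3/2) ≈ 0.203,

leaving a recoverable entropy of R[ln 2 − (1/2) ln(3/2)] ≈ 4.08 J mol⁻¹ K⁻¹ per mole of Ho, below the full two-state value R ln 2 ≈ 5.76 J mol⁻¹ K⁻¹.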
As more Ho is substituted in place of Ti (x > 0.3), some Ti begins mixing onto the Ho A-site, and both A and B-sites have mixed occupancy at x = 0.67 doping. 19 The average structure is of the fluorite type, where the A and B cations are randomly mixed on the metal sites. This transformation turns the magnetic lattice from corner sharing tetrahedra in the pyrochlore Ho2Ti2O7 to edge sharing tetrahedra in the fluorite Ho2TiO5.

Here we report a comparison of the structure and magnetic properties of Ho2TiO5 synthesized in two different ways. The first synthesis condition involves firing the starting materials to a high temperature followed by a rapid quench to room temperature. The average structure and magnetic properties of this variant have been reported previously. 18,19 This material is referred to as quenched (Q) Ho2TiO5. The second synthesis condition involves making Ho2TiO5 with a floating zone crystal growth technique, where the material is melted and cooled at a much slower rate than that used in the Q synthesis. This material is referred to as floating zone (FZ) Ho2TiO5.

The Q sample displays on average a complete disordering of the Ho and Ti cations to form the fluorite structure. 18,19 Here we report neutron powder diffraction and electron diffraction data that give evidence that short range pyrochlore ordering exists in this highly disordered material. This ordering takes the form of small pyrochlore domains in which the extra stuffed Ho mixes in a disordered fashion into the B-site sublattice, while the A-site sublattice, containing the usual pyrochlore Ho arrangement, is largely undisturbed. The domains have short correlation lengths, which we postulate to arise from an antiphase arrangement of the pyrochlore regions. These antiphase domains are postulated to form on cooling from a fully disordered cation array at high temperature, nucleating into pyrochlore-like domains where one or the other of the interpenetrating tetrahedral sublattices is selected locally to be the "A" site in the short range ordered pyrochlore. The average over these small pyrochlore domains appears as a disordered fluorite structure, as reported earlier. In contrast to the Q sample, we find the FZ sample to be a long range ordered pyrochlore phase, with the A lattice consisting almost entirely of Ho ions and the extra Ho mixing primarily on the Ti B-site. Here we show that the distinction in long range versus short range cation ordering does not result in significant differences in magnetic properties between the FZ and Q variants. The recovered entropy measured for the FZ sample is, however, greater than that expected for ice-like materials, while the missing entropy of the Q version remains similar to that present in ordinary Ho2Ti2O7 spin ice.

Experimental

Ho2(Ti1.33Ho0.67)O6.67, or Ho2TiO5, was prepared in two different ways. In both cases, Ho2O3 (Cerac, 99.9%) and TiO2 (Cerac, 99.9%) powders were thoroughly mixed in a 1:1 molar ratio with an agate mortar and pestle. For the quenched sample, the powders were pressed into a pellet, wrapped in molybdenum foil, heated at 1700 °C in a static argon atmosphere for 12 hours, and quenched to room temperature in approximately 30 minutes. The argon atmosphere was achieved in a vacuum furnace first evacuated to about 10⁻⁶ torr and then back-filled with argon (Airgas, 99.9%) to room pressure. The floating zone sample required the same initial treatment as the quenched version to ensure chemical homogeneity.
The sintered pellet was reground to a fine powder and formed into cylindrical rods in a sealed rubber tube pressed for 15 minutes at 70 MPa in a cold isostatic press. The polycrystalline rods were sintered in air at 1400 °C for 12 h before use in a Crystal Systems optical image floating zone furnace. The crystal was grown in flowing air at rates between 2.00 and 10.00 mm/h, and pulverized afterwards for structure and physical properties characterization.

Both samples were analyzed for phase purity by powder X-ray diffraction (XRD) using Cu Kα radiation and a diffracted beam graphite monochromator. Neutron diffraction (ND) data were collected on both samples at the NIST Center for Neutron Research on the high resolution powder neutron diffractometer with monochromatic neutrons of wavelength 1.5403 Å produced by a Cu(311) monochromator. Collimators with horizontal divergences of 15′ and 20′ of arc were used before and after the monochromator, and a collimator with a horizontal divergence of 7′ was used after the sample. Data were collected in the 2θ range of 3°-168° with a step size of 0.05°. Rietveld refinements of the structures were performed with the GSAS suite of programs. 25 The peak shape was described with a pseudo-Voigt function. The background was fit with 12 terms of a linear interpolation function. The neutron scattering amplitudes used in the refinements were 0.801, −0.344, and 0.580 (×10⁻¹² cm) for Ho, Ti, and O, respectively.

Electron microscopy analysis was performed with a Philips CM200 electron microscope having a field emission gun and operated at 200 kV. Electron-transparent areas of specimens were obtained by crushing them slightly under ethanol to form a suspension and then dripping a droplet of this suspension onto a carbon-coated holey film on a Cu or Au grid.

Magnetic and specific heat measurements were performed on pressed pellets in Quantum Design MPMS and PPMS cryostats. The magnetizations of the samples were measured down to T = 1.8 K and in fields up to H = 7 T. Fits to the Curie-Weiss law were performed on the DC susceptibility χ = M/H, using magnetization data taken at H = 0.1 T. Heat capacity measurements were performed using a standard semi-adiabatic heat pulse technique, and the addendum heat capacity was measured separately and subtracted. The samples used for heat capacity measurements were thoroughly mixed with Ag powder before being pressed into a pellet, to facilitate thermal conductivity throughout the sample. The contribution to the heat capacity from the Ag was subtracted using previously published data. 26 All the samples for susceptibility were cut to needle-like shapes, and the long side was oriented along the direction of the applied field, in order to minimize demagnetization effects.

Results and Discussion

Rietveld refinement fits to the neutron powder diffraction (ND) data are shown in Figure 1 to contrast the difference in crystal structures of Ho2TiO5 made using the two methods. The patterns reveal that both long range and short range order are present in the materials, seen in the presence of both narrow and broad diffraction peaks. The fluorite structure for Ho2TiO5 is face centered cubic, with a ≈ 5.15 Å. The pyrochlore structure, in contrast, is face centered cubic with a ≈ 10.3 Å, a 2×2×2 supercell of the fluorite structure due to the distinction between the A and B sites.
Thus the diffraction patterns consist of a series of peaks from the fluorite-like structure (fluorite substructure peaks) with additional peaks (the pyrochlore superstructure peaks) that appear with increasing intensity as the pyrochlore-type ordering becomes more developed. If the pyrochlore ordering occurs over only a short range, then the pyrochlore superstructure peaks will be broadened. This is seen in Figure 1. The neutron data used in the refinements were taken at room temperature. A diffraction pattern of the Q sample at 4 K was virtually identical to the room temperature data, indicating the structure has no temperature dependence. In the present study, the significantly broadened superstructure peaks were omitted from the refinements, as a detailed analysis of neutron powder patterns in the Ho2(Ti2-xHox)O7-x/2 series, including fits to the broadened peaks, will be published elsewhere. 27 For the Q sample, all of the pyrochlore superstructure peaks were broadened. For the FZ sample, only the regions near the 331 and 422 pyrochlore peaks were excluded from the refinement.

The data for both Q and FZ samples were refined within a pyrochlore structure model to better compare lattice parameters and cation site occupancies. Ho and Ti occupancies were allowed to refine freely on both the A and B-sites of the pyrochlore structure, with the constraint that the occupancies summed to 1 on each site (i.e., all metal sites are fully occupied). The oxygen occupancies were also allowed to refine freely. Thermal displacement parameters were constrained for cations mixed on the same site and for oxygen atoms in the similar 8b and 8a sites. In general, all of the atoms gave relatively large thermal displacement parameters in both samples when allowed to refine freely. This is due to the high degree of average positional disorder seen in the cation and oxygen lattices of both phases. The results from the refinements are displayed in Table I.

The Q sample displays average long range disorder between Ho and Ti atoms. This is evidenced by the presence of sharp, well defined fluorite subcell peaks and the lack of equally sharp pyrochlore superstructure peaks. Considering only the sharp fluorite peaks, the refinement shows that Ho and Ti are randomly mixed in an approximately 2:1 ratio on both 16d and 16c cation sites, as expected from the material stoichiometry. However, the pyrochlore superstructure peaks are not entirely absent and are actually broadened. This is most apparent for the 331 peak, the highest intensity pyrochlore supercell reflection, and is true for all of the pyrochlore superstructure reflections. The broadened peaks show that there remains short range pyrochlore-like ordering in the Q sample. These broad peaks were not seen in the X-ray diffraction data (not shown) or mentioned in previous structural reports on Ho2TiO5, which employed X-ray diffraction. 18,19,[22][23][24] This indicates that the oxygen atoms, which are relatively transparent to X-rays, may contribute to the short range order scattering. In Ho2Ti2O7 and other ordered pyrochlores, the oxygen atoms fully occupy the 8b and 48f sites while the 8a site is entirely vacant. 28 In the fluorite structure, the A and B atoms are mixed and no longer distinguishable, and the oxygen atoms may occupy any of the 8b, 8a, or 48f sites. 28
The structure refinement reported here on the Q sample shows that the Ho-stuffing alters the oxygen lattice by introducing occupancy of the otherwise vacant 8a site.

The FZ sample contrasts with the Q phase by displaying sharp, long range ordered peaks that are well described by a cubic pyrochlore model, although some diffuse scattering is still observed, particularly around the 331 and 422 peaks. Despite omitting those peaks from the refinement, the thermal displacement parameter of the 16c site refined to an unusually large value. However, when Uiso was fixed to a reasonable value, the quality of fit did not change significantly, and the site occupations of the cations changed by less than 5%. We attribute this to the low intensity broad reflections present throughout the data. Small regions of short range ordered pyrochlore, as in the Q phase, could contribute to this scattering. However, a variety of low intensity peaks were not well fit by a simple pyrochlore model (see insets of Figure 1), suggesting the presence of an additional structural modulation, as described further below. As can be seen in Table I, the refinement shows that the extra stuffed Ho in the FZ version mixes primarily on the Ti B-site, leaving the original Ho site largely unaffected (Figure 2).

Electron diffraction patterns (EDPs) comparing the Q and FZ samples are given in Figure 3, which shows projections along the <110> zone axis. The strongest spots in both cases represent the underlying fluorite sub-cell common to both structures. The Q EDP in Figure 3a displays superreflections situated halfway between the fluorite spots (e.g., 111, 331 referred to the pyrochlore cell), providing additional evidence of pyrochlore ordering in the sample. These pyrochlore reflections are greatly elongated in the 111 direction, however, indicating that the ordering is on a short length scale, consistent with the broadening of the pyrochlore superstructure peaks in the neutron diffraction data. This structure model, which describes short range pyrochlore ordering within an average disordered fluorite, is similar to that observed by electron diffraction for cubic stabilized zirconia materials. [29][30][31][32][33][34][35][36][37] Given the large size difference between Ho and Ti, it is not surprising to find that despite the complete disorder of the average structure seen in the Q defect fluorite over long length scales, ordering still occurs on the local scale. Indeed, when allowed to cool from high temperature at a slower rate, as in the FZ sample, the ordering occurs over a longer range, yielding the sharper pyrochlore reflections seen in Figure 3b. Only a slight elongation of these spots exists along the 111 direction (the 111 peak in the neutron powder diffraction data is also slightly broadened). This indicates that the domains of pyrochlore ordering in the FZ material are on average much larger than the pyrochlore domains in the Q material.

The additional weak reflections seen in the EDPs show that the details of the pyrochlore ordering in both samples are actually more complicated. Additional weak reflections of this type indicate the presence of a minor structural modulation of the cubic pyrochlore structure in both cases. These reflections can be described as a 7-fold increase of the pyrochlore unit cell in the 662 direction. This 7× supercell is observed in both types of materials (Figure 3). An additional tripling of the pyrochlore unit cell in the 111 direction is seen only in the FZ sample.
The diffraction patterns were further examined by nanodiffraction (spot size about 5 nm) and HREM imaging. We surmise that antiphase domains are responsible for the short-range pyrochlore ordering described above. The Ho and Ti atoms in Ho 2 (Ti 1.33 Ho 0.67 )O 6.67 are randomly mixed on both the A and B pyrochlore sites at high temperature. As the material is quenched, Ho and Ti will naturally try to order separately from one another due to their large difference in size. This results in pure Ho nucleating out on one of the two interpenetrating sublattices of corner-sharing tetrahedra: this becomes the A-site of the pyrochlore ordering in this local region of the material. The extra stuffed Ho is then forced onto the other pyrochlore cation site and mixed with the remaining Ti. The antisite domains, where one or the other of the interpenetrating sublattices is chosen locally by Ho, are frozen in by the quench, and short-range pyrochlore ordering is the consequence. Taken as a powder average, the Ho and Ti appear disordered over both A and B sites, resulting in an average fluorite structure. The FZ process allows the material to cool slowly, and the antisite domains can anneal into just one type of site ordering. This is supported by the sharp, long-range pyrochlore peaks in the neutron pattern and the sharp pyrochlore spots in the EDP. The magnetic susceptibility of the FZ sample is given in Figure 4, showing no difference between the zero-field-cooled and field-cooled data. The lack of a bifurcation precludes the material from being a spin glass in the temperature regime studied, similar to the behavior reported previously for the Q sample. 18 A χ⁻¹(T) comparison between Q and FZ is displayed in the inset, with the Curie-Weiss temperatures (θ w ) and effective magnetic moments (p) determined from fits to both the high-temperature (50-150 K) and low-temperature (10-20 K) data displayed in Table II. Both samples show no long-range magnetic order down to 2 K, and both have similar negative θ w values, which indicate dominant antiferromagnetic spin interactions. This is in contrast to undoped Ho 2 Ti 2 O 7 spin ice, which has weakly ferromagnetic interactions. 38 The determined p values are also similar in the Q and FZ samples, and are close to the expected value for a free Ho 3+ ion. The differences in long-range vs. short-range ordering between the Q and FZ samples therefore do not result in significant differences in the magnetic susceptibility. Figure 5 compares the field dependence of the magnetization for the Q and FZ samples. Although both reach similar magnetizations at 6 Tesla, the Q sample begins to saturate at a lower applied field. The FZ phase requires a stronger applied field before approaching the same magnetization, and appears that it would saturate at a higher magnetization if extrapolated to stronger fields. This is consistent with the slightly more negative θ w of the FZ sample, indicating slightly stronger antiferromagnetic interactions that would be harder to saturate with an external field. The M(H = 6 T) values are given in Table II, and are approximately half of the full magnetization expected for a free Ho 3+ ion. In Ho 2 Ti 2 O 7 the magnetization saturates at half as well, 38 suggesting that, despite the difference in average spin connectivity between Ho 2 Ti 2 O 7 and the Q and FZ Ho 2 TiO 5 , their local single-ion anisotropies may be similar.
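The Curie-Weiss parameters quoted in Table II come from linear fits of χ⁻¹(T) over fixed temperature windows. A minimal sketch of such a fit follows (CGS molar-susceptibility conventions; the numerical inputs are placeholders chosen near free-ion Ho 3+ values, not the measured data):

```python
import numpy as np

def curie_weiss_fit(T, chi):
    """Fit chi = C / (T - theta_w) via a straight line through 1/chi vs T.
    Returns (theta_w [K], p_eff [Bohr magnetons], C [emu K / mol])."""
    slope, intercept = np.polyfit(T, 1.0 / chi, 1)
    C = 1.0 / slope
    theta_w = -intercept * C
    p_eff = np.sqrt(8.0 * C)   # valid for chi in emu/mol (CGS)
    return theta_w, p_eff, C

# Placeholder data over the high-temperature window (50-150 K):
T = np.linspace(50, 150, 40)
chi = 14.0 / (T + 2.0)        # i.e., C ~ 14 emu K/mol, theta_w ~ -2 K
print(curie_weiss_fit(T, chi))
```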
Specific heat data, C(T), for the FZ and Q samples are plotted in Figure 6a. Both samples show a magnetic peak around T = 1.6 K, followed at lower temperature by the onset of a Schottky peak due to the hyperfine contributions from the Ho nuclei. The peak at T = 1.6 K is much sharper in the FZ sample, suggesting that longer-range ordering is occurring in this material. Figure 6b compares the magnetic entropy, S(T), at H = 0 of the FZ and Q samples, determined by first subtracting the lattice and nuclear-spin contributions from the total specific heat and then integrating C magnetic (T)/T from low to high T. As reported previously, 18 the magnetic entropy of the Q sample remains ice-like, saturating at R[ln 2 − (1/2)ln(3/2)] rather than at the entropy expected for a two-state system, R ln 2. This was unexpected, as the average structure of the Q sample consists of an array of edge-sharing tetrahedra of Ho atoms, contrary to the corner-sharing tetrahedral arrangement in Ho 2 Ti 2 O 7 . The recovered magnetic entropy of the FZ sample is larger than that of the Q sample. Given the differences in average crystal structure, a disparity in the measured entropy is not surprising.

Conclusions
The synthesis of a long-range-ordered pyrochlore-like phase of Ho 2 TiO 5 using the floating zone method is reported. The structure and properties of this FZ sample are compared with the previously reported fluorite-like Q phase of Ho 2 TiO 5 . The Q sample is actually pyrochlore-like on the local scale, where the extra Ho mixes primarily onto the Ti B-site while the A-site remains undisturbed. The FZ sample displays this pyrochlore-like ordering over the long range. Both materials exhibit an additional 7-fold structural modulation, but only the FZ phase shows a further 3-fold modulation in the 111 direction. While the structural differences between the two variants do not significantly affect the magnetic susceptibility, the field-dependent magnetization and the magnetic entropy are clearly influenced. One possibility is that the extra 3-fold structural modulation observed in the FZ phase of Ho 2 TiO 5 breaks the cubic symmetry of the pyrochlore lattice significantly, resulting in our observation of a lower ground-state entropy for that variant. The structural information reported here is important for the modeling of the magnetic behavior of stuffed spin ice, since we demonstrate that the "stuffed" Ho in these spin ices goes onto the pyrochlore B-sites, locally adding magnetic neighbors to an otherwise undisturbed pyrochlore lattice. Similar studies of other stuffed rare-earth pyrochlores will lend insight into whether these structural observations can be generalized to the wide range of frustrated magnetic materials in this family.
(Table I columns: Compound, Atom, Wyckoff position, x, y, z, U iso ×100, Occ; U iso = isotropic displacement parameter, Occ = occupancy.)
(Figure 4 inset: the inverse susceptibility versus temperature is compared between the Q and FZ variants.)
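For reference, the entropy comparison in Figure 6b above amounts to integrating C magnetic /T and comparing the result to R ln 2 and to the Pauling ice value R[ln 2 − (1/2)ln(3/2)]. A minimal sketch of that bookkeeping, with the array contents assumed as inputs:

```python
import numpy as np

R = 8.314  # J / (mol K)

def magnetic_entropy(T, C_mag):
    """S(T) = cumulative trapezoidal integral of C_mag/T, with the lattice
    and nuclear hyperfine contributions assumed already subtracted."""
    y = C_mag / T
    steps = np.diff(T) * 0.5 * (y[:-1] + y[1:])
    return np.concatenate(([0.0], np.cumsum(steps)))

print(R * np.log(2))                        # two-state limit, ~5.76 J/(mol K)
print(R * (np.log(2) - 0.5 * np.log(1.5)))  # Pauling ice value, ~4.08 J/(mol K)
```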
Motor phenotype is not associated with vascular dysfunction in symptomatic Huntington’s disease transgenic R6/2 (160 CAG) mice
Whereas Huntington’s disease (HD) is unequivocally a neurological disorder, a critical mass of emerging studies highlights the occurrence of peripheral pathology, such as cardiovascular defects, in both animal models and humans. Overt impairment of cardiac function would normally be expected to be associated with peripheral vascular dysfunction; however, whether this assumption holds in HD is still unknown. In this study we functionally characterized the vascular system in the R6/2 mouse model (line 160 CAG), which recapitulates several features of the human pathology, including cardiac disease. Vascular reactivity in different arterial districts was determined by wire myography in symptomatic R6/2 mice and age-matched wild-type (WT) littermates. Disease stage was assessed using well-validated behavioural tests such as the rotarod and horizontal ladder tasks. Surprisingly, no signs of vascular dysfunction were detectable in symptomatic mice, and no link with the motor phenotype was found. Huntington's disease (HD) is one of the most common non-curable rare diseases, characterized primarily by a progressive loss of cognitive and motor function leading to severe disability and death in affected patients 1 . Expansion of the trinucleotide (CAG) repeat within the huntingtin (HTT) gene is recognized as the major cause 2 , and its length is known to profoundly influence age at onset and disease phenotype 3,4 . Whereas HD is unequivocally a neurological disorder, a critical mass of emerging studies suggests peripheral pathology as an important factor that might significantly contribute to the overall presentation and progression of the disease. Interestingly, multiple epidemiological studies report evidence of heart pathology in HD and describe cardiac failure as one of the more common causes of death among disease patients 5,6 . Subtle abnormalities of autonomic control of the cardiovascular system in HD have already been reported at pre-symptomatic and early stages 7,8 and have been described to progress gradually in range and magnitude as the disease advances 9 . A similar dysfunctional cardiac phenotype has also been observed in pre-clinical HD settings 10-12 ; however, much work is still needed to fully understand whether there is any direct association with peripheral vascular function. The available studies that examine this possible correlation in HD currently provide insufficient evidence for a definitive conclusion. While alterations in the structure of the vascular network have clearly been implicated in brain pathology in HD 13 , definitive evidence regarding vascular homeostasis is still lacking. Aside from a few studies reporting only partial evidence of deranged peripheral vascular function in some of the available HD models 14,15 , no comprehensive study systematically investigating functional vascular reactivity has been conducted so far. In this regard, here we sought to provide a more complete profile of vascular function and of the contractile properties of both resistance and capacitance vessels, in both central and peripheral districts, in the R6/2 mouse model (160 CAG), the best-characterized and most widely used model for studying cardiac function in HD 11-13,15 , which recapitulates several features of the motor and behavioural phenotype of early human pathology 16,17 .
In this study, our thorough investigation of symptomatic 12-week-old R6/2 mice has led to the first full characterization of vascular function in this model, in which we could not detect any vascular dysfunction or molecular defects in related signalling pathways, such as the one involving the synthesis of nitric oxide (NO), whose dysfunction had previously been hypothesized to pre-date disease manifestation in HD models 15 .

Methods
Animal model. The transgenic HD R6/2 line, expressing exon 1 of the human huntingtin gene carrying approximately 160 ± 5 CAG repeat expansions, was originally purchased from Jackson Laboratories (Bar Harbor, ME, USA), and the colony was maintained by breeding heterozygous R6/2 males with wild-type (WT) females from their background strain (B6CBA-Tg(HDexon)62oGpb/J) in the animal facility at IRCCS Neuromed. Genotyping was confirmed by PCR performed at 3 weeks of age to determine the study groups. Mixed-gender F1-generation mice were used in this study. Animals were housed in polycarbonate cages (15 × 23 × 17 cm) provided with a mouse house and aspen bedding, and maintained under temperature- (22-24 °C) and humidity-controlled (55%) conditions. Food and water were provided ad libitum. All efforts were made to minimize the number of animals used and their suffering. All the experiments reported in this study were performed on the same animal groups. All animal procedures conformed to the guidelines for the care and use of laboratory animals set out in Directive 2010/63/EU of the European Parliament, and were approved by the IRCCS Neuromed Animal Care Review Board and by the "Istituto Superiore di Sanità" (permit number: 1163/2015-PR). Assessment of motor function and disease progression in R6/2 mice. Fine-motor skills and coordination were assessed using well-validated motor tests according to standard recommendations. All tests took place during the light phase of the light-dark cycle. Six mice per experimental group were used in each test. All mice received training for 2 consecutive days on each instrument and task before the motor behavior measurements. Before training and testing, mice underwent a period of habituation to the testing room and equipment. Motor coordination and balance were tested on the rotarod apparatus as previously described 18 . Briefly, mice were tested at fixed speed (20 rpm) on the rotarod (Ugo Basile) for 1 min. Each mouse was tested in three consecutive trials of 1 min each, with 1 min of rest between trials. The time spent on the rotarod in each of the three trials was averaged to give the overall time for each mouse. Skilled walking, limb placement and limb coordination were all assessed by the ladder rung walking task as previously described 18 . All tests were carried out once per week until the 11th week of age. Concomitant with the analysis of motor performance, animal body weight was also measured. All mouse cages were examined daily in order to determine disease progression and the overall wellbeing of the mice. Blood pressure measurements. Blood pressure was measured using the BP-2000 instrument (Visitech Systems). The tail-cuff method was carried out as described previously 19 . After a four-day training period, basal systolic and diastolic blood pressure were measured daily for one week in conscious and unrestrained R6/2 mice at different time points during the symptomatic stage of the disease (7-, 10- and 12-week-old mice) and in age-matched WT littermates. Ex vivo vascular reactivity in resistance vessels.
Vascular reactivity studies were carried out in second-order branches of the mesenteric arterial tree or in femoral arteries from symptomatic HD mice (12 weeks old) and age-matched WT littermates. Briefly, vessels were excised from mice, and adventitial fat was carefully removed under a dissection microscope (Nikon, SMZ645). Arteries were then mounted on a pressure myograph (DMT Danish Myosystem) filled with Krebs solution (pH 7.4) maintained at 37 °C, as previously described 19 . After an equilibration period of 60 minutes, the vasoconstrictive response was assessed in the presence of 80 mM KCl or of increasing doses of phenylephrine (1 × 10 −9 to 10 −5 M) until a plateau was reached. Vessels were then washed at least three times in order to stabilize the vascular tissue. Endothelium-dependent and -independent relaxations were assessed in phenylephrine pre-constricted vessels by measuring the vasorelaxant response to cumulative concentrations of acetylcholine (1 × 10 −9 to 10 −5 M) or nitroglycerine (1 × 10 −9 to 10 −5 M), respectively. Moreover, in order to assess the contribution of NO signaling to vascular function in our symptomatic HD mice, mesenteric arteries from both WT and R6/2 mice were pre-treated with the direct NOS inhibitor N G -nitro-L-arginine methyl ester (L-NAME, 300 μM, 30 min) before the analysis of acetylcholine-induced vasorelaxation. Ex vivo vascular reactivity in capacitance vessels. To test the vascular response of the capacitance vessels from symptomatic HD mice (12 weeks old) and age-matched WT littermates, we studied the aorta and carotid arteries. In detail, after excision of the vessels from mice, fat tissue was carefully removed and the vessels were cross-sectioned into 2-mm-long rings. Two stainless-steel wires were inserted into the vascular lumen of the aorta, placed in a chamber and connected to a force transducer (WPI). Carotid arteries were mounted in a wire myograph (model 410A, Danish MyoTechnology, Aarhus, Denmark) over 25-μm tungsten wires and placed in organ baths filled with aerated Krebs solution connected to a force transducer. After an equilibration period of 60 minutes, the vasoconstrictive response was assessed with 80 mM KCl or with increasing doses of phenylephrine (1 × 10 −9 to 10 −5 M) until a plateau was reached. The vascular response of phenylephrine pre-constricted vessels to cumulative concentrations of acetylcholine and nitroglycerine was examined to determine the endothelium-dependent and -independent relaxation, respectively. Statistics. All data are expressed as mean ± standard error of the mean (SEM). Data were statistically analyzed by two-way ANOVA followed by Bonferroni post-hoc analysis, using dedicated software (GraphPad Prism, version 5.0).

Results
Motor performance and disease progression in R6/2 mice. Rigorous evaluation of motor coordination, to monitor disease progression and to precisely determine the advanced disease stage in our R6/2 line, was performed using standardized procedures as previously described 18 . As expected, HD mice developed a measurable neurological phenotype by 7-8 weeks of age, consistent with the original report 16 , and displayed a marked and progressive deterioration in motor performance compared to WT mice (Fig. 1A,B) as the disease progressed. The severity of these abnormalities worsened gradually until 11 weeks of age, when the advanced-stage neurological deterioration that classically occurs in this model 20 irreversibly affected the overall wellbeing of our mice (two-way ANOVA; rotarod: interaction, F(4, 40) = 8.796, p < 0.0001; horizontal ladder task: interaction, F(4, 40) = 8.210, p < 0.0001) (Fig. 1 and Supplementary Tables 1 and 2).
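As a rough illustration of the two-way (genotype × week) ANOVA used above, the sketch below runs the same type of test in Python with statsmodels; the data-frame values are placeholders, not the measurements reported in this study.

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Hypothetical long-format data: 2 mice per genotype per test week.
df = pd.DataFrame({
    "genotype": ["WT"] * 6 + ["R6/2"] * 6,
    "week":     [7, 7, 9, 9, 11, 11] * 2,
    "latency":  [55, 58, 56, 57, 54, 56,    # WT rotarod times (s)
                 48, 50, 33, 30, 14, 10],   # R6/2 rotarod times (s)
})

# Two-way ANOVA with a genotype x week interaction term, as in the text.
model = ols("latency ~ C(genotype) * C(week)", data=df).fit()
print(anova_lm(model, typ=2))
```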
As expected, body weight gradually decreased in R6/2 mice as the disease progressed, compared with age-matched WT littermates (Supplementary Figure 1). Arterial blood pressure in R6/2 mice. With the aim of delineating a clearer picture of cardiovascular homeostasis in our HD model, arterial blood pressure was measured using the tail-cuff technique in symptomatic R6/2 mice (7, 10 and 12 weeks of age) and in age-matched WT controls. Average values of both systolic and diastolic arterial blood pressure, ranging from 85 to 105 mmHg, were not significantly different between the two groups across all stages of the disease (two-way ANOVA; SBP: F(6, 35) = 0.4336, p = 0.8514; DBP: F(6, 35) = 1.443, p = 0.2264) (Fig. 2A,B and Supplementary Figure 2A,B). Arterial vascular function. In order to fully characterize the functional features of the vasculature in HD, vascular reactivity of both resistance and capacitance arteries from R6/2 and WT mice at 12 weeks of age was systematically assessed. Analysis of vasodilator function in phenylephrine pre-contracted femoral (Fig. 3A,B) and mesenteric (Fig. 3C,D) arteries, as well as in aortic (Fig. 3E,F) and carotid (Fig. 3G,H) arteries, in response to the endothelium-dependent agonist acetylcholine and to the endothelium-independent agonist nitroglycerine, showed no difference between the two genotypes, and no signs of vascular dysfunction were detectable in any of the districts analyzed (two-way ANOVA; Figs. 3 and 4). eNOS signaling pathway in advanced stages of HD pathology in R6/2 mice. Nitric oxide is the main determinant of the endothelium-mediated vasorelaxant effects and is normally synthesized by endothelial nitric oxide synthase (eNOS) 21-23 , whose phosphorylation at the Ser1177 residue directly enhances enzyme activity 23 . Despite the normal vascular function in arteries from symptomatic R6/2 mice, we investigated whether the eNOS pathway was nevertheless perturbed. Consistent with the evidence of unchanged vasorelaxant responses in the advanced stage of HD (Fig. 3), no difference in either eNOS phosphorylation state or protein expression was observed between WT and R6/2 mice (Fig. 5). At the functional level, the selective inhibition of NOS by L-NAME blunted acetylcholine-induced vasorelaxation to the same extent in both WT and R6/2 mice, further confirming the lack of a significant difference in NO-dependent vasorelaxation between the two groups (two-way ANOVA; F(24, 160) = 36.88, p < 0.0001) (Fig. 5B).
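The relaxation data above are cumulative concentration-response curves; a standard way to summarize such a curve is a four-parameter Hill fit yielding an EC50. A minimal sketch with placeholder relaxation values (this study compares full curves by two-way ANOVA rather than reporting these numbers):

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(logc, bottom, top, log_ec50, n):
    """Four-parameter logistic (Hill) concentration-response curve."""
    return bottom + (top - bottom) / (1.0 + 10 ** ((log_ec50 - logc) * n))

# Hypothetical % relaxation of a phenylephrine pre-constricted vessel at
# cumulative acetylcholine doses of 1e-9 ... 1e-5 M (placeholder values):
logc = np.log10([1e-9, 1e-8, 1e-7, 1e-6, 1e-5])
relax = np.array([2.0, 10.0, 45.0, 85.0, 95.0])

popt, _ = curve_fit(hill, logc, relax, p0=[0.0, 100.0, -7.0, 1.0])
print(f"EC50 ~ {10 ** popt[2]:.2e} M, Hill slope ~ {popt[3]:.2f}")
```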
Discussion
It is now well established that HD pathology is not confined to the central nervous system (CNS), but rather comprises a large group of peripheral defects, including a cardiac phenotype 24 . In agreement with the evidence that neurological diseases are positively linked to cardiovascular pathology and that CNS abnormalities have a great impact on the pathogenesis of cardiac dysfunction 25 , it has been postulated that HD-related cardiac alterations are also likely driven by CNS dysfunction. However, whether there is any pathophysiological interdependence between the progressive disease phenotype and vascular function in HD had never been assessed before. While a significant cardiac phenotype has been extensively described in both human patients 26 and animal models 27,28 , no definitive characterization of vascular function had been performed so far. The only comprehensive study that has partially addressed this issue showed defective contractility of some of the systemic arteries, and only at very advanced ages and disease stages, in transgenic R6/1 mice 14 , an HD mouse model with late age at onset, slow disease progression and high life expectancy 16 . A similar investigation was recently performed by Kane et al. in a sub-line of the R6/2 mouse model that displays prolonged disease progression and a longer lifespan 15 than the prototypical parental R6/2 mice 16,17,29 , the most extensively studied and utilized mouse model of HD. Although a derangement in peripheral vascular reactivity was described, the study of Kane et al. limited its analysis to the femoral arteries and thus only partially described the vascular phenotype these mice may display 15 . Here, with the aim of providing a more detailed functional characterization in HD, an accurate and systematic investigation of vascular reactivity in both central and peripheral arteries was performed in the symptomatic parental R6/2 model 16,17,29 , which differs from that used by Kane et al. in CAG repeat length (160 ± 5 vs. 242 ± 1 CAG) and life expectancy (12-13 weeks vs. 24 weeks of age) 11,16,30 . Analysis of vascular function in our R6/2 line failed to reveal any difference between HD and control mice in all districts analyzed. Moreover, the unaffected vascular reactivity was accompanied by a regular blood pressure profile and unchanged endothelium-mediated NO vasorelaxation. The lack of any alteration of endothelial function in our symptomatic R6/2 mice was corroborated by unperturbed eNOS signaling in vessels from the same mice. Curiously, our findings did not find full support in the previous study 15 . The reason for this discrepancy is not clear; however, it is likely attributable, at least for vasoconstrictor function, to the different genotypes (CAG length) and to the CAG-dependent, inverted U-shaped profile of disease progression and life expectancy 31 . On the other hand, although apparently different, endothelial function showed a comparable vasodilator profile in the HD mice of both mouse lines at 12 weeks of age. The endothelial dysfunction described by Kane and collaborators in control mice at a similar age, and its improvement with aging, was rather unexpected. The overall absence of vascular dysfunction in our symptomatic R6/2 mice does not exclude that such a phenotype may develop later in the disease; in this mouse line, however, that is unlikely to happen because of the characteristically short lifespan and the high rate of early death that can occur before peripheral perturbations appear. This hypothesis is also supported by the evidence of impaired vascular function only in an aged, diseased animal model 14 . From our perspective, the perfectly overlapping profiles of vascular reactivity and blood pressure, as well as the lack of any change at the molecular level between WT and HD mice, outweigh the relatively small sample size that could otherwise represent a limitation. To our knowledge, this study presents the first full characterization of vascular function in HD transgenic R6/2 mice (160 ± 5 CAG).
Collectively, the combination of functional and biochemical experiments highlights a normal vascular phenotype in these mice and indicates that motor abnormalities do not depend on vascular dysfunction in this mouse model. Therefore, we conclude that our findings delineate limitations on the usefulness of this mouse line for studying certain aspects of disease progression associated with aging.
Exploring Cross-Image Pixel Contrast for Semantic Segmentation
Current semantic segmentation methods focus only on mining "local" context, i.e., dependencies between pixels within individual images, by context-aggregation modules (e.g., dilated convolution, neural attention) or structure-aware optimization criteria (e.g., IoU-like loss). However, they ignore the "global" context of the training data, i.e., the rich semantic relations between pixels across different images. Inspired by recent advances in unsupervised contrastive representation learning, we propose a pixel-wise contrastive framework for semantic segmentation in the fully supervised setting. The core idea is to enforce pixel embeddings belonging to a same semantic class to be more similar than embeddings from different classes. It raises a pixel-wise metric learning paradigm for semantic segmentation, by explicitly exploring the structures of labeled pixels, which were rarely explored before. Our method can be effortlessly incorporated into existing segmentation frameworks without extra overhead during testing. We experimentally show that, with famous segmentation models (i.e., DeepLabV3, HRNet, OCR) and backbones (i.e., ResNet, HRNet), our method brings consistent performance improvements across diverse datasets (i.e., Cityscapes, PASCAL-Context, COCO-Stuff). We expect this work will encourage our community to rethink the current de facto training paradigm in fully supervised semantic segmentation. * The first two authors contribute equally to this work. 1 Our code will be available at https://github.com/tfzhou/ContrastiveSeg.
(Figure 1: Main idea. Current segmentation models learn to map pixels (b) to an embedding space (c), yet ignore intrinsic structures of the labeled data (i.e., inter-image relations among pixels from a same class, noted with the same color in (b)). Pixel-wise contrastive learning is introduced to foster a new training paradigm (d), by explicitly addressing intra-class compactness and inter-class dispersion: each pixel (embedding) i is pulled closer to pixels of the same class, but pushed far from pixels of other classes. Thus a better-structured embedding space (e) is derived, eventually boosting the performance of segmentation models.)

Introduction
Semantic segmentation, which aims to infer semantic labels for all pixels in an image, is a fundamental problem in computer vision. In the last decade, semantic segmentation has achieved remarkable progress, driven by the availability of large-scale datasets (e.g., Cityscapes [14]) and the rapid evolution of convolutional networks (e.g., VGGNet [60], ResNet [29]) as well as segmentation models (e.g., the fully convolutional network (FCN) [48]). In particular, FCN [48] is the cornerstone of modern deep learning techniques for segmentation, due to its unique advantage in end-to-end pixel-wise representation learning. However, its spatially invariant nature hinders the modeling of useful context among pixels within images. Thus a main stream of subsequent effort delves into network designs for effective context aggregation, e.g., dilated convolution [75,7,8], spatial pyramid pooling [79], multi-layer feature fusion [55,43] and neural attention [32,21]. In addition, as the widely adopted pixel-wise cross-entropy loss fundamentally lacks spatial discrimination power, some alternative optimization criteria have been proposed to explicitly address object structures during segmentation network training [37,2,81].
Basically, these segmentation models (excepting [34]) utilize deep architectures to project image pixels into a highly non-linear embedding space (Fig. 1(c)). However, they typically learn an embedding space that only makes use of "local" context around pixel samples (i.e., pixel dependencies within individual images), while ignoring the "global" context of the whole dataset (i.e., pixel semantic relations across images). Hence, an essential issue has long been ignored in the field: what should a good segmentation embedding space look like? Ideally, it should not only 1) address the categorization ability of individual pixel embeddings, but also 2) be well structured to address intra-class compactness and inter-class dispersion. With regard to 2), pixels from a same class should be closer than those from different classes in the embedding space. Prior studies [46,57] in representation learning also suggested that encoding intrinsic structures of training data (i.e., 2)) would facilitate feature discriminativeness (i.e., 1)). So we speculate that, although existing algorithms have achieved impressive performance, it is possible to learn a better-structured pixel embedding space by considering both 1) and 2). Recent advances in unsupervised representation learning [11,28] can be ascribed to the resurgence of contrastive learning, an essential branch of deep metric learning [36]. The core idea is "learn to compare": given an anchor point, distinguish a similar (or positive) sample from a set of dissimilar (or negative) samples in a projected embedding space. In computer vision in particular, the contrast is evaluated on image feature vectors: the augmented version of an anchor image is viewed as a positive, while all the other images in the dataset act as negatives. The great success of unsupervised contrastive learning and our aforementioned speculation together motivate us to rethink the current de facto training paradigm in semantic segmentation. Basically, the power of unsupervised contrastive learning is rooted in the structured comparison loss, which takes advantage of the context within the training data. With this insight, we propose a pixel-wise contrastive algorithm for more effective dense representation learning in the fully supervised setting. Specifically, in addition to adopting the pixel-wise cross-entropy loss to address class discrimination (i.e., property 1)), we utilize a pixel-wise contrastive loss to further shape the pixel embedding space, through exploring the structural information of labeled pixel samples (i.e., property 2)). The idea of the pixel-wise contrastive loss is to compute pixel-to-pixel contrast: enforce embeddings to be similar for positive pixels, and dissimilar for negative ones. As the pixel-level categorical information is given during training, the positive samples are the pixels belonging to a same class, and the negatives are the pixels from different classes (Fig. 1(d)). In this way, the global property of the embedding space can be captured (Fig. 1(e)), better reflecting the intrinsic structures of the training data and enabling more accurate segmentation predictions. With our supervised pixel-wise contrastive algorithm, two novel techniques are developed. First, we propose a region memory bank to better address the nature of semantic segmentation.
Faced with huge amounts of highly structured pixel training samples, we let the memory store pooled features of semantic regions (i.e., pixels with a same semantic label from a same image), instead of pixel-wise embeddings only. This leads to pixel-to-region contrast, as a complement to the pixel-to-pixel contrast strategy. Such a memory design allows us to access more representative data samples during each training step and to fully explore structural relations between pixels and semantic-level segments, i.e., pixels and segments belonging to a same class should be close in the embedding space.
(Figure 2: Accuracy vs. model size on Cityscapes val [14]. Our contrastive method enables consistent performance improvements over state-of-the-arts, i.e., DeepLabV3 [8], HRNet [62], OCR [76], without bringing any change to the base networks during inference.)
Second, we propose different sampling strategies to make better use of informative samples and to let the segmentation model pay more attention to segmentation-hard pixels. Previous works have confirmed that hard negatives are crucial for metric learning [36,57,59], and our study further reveals the importance of mining both informative negatives/positives and anchors in this supervised, dense image prediction task. In a nutshell, our contributions are three-fold: • We propose a supervised, pixel-wise contrastive learning method for semantic segmentation. It lifts the current image-wise training strategy to an inter-image, pixel-to-pixel paradigm. It essentially learns a well-structured pixel semantic embedding space, by making full use of the global semantic similarities among labeled pixels. • We develop a region memory to better explore the large visual data space and to support the further computation of pixel-to-region contrast. Integrated with pixel-to-pixel contrast computation, our method exploits semantic correlations among pixels, and between pixels and semantic regions. • We demonstrate that more powerful segmentation models can be delivered with better example- and anchor-sampling strategies, instead of selecting random pixel samples. Our method can be seamlessly incorporated into existing segmentation networks without any changes to the base model and without extra inference burden during testing (Fig. 2). Hence, our method shows consistently improved intersection-over-union segmentation scores on challenging datasets (i.e., Cityscapes [14], PASCAL-Context [50], and COCO-Stuff [4]), using state-of-the-art segmentation network architectures (i.e., DeepLabV3 [8], HRNet [62] and OCR [76]) and famous backbones (i.e., ResNet [29], HRNet [62]). The impressive results shed light on the promise of metric learning in dense image prediction tasks. We expect this work to provide insights into the critical role of global pixel relationships in segmentation network training, and to foster research on the open issues raised.

Related Work
Our work draws on existing literature in semantic segmentation, contrastive learning and deep metric learning. For brevity, only the most relevant works are discussed. Semantic Segmentation. FCN [48] greatly promotes the advance of semantic segmentation. It is good at end-to-end dense feature learning, however, only perceiving limited visual context with local receptive fields.
As strong dependencies exist among pixels in an image, and these dependencies are informative about the structures of objects [66], how to capture such dependencies becomes a vital issue for further improving FCN. A main group of follow-up efforts attempts to aggregate multiple pixels to explicitly model context, for example, utilizing different sizes of convolutional/pooling kernels or dilation rates to gather multi-scale visual cues [75,79,7,8], building image pyramids to extract context from multi-resolution inputs, adopting the encoder-decoder architecture to merge features from different network layers [55,43], applying CRFs to recover detailed structures [47,82], and employing neural attention [63] to directly exchange context between paired pixels [9,32,33,21]. Apart from investigating context-aggregation network modules, another line of work turns to designing context-aware optimization objectives [37,2,81], i.e., objectives that directly verify segmentation structures during training, to replace the pixel-wise cross-entropy loss. Though impressive, these methods only address pixel dependencies within individual images, neglecting the global context of the labeled data, i.e., pixel semantic correlations across different training images. Through a pixel-wise contrastive learning formulation, we map pixels of different categories to more distinctive features. The learned pixel features are not only discriminative for semantic classification within images, but also, more crucially, across images. Contrastive Learning. Recently, the most compelling methods for learning representations without labels have been based on unsupervised contrastive learning [52,31,69,12,11], which significantly outperformed other pretext-task-based alternatives [40,23,17,51]. With a similar idea to exemplar learning [18], contrastive methods learn representations in a discriminative manner by contrasting similar (positive) data pairs against dissimilar (negative) pairs. A major branch of subsequent studies focuses on how to select the positive and negative pairs. For image data, the standard positive-pair sampling strategy is to apply strong perturbations to create multiple views of each image [69,11,28,31,5]. Negative pairs are usually randomly sampled, but some hard-negative mining strategies [38,54,35] were recently proposed. In addition, to store more negative samples during contrast computation, fixed [69] or momentum-updated [49,28] memories are adopted. Some recent studies [38,30,67] also confirm that label information can assist contrastive-learning-based image-level pattern pre-training. We raise a pixel-to-pixel contrastive learning method for semantic segmentation in the fully supervised setting. It yields a new training protocol that explores global pixel relations in labeled data for regularizing the segmentation embedding space. Though a few concurrent works also address contrastive learning in dense image prediction [71,6,65], the ideas are significantly different. First, they typically consider contrastive learning as a pre-training step for dense image embedding. Second, they simply use the local context within individual images, i.e., they only compute the contrast among pixels from augmented versions of a same image. Third, they do not notice the critical role of metric learning in complementing the current well-established pixel-wise cross-entropy-loss-based training regime (cf. §3.2). Deep Metric Learning. The goal of metric learning is to quantify the similarity among samples using an optimal distance metric.
Contrastive loss [25] and triplet loss [57] are two basic types of loss functions for deep metric learning. With the similar spirit of increasing and decreasing the distance between similar and dissimilar data samples, respectively, the former takes pairs of samples as input while the latter is composed of triplets. Deep metric learning [19] has proven effective in a wide variety of computer vision tasks, such as image retrieval [61] and face recognition [57]. Although a few prior methods address the idea of metric learning in semantic segmentation, they only account for local content from objects [26] or instances [15,1,19,39]. It is worth noting that [34] also explores cross-image information of training data, i.e., it leverages perceptual pixel groups for non-parametric pixel classification. Due to its clustering-based metric learning strategy, [34] needs to retrieve extra labeled data for inference. Differently, our core idea, i.e., exploiting inter-image pixel-to-pixel similarity to enforce global constraints on the embedding space, is conceptually novel and rarely explored before. It is executed by a compact training paradigm, which enjoys the complementary advantages of the unary, pixel-wise cross-entropy loss and the pair-wise, pixel-to-pixel contrast loss, without bringing any extra inference cost or modification to the base network during deployment.

Methodology
Before detailing our supervised pixel-wise contrastive algorithm for semantic segmentation (§3.2), we first introduce the contrastive formulation in unsupervised visual representation learning and the notion of memory bank (§3.1). Preliminaries Unsupervised Contrastive Learning. Unsupervised visual representation learning aims to learn a CNN encoder f CNN that transforms each training image I to a feature vector v = f CNN (I) ∈ R D , such that v best describes I. To achieve this goal, contrastive approaches conduct training by distinguishing a positive (an augmented version of an anchor I) from several negatives (images randomly drawn from the training set excluding I), based on the principle of similarity between samples. A popular loss function for contrastive learning, called InfoNCE [24,52], takes the following form: L NCE = −log [ exp(v·v + /τ) / ( exp(v·v + /τ) + Σ v − ∈N I exp(v·v − /τ) ) ], (1) where v + is an embedding of a positive for I, N I contains embeddings of the negatives, '·' denotes the inner (dot) product, and τ > 0 is a temperature hyper-parameter. Note that all the embeddings in the loss function are ℓ 2 -normalized. Memory Bank. As revealed by recent studies [69,12,28], a large set of negatives (i.e., |N I |) is critical in unsupervised contrastive representation learning. As the number of negatives is limited by the mini-batch size, recent contrastive methods utilize large, external memories as a bank to store more negative samples. Specifically, some methods [69] directly store the embeddings of all the training samples in the memory, however easily suffering from asynchronous updates. Others choose to keep a queue of the last few batches [64,12,28] as memory. In [12,28], the stored embeddings are even updated on-the-fly through a momentum-updated version of the encoder network f CNN .
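To make the InfoNCE formulation of Eq. (1) concrete, here is a minimal PyTorch sketch for a single anchor. The embedding dimension, the memory size, and the use of cross-entropy over stacked logits are implementation choices of this sketch, not details prescribed by the paper.

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE (Eq. 1) for one anchor: anchor (D,), positive (D,),
    negatives (M, D). All embeddings are L2-normalized, as in the text."""
    anchor = F.normalize(anchor, dim=0)
    positive = F.normalize(positive, dim=0)
    negatives = F.normalize(negatives, dim=1)
    logits = torch.cat([(anchor * positive).sum().view(1),  # index 0 = positive
                        negatives @ anchor]) / tau
    # With the positive at index 0, InfoNCE reduces to cross-entropy toward 0.
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))

loss = info_nce(torch.randn(128), torch.randn(128), torch.randn(4096, 128))
```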
Supervised Contrastive Segmentation Pixel-Wise Cross-Entropy Loss. In the context of semantic segmentation, each pixel i of an image I has to be classified into a semantic class c ∈ C. Current approaches typically cast this task as a pixel-wise classification problem. Specifically, let f FCN be an FCN encoder (e.g., ResNet [29]) that produces a dense feature I ∈ R H×W×D for I, from which the pixel embedding i ∈ R D of i can be derived (i.e., i ∈ I). Then a segmentation head f SEG maps I into a categorical score map Y = f SEG (I) ∈ R H×W×|C| . Further let y = [y 1 , · · · , y |C| ] ∈ R |C| be the unnormalized score vector (termed the logit) for pixel i, derived from Y, i.e., y ∈ Y. Given y for pixel i w.r.t. its groundtruth label c̄ ∈ C, the cross-entropy loss is optimized with softmax (cf. Fig. 3): L CE i = −1 c̄ ᵀ log( softmax(y) ), (2) where 1 c̄ denotes the one-hot encoding of c̄, the logarithm is defined element-wise, and softmax(y c ) = exp(y c ) / Σ c′∈C exp(y c′ ). Such a training-objective design mainly suffers from two limitations: 1) it penalizes pixel-wise predictions independently but ignores relationships between pixels [81]; and 2) due to the use of softmax, the loss depends only on the relative relations among logits and cannot directly supervise the learned representations [53]. These two issues were rarely noticed; only a few structure-aware losses have been designed to address 1), by considering pixel affinity [37], optimizing an intersection-over-union measurement [2], or maximizing the mutual information between the groundtruth and the prediction map [81]. Nevertheless, these alternative losses only consider the dependencies between pixels within an image (i.e., local context), regardless of the semantic correlations between pixels across images (i.e., global structure). Pixel-to-Pixel Contrast. In this work, we develop a pixel-wise contrastive learning method that addresses both 1) and 2), through regularizing the embedding space and exploring the global structures of the training data. We first extend Eq. (1) to our supervised, dense image prediction setting. Basically, the data samples in our contrastive loss computation are training image pixels. In addition, for a pixel i with groundtruth semantic label c̄, the positive samples are other pixels also belonging to class c̄, while the negatives are pixels belonging to the other classes C\c̄. Our supervised, pixel-wise contrastive loss is defined as: L NCE i = (1/|P i |) Σ i + ∈P i −log [ exp(i·i + /τ) / ( exp(i·i + /τ) + Σ i − ∈N i exp(i·i − /τ) ) ], (3) where P i and N i denote pixel-embedding collections of the positive and negative samples, respectively, for pixel i. Note that the positive/negative samples and the anchor i are not restricted to being from a same image. As Eq. (3) shows, the purpose of such a pixel-to-pixel contrast based loss design is to learn an embedding space, by pulling same-class pixel samples close and pushing different-class samples apart. The pixel-wise cross-entropy loss in Eq. (2) and our contrastive loss in Eq. (3) are complementary to each other: the former lets segmentation networks learn discriminative pixel features that are meaningful for classification, while the latter helps to regularize the embedding space with improved intra-class compactness and inter-class separability, through explicitly exploring global semantic relationships between pixel samples. Thus the overall training target is: L SEG = Σ i ( L CE i + λ L NCE i ), (4) where λ > 0 is a balancing coefficient. As shown in Fig. 4, the learned pixel embeddings under L SEG become more compact and well separated. (Figure 4: comparison between (left) the cross-entropy based optimization objective (i.e., L CE in Eq. (2)) and (right) our pixel contrast based optimization objective (i.e., L SEG in Eq. (4)) on Cityscapes val [14]; features are colored according to class labels. As seen, the proposed L SEG begets a well-structured semantic feature space.) This suggests that, by enjoying the advantages of the unary cross-entropy loss and the pair-wise metric loss, the segmentation network can generate more discriminative features, hence producing more promising results. Quantitative analyses are provided later in §4.2 and §4.3.
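A minimal PyTorch sketch of the supervised pixel-wise contrastive loss of Eq. (3) for one anchor pixel; the tensor shapes and the log-sum-exp arrangement are choices of this sketch.

```python
import torch
import torch.nn.functional as F

def pixel_contrast(anchor, positives, negatives, tau=0.1):
    """Supervised pixel-wise contrastive loss (Eq. 3) for one anchor pixel.
    positives: (P, D) same-class embeddings; negatives: (N, D) embeddings of
    other classes; both may come from other images or from the memory."""
    anchor = F.normalize(anchor, dim=0)
    positives = F.normalize(positives, dim=1)
    negatives = F.normalize(negatives, dim=1)
    pos = positives @ anchor / tau                       # (P,)
    neg_sum = torch.exp(negatives @ anchor / tau).sum()  # scalar
    # mean over positives of -log[ exp(pos) / (exp(pos) + sum exp(neg)) ]
    return -(pos - torch.log(torch.exp(pos) + neg_sum)).mean()
```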
Pixel-to-Region Contrast. As stated in §3.1, memory is a critical technique that helps contrastive learning make use of massive data to learn good representations. However, since there are vast numbers of pixel samples in our dense prediction setting and most of them are redundant (i.e., sampled from harmonious object regions), directly storing all the training pixel samples, as in a traditional memory [11], would greatly slow down the learning process. Maintaining the last several batches in a queue, as in [64,12,28], is also not a good choice, as recent batches only contain a limited number of images, reducing the diversity of pixel samples. Thus we choose to maintain a pixel queue per category. For each category, only a small number, V, of pixels are randomly selected from each image in the latest mini-batch and pulled into the queue, whose size is T (with T ≫ V). In practice we find this strategy very efficient and effective, but the under-sampled pixel embeddings are too sparse to fully capture image content. Therefore, we further build a region memory bank that stores more representative embeddings absorbed from image segments (i.e., semantic regions). Specifically, for a segmentation dataset with a total of N training images and |C| semantic classes, our region memory is built with size |C| × N × D, where D is the dimension of the pixel embeddings. The (c, n)-th element of the region memory is a D-dimensional feature vector obtained by average-pooling all the embeddings of pixels labeled with category c in the n-th image. The region memory brings two advantages: 1) it stores more representative "pixel" samples with low memory consumption; and 2) it allows our pixel-wise contrastive loss (cf. Eq. (3)) to further explore pixel-to-region relations. With regard to 2), when computing Eq. (3) for an anchor pixel i belonging to category c̄, stored region embeddings of the same class c̄ are viewed as positives, while region embeddings of other classes C\c̄ are negatives. For the pixel memory, the size is |C| × T × D. Therefore, the total size of the whole memory (denoted M) is |C| × (N + T) × D. We examine the design of M in §4.2. In the following sections, we will not distinguish pixel and region embeddings in M, unless otherwise specified. Hard Example Sampling. Prior research [57,36,38,54,35] found that, in addition to loss designs and the amount of training samples, the discriminating power of the training samples is crucial for metric learning. Considering our case, the gradient of the pixel-wise contrastive loss (cf. Eq. (3)) w.r.t. the anchor embedding i can be given as: ∂L NCE i /∂i = (1/(τ|P i |)) Σ i + ∈P i [ (p i + − 1) i + + Σ i − ∈N i p i − i − ], (5) where p i +/− ∈ [0, 1] denotes the matching probability between a positive/negative i +/− and the anchor i, i.e., p i +/− = exp(i·i +/− /τ) / Σ i ′ ∈P i ∪N i exp(i·i ′ /τ). We view negatives with dot products (i.e., i·i − ) closer to 1 as harder, i.e., negatives which are similar to the anchor i. Similarly, positives with dot products (i.e., i·i + ) closer to −1 are considered harder, i.e., positives which are dissimilar to i. We can find that harder negatives bring larger gradient contributions, i.e., p i − , than easier negatives. This principle also holds for positives, whose gradient contributions are 1 − p i + .
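The region memory described above reduces each (image, class) pair to one pooled embedding. A sketch of that masked average pooling, assuming dense features of shape (D, H, W) and an integer label map; the function and variable names are illustrative.

```python
import torch

def region_embeddings(feats, labels, num_classes):
    """Masked average pooling for the region memory: one D-dim vector per
    semantic class present in an image. feats: (D, H, W); labels: (H, W)."""
    d = feats.shape[0]
    flat_f = feats.reshape(d, -1)   # (D, H*W)
    flat_y = labels.reshape(-1)     # (H*W,)
    regions = {}
    for c in range(num_classes):
        mask = flat_y == c
        if mask.any():
            regions[c] = flat_f[:, mask].mean(dim=1)  # pooled region feature
    return regions
```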
Kalantidis et al. [35] further indicate that, as training progresses, more and more negatives become too simple to provide significant contributions to the unsupervised contrastive loss (cf. Eq. (1)). This also happens in our supervised setting (cf. Eq. (3)), for both negatives and positives. To remedy this problem, we propose the following sampling strategies: • Hardest Example Sampling. Inspired by hardest-negative mining in metric learning [3], we first design a "hardest example sampling" strategy: for each anchor pixel embedding i, only the top-K hardest negatives and positives are sampled from the memory bank M for the computation of the pixel-wise contrastive loss (i.e., L NCE in Eq. (3)). • Semi-Hard Example Sampling. Some studies propose to make use of moderately hard (rather than the hardest) negatives, as optimizing with the hardest negatives in metric learning likely leads to bad local minima [57,70,20]. Thus we further design a "semi-hard example sampling" strategy (see the sketch below): for each anchor embedding i, we first collect the top 10% nearest negatives (resp. top 10% farthest positives) from the memory bank M, from which we then randomly sample K negatives (resp. K positives) for our contrastive loss computation. • Segmentation-Aware Hard Anchor Sampling. Rather than mining informative positive and negative examples, we also develop an anchor sampling strategy. We treat the categorization ability of an anchor embedding as its importance during contrastive learning. This leads to "segmentation-aware hard anchor sampling": pixels with incorrect predictions, i.e., c ≠ c̄, are treated as hard anchors. For the contrastive loss computation (cf. Eq. (3)), half of the anchors are randomly sampled and half are the hard ones. This anchor sampling strategy enables our contrastive learning to focus more on pixels that are hard to classify, delivering more segmentation-aware embeddings. In practice, we find the "semi-hard example sampling" strategy performs better than "hardest example sampling". In addition, after employing the "segmentation-aware hard anchor sampling" strategy, the segmentation performance can be further improved. See §4.2 for related experiments.
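A sketch of the "semi-hard example sampling" strategy for negatives, assuming an (M, D) matrix of L2-normalized memory embeddings; the guard for small memories is an addition of this sketch, not part of the described method.

```python
import torch

def semi_hard_negatives(anchor, memory, K=2048, frac=0.10):
    """'Semi-hard example sampling': rank memory entries by similarity to the
    anchor, keep the top `frac` nearest as the hard pool, then draw K of them
    at random. memory: (M, D) with L2-normalized rows; anchor: (D,)."""
    sims = memory @ anchor                         # (M,) dot-product similarities
    m = memory.shape[0]
    pool = min(m, max(K, int(frac * m)))           # guard for small memories
    hard_idx = sims.topk(pool).indices             # hardest ~10% candidates
    pick = hard_idx[torch.randperm(pool)[:K]]
    return memory[pick]
```

Mining hard positives works symmetrically, by ranking on the most dissimilar same-class entries instead.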
Detailed Network Architecture Our algorithm has five major components (cf. Fig. 3): • FCN Encoder, f FCN , which maps each input image I into dense embeddings I = f FCN (I) ∈ R H×W×D . Any FCN backbone can be used to implement f FCN ; we test two commonly used ones, i.e., ResNet [29] and HRNet [62], in our experiments. • Segmentation Head, f SEG , which projects I into a score map Y = f SEG (I) ∈ R H×W×|C| . We conduct evaluations using the segmentation heads of mainstream methods (i.e., DeepLabV3 [8], HRNet [62], and OCR [76]). • Project Head, f PROJ , which maps each high-dimensional pixel embedding i ∈ I into a 256-d ℓ 2 -normalized feature vector [11] for the computation of the contrastive loss L NCE . f PROJ is implemented as two 1 × 1 convolutional layers with ReLU. Note that the project head is only applied during training and is removed at inference time; it therefore introduces no changes to the segmentation network and no extra computational cost in deployment. • Memory Bank, M, which consists of two parts that store pixel and region embeddings, respectively. For each training image, we sample V = 10 pixels per class. For each class, we set the size of the pixel queue to T = 10N. The memory bank is also discarded after training. • Joint Loss, L SEG (cf. Eq. (4)), which combines the power of representation learning (i.e., L CE in Eq. (2)) and metric learning (i.e., L NCE in Eq. (3)) for more distinct segmentation feature learning. In practice, we find our method is not sensitive to the coefficient λ (e.g., when λ ∈ [0.1, 1]) and empirically set λ to 1. For L NCE in Eq. (3), we set the temperature τ to 0.1. For sampling, we find "semi-hard example sampling" + "segmentation-aware hard anchor sampling" performs the best and set the numbers of sampled instances (i.e., K) to 1,024 and 2,048 for positives and negatives, respectively. For each mini-batch, 50 anchors are sampled per category (half are randomly sampled and the other half are segmentation-hard ones). Among the datasets used, COCO-Stuff [44] is split into 9,000 and 1,000 images for train and test, and provides rich annotations for 80 object classes and 91 stuff classes. Training. As mentioned in §3.3, various backbones (i.e., ResNet [29] and HRNet [62]) and segmentation networks (i.e., DeepLabV3 [8], HRNet [62], and OCR [76]) are exploited in our experiments to thoroughly validate the proposed algorithm. We follow the conventions of [62,76,13,72] for training hyper-parameters. For fairness, we initialize all backbones with the corresponding weights pretrained on ImageNet [56], with the remaining layers randomly initialized. For data augmentation, we use color jitter, horizontal flipping and random scaling with a factor in [0.5, 2]. We use SGD as our optimizer, with momentum 0.9 and weight decay 0.0005. We adopt the polynomial annealing policy [8] to schedule the learning rate, which is multiplied by (1 − iter/total_iter)^power with power = 0.9. Synchronized batch normalization is enabled during training. Moreover, for Cityscapes, we use a mini-batch size of 8 on 4 GPUs and an initial learning rate of 0.01. All training images are augmented by random cropping from 1024×2048 to 512×1024. For the experiments on val and test, we follow [62] and train for 100K iterations on train and train + val, respectively. Note that we do not use any extra training data (e.g., Cityscapes coarse [14]). For PASCAL-Context and COCO-Stuff, we opt for a mini-batch size of 16, an initial learning rate of 0.001, and a crop size of 520×520. We train for 60K iterations over their train sets. Testing. Following the general protocol [62,76,58], we average the segmentation results over multiple scales with flipping, i.e., the scaling factor is 0.75 to 2.0 (with intervals of 0.25) times the original image size. Note that, during testing, no change or extra inference step is introduced to the base segmentation models, i.e., the project head f PROJ and the memory bank M are simply discarded.

Diagnostic Experiment
We first study the efficacy of our core ideas and essential model designs on Cityscapes val [14]. We adopt HRNet [62] as our base segmentation network (denoted "Baseline (w/o contrast)" in Tables 1-3). To perform extensive ablation experiments, we train each model for 40K iterations while keeping other hyper-parameters unchanged. Inter-Image vs. Intra-Image Pixel Contrast. We first investigate the effectiveness of our core idea of inter-image pixel contrast. As shown in Table 1, additionally considering cross-image pixel semantic relations (i.e., "Inter-Image Contrast") in segmentation network learning leads to a substantial performance gain (i.e., 2.9%) compared with "Baseline (w/o contrast)". In addition, we develop another baseline, "Intra-Image Contrast", which samples pixels only from the same image during the contrastive loss (i.e., L NCE in Eq. (3)) computation.
The results in Table 1 suggest that, although "Intra-Image Contrast" also boosts the performance over "Baseline (w/o contrast)" (i.e., 78.1% → 78.9%), "Inter-Image Contrast" is more favored. Memory Bank. We next validate the design of our memory bank. The results are summarized in Table 2. Based on "Baseline (w/o contrast)", we first derive a variant, "Mini-Batch w/o memory", which computes pixel contrast only within each mini-batch, without an outside memory. It reaches 79.8% mIoU. We then provision this variant with the pixel and region memories separately, and observe consistent performance gains (79.8% → 80.5% for the pixel memory and 79.8% → 80.2% for the region memory). This indicates that i) leveraging more pixel samples during contrastive learning leads to better pixel embeddings; and ii) both pixel-to-pixel and pixel-to-region relations are informative cues. Finally, after using both memories, a higher score (i.e., 81.0%) is achieved, revealing i) the effectiveness of our memory design; and ii) the necessity of comprehensively considering both pixel-to-pixel contrast and pixel-to-region contrast. Hard Example Mining. Table 3 presents a comprehensive examination of the various hard example mining strategies proposed in §3.2. Our main observations are the following: i) For positive/negative sampling, mining meaningful pixels (i.e., "hardest" or "semi-hard" sampling), rather than "random" sampling, is indeed useful; ii) "semi-hard" sampling is the most favored, as it improves the robustness of training by avoiding overfitting to outliers in the training set. This corroborates related observations in the unsupervised setting [68] and indicates that segmentation may benefit from more intelligent sample treatment; and iii) For anchor sampling, the "seg.-aware hard" strategy further improves the performance (i.e., 80.1% → 81.0%) over "random" sampling only. This suggests that exploiting task-related signals in supervised metric learning may help develop better segmentation solutions, an area that has remained relatively untapped.

Comparison to State-of-the-Arts
Cityscapes. Table 4 provides comparison results with several representative methods on Cityscapes val [14] in terms of mIoU and training speed. We find that, when equipped with cross-image pixel contrast, the baseline models enjoy consistent improvements (gains of 1.2/1.1/0.8 points over DeepLabV3, HRNetV2 and OCR, respectively). In addition, the contrastive loss computation brings a negligible decrease in training speed, and does not incur any additional overhead at inference. Table 5 lists the scores on Cityscapes test, under two widely used training settings [62] (trained over train or train + val). Again, our approach leads to impressive performance gains over two strong baselines (i.e., HRNetV2 and OCR), and sets a new state-of-the-art.
(Table 7 excerpt, mIoU on COCO-Stuff test: [21], D-ResNet-101, 39.7; SpyGR [42], ResNet-101, 39.9; ACNet [22], ResNet-101, 40.1; OCR [76], HRNetV2-W48, 40.5; OCR + Ours, HRNetV2-W48, 41.0 (+0.5).)
PASCAL-Context. Table 6 presents comparison results on PASCAL-Context test [50]. Our approach improves the performance of the base networks by solid margins (i.e., 54.0 → 55.1 for HRNetV2, 56.2 → 57.2 for OCR). This is particularly impressive considering that improvement on this extensively benchmarked dataset is very hard. COCO-Stuff. Table 7 reports the performance comparison of our method against six competitors on COCO-Stuff test [4].
We find that OCR+Ours yields an mIoU of 41.0%, a promising gain of 0.5% over its counterpart (i.e., OCR with a 40.5% mIoU).

Qualitative Results. Fig. 5 depicts qualitative comparisons of OCR+Ours against OCR over representative examples from the three datasets (i.e., Cityscapes, PASCAL-Context and COCO-Stuff). As seen, our method is capable of producing more accurate segments across various challenging scenarios.

Conclusion and Discussion
This article introduces a new supervised learning paradigm for semantic segmentation, enjoying the complementary advantages of unary classification and structured metric learning. Through pixel-wise contrastive learning, it fully exploits the global semantic relations between training pixels, guiding pixel embeddings towards cross-image category-discriminative representations that eventually improve the segmentation performance. Our method generates promising results and shows great potential in a variety of dense image prediction tasks, such as pose estimation and medical image segmentation. It also comes with new challenges, in particular regarding smart data sampling, metric learning loss design, class rebalancing during training, and multi-layer feature contrast. Given the massive number of technical breakthroughs over the past few years, we can expect a flurry of innovation towards these promising directions.
Sip1, an AP-1 Accessory Protein in Fission Yeast, Is Required for Localization of Rho3 GTPase

Rho family GTPases act as molecular switches to regulate a range of physiological functions, including the regulation of the actin-based cytoskeleton, membrane trafficking, cell morphology, nuclear gene expression, and cell growth. Rho function is regulated by its ability to bind GTP and by its localization. We previously demonstrated functional and physical interactions between Rho3 and the clathrin-associated adaptor protein-1 (AP-1) complex, which revealed a role of Rho3 in regulating Golgi/endosomal trafficking in fission yeast. Sip1, a conserved AP-1 accessory protein, recruits the AP-1 complex to the Golgi/endosomes through physical interaction. In this study, we showed that Sip1 is required for Rho3 localization. First, overexpression of rho3+ suppressed the defective membrane trafficking associated with sip1-i4 mutant cells, including defects in vacuolar fusion, Golgi/endosomal trafficking and secretion. Notably, Sip1 interacted with Rho3, and GFP-Rho3, similar to Apm1-GFP, did not properly localize to the Golgi/endosomes in sip1-i4 mutant cells at 27°C. Interestingly, the C-terminal region of Sip1 is required for its localization to the Golgi/endosomes, because the Sip1-i4-GFP protein failed to properly localize to the Golgi/endosomes, whereas the fluorescence of the Sip1ΔN mutant protein co-localized with that of FM4-64. Consistently, in the sip1-i4 mutant cells, which lack the C-terminal region of Sip1, binding between Apm1 and Rho3 was greatly impaired, presumably due to mislocalization of these proteins in the sip1-i4 mutant cells. Furthermore, the interaction between Apm1 and Rho3 as well as Rho3 localization to the Golgi/endosomes were significantly rescued in sip1-i4 mutant cells by the expression of Sip1ΔN. Taken together, these results suggest that Sip1 recruits Rho3 to the Golgi/endosomes through physical interaction and enhances the formation of the Golgi/endosome AP-1/Rho3 complex, thereby promoting crosstalk between AP-1 and Rho3 in the regulation of Golgi/endosomal trafficking in fission yeast.

Introduction
In eukaryotic cells, Rho family small GTPases play a crucial role in numerous important cellular functions, including polarized growth through reorganization of the actin cytoskeleton, regulation of secretory vesicle transport, and gene transcription [1,2]. Most Rho proteins act as switches by cycling between active (GTP-bound) and inactive (GDP-bound) conformations [3]. Guanine nucleotide exchange factors (GEFs) promote the exchange of GDP for GTP. GTPase-activating proteins (GAPs) enhance intrinsic GTP-hydrolysis activity, leading to GTPase inactivation. Guanine-nucleotide-dissociation inhibitors (GDIs) bind to prenylated GDP-bound Rho proteins and allow translocation between membranes and the cytosol [1,3]. Most small G proteins are localized either in the cytosol or on membranes, and each small G protein is localized to a specific membrane [1]. This localization is mediated by post-translational modification with lipids; the mechanism involves prenylation of small G proteins [4], and this modification is necessary for the proper localization as well as function of small G proteins [5]. Thus, the mechanism(s) that regulate the intracellular location and localized activation of Rho GTPases, including prenylation, form another important means by which the Rho family is regulated.
Although detailed information is available on numerous Rho target proteins that mediate Rho signaling, Rho-interacting proteins that affect Rho-dependent signaling processes through spatial control are relatively unknown. The budding yeast Saccharomyces

Strains, Media and Genetic and Molecular Biology Methods
Schizosaccharomyces pombe strains used in this study are listed in Table 1. The complete and minimal media used were yeast extract-peptone-dextrose (YPD) and Edinburgh minimal medium (EMM), respectively. Standard genetic and recombinant DNA methods [16] were used unless otherwise stated. FK506 was provided by Astellas Pharma, Inc. (Tokyo, Japan). Genomic DNA clones were provided by the National Bio Resource Project, Yeast Genetic Resource Center (Graduate School of Science, Osaka City University).

Cloning of the rho3+ gene
The sip1-i4 mutant was transformed using an S. pombe genomic DNA library constructed in the vector pDB248. Leu+ transformants were replica-plated onto YPD plates at 36°C, and the plasmid DNA was recovered from transformants that exhibited plasmid-dependent rescue. The plasmids that complemented the temperature sensitivity of the sip1-i4 mutant were cloned and sequenced. The suppressing plasmids fell into 2 classes: 1 containing sip1+ and the other containing rho3+ (SPAC23C4.08).

Plasmid Construction
The sip1-i4 mutant gene (sip1-i4) was amplified with Vent DNA polymerase by polymerase chain reaction (PCR) using the genomic DNA of wild-type (wt) cells as a template. The sense and antisense primers were 5′-GAA GAT CTT ATG TCG TTA GCA TCA TTG CCG CTC G-3′ and 5′-GAA GAT CTG CGG CCG CCT AAA GTA GCA ATA CGA AG-3′, respectively. The amplified product containing sip1 was subcloned into the BglII/NotI sites of BlueScriptSK (+) (Stratagene). The amino-terminal truncation of Sip1 (Sip1∆N) was amplified with Vent DNA polymerase by PCR using the genomic DNA of wt cells as a template. The sense and antisense primers were 5′-CGG GAT CCC ATG ATC AGC TCT GCT TTT AGT TCC-3′ and 5′-CGG GAT CCG CGG CCG CCC TCA ACA TTT TGT ATT AAG-3′, respectively. The amplified product containing Sip1∆N was subcloned into the BamHI sites of BlueScriptSK (+) (Stratagene). The thiamine-repressible nmt1 promoter was used for ectopic protein expression [17]. Expression was repressed by the addition of 4 µM thiamine to EMM. To assess subcellular localization, the Sip1-i4 mutant protein (Sip1∆C) and the Sip1∆N protein were tagged at their C termini with green fluorescent protein (GFP) carrying the S65T mutation [18]. Similarly, the Sip1∆C and Sip1∆N proteins were tagged at their C termini with glutathione-S-transferase (GST). These constructs were confirmed by restriction digestion and sequence analysis. The functionality of the obtained proteins was verified by complementation of the sip1-i4 mutant cells.

Protein Expression and Site-Directed Mutagenesis
The thiamine-repressible nmt1 promoter was used for protein expression in yeast [17]. Protein expression was repressed by the addition of 4 µg/ml thiamine to EMM and was induced by washing and incubating the cells in EMM without thiamine. The GST- or GFP-fused gene was subcloned into the pREP1 vector to obtain maximum expression of the fused gene from the nmt1 promoter of pREP1. Site-directed mutagenesis was performed using the QuikChange Site-Directed Mutagenesis Kit (Stratagene).

Microscopy and Miscellaneous Methods
Light microscopy methods (e.g., fluorescence microscopy) were performed as described previously [12].
Photographs were taken using an AxioImager A1 (Carl Zeiss, Germany) equipped with an AxioCam MRm camera (Carl Zeiss, Germany) and AxioVision software (Carl Zeiss). Images were processed with CorelDRAW software (Corel, Ottawa, Ontario, Canada). Furthermore, FM4-64 labeling, the localization of GFP-Syb1, and measurements of acid phosphatase secretion were performed as described previously [12].

Image Quantification
All image quantification analyses were performed for 3 individual datasets, which summed up to 150 counted cells.

Rho3 Antibodies
Monoclonal antibodies against Rho3 were raised by using purified Rho3 from S. pombe. For the first immunization, female F344/N rats at seven weeks of age (Shimizu Animal Farm, Kyoto, Japan) were housed in a controlled environment at 22°C in a specific-pathogen-free facility, and were administered an intraperitoneal injection of recombinant GST-fused Rho3 protein from S. pombe (GST-Rho3; 100 µg in 500 µl of saline per rat) emulsified with an equal volume of complete Freund's adjuvant (Difco, Detroit, MI), followed by a booster intraperitoneal injection of GST-Rho3 (100 µg in 500 µl of saline) without adjuvant at a 10-day interval. Four days later, the rats were sacrificed, antisera were collected, and the immune spleen cells (1.0 × 10^8) were fused with P3×63Ag8.653 mouse myeloma cells (2.5 × 10^7) using 50% polyethylene glycol 1540 (Roche, Penzberg, Germany). After the cell fusion, hybridoma cells were selected in 7% FBS-containing RPMI 1640 medium (Sigma-Aldrich, St. Louis, MO) supplemented with hypoxanthine, aminopterin and thymidine (50× HAT; Invitrogen, Carlsbad, CA). Nine to twelve days later, hybridoma antibodies in the culture medium were assessed for a positive reaction to GST-Rho3 and a negative reaction to GST by enzyme-linked immunosorbent assay (ELISA). Rats were used with the approval of the Committee for the Care and Use of Laboratory Animals at Kinki University.

Immunostaining of Whole Cells
Cells were cultured in YES medium for 20 h and then fixed by adding methanol at −80°C. After fixation, the cells were washed 3 times with PEM (100 mM PIPES, 1 mM EGTA, 1 mM MgCl2, pH 6.9). Cells were treated with PEMS (PEM + 1.2 M sorbitol) containing zymolyase 20T (0.5 mg per mL) at 37°C until approximately 10% of the cells had lost their cell walls, as observed under a microscope. Subsequently, the cells were washed with PEM 3 times and were incubated for 2 h at room temperature with 100 µL of 1% PEMBAL (PEM + 1% BSA, 0.1% sodium azide, 1% L-lysine hydrochloride) containing anti-Rho3 antibodies. After incubation, the cells were washed 3 times with PEMBAL and treated with 1:100-diluted FITC-conjugated goat anti-rat immunoglobulin (Jackson Research Laboratories) in 50 µL of PEMBAL in the dark for 2 h at room temperature. The cells were washed 3 times with PEMBAL and mounted on slides in PBS for observation by fluorescence microscopy.

Identification of rho3+ as a multicopy suppressor of sip1-i4 mutants
In a previous report, we showed that the sip1-i4 mutant strain was thermosensitive ( Figure 1A, sip1-i4 + vector) [13]. To identify novel genes involved in Sip1 function or Sip1-mediated membrane trafficking, we screened a fission yeast genomic library to isolate genes that, when overexpressed, could suppress the temperature sensitivity of the sip1-i4 mutant cells. One of these genes was rho3+, which encodes a member of the Rho family of small GTPases.
As shown in Figure 1A, overexpression of rho3+ suppressed the temperature-sensitive growth of the sip1-i4 mutants ( Figure 1A; sip1-i4 + rho3+, 36°C). The rho3+ gene overexpression also suppressed phenotypes associated with the sip1-i4 mutants, including the sensitivity to FK506, a specific inhibitor of calcineurin phosphatase, MgCl2, the cell wall-damaging agent micafungin, and valproic acid (VPA; Figure 1A) [13]. To determine the specificity of Rho3 in the suppression of the sip1-i4 mutants, we investigated the effects of the other genes that encode members of the Rho family of small GTPases present in the fission yeast genome. To test the suppression capabilities of all 6 fission yeast Rho family members, sip1-i4 mutant cells transformed with rho1+, rho2+, rho3+, rho4+, rho5+, or cdc42+ were tested for growth at 36°C or in media containing FK506, 0.3 M MgCl2, or 6 mM VPA. Rho3 overexpression, but not that of the other Rho family members, could suppress the sensitivities of sip1-i4 mutant cells to temperature (36°C), the immunosuppressive drug FK506, MgCl2, and VPA ( Figure 1B). These results clearly indicated that, among all the members of the fission yeast Rho family, Rho3 exhibits highly specific suppression of the various sensitivities of sip1-i4 mutant cells.

Sip1 interacts with Rho3 signaling
To investigate the functional relationship between Sip1 and Rho3 signaling, we examined the effects of various mutant forms of Rho3 on the temperature-sensitivity of the sip1-i4 mutant cells. The following mutants were used: a GDP-locked variant of Rho3 in which the conserved (among Rho3 proteins) Thr27 was replaced with Asn (Rho3T27N), a GTP-locked variant of Rho3 (Rho3G22V) in which the conserved Gly22 was replaced with Val, and an effector domain mutant of Rho3 (Rho3E48V) in which the conserved Glu48 was replaced with Val [10]. Similar to wild-type Rho3, overexpression of the dominant-active Rho3GV mutant suppressed the temperature-sensitive growth of the sip1-i4 mutant cells ( Figure 3A, sip1-i4 + rho3GV). In contrast, both Rho3T27N and Rho3E48V overexpression failed to suppress the sensitivities of the sip1-i4 mutant cells ( Figure 3A, sip1-i4 + rho3TN, sip1-i4 + rho3EV). Next, we assessed whether various forms of Rho3 are associated with Sip1. For this purpose, wild-type Rho3, the nucleotide-locked forms of Rho3 [GTPases in either the GTP-bound (Rho3G22V) or GDP-bound (Rho3T27N) conformation] and the Rho3E48V effector domain mutant were fused to GFP and expressed using the inducible nmt1 promoter. These cells were used to prepare lysates that were then used in binding experiments with purified full-length Sip1 fused to GST. The results showed that the Sip1-GST protein bound to each of the various forms of Rho3 ( Figure 3B, upper panel). The GST protein alone did not associate with any of the Rho3 proteins ( Figure S1A). Quantification of the 3 independent experiments showed that Sip1 binding to Rho3EV was a little weaker than that to wild-type Rho3, and that Sip1 bound to Rho3TN to almost the same degree as to wild-type Rho3 ( Figure 3B, lower panel). Thus, the binding strengths between Sip1 and these mutant forms of Rho3 versus wild-type Rho3 did not differ significantly, in contrast to the clear difference in the ability of each Rho3 mutant to rescue the sip1-i4 mutant cells ( Figure 3A). This was different from the binding between Apm1 and Rho3, which showed nucleotide-dependence and effector domain sensitivity [10].
Therefore, we hypothesized that although Sip1 can bind to Rho3, Sip1 may not serve as an effector of Rho3.

Sip1 is required for Rho3 localization at the Golgi/endosomes
We have previously demonstrated that the endosomal localization of the AP-1 complex, including Apm1 ( Figure 4A, arrowheads), was nearly abolished in the sip1-i4 mutant cells [13], demonstrating a conserved role of Sip1 in recruiting the AP-1 complex to the Golgi/endosomes. This led us to investigate the effect of the sip1-i4 mutation on the intracellular localization of Rho3. For this, we used GFP-tagged Rho3 that was chromosomally expressed in the wild-type and sip1-i4 mutant cells. In wild-type cells, the GFP-Rho3 protein localized to the Golgi/endosomes in addition to the plasma membrane and the division site [10]. GFP-Rho3 fluorescence was observed as dot-like structures that co-localized with FM4-64-positive structures ( Figure 4B, arrowheads), as well as at the plasma membrane and the division site ( Figure 4B, arrows), in the wild-type cells. In contrast, in the sip1-i4 mutant cells, the localization of GFP-tagged Rho3 to the division site was greatly impaired ( Figure 4B, sip1-i4). In addition, in the sip1-i4 mutant cells, Rho3 was observed as large clusters in the cytoplasm ( Figure 4B, sip1-i4, double arrowheads), and the number of Rho3 dots that co-localized with FM4-64 was markedly lower compared with the wild-type cells ( Figure 4B). The above findings were also supported by the quantification of Rho3 localization at the division site ( Figure 4C) and of Rho3 dots co-localizing with FM4-64 ( Figure 4D) in the wild-type and sip1-i4 cells. We raised antibodies against the Schizosaccharomyces pombe Rho3 protein and examined the expression level of endogenous Rho3 in wild-type and sip1-i4 mutant cells. The immunoblotting data showed that the amount of Rho3 protein in sip1-i4 mutant cells was slightly less, by 20%, than in wild-type cells ( Figure S2). This raises the possibility that the Rho3 protein may become unstable if it fails to localize to the Golgi/endosomes. We next examined the intracellular localization of endogenous Rho3 protein by performing immunostaining using the polyclonal and monoclonal Rho3 antibodies. However, numerous dot-like structures were observed both in wild-type and in Rho3-deleted cells ( Figure S3). Furthermore, the expected Rho3 localization to the plasma membrane was not observed ( Figure S3). We therefore concluded that the Rho3 antibodies did not properly recognize endogenous Rho3 protein in vivo and that the numerous dots detected by the Rho3 antibodies may include somewhat artificial structures.

The Sip1-i4 mutant protein can interact with the AP-1 complex and Rho3
In our previous study, we showed that Sip1 plays a role in recruiting the AP-1 complex to the Golgi/endosomes through physical interaction, and here we showed that in the sip1-i4 mutant cells both Rho3 and the AP-1 complex were mislocalized. Therefore, we examined the effect of the sip1-i4 mutation on the physical interactions of the Sip1/Rho3 and Sip1/AP-1 complexes. The sip1-i4 mutation resulted in a truncated protein product that lacked 485 amino acids at the C-terminus of the Sip1 protein ( Figure 5A, Sip1-i4). To monitor the interaction of the AP-1 complex with the Sip1-i4 mutant protein, we generated a GST-tagged mutant lacking the 485 amino acids at the C-terminus (Sip1-i4-GST). We examined whether the Sip1-i4 mutant protein associates with the AP-1 complex.
For this purpose, the purified Sip1-i4-GST protein was used in binding experiments with lysates prepared from cells expressing Apm1, Apl2, Apl4, and Aps1 fused to GFP, or a control GFP protein. These results showed that the C-terminally truncated Sip1-i4 mutant protein bound to the AP-1 complex ( Figure 5B). The GST protein alone did not associate with the AP-1 complex ( Figure S4). We also tested whether the binding between Sip1 and the AP-1 complex is dependent on the C-terminal region of Sip1. For this purpose, we expressed the C-terminal region of Sip1, lacking the 1414 amino acids at the N-terminus ( Figure 5A, Sip1ΔN), and performed the binding experiment using purified Sip1ΔN-GST ( Figure 5C). The results showed that the C-terminal portion of the Sip1 protein bound to the AP-1 complex ( Figure 5C). Similar binding experiments were performed using the 2 truncated Sip1 mutant proteins fused to GST and various versions of GFP-fused Rho3 proteins or the control GFP. The results showed that the various forms of Rho3 bound to both truncated versions of the Sip1 protein ( Figure 5D, E). Thus, either the N-terminal or the C-terminal part of Sip1 can bind to Rho3 and the AP-1 complex. As shown in Figure 5A, the 2 truncated Sip1 mutant proteins harbour multiple HEAT (Huntington-elongation-A subunit-TOR) interacting domains; the interaction between Sip1 and these proteins may be achieved through these domains. Therefore, we reasoned that the mislocalization of Rho3 in the sip1-i4 mutant cells may not derive from a loss of association between Rho3 and the Sip1-i4 mutant protein, and may involve mechanisms other than protein interaction.

The C-terminus of Sip1 is important for its Golgi/endosomal localization
To search for the mechanism by which Sip1 can recruit Rho3 to the Golgi/endosomes, we analyzed the effect of the sip1-i4 mutation on its localization. For this purpose, we expressed the Sip1-i4 mutant protein fused to GFP (Sip1-i4-GFP). Full-length Sip1 localized to dot-like structures that mostly co-localized with FM4-64 ( Figure 6A, Sip1-GFP, arrowheads). In contrast, Sip1-i4-GFP failed to localize to the Golgi/endosomes, because the specific dot-like structures were rarely observed ( Figure 6A, Sip1-i4-GFP); instead, the protein was diffusely localized in the cytoplasm. To further investigate whether the C-terminal region of Sip1 that is deleted in the Sip1-i4 mutant protein plays a critical role in its localization, we expressed Sip1ΔN as a GFP-fusion protein (Sip1ΔN-GFP). Notably, Sip1ΔN-GFP localized to dot-like structures similar to those of full-length Sip1-GFP ( Figure 6A, Sip1ΔN-GFP). In addition, the Sip1ΔN-GFP dots co-localized with FM4-64-positive structures during an early stage of endocytosis ( Figure 6A, Sip1ΔN-GFP, arrowheads). The amounts of these 2 truncated Sip1 mutant proteins and the full-length Sip1 protein did not differ significantly (data not shown), indicating that the reduced localized signal of Sip1-i4-GFP is not due to a decrease in overall Sip1 levels in the Sip1-i4 mutant. These results suggest that the C-terminus of Sip1 is required and sufficient for its Golgi/endosomal localization. We also assessed the ability of these truncated mutant Sip1 fragments to rescue the phenotypes of the sip1-i4 mutant cells.
The results revealed that the sip1-i4 mutant fragment failed to suppress the phenotypes ( Figure 6B, +sip1-i4), whereas the C-terminal region of Sip1 rescued the mutant phenotypes ( Figure 6B, +sip1ΔN), including the sensitivity to heat and FK506. This indicated that there is a correlation between the suppression ability and the correct localization of these truncated gene products.

Sip1 links Rho3 to the AP-1 complex
Because the sip1-i4 mutation affected the localization of both the AP-1 complex and the Rho3 protein to the Golgi/endosomes, and our previous findings showed that Rho3 forms a complex with Apm1 at the Golgi/endosomes [10], we investigated whether Sip1 is required for the association between Apm1 and Rho3. Therefore, we examined the binding of Apm1-GST and GFP-Rho3 in wild-type and sip1-i4 mutant cells. The results showed that the binding of Apm1 to Rho3 in the sip1-i4 mutant cells was greatly impaired as compared with wild-type cells ( Figure 7A), suggesting that Sip1 is required for the physical interaction between Apm1 and Rho3. It should be noted that many bands with smaller molecular weights than that of the full-length GFP-fused Rho3 protein were detected by SDS-PAGE in the sip1-i4 mutant cells ( Figure 7A). We hypothesized that the Rho3 protein may become unstable when it fails to be localized to the Golgi/endosomal membranes. In support of this possibility, immunoblotting data using antibodies raised against the Rho3 protein showed that the amount of Rho3 protein in sip1-i4 mutant cells was slightly less, by 20%, than that in wild-type cells ( Figure S2). We also examined whether the interaction between Apm1 and Rho3 could be rescued in sip1-i4 cells by Sip1ΔN expression. The results showed that the expression of Sip1ΔN significantly rescued the binding between Apm1 and Rho3 ( Figure 7). Furthermore, we examined the effect of Sip1ΔN expression on the intracellular localization of Rho3 and its co-localization with FM4-64. Notably, Sip1ΔN expression also rescued Rho3 co-localization with FM4-64, namely, Golgi/endosome localization (arrowheads), as well as localization to the septa (arrows) (Figure 7B-D). We also performed the IP experiments shown in Figures 3B, 5B, 5C, 5D, 5E, and 7A using GFP-fused Rho2, and demonstrated that the Rho2 protein did not interact with the GST-fused full-length Sip1, Sip1-i4, Sip1ΔN or Apm1 proteins, thus confirming the specificity of the interactions detected in each experiment ( Figure S5).

Figure 7. (A) Binding assay involving Apm1 and Rho3 in wild-type or sip1-i4 cells harboring the control vector or Sip1ΔN. GST pull-down experiments were performed using Apm1-GST expressed in wild-type (wt) and sip1-i4 mutant (sip1-i4) cells, which were transformed with the pDB248 multi-copy vector or the vector containing sip1ΔN expressed under the control of the nmt1 promoter. Cells that expressed GFP alone or GFP-Rho3 were harvested, and their lysates were incubated with purified full-length Apm1 fused to GST. Proteins bound to glutathione beads were analyzed by SDS-PAGE and visualized by autoradiography. Right panel: Quantitation of GFP-Rho3 bead protein levels by densitometry of the expressed bands against the lysate protein levels in wild-type cells, sip1-i4 cells, or sip1-i4 cells with Sip1ΔN expression, as shown in A. Data from at least three independent experiments are expressed as means ± standard deviations.
(B) Subcellular localization of GFP-Rho3 in wild-type cells, sip1-i4 cells, or sip1-i4 cells with Sip1ΔN expression. GFP-Rho3 was expressed in wild-type (wt) and sip1-i4 mutant (sip1-i4) cells, which were transformed with the pDB248 multi-copy vector or the vector containing sip1ΔN expressed under the control of the nmt1 promoter. Cells were cultured in YPD medium at 27°C, following which they were incubated with FM4-64 dye for 5 min at 27°C to visualize the Golgi/endosomes.

Discussion
In this study, we present a novel role for Sip1, a conserved AP-1 accessory protein, in recruiting the Rho3 small GTPase to the Golgi/endosomes, on the basis of our discovery of the functional and physical interactions between Rho3 and Sip1. It has been established that Sip1 recruits the AP-1 complex to the Golgi/endosomes through physical interaction [13] and that Rho3 is involved in the regulation of Golgi/endosomal trafficking by functionally and physically interacting with Apm1 [10]. The findings that Sip1 regulates the proper localization of Rho3 and the AP-1 complex, and that Sip1 associates with Rho3 and the AP-1 complex, suggest that Sip1 links Rho3 signaling to AP-1 complex-mediated Golgi/endosomal trafficking. Sip1 is highly conserved throughout evolution, with homologs from human to yeast. Previous studies in S. cerevisiae and humans also demonstrated a role for the AP-1 accessory proteins Laa1 (large AP-1 accessory) and p200 [15] in AP-1-mediated transport, and Laa1 is involved in AP-1 localization to the trans-Golgi network (TGN) [14]. Interestingly, aftiphilin, another AP-1 interacting protein, co-elutes with two other AP-1 binding partners, p200a and γ-synergin [20], and it has been suggested that the aftiphilin/p200/γ-synergin complex may have additional functions along with its role in facilitating AP-1 function [15]. Notably, there are differences in the phenotypes associated with the loss of function of the AP-1 accessory protein in the two yeasts. The sip1+ gene is essential for growth, whereas budding yeast laa1 null cells are viable and exhibit synthetic growth defects when combined with gga1Δ gga2Δ [14]. Furthermore, the temperature-sensitive sip1-i4 mutants displayed distinct phenotypes ranging from defects in Golgi/endosomal trafficking and vacuole fusion [13] to cytokinesis defects [21] even at the permissive temperature, whereas laa1 deletion alone did not impair the secretion of mature α-factor or the transport of CPY, and showed synthetic effects only when combined with the gga1Δ gga2Δ double deletion [14]. The reason for these differences could be that S. pombe Sip1 might be required for the proper function of protein(s) other than AP-1. Our phenotypic screening using the temperature-sensitive growth defect of the sip1-i4 allele was successful in identifying rho3+ as a multi-copy suppressor of the sip1 mutant and revealed an additional functional interaction between the AP-1 accessory protein and Rho3. How overexpression of rho3+ can suppress the phenotypes of sip1-i4 cells even in the absence of clear Golgi/endosome localization remains unclear. However, Rho3 overproduction suppressed the sip1-i4 mutant phenotypes, indicating that Rho3 exerts its effects in the absence of its clear Golgi/endosome localization. Rho3 was previously isolated as a multi-copy suppressor of several mutant alleles of components of the exocyst complex, including Sec4 and Sro7 in budding yeast and Sec8 in fission yeast [22]. The exocyst complex is highly conserved from yeasts to mammals and is involved in the late stages of exocytosis by targeting and tethering post-Golgi vesicles to the plasma membrane.
Thus, Rho3 overproduction may stimulate secretion via the components of the exocyst by locally increasing the activity of the exocytic apparatus, which leads to the suppression of mutant strains with defective exocytosis, including sip1-i4 mutant cells. However, we prefer the alternative possibility that, even though there is no clearly visible Rho3 protein co-localizing with FM4-64 in the sip1-i4 mutant cells, an extremely small amount of Rho3 protein may still exist in the Golgi/endosomes, which can be augmented by the overproduction of Rho3 and result in the suppression of the sip1-i4 mutant phenotypes. In support of this possibility, the forced overexpression of GFP-Rho3, achieved by culturing the cells in the absence of thiamine (induced condition), revealed some Rho3 dots co-localizing with FM4-64 on the sip1-i4 mutant background (data not shown). Our previous reports revealed that Rho3 associates with the AP-1 complex in a GTP- and effector-dependent manner [10], whereas the association of Sip1 with Rho3 appears to be GTP-independent. Notably, several studies in higher eukaryotes reported that Rac can directly interact with PIP5K isoforms in a GTP-independent manner [23][24][25], and unlike the interaction of Rac with most other effectors, the interaction between Rac and PIP5K requires the C-terminal polybasic region of Rac. We then investigated the localization dependency between Rho3 and Sip1. Reciprocal experiments illustrated that Sip1-GFP co-localization with FM4-64 ( Figure S6A, arrowheads) and with the trans-Golgi protein Sec72-mCherry ( Figure S6B, arrowheads) was observed in Δrho3 cells, similar to wild-type cells. Thus, although the sip1-i4 mutation affects Rho3 localization to the division site and the Golgi/endosomes, Rho3 deletion did not affect Sip1 localization to the Golgi/endosomes. These data are consistent with the proposed role of Sip1 as a regulator, but not an effector, of Rho3. If Sip1 provides a physical link between Rho3 and the AP-1 complex and the interaction with Rho3 is nucleotide-independent, then how would the Rho3/AP-1 interaction depend on the GTP-bound form of Rho3? We then examined and demonstrated the importance of the C-terminal region of Sip1 for its intracellular localization. The sip1-i4 mutation results in a termination codon at amino acid position 1434, located within the highly conserved region (HCR), which contains an approximately 200-amino acid segment conserved throughout evolution in this protein family [14]. Notably, the Sip1 C-terminal region (Sip1ΔN) was sufficient for its Golgi/endosomal localization, for suppression of the sip1-i4 mutant cells, and for the association of Sip1 with Rho3 and the AP-1 complex (Figures 5, 6). In addition, although the Sip1-i4 mutant protein that failed to localize to the Golgi/endosomes ( Figure 6A) maintained its ability to bind to Rho3 and AP-1, it lost its ability to suppress the sip1-i4 mutant cells (Figures 5, 6). Thus, we hypothesize that Sip1 can bind to Rho3 and the AP-1 complex in the cytosol and recruit them to the Golgi/endosomes, thereby enhancing the formation of the Rho3/AP-1 complex, which is dependent on the GTP-bound form of Rho3. Consistently, rho3+ overexpression suppressed the phenotypes of the sip1-i4 mutant cells in a GTP- and effector-dependent manner, as Rho3TN and Rho3EV failed to suppress the sip1-i4 mutant cells, even though the binding between Rho3 and Sip1 was GTP-independent.
Because our previous findings indicated that rho3+ overexpression can suppress apm1-1 mutant cells and that Rho3/AP-1 binding is GTP-dependent [10], the suppression of the sip1 mutant phenotypes by Rho3 overproduction may reflect an increase in the amount of the Rho3/AP-1 complex at the Golgi/endosomes. Sip1 possesses 5 HEAT (Huntington-elongation-A subunit-TOR) repeat domains implicated in protein-protein interactions, and 2 of the HEAT repeat domains localize to the C-terminal region [13]. Therefore, the interaction of Sip1 with unidentified protein(s) through the HEAT repeat domains in the C-terminus may direct Sip1 to the Golgi/endosomes. The prenylation of small GTPases, including Rho, is a well-known mechanism for targeting the Rho family proteins to the membrane and to their proper cellular location. However, specific targeting factors for each small GTPase have not been completely characterized. In the present study, we presented a novel role of Sip1 in Rho localization to specific membranes, and given the high conservation of the AP-1 accessory protein and small GTPases, our discovery may shed light on the regulatory mechanisms of the membrane trafficking system mediated by Rho and the clathrin adaptor complex.

Supporting Information
Figure S1. Binding assay involving GST and various mutant forms of Rho3. A GST pull-down experiment was performed using GST expressed under the control of the nmt1 promoter. Cells that expressed GFP alone or various GFP-tagged mutant forms of Rho3 were harvested, and their lysates were incubated with the purified GST protein. GST was precipitated with glutathione beads, washed extensively, subjected to SDS-PAGE, and immunoblotted using anti-GFP or anti-GST antibodies. (TIF)

Figure S2. Expression level of endogenous Rho3 in wild-type cells and sip1-i4 mutant cells. (A) Immunoblot analysis of the Rho3 protein in Rho3-deletion (Δrho3), wild-type (wt) and sip1-i4 mutant (sip1-i4) cells. The whole-cell lysates were analyzed by immunoblotting with polyclonal anti-Rho3 antibodies. (B) Quantitation of Rho3 protein levels by densitometry of the expressed bands against the tubulin protein levels in wild-type and sip1-i4 cells shown in A. (TIF)

Figure S3. Immunofluorescent localization of Rho3 in wild-type and Rho3-deletion cells. The wild-type (wt) and Rho3-deletion cells (Δrho3) were cultured in YES medium at 27°C. Cells were fixed and stained with polyclonal rat anti-Rho3 antibodies (A) or a monoclonal rat anti-Rho3 antibody (B), and examined by fluorescence microscopy. Bar, 10 µm. (TIF)

Figure S4. Binding assay involving GST and the 4 subunits of the AP-1 complex. A GST pull-down experiment was performed using GST expressed under the control of the nmt1 promoter. Cells that expressed GFP alone or GFP tagged to the 4 subunits of the AP-1 complex were harvested, and their lysates were incubated with the purified GST protein. GST was precipitated with glutathione beads, washed extensively, subjected to SDS-PAGE, and immunoblotted using anti-GFP or anti-GST antibodies. (TIF)

Figure S5. Binding assay involving GFP-Rho2 and various GST fusion proteins. A GST pull-down experiment was performed using GST-Sip1, Sip1-i4-GST, Sip1ΔN-GST and Apm1-GST, expressed under the control of the nmt1 promoter. Cells that expressed GFP-Rho2 alone were harvested, and their lysates were incubated with the various purified GST fusion proteins.
GST-fused proteins were precipitated with glutathione beads, washed extensively, subjected to SDS-PAGE, and immunoblotted using anti-GFP or anti-GST antibodies. (TIF)

Figure S6. Subcellular localizations of Sip1-GFP in Rho3-deletion cells are similar to those in wild-type cells. (A) Subcellular localizations of Sip1-GFP in wild-type (wt) and Rho3-deletion cells (Δrho3). Cells that expressed chromosome-borne Sip1-GFP were cultured in YPD medium at 27°C. They were incubated with FM4-64 dye for 5 min at 27°C to visualize the Golgi/endosomes. Arrowheads indicate the localization of Sip1-GFP at the Golgi/endosomes. Bar, 10 µm. (B) Sip1-GFP partially co-localized with the trans-Golgi marker Sec72-mCherry, but did not co-localize with the cis-Golgi marker Anp1-mCherry, in Rho3-deletion cells (Δrho3). Rho3-deletion cells expressed chromosome-borne Anp1-mCherry and Sip1-GFP, or chromosome-borne Sec72-mCherry and Sip1-GFP. Cells were cultured in YPD medium at 27°C and examined by fluorescence microscopy. Arrowheads indicate the co-localization of Sip1-GFP with Sec72-mCherry at the trans-Golgi. Bar, 10 µm. (TIF)
Mathematical Model for Growth and Rifampicin-Dependent Killing Kinetics of Escherichia coli Cells

Antibiotic resistance is a global health threat. We urgently need better strategies to improve antibiotic use to combat antibiotic resistance. Currently, there are a limited number of antibiotics in the treatment repertoire for existing bacterial infections. Among them, rifampicin is a broad-spectrum antibiotic against various bacterial pathogens. However, during rifampicin exposure, the appearance of persisters or resisters decreases its efficacy. Hence, to benefit more from rifampicin, its current standard dosage might be reconsidered and explored using both computational tools and experimental or clinical studies. In this study, we present the mathematical relationship between the concentration of rifampicin and the growth and killing kinetics of Escherichia coli cells. We generated time-killing curves of E. coli cells in the presence of 4, 16, and 32 μg/mL rifampicin exposures. We specifically focused on the oscillations with decreasing amplitude over time in the growth and killing kinetics of rifampicin-exposed E. coli cells. We propose the solution form of a second-order linear differential equation for a damped oscillator to represent the mathematical relationship. We applied a nonlinear curve fitting solver to the time-killing curve data to obtain the model parameters. The results show a high fitting accuracy.

INTRODUCTION
Antibiotic resistance is a global health threat; it emerges particularly during wars, mass migrations, and pandemic conditions. Under these circumstances, emergency strategies need to be applied to combat infections. 1 These strategies can be the discovery of an antibiotic, 2,3 the invention of an alternative to antibiotics, 4−8 or the reformulation of the administration of current antibiotics through a better understanding of bacterial behavior. 9,10 Currently, a limited number of antibiotics are in the existing treatment repertoire for bacterial infections. 11−22 In the literature, several clinical and computational studies have been conducted to shorten therapy time, increase effectiveness, and reduce the negative side effects and financial costs of rifampicin treatments. 23,24 Several lines of evidence suggest that, under conditions of tolerability and safety, intensified regimens incorporating elevated dosages of rifampicin may shorten the treatment duration or contribute to the management of infections linked to high mortality rates. Nevertheless, a consensus remains elusive regarding the assessment of pharmacokinetic parameters, the clarification of efficacy, and compound toxicity. In this study, we propose a mathematical model relating the concentration of rifampicin to the growth and killing kinetics of Escherichia coli (E. coli) cells. We particularly focus on modeling the fluctuations in the antibiotic responses of cells and propose to use the solution of the damped oscillator equation to obtain the mathematical relationship between the rifampicin concentrations and the growth and killing kinetics of E. coli cells. Along this line, upon obtaining time-killing curves of E. coli cells in the presence of 4, 16, and 32 μg/mL rifampicin exposures, we use the lsqcurvefit function in MATLAB to find the model parameters of the damped oscillatory behavior of the cells. To the best of our knowledge, this is the first study that has modeled the fluctuations in the growth and killing kinetics of E. coli cells in the presence of rifampicin with high accuracy.
We believe that our results might contribute to more elaborate interrogations and a better understanding of the initial antibiotic killing phases of antibiotic treatments in the context of antibiotic resistance.

LITERATURE
Rifampicin has been an important medicine since it was discovered in 1965. It is still on the World Health Organization's List of Essential Medicines. Rifampicin inhibits bacterial DNA-dependent RNA polymerase and suppresses RNA synthesis to kill bacteria. 12,13 It is widely used for the treatment of tuberculosis, 14 leprosy, 15 acute bacterial meningitis, 16−18 pneumonia, 19,20 and biofilm-related infections. 21,22 In pursuit of enhanced therapeutic outcomes, several clinical and computational studies have been conducted to shorten the therapy time, increase its effectiveness, and reduce the negative side effects and financial costs of rifampicin. 23,24 In the context of antibiotic administration, lower cure rates can generally be attributed to the increased emergence of resistance within infections. 25 Nonetheless, when the tolerability and safety of elevated dosages of rifampicin are achieved, they might contribute to the management of infections with high mortality rates.

Developing new computational models might help guide experiments and clinical tests to determine the optimal dosage for achieving favorable cure rates, reduced relapse rates, minimal toxicity, and lower mortality rates. −33 The landmark study by Weinstein and Zaman reported the evolution of rifampin resistance in E. coli and Mycobacterium smegmatis due to substandard drugs. −36 Along the same line, Regoes and co-workers focused on developing mathematical models to describe the relationship between bacterial net growth rates and the concentration of antibiotics. 37 They presented a pharmacodynamic function based on a Hill function and showed that pharmacodynamic parameters might influence the microbiological efficacy of treatment. The descriptive model developed by Guerillot and co-workers considers the lag phase, the initial number of bacteria, the limit of effectiveness, and the bactericidal rate of antimicrobial agents. 38 Their model was applied to compare the time-killing curves of amoxicillin, cephalothin, nalidixic acid, pefloxacin, and ofloxacin against two E. coli strains. −28 Despite these considerations, there have been very few systematic studies exploring the quantitative relationship between the concentration of rifampicin and the growth and killing kinetics of E. coli cells. 34,37,39

METHODOLOGY
Bacterial Strain and Growth Curves. The strain used in this study is American Type Culture Collection (ATCC) 10536, derived from E. coli K-12. Bacteria from glycerol stocks were inoculated into 2 mL of Miller's Luria−Bertani Broth (LB) and incubated at 37 °C with shaking at 200 rpm for ∼16 h. A spectrophotometer (BIOCHROM, WPA Biowave II ultraviolet/visible (UV/vis), U.K.) was used to determine the turbidity of cultures by measuring their optical densities (absorbance at 600 nm). To obtain growth kinetics, E. coli cells were grown to an OD600 of 0.5−1 and then diluted to an OD600 of 0.05 in fresh LB medium.
To obtain the growth kinetics of the antibiotic-treated bacterial cultures, 200 μL of bacterial culture was transferred into microplate wells with the following rifampicin concentrations: 4, 16, and 32 μg/mL. Following the preparation of the microplate wells, the plates were sealed and positioned within a microplate reader (TECAN Multimode Microplate Reader) for further analysis. During the measurements, the plates were shaken at 200 rpm, with readings at 20-min intervals, at 37 °C for 4500 min.

Antimicrobial Agent. Rifampicin (Sigma-Aldrich, catalog no. R-120) was carefully weighed to obtain 0.1 g and dissolved in dimethyl sulfoxide (DMSO, PanReac Applichem, catalog no. P100C16), resulting in a concentration of 10 mg/mL. Subsequently, this prepared stock solution was employed to obtain the subsequent rifampicin concentrations: 4, 16, and 32 μg/mL. In our experiment, the dose of 32 μg/mL was negligible compared to toxic levels of rifampicin. 40 Hence, adverse reactions and toxicity were not considered, as they were beyond the bounds of this study.

Colony-Forming Unit (CFU) Assay. Serial 10-fold dilutions of the bacterial culture were made using 100 μL of bacterial culture and 900 μL of 1× phosphate-buffered saline (PBS)−0.025% Tween 20 solution in 5 mL polypropylene tubes. Previous studies have demonstrated that Tween 20 does not significantly inhibit the growth of E. coli cells at concentrations up to 2%. 41,42 In our study, it improved the formation of dispersed colonies on the LB plate. Next, 100 μL of the appropriate dilutions was plated onto agar-based media to ensure that the serial dilutions would give at least one countable plate (30−300 countable colonies per plate). Then, the plates were incubated at 37 °C to enumerate the colonies.

Kill Curves. E. coli cells were grown overnight to the mid-log phase (OD600 of 0.5−1) and diluted to an OD of 0.05, corresponding to ∼10^7 CFU/mL, in fresh LB medium. The concentrations of rifampicin used in the CFU assays were 4, 16, and 32 μg/mL. Next, CFU assays were performed, and the plates were incubated at 37 °C for 4 days until the colonies were enumerated.

Minimum Inhibitory Concentration (MIC). The E. coli K-12 strain was grown in 5 mL of LB broth for 20 h at 37 °C with constant shaking at 200 rpm. Next, the cell culture was diluted to 10^6 CFU/mL in fresh LB medium as a working inoculum in a 15 mL tube. Then, 50 μL of culture was streaked over an LB agar surface containing rifampicin at concentrations of 4, 16, and 32 μg/mL. Plates were incubated overnight at 37 °C for 20 h. The MIC value was determined as the lowest concentration of antibiotic that inhibits the visible growth of bacteria. We confirmed the MIC value with three independent experiments.

Mathematical Model. Damped oscillations are commonly observed in various natural and engineered systems, including mechanical systems, electrical circuits, and economics. In the literature, damped oscillations refer to repetitive, back-and-forth motions or vibrations in a system that gradually decrease in amplitude over time due to energy dissipation. In other words, the oscillations gradually lose energy, causing the system to come to its resting position. A similar oscillation with decreasing amplitude over time is also observed in the growth and killing kinetics of rifampicin-exposed E. coli cells.
Damped oscillations can be mathematically represented using various equations, depending on the properties and specifications of a system. One common way to represent damped oscillations is a second-order linear differential equation; its solution provides the mathematical expression for the damped oscillations over time. To define the relationship between the rifampicin concentrations and the growth and killing kinetics of E. coli cells, we proposed to use the equation presented in eq 1:

Y(t) = Y1 + Y2 · e^(−Y3·t) · cos(Y4·t + Y5)    (1)

Here, Y1, Y2, Y3, Y4, and Y5 are the model parameters that will be estimated. The parameter Y1 corresponds to the final value of Y(t) after all oscillations have diminished, Y2 shows the initial amplitude of the oscillations, Y3 accounts for the damping of the oscillations, and Y4 and Y5 correspond to the frequency and phase shift of the oscillations, respectively. Figure 1 shows the effect of different parameter settings on the function Y(t). Figure 1a shows Y(t) for three different values of Y1 while the other parameters were at their default values: Y2 = 20, Y3 = 0.15, Y4 = 0.8, and Y5 = 1. Each curve in Figure 1a oscillates while the amplitude decreases exponentially and eventually reaches its equilibrium point (final value), Y1. Figure 1b shows the effect of the damping coefficient, Y3, on the function Y(t). As Y3 increases, the amplitude of the oscillations decreases much faster. The frequency and phase shift of the oscillations depend on the parameters Y4 and Y5, respectively. Figure 1c,d shows the period and phase changes in Y(t) for different values of Y4 and Y5. A nonlinear least-squares algorithm was used to obtain the model parameters. Mathematically, the parameter vector θ = [Y1, Y2, Y3, Y4, Y5] was obtained by solving eq 2:

θ = argmin_θ Σ_{i=1}^{N} [y_i − F(θ, t_i)]^2    (2)

Here, t is the given time, y is the measured normalized CFU data, F(θ, t) is the nonlinear model function given in eq 1, and N is the total number of sample points. The implementation was performed in the MATLAB R2020a computing environment by using the lsqcurvefit function. 43

RESULTS
We performed optical density measurements to obtain the growth kinetics of E. coli cells in the absence and presence of rifampicin, as detailed in the Methodology Section. The MIC value of E. coli cells exposed to rifampicin was 0−12 μg/mL, as reported in ref 35. Figure 2a illustrates the growth kinetics of E. coli cells for the control and the 4, 16, and 32 μg/mL rifampicin exposures.

To obtain the killing kinetics of E. coli cells, we performed a rifampicin killing assay as previously explained in the Methodology Section. We determined the number of colonies in the absence (control) and presence of a set of rifampicin doses for 14 h, as explained in the Methodology Section. Figure 2b shows the oscillatory behavior of E. coli cells in the early phase of the rifampicin treatment. As discussed in the Mathematical Model Section, we used eq 1 to model this oscillatory behavior of E. coli cells. The results of the parameter search for the proposed model for the different concentrations of rifampicin are given in Table 1. The obtained first-order polynomial fit, which presents the linear relationship between the model parameters and the concentration levels of rifampicin with a 95% prediction interval, is shown in Figure 4. The corresponding linear relationships between the concentration level of rifampicin (C, μg/mL) and the model parameters are given in eqs 3−7.

Rifampicin has a rapid killing rate in the first 6 h of treatment. To benefit more from rifampicin, its current standard dosage might be reconsidered and deeply explored by using both computational tools and experimental or clinical studies. Our study can only give rise to various questions about how the initial killing kinetics of rifampicin influence its killing profile and whether, using mathematical models, we can predict its dose-dependent killing pattern. We believe that an improved standard dosage of rifampicin will increase cure rates, lower relapse rates, and, as a consequence, decrease the mortality rates of several infectious diseases. 23,34 In the literature, most mathematical models have focused on modeling steady-state antibiotic killing patterns of antimicrobials, using exponentially changing functions. Most of these models rely on conventional CFU data with a large sampling time (time points every 24 h) that is inadequate to observe the oscillatory behavior of antibiotic-treated bacteria. 34 Here, our focus was to enhance the mathematical models to better reveal the response of E. coli cells to rifampicin exposure in the early phase.
The initial growth of bacteria in the presence of rifampicin might contribute to the appearance of antibiotic-resistant cells or to the relapse of cells upon antibiotic treatment. Here, we obtained the growth parameters of E. coli cells under regular growth conditions (control, without antibiotics) and in the presence of rifampicin exposure at 4, 16, and 32 μg/mL. In our experiments, the killing profile of rifampicin was biphasic; initially, cells grew more than they were killed. Thereafter, rapid killing was followed by the growth of the cells in the presence of rifampicin (Figure 2). The MIC value of rifampicin for E. coli was reported as 0−12 μg/mL in the literature. 34 First, we performed OD measurements of E. coli cells in the absence and presence of 4, 16, and 32 μg/mL concentrations of rifampicin with a 2-h sampling interval for 14 h. We obtained a consistent biphasic profile of rifampicin killing in Figure 2a. To confirm this, we performed a CFU assay using 4, 16, and 32 μg/mL concentrations of rifampicin (Figure 2b). Contrary to the OD measurements, we obtained regrowth of E. coli cells in the presence of 4 μg/mL rifampicin after 6 h. Besides, the decline of CFU at the 32 μg/mL concentration of rifampicin was steeper.

Since bacterial debris might be optically detected and contribute to the density measurements of the cells, we used the data generated by the CFU assay for the mathematical modeling. We applied the nonlinear least-squares algorithm in MATLAB to obtain the model parameters. We generated the mathematical model in eq 1 with the model parameters listed in Table 1. Figure 3 demonstrates the numerical and experimental data of the growth and killing kinetics when we simulate rifampicin exposure. Figure 4 displays the first-order polynomial fit for these models with 95% prediction intervals. Figure 1 shows that the oscillatory behavior of the rifampicin-treated E. coli population can be described using eq 1. The results presented here might raise more questions about the rifampicin killing profile, such as the underlying reasons for the increased and then decreased colony counts in the initial period of rifampicin treatment. Mostly, the sampling time of CFU assays in the literature is inadequate to exhibit the oscillatory behavior of the antibiotic killing phase, which might contribute to the conferral of antibiotic resistance.

CONCLUSIONS
Kinetics of antimicrobial actions are generally used to evaluate and compare new drugs and to study the differences and changes in the antimicrobial susceptibilities of bacterial populations. In our experiments, the killing profile of rifampicin was biphasic: initially, cells were growing more than being killed. Subsequently, rapid killing was followed by the growth of cells in the presence of rifampicin. To model the killing behavior of the cells at various rifampicin concentrations, we proposed to use the damped harmonic oscillator's equation of motion and obtained the model parameters by applying a nonlinear curve fitting solver in the MATLAB computing environment. The proposed mathematical model presents a high fitting accuracy between the numerical results and the time-killing curve data of E. coli cells under rifampicin exposure.

Our future work aims to enhance mathematical models for a unified description of E. coli cell survival with varying rifampicin concentrations, followed by the development of an open-source modeling library predicting dose-dependent antibiotic killing across diverse bacterial species for different types of antibiotics.
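As a concrete companion to the fitting procedure above, the following minimal Python sketch mirrors the MATLAB lsqcurvefit workflow with SciPy's curve_fit. It is an illustration under stated assumptions: eq 1 is taken in the cosine form reconstructed above, the data are synthetic stand-ins for normalized CFU counts (seeded with the default parameter values quoted for Figure 1), and the initial guess theta0 is ours, not the paper's.

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_oscillator(t, y1, y2, y3, y4, y5):
    """Eq 1: y1 = final value, y2 = initial amplitude,
    y3 = damping, y4 = frequency, y5 = phase shift."""
    return y1 + y2 * np.exp(-y3 * t) * np.cos(y4 * t + y5)

# Synthetic stand-in for normalized CFU data over a 14-h kill curve,
# generated with the default parameter values from the Figure 1 caption
# (Y2 = 20, Y3 = 0.15, Y4 = 0.8, Y5 = 1); Y1 = 60 is an arbitrary choice.
t = np.linspace(0.0, 14.0, 29)
true_theta = (60.0, 20.0, 0.15, 0.8, 1.0)
rng = np.random.default_rng(0)
y = damped_oscillator(t, *true_theta) + rng.normal(0.0, 1.0, t.size)

# Nonlinear least squares (eq 2), analogous to MATLAB's lsqcurvefit.
theta0 = (50.0, 15.0, 0.1, 1.0, 0.5)  # assumed initial guess
theta, _ = curve_fit(damped_oscillator, t, y, p0=theta0)
print("fitted [Y1, Y2, Y3, Y4, Y5]:", np.round(theta, 3))
```

With the actual measurements, t and y would be the sampling times and the normalized colony counts from the CFU assay, fitted separately for each rifampicin concentration as in Table 1.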
Figure 2. Optical density measurements and percentage of survival of E. coli cells. (a) Optical density measurements at the wavelength of 600 nm. (b) CFU killing assay of E. coli culture in the absence and presence of rifampicin at 4, 16, and 32 μg/mL for 14 h. The symbols represent the average values, and the vertical bars at each data point indicate the standard deviations obtained from three separate experiments.

Figure 3. Curve fitting results for the response of E. coli cells to rifampicin: (a) in the absence of rifampicin, and at (b) 4 μg/mL, (c) 16 μg/mL, and (d) 32 μg/mL rifampicin concentrations. The bacterial counts are plotted against time (hours). At time zero, the number of CFU is 100 under conditions (a−d). The model parameters are listed as θ = [Y1, Y2, Y3, Y4, Y5]. The solid black line shows the fitted curve of the model. Exp is the abbreviation for experiment. Three independent experiments were performed.

Figure 4. First-order polynomial fit for the model parameters. Change of the (a) Y1, (b) Y2, (c) Y3, (d) Y4, and (e) Y5 values according to rifampicin concentrations. The black dots represent the sample points; the black solid line depicts the first-order polynomial fit, and the red dashed lines show the 95% prediction intervals.
Severe Cutaneous Adverse Reaction Caused by Carbamazepine and Levofloxacin After Varicella Zoster Virus Infection

Abstract
Severe cutaneous adverse reactions (SCARs) to drugs are associated with morbidity, mortality, healthcare costs, and challenges in drug development. It is important to identify the SCAR type early by using strict diagnostic criteria because the different types may require different treatments, follow-ups, and short- or long-term prognoses. A 68-year-old woman admitted to our hospital presented with fever and rashes for 10 days. This case exhibited many features that suggested acute generalized exanthematous pustulosis (AGEP). However, the course of treatment and the verified clinical features led to a diagnosis of AGEP and drug rash with eosinophilia and systemic symptoms (DRESS) syndrome induced by carbamazepine and levofloxacin after a herpes zoster infection. AGEP combined with DRESS syndrome is a complicated and rare drug-induced dermatological eruption that follows a course similar to DRESS syndrome and more recalcitrant than the course seen with typical AGEP. The associated factors for the SCARs in our patient included age, history of allergy, viral infection, and drugs interacting with specific HLA loci. Improving our understanding of these factors can improve the treatment and prevention of SCARs in these patients.

Introduction
Adverse cutaneous reactions to drugs are common and affect 2-3% of all hospitalized patients. About 2% of these are classified as severe cutaneous adverse reactions (SCARs). 1 SCARs to drugs are clinically heterogeneous with atypical symptoms in their initial stage. A large area of skin lesions may lead to skin damage and/or infection and can involve various visceral symptoms. 2 Therefore, it is important to pay attention to cutaneous and internal changes in response to SCARs because early identification can improve support and treatment of the affected patients. 3 Acute generalized exanthematous pustulosis (AGEP) is a less severe SCAR that is characterized by rapidly arising nonfollicular pustules caused by drugs such as antibiotics and is considered a self-limiting disease with a good prognosis. 4,5 More severe SCARs include Stevens-Johnson syndrome (SJS), toxic epidermal necrolysis (TEN), and drug reaction with eosinophilia and systemic symptoms (DRESS) syndrome, all of which are potentially life threatening. 2 Coexistence of AGEP and DRESS syndrome in the same patient has rarely been reported, except for a few cases attributed to several antibiotics, 6,7 COVID-19 vaccines, 8,9 and other causative drugs 10-13 (Table 1). Here, we report a case of AGEP overlapping with DRESS syndrome caused by carbamazepine (CBZ) and levofloxacin treatment after varicella zoster virus infection.

Case Presentation
A 68-year-old woman was admitted to our hospital with fever and rashes that had been present for 10 days; the rash spread rapidly and diffusely 9 days ago. Morphological features included oral and vulval mucosal inflammation, erosion with pus on the mucosal surface, diffuse pruritic erythematous eruptions with nonfollicular pinhead-sized pustules on the neck, trunk, and both upper extremities, along with purpura on the lower extremities (Figure 1). Three months before admission, she had a herpes zoster infection. One month after the infection, she developed post-herpetic neuralgia, for which she took CBZ for 1 week to treat the pain. Her previous medical history included allergic reactions to rifampin and levofloxacin 6 years ago.
Laboratory analysis revealed leukocytosis (46.35×10^9/L), neutrophilia (26.21×10^9/L), achroocytosis (12.25×10^9/L), eosinophilia (1.82×10^9/L), anti-RO/SSA52 (+++), anti-RO/SSA60 (+++), a positive Epstein-Barr virus (EBV) DNA load (832 copies/mL), and a total immunoglobulin E (IgE) >1000. Her blood tests showed normal functioning of her kidney, liver, and heart. A chest computerized tomography scan showed multiple enlarged lymph nodes in the mediastinum, bimaxillary region, and cardio-diaphragmatic angle, as well as interstitial lung disease. An ultrasound showed splenomegaly. A skin biopsy revealed subcorneal pustules with epidermal spongiosis, perivascular inflammation of lymphocytes and eosinophils in the upper dermis, and papillary edema (Figure 2). Based on the clinical and histopathological findings, the patient was first diagnosed with AGEP. With an initial dose of methylprednisolone (80 mg/d), the pustules desquamated after three days, and most of the rashes disappeared after 2 weeks. However, red itching papules beside the flexures erupted again when the dose was decreased to 32 mg/day.

Based on the recurring rash and laboratory examination, we considered a diagnosis of DRESS syndrome. To address this, we increased her dose of methylprednisolone back to 80 mg/day together with intravenous immunoglobulin (IVIG) for approximately 10 days, after which the rashes and itching improved. We proceeded cautiously with the withdrawal of the glucocorticoid according to our clinical observations (Figure 3). She was discharged from our hospital on the 52nd day on a regimen of oral methylprednisolone 50 mg/day. After 18 months of treatment, we reduced her corticosteroid dose to 10 mg/day. According to the RegiSCAR criteria and the EuroSCAR AGEP validation score, 4,8 the clinical features of our patient could be classified as "probable" for both AGEP and DRESS syndrome. Therefore, our final diagnosis was AGEP that overlapped with DRESS syndrome.

Discussion
In this case, our initial diagnosis of AGEP was based on her medical history of allergy to levofloxacin along with her presentation of rash, hyperleukocytosis, and neutrophilia, and the almost resolved lesions after the 15-day treatment with methylprednisolone. However, the subsequent disease course with papules and itching was unexpected. Combined with the history of oral CBZ for post-herpetic neuralgia following herpes zoster infection, the morphological features, and the positivity for EBV, we considered the possibility of DRESS syndrome. In support of this, the EuroSCAR AGEP and RegiSCAR DRESS scores for this case were 6 and 5, respectively, which classified the condition as "probable" for both AGEP 4 and DRESS syndrome. 14 Overlapping SCARs are defined as cases that fulfill the criteria for a definite or probable diagnosis of at least two SCARs according to these scoring systems. 13 Therefore, we diagnosed this patient with AGEP overlapping with DRESS syndrome caused by CBZ and levofloxacin. Despite the limited possibility of internal organ involvement in AGEP, there are some reports showing lymph node enlargement, slight reduction in creatinine clearance, or slight elevation in liver enzyme levels. 4,15 DRESS syndrome is a severe systemic drug eruption commonly associated with antibiotics and other drugs, 1,16 such as CBZ, which is the most common cause of this condition. 17 Patients with DRESS syndrome typically present with fever, facial edema, skin rashes, eosinophilia, and internal organ involvement, which can be life threatening and carry a non-negligible risk of severe sequelae. 1,16-18
The skin manifestations of DRESS syndrome can be polymorphous, the most common of which is a morbilliform rash characterized by a diffuse, pruritic, macular exanthema. 18 The pathogenesis of SCARs is still not completely understood. Recent studies have found a link between SCARs and genetic biomarkers, particularly genetic variants of human leukocyte antigens (HLA), T cell receptors (TCR), drug-metabolizing enzymes, and drug transporters. 19 Commonly, culprit drugs are found to be phenotype- and ethnicity-specific. CBZ is an aromatic antiepileptic drug and is a common culprit drug in the Chinese population associated with the HLA-B*15:02 and HLA-A*31:01 alleles. 19,20 HLA screening can be performed to prevent disease in susceptible patients and may help identify additional culprit drugs in the future. However, in our case, these tests could not be performed because of the patient's poor condition.

In our case, the two drugs commonly implicated in DRESS syndrome were levofloxacin, a fluoroquinolone antibiotic that can cause fever, and CBZ, which is used to treat herpes zoster neuralgia. 20 There have been various reports of SCARs, especially DRESS syndrome, being caused by antimicrobial agents, including vancomycin, β-lactams, fluoroquinolones, and other antibiotics, while the severity of the cases is controversial. 21 It seems that fluoroquinolone-induced SCARs are milder, with a shorter latency period, and can be controlled a few days after drug discontinuation without any other intervention. 21 Another potential contributing factor is the role of viral infection or reactivation in SCARs. Numerous studies demonstrate the reactivation of human herpes virus 6, EBV, and cytomegalovirus, or even sequential reactivation of several of these herpes family viruses, associated with both short- and long-term morbidity and mortality. 20,22 Consistent with this, our case had a history of varicella zoster virus infection and EBV duplication that may have been implicated in the pathophysiology of her disease. Although AGEP and DRESS syndrome have different clinical presentations, they share a common T-cell-mediated pathogenesis. 23 We assume that this mechanism, together with the EBV and herpes virus infection, allowed for the presentation of overlapping features of these two SCARs. The coexistence of AGEP and DRESS syndrome in the same patient is rare, as there are few reports of overlapping features in the literature.

Conclusions
We report a case of AGEP overlapping with DRESS syndrome that was aggravated after its initial control, likely caused by the patient's previous treatment with CBZ and levofloxacin. This case demonstrates the importance of reviewing the complete medication history of patients with suspected SCARs because the culprit drug may have been taken months before admission. It is also important to check the history of drug allergies and virus infections in patients with suspected DRESS syndrome because they may increase the severity of the disease. Early identification of factors involved in the pathogenesis of AGEP and DRESS syndrome is an important step in determining the appropriate management of the patient. Furthermore, identifying the different SCARs affecting the patient is also important to help determine the appropriate course of treatment, follow-up, and short- and long-term prognosis.
Multi-Objective Optimization of ReRAM Crossbars for Robust DNN Inferencing under Stochastic Noise

Abstract
Resistive random-access memory (ReRAM) is a promising technology for designing hardware accelerators for deep neural network (DNN) inferencing. However, stochastic noise in ReRAM crossbars can degrade the DNN inferencing accuracy. We propose the design and optimization of a high-performance, area- and energy-efficient ReRAM-based hardware accelerator to achieve robust DNN inferencing in the presence of stochastic noise. We make two key technical contributions. First, we propose a stochastic-noise-aware training method, referred to as ReSNA, to improve the accuracy of DNN inferencing on ReRAM crossbars with stochastic noise. Second, we propose an information-theoretic algorithm, referred to as CF-MESMO, to identify the Pareto set of solutions to trade-off multiple objectives, including inferencing accuracy, area overhead, execution time, and energy consumption. The main challenge in this context is that executing the ReSNA method to evaluate each candidate ReRAM design is prohibitive. To address this challenge, we utilize the continuous-fidelity evaluation of ReRAM designs associated with prohibitively high computation cost by varying the number of training epochs to trade-off accuracy and cost. CF-MESMO iteratively selects the candidate ReRAM design and fidelity pair that maximizes the information gained per unit computation cost about the optimal Pareto front. Our experiments on benchmark DNNs show that the proposed algorithms efficiently uncover high-quality Pareto fronts. On average, ReSNA achieves 2.57% inferencing accuracy improvement for ResNet20 on the CIFAR-10 dataset with respect to the baseline configuration. Moreover, the CF-MESMO algorithm achieves 90.91% reduction in computation cost compared to the popular multi-objective optimization algorithm NSGA-II to reach the best solution from NSGA-II.

I. INTRODUCTION
Resistive random access memory (ReRAM) has emerged as a promising nonvolatile memory technology due to its multi-level cells, small cell size, and low access time and energy consumption. Prior work has shown that the crossbar structure of ReRAM arrays can efficiently execute matrix-vector multiplication [1], [2], the predominant computational kernel associated with deep neural networks (DNNs). ReRAM-based accelerators for fast and efficient DNN training and inferencing have been extensively studied [3]-[8]. However, a key challenge in executing DNN inferencing [9]-[11] on ReRAM-based architectures arises due to nonidealities of ReRAM devices, which can degrade the accuracy of inferencing. Since DNN inferencing involves a sequence of forward computations over DNN layers, errors due to device nonidealities can propagate and accumulate, resulting in incorrect predictions. The nonidealities of ReRAM crossbars can be classified into two broad categories. The first category includes device defects (e.g., stuck-at-high or stuck-at-low resistance [12]) and device reliability issues (e.g., retention failure [13] and resistance drift [14]) that are mostly deterministic in nature and have been addressed in prior work [15]-[20]. The second category includes stochastic noise in ReRAM devices, which includes thermal noise [21], shot noise [22], random telegraph noise (RTN) [23], and programming noise [24]. These nonidealities have not been studied for DNN inferencing in prior work.
This paper studies the impact of stochastic noise on DNN inferencing and shows that there is a significant degradation in inferencing accuracy due to the high amplitude of noise and the reduced noise margin of high-resolution ReRAM cells. Prior algorithmic solutions [25], [26] mitigate the accuracy degradation due to programming variations, but they are not effective in the presence of stochastic noise [27]. To overcome this challenge, we propose a ReRAM-based Stochastic-Noise-Aware DNN training method (ReSNA) that considers both hardware design configurations and stochastic noise. For DNN inferencing on ReRAM using ReSNA, key efficiency metrics include hardware area, execution time (latency), and energy consumption. Therefore, we need to solve a complex multi-objective optimization (MOO) problem to achieve robust DNN inferencing on ReRAM crossbars. The input space consists of different ReRAM crossbar configurations, e.g., ReRAM cell resolution, crossbar size, temperature, and operational frequency. The output space consists of the accuracy of DNN inferencing and hardware efficiency metrics, e.g., hardware area, execution time, and energy consumption. The main challenge in solving this optimization problem is that the input space over ReRAM configurations contains a large number (up to 10^7) of available data points, and evaluation of each candidate ReRAM configuration involves executing the ReSNA method, which is computationally prohibitive (e.g., it takes nearly 30 GPU days to run the training on the crossbar simulator [27] for 100 configurations). Our goal is to efficiently uncover the Pareto-optimal set of solutions representing the best possible trade-offs among multiple objectives. To solve this challenging MOO problem, we propose an information-theoretic algorithm referred to as Continuous-Fidelity Max-value Entropy Search for Multi-Objective Optimization (CF-MESMO). We formulate the continuous-fidelity evaluation by varying the number of training epochs for ReSNA to establish an appropriate trade-off between computation cost and accuracy. In each MOO iteration, the candidate ReRAM design and fidelity (number of iterations of ReSNA training) pair is selected based on the maximization of the information gained per unit computation cost about the optimal Pareto front. We perform comprehensive experiments on benchmark DNNs and datasets to evaluate the proposed algorithms. Our results show that ReSNA can significantly increase DNN inferencing accuracy in the presence of stochastic noise on ReRAM crossbars, and CF-MESMO can achieve faster convergence and efficiently uncover high-quality Pareto fronts when compared to prior methods, including NSGA-II [28] and a state-of-the-art single-fidelity multi-objective optimization method called MESMO [29].

The main contributions of this paper are as follows.
• A hardware-aware training method, referred to as ReSNA, to overcome stochastic noise and improve DNN inferencing accuracy.
• An efficient multi-objective optimization algorithm, referred to as CF-MESMO, to approximate optimal Pareto fronts in terms of inferencing accuracy and hardware efficiency.
• Experimental results on a diverse set of benchmark DNNs and datasets to demonstrate the effectiveness of ReSNA and CF-MESMO and their superiority over state-of-the-art methods.

The remainder of this paper is organized as follows. Section II discusses related prior work. Section III explains the problem setup, and Section IV highlights the impact of stochastic noise.
Section V presents the ReSNA approach, and Section VI presents the CF-MESMO algorithm. Section VII presents the experimental results. Section VIII concludes this paper.

II. RELATED PRIOR WORK
We review related prior work on two key aspects of this paper: mitigating device stochastic noise for DNN inferencing and multi-objective optimization for hardware design. There is limited prior work on mitigating the DNN inferencing accuracy loss due to stochastic noise. Yan et al. [30] proposed a closed-loop circuit that utilizes the inferencing results to stabilize the DNN weights, but the effectiveness of this method was demonstrated only on small DNNs. Long et al. [25] injected Gaussian noise during training to mimic programming noise, and Joshi et al. [26] incorporated device programming variation extracted from experiments during training. However, these methods only considered programming noise while neglecting the other types of noise. Importantly, all these prior methods overlooked the impact of hardware configurations, such as the crossbar size and the resolutions of the digital-to-analog converter (DAC) and the analog-to-digital converter (ADC). He et al. [27] investigated the integration of stochastic noise during the training process, but their method failed to reach the desired DNN inferencing accuracy. Consequently, they suggested lowering the operational frequency such that the noise amplitude is low. In contrast to prior work, we consider all four types of stochastic noise and propose a ReRAM hardware-aware training method to increase the inferencing accuracy even under high operational frequencies.

Considering both inferencing accuracy and hardware efficiency, we have a complex MOO problem for ReRAM-based hardware design. Candidate MOO algorithms for ReRAM design optimization can be classified into two broad categories. The first category of MOO algorithms has objective functions that are cheap to evaluate. AMOSA [31] and NSGA-II [28] are two popular evolutionary algorithms that belong to this category. NSGA-II evaluates the objective functions for various combinations of input variables and organizes the candidate inputs into a hierarchy of subgroups based on the ordering of Pareto dominance. This method takes advantage of the similarity between members of each subgroup and the Pareto dominance and moves towards the promising area of the input space. Unfortunately, NSGA-II requires the evaluation of a large number of candidate inputs and is not suitable for our problem setting, where objectives are expensive. Second, for expensive objective functions, Bayesian optimization (BO) [32] is an effective framework. The key idea is to build a cheap statistical model from past function evaluations and use it to intelligently explore the input space for finding (near-)optimal solutions. Much of the prior work on BO is for single-objective optimization. There is limited work on multi-objective BO [33]-[35], and MESMO [29] is the state-of-the-art algorithm. In contrast to MESMO, which evaluates every candidate design at the highest fidelity, our CF-MESMO algorithm exploits cheaper continuous-fidelity evaluations.

III. BACKGROUND AND PROBLEM SETUP
In this section, we first explain how a trained DNN model is deployed on the ReRAM crossbars to perform inferencing. Subsequently, we describe the MOO problem to perform robust DNN inferencing using ReSNA. Table I summarizes the notation associated with the relevant parameters.

A. DNN Inferencing on ReRAM Crossbars
(1) Software training. For a given DNN architecture and training dataset, we first perform the training in software.
The quantization-aware training technique [36]-[38] can be used to quantize the activations and weights.
(2) Deterministic mapping. The objective of this step is to map the weight matrix of the DNN, W, and the set of activations, A, to the conductances of the ReRAM cells, G_quan, and the crossbar input voltages, V_quan, according to the resolutions of the ReRAM devices and DACs, respectively. When the ReRAM cell resolution is less than the number of bits used in quantization, i.e., Res_cell < Bit_quan, then Bit_quan/Res_cell cells are used to represent one weight. A kernel in a convolutional (Conv) layer needs to be first unrolled and mapped to a matrix. As kernels are reused many times during convolution, the kernel in a Conv layer is typically duplicated and deployed on multiple crossbars. Therefore, multiple inputs can be processed simultaneously, increasing parallelism and improving throughput [5]. For a fully-connected (FC) layer, each weight is associated with only one input neuron. Hence, duplication is not necessary.
(3) Stochastic noise injection. This step mimics the influence of stochastic noise on the conductance values. The noise is modeled using probability distributions [22], [27]. Here G_noisy denotes the cell conductance in the presence of thermal noise, shot noise, RTN, and programming noise together. Section IV-A provides more details.
(4) ReRAM-based computation. This process accumulates the results obtained from the ReRAM crossbars and employs ADCs to generate the outputs. Here Y_noisy denotes the final output of DNN inferencing.
Note that the last three steps together emulate the deployment of DNN inferencing on ReRAM-based hardware. The deterministic mapping needs to be carried out only once. Typically, we need to perform the third and fourth steps multiple times (e.g., ten times) to mimic multiple independent ReRAM deployments on the same device. Based on the multiple runs, we obtain an estimate of DNN inferencing accuracy.
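As a concrete illustration of the deterministic mapping in step (2), the following Python sketch splits one Bit_quan-bit weight code across Bit_quan/Res_cell cells. The linear code-to-conductance mapping and all parameter names here are our own illustrative assumptions, not the paper's implementation (signed weights would in practice use a positive/negative crossbar pair):

```python
import numpy as np

def map_weight_to_cells(w_quant, bit_quan=8, res_cell=2,
                        g_min=1e-6, g_max=1e-4):
    """Split one bit_quan-bit unsigned weight code across bit_quan/res_cell
    ReRAM cells and map each cell code linearly onto [g_min, g_max]."""
    n_cells = bit_quan // res_cell
    levels = 2 ** res_cell
    # Cell i stores the i-th res_cell-bit slice (least significant first);
    # its significance when reassembling the weight is levels**i.
    codes = [(w_quant >> (res_cell * i)) & (levels - 1) for i in range(n_cells)]
    conductances = [g_min + c * (g_max - g_min) / (levels - 1) for c in codes]
    return codes, conductances

codes, g = map_weight_to_cells(w_quant=0b10110101)
print(codes)  # four 2-bit slices of the 8-bit weight code
```

With Res_cell = Bit_quan (e.g., an 8-bit cell), n_cells collapses to 1, which is exactly the high-resolution, reduced-noise-margin regime analyzed in Section IV.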
B. MOO Problem for ReRAM-Based Designs
Our goal is to find ReRAM-based designs with suitable DNN weights to optimize multiple objectives, including inferencing accuracy, hardware area, execution time, and energy consumption.

ReRAM design space. The ReRAM design configuration influences the output objectives. For example, the parameters Bit_quan, Res_DAC, and Res_ADC listed in Table I influence the data precision and the overall inferencing accuracy. For area overhead, Bit_quan/Res_cell is proportional to the number of cells used in the ReRAM-based design to represent the weights, and Xbar_size determines the subarray unit size. For execution time, note that Freq is inversely proportional to the clock cycle. The read voltage V_r and the ReRAM cell resistance range [R_on, R_off] affect the ReRAM crossbar energy consumption.

MOO formulation. We formulate the MOO problem for robust inferencing on hardware-efficient ReRAM crossbars with stochastic noise as follows. Our input space consists of two parts: the ReRAM design space and the DNN weights. Let X ⊆ R^d be the ReRAM design configuration space, which includes the design variables explained above and also shown in Fig. 1. Each design variable can take values from a bounded candidate set. We need a candidate pair consisting of a ReRAM design configuration (x ∈ X) and DNN weights to be able to evaluate all the output objectives. Without loss of generality, we consider maximizing four objective functions: DNN inferencing accuracy, hardware area, execution time, and energy consumption, denoted by f_1(x), f_2(x), f_3(x), and f_4(x), respectively. For each candidate ReRAM design, we execute ReSNA to obtain the DNN weights that give rise to maximum accuracy. Subsequently, we evaluate the objective functions f_1(x), f_2(x), f_3(x), f_4(x). A design configuration x is said to Pareto-dominate another design x′ if f_i(x) ≥ f_i(x′) ∀i and there exists some j ∈ {1, 2, 3, 4} such that f_j(x) > f_j(x′). An optimal solution of a MOO problem is a set of designs X* such that no design x′ ∈ X \ X* Pareto-dominates a design x ∈ X*. The solution set X* is called the Pareto set, and the corresponding set of objective function values is called the Pareto front. Our goal is to achieve a high-quality Pareto front for hardware design while minimizing the total computation cost of function evaluations.

IV. UNDERSTANDING THE IMPACT OF STOCHASTIC NOISE
In this section, we first discuss the modeling of ReRAM stochastic noise. Next, we demonstrate the impact of stochastic noise on DNN inferencing for specific ReRAM design configurations. Finally, we show that the naïve approach of adding random Gaussian noise cannot improve the robustness of DNN inferencing in the presence of stochastic noise.

A. Modeling of ReRAM Stochastic Noise
Thermal noise is generated due to the thermal agitation of the charged carriers inside the conductor [39]. Shot noise is an electronic noise that originates from the discrete electrons in the current flow. The thermal and shot noise directly affect the current through a device. We convert the change in current to the equivalent conductance change and model these two noise sources using Gaussian distributions [22]:

ΔG_thermal = N(0, sqrt(4 K_B T G · Freq) / V),   ΔG_shot = N(0, sqrt(2 q G V · Freq) / V),

where G and V denote the conductance and terminal voltage, respectively. As shown in Table I, Freq denotes the operational frequency, and T denotes the temperature. K_B denotes the Boltzmann constant, and q denotes the electron charge. Random telegraph noise (RTN) is generated in semiconductors and ultra-thin oxide films. It can be modeled as a Poisson process [23], with the parameters for the RTN amplitude (ΔG_rtn) reported in [27]. Programming noise is introduced by the programming variation when values are written to a ReRAM device. The programming noise can be estimated using a Gaussian distribution ΔG_prog = N(0, σ_prog·G) with standard deviation σ_prog = 0.0658, according to the experimental study reported in [40].

Relative noise (ΔG/G) measures the noise amplitude divided by the absolute conductance. Thermal and shot noise have similar patterns: the relative noise is the largest at the smallest conductance level; it then decreases steadily as G increases. The relative RTN presents a sharp peak at the first conductance level, while the other conductance levels have much smaller relative RTN. The programming noise increases with the conductance level in absolute value.
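To make the noise injection of step (3) concrete, the sketch below samples a noisy conductance per the models above. The thermal and shot terms follow the standard Johnson-Nyquist and shot-noise forms assumed in our reconstruction of the equations; the RTN term is a simplified two-state placeholder, since the actual amplitudes come from [27]:

```python
import numpy as np

KB = 1.380649e-23      # Boltzmann constant (J/K)
Q = 1.602176634e-19    # electron charge (C)

def sample_noisy_conductance(g, v=0.2, freq=500e6, temp=350.0,
                             sigma_prog=0.0658,
                             rng=np.random.default_rng(0)):
    """One draw of G_noisy for a cell of conductance g (S) at read voltage v.
    Thermal/shot std devs follow the forms assumed above; the RTN amplitude
    (1% of g) is a placeholder for the device-measured values of [27]."""
    dg_thermal = rng.normal(0.0, np.sqrt(4 * KB * temp * g * freq) / v)
    dg_shot = rng.normal(0.0, np.sqrt(2 * Q * g * v * freq) / v)
    dg_prog = rng.normal(0.0, sigma_prog * g)
    dg_rtn = 0.01 * g if rng.poisson(0.5) > 0 else 0.0
    return g + dg_thermal + dg_shot + dg_rtn + dg_prog

print(sample_noisy_conductance(g=1e-5))
```

Evaluating the relative noise sqrt-terms at small g reproduces the qualitative trend stated above: the thermal and shot contributions shrink as G grows, and both scale with sqrt(Freq), which is why high operational frequencies are the hard case.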
B. Impact of ReRAM Stochastic Noise
We observe from Fig. 2(c) that the overall amplitude is much higher than in each individual case, especially when G is small. Moreover, it is well known that the trained weights of DNNs are concentrated at small values [41]. Fig. 2(d) shows an example distribution of nonzero weights, extracted from the 19th layer of ResNet20 with 8-bit quantization. A large number of the weights in this layer are zero; these are not included in the figure. Note that at small values of G, the first three types of stochastic noise dominate. Specifically, the thermal and shot noise are sensitive to the operational frequency. Hence, it is challenging to incorporate high-frequency noise into training. Lowering the operational frequency and reducing the noise amplitude could be an option, as suggested in previous work [27]. However, such an approach would seriously constrain the use and potential of ReRAM-based hardware due to the exclusion of high operational frequencies and short execution times for DNN inferencing. Utilizing high-resolution cells is another challenge. For example, increasing the cell resolution from 2 to 8 bits can reduce the number of crossbar arrays by 75% (assuming all other settings are the same), leading to a smaller area and lower latency. However, the noise margin drops by 64× as the cell noise margin is inversely proportional to the number of conductance levels (i.e., 2^Res_cell).

C. Performance of Existing Training Approaches
We next evaluate the DNN inferencing accuracy of several previously proposed ReRAM hardware-aware training methods with ResNet20 and VGG13 and show the results in Fig. 3. Weights and activations are quantized to 8 bits. We consider two different ReRAM cell resolutions, 2 bits and 8 bits. Models are tested by including the stochastic noise of ReRAM-based hardware. The baseline configuration considers training with no noise. RGN denotes a naïve noise-aware approach, which injects random Gaussian noise into the system during training. The noise standard deviation is related to the maximum absolute weight value; more specifically, ΔG_RGN = N(0, 0.01 · max(G)). PROG considers only the programming noise ΔG_prog with σ_prog = 0.0658 [40] for training. We make the following observations from the results shown in Fig. 3. 1) The baseline system suffers from stochastic noise, resulting in poor inferencing accuracy. 2) Previous training methods, such as RGN and PROG, cannot guarantee the mitigation of DNN inferencing accuracy degradation due to stochastic noise. This result is mainly due to the mismatch between Gaussian noise and the actual device stochastic noise, as shown in Fig. 2(a). 3) Increasing the cell resolution from 2 to 8 bits exacerbates the degradation in DNN inferencing accuracy due to the reduced noise margin of the ReRAM devices.

V. RESNA: HARDWARE-AWARE TRAINING APPROACH
In this section, we describe the proposed ReRAM-based stochastic-noise-aware (ReSNA) training method that incorporates stochastic noise to improve DNN inferencing accuracy. We start with a pre-trained model to initialize noise-aware training. Batch normalization layers are included after each Conv layer [42], [43]. The computation during the training process considers the hardware configurations. In each iteration, a new set of emulated device noise is applied in the stochastic noise injection step. Thus, the loss calculated at the end of the forward pass reflects the influence of stochastic noise. During backpropagation, gradients of the trainable DNN weights are calculated with respect to the loss. We keep a copy of the noise-free weight values and perform the gradient updates on this copy. The quality of training degrades due to the distortion of the loss function induced by the variation of the weight parameters [44]. When the variation is large enough, the gradient update during the backpropagation step can deviate from the expected convergence path. The error due to stochastic noise can propagate and accumulate through the forward path, and similarly, the error in the gradient can propagate and accumulate through the backward path. Thus, the convergence of the loss function can be affected by the accumulation of the gradient error.
Moreover, our experiments show that Conv layers are less sensitive to device stochastic noise than FC layers. Let δ_c denote the ReRAM cell's stochastic noise. We assume that one cell is used to represent one weight for simplicity. During the ReRAM-based FC layer computation involving one weight a total of n times, the accumulation of the cell's stochastic noise can be approximated as n^{1/2}·δ_c (assuming a Gaussian distribution). As mentioned above, Conv kernels are typically duplicated to improve the throughput of the ReRAM-based hardware. These copies have identical weights, but the noise can be approximated to have independent statistical distributions. As the computation involving one weight n times will be distributed over multiple copies, the accumulation of these cells' stochastic noise can be reduced to n^{1/2}·δ_c/k. The value of k is related to the number of duplicate copies as well as the correlations between the stochastic noise of these devices. Thus, the device stochastic noise affects FC layers more significantly than Conv layers due to the duplication of Conv kernels. To overcome this challenge, we propose two techniques to improve the stability of DNN inferencing accuracy.

Applying smaller noise to FC layers. Since computing the loss function is critical for the backpropagation step and FC layers are more sensitive to the stochastic noise, we propose to lower the noise level on FC layers to improve the stability of inferencing. Table II compares the performance of ReSNA under different noise configurations. We consider the ResNet20 model with 8-bit weight and activation quantization, along with a crossbar size of 128×128, in this experiment. The temperature is set to 350 K, and the operational frequency is set to 500 MHz. Table II shows that without including any noise in training (Conv_ideal + FC_ideal), the DNN inferencing accuracy on this ReRAM design in the presence of stochastic noise is only 69.61%. Applying high-amplitude noise to the entire network (Conv_500MHz,350K + FC_500MHz,350K) makes the training unstable. Applying the device stochastic noise to Conv layers while appropriately reducing the noise level on FC layers (e.g., Conv_500MHz,350K + FC_100MHz,300K) helps in maintaining the stability of training and improving the DNN inferencing accuracy.

Majority vote in the classification layer. Alternatively, FC layers can be duplicated and deployed on different crossbar arrays. The stochastic noise of a single weight parameter across these copies is not independent but is less correlated than the variations due to accessing the same device multiple times. We can feed the same input to these duplicated layers and take the majority vote (Voting) to determine the predicted output to compensate for the FC layers' impact on the DNN inferencing accuracy. To minimize the area overhead, we apply Voting to only the classification layer (i.e., the last FC layer) with a small number of copies (e.g., 3). The results in Table II demonstrate that this technique can further improve DNN inferencing accuracy, even under the combination of high-amplitude noise and high-resolution cells. In summary, ReSNA incorporates stochastic noise and enhances stability, leading to better inferencing accuracy than the baseline and previous work, as shown in Fig. 3.
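A schematic of one ReSNA iteration, as we read the description above: noise is freshly sampled and applied to a throwaway copy of the weights, the loss is computed through the noisy forward pass, and the gradient update is applied to the noise-free master copy. This PyTorch-style sketch is our own paraphrase (the paper builds on the PytorX simulator); inject_noise and the FC noise scale of 0.2 are illustrative assumptions:

```python
import copy
import torch

def resna_step(model, batch, target, loss_fn, opt, inject_noise):
    """One schematic ReSNA iteration: perturb the weights with freshly
    sampled device noise, backprop through the noisy forward pass, and
    apply the gradient update to the restored noise-free weights."""
    clean_state = copy.deepcopy(model.state_dict())   # noise-free master copy
    with torch.no_grad():
        for name, p in model.named_parameters():
            scale = 0.2 if "fc" in name else 1.0      # smaller noise on FC layers
            p.add_(inject_noise(p) * scale)           # fresh noise every iteration
    loss = loss_fn(model(batch), target)
    opt.zero_grad()
    loss.backward()
    grads = {n: p.grad.clone() for n, p in model.named_parameters()}
    model.load_state_dict(clean_state)                # restore clean weights
    for n, p in model.named_parameters():
        p.grad = grads[n]
    opt.step()                                        # update the clean copy
    return loss.item()
```

Here inject_noise(p) stands in for a per-tensor draw of the combined thermal/shot/RTN/programming noise (as in the sampler sketched in Section IV-A); the Voting technique would additionally replicate the classification layer and take an argmax majority over the copies' outputs at inference time.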
VI. CF-MESMO: EFFICIENT MOO ALGORITHM
Evaluating DNN inferencing accuracy and hardware efficiency requires execution of the ReSNA training for each ReRAM design configuration; this step is, however, computationally expensive (e.g., taking over seven hours to execute 100 training epochs for one ReRAM design configuration for ResNet20 with CIFAR-10 data). To address this challenge, we propose an efficient information-theoretic MOO algorithm referred to as Continuous-Fidelity Max-value Entropy Search for Multi-objective Optimization (CF-MESMO). The two key innovations here are: first, we formulate continuous-fidelity evaluation of the objective functions by varying the number of training epochs of ReSNA. Second, we propose a principled approach to intelligently select the ReRAM configurations and the fidelity of ReSNA for evaluation, guided by learned statistical models.

A. MOO Formulation with Continuous-Fidelity Evaluations
For each candidate ReRAM design x ∈ X, we need to execute the ReSNA method to obtain the DNN weights. Subsequently, we evaluate the objective functions f_1(x) (inferencing accuracy), f_2(x) (hardware area), f_3(x) (execution time), and f_4(x) (energy consumption). The cost of evaluation of each ReRAM design configuration can be reduced by making an approximation of the objective function(s). We propose to vary the number of training epochs in ReSNA to trade-off the computation cost and accuracy of the objective function evaluations (i.e., continuous-fidelity evaluation): few training epochs correspond to lower-fidelity evaluation and vice versa. Therefore, we formulate this problem as a continuous-fidelity MOO problem where we have access to an alternative function g_j(x, z_j) for all j ∈ {1, 2, 3, 4}. The function g_j(x, z_j) can make cheaper approximations of f_j(x) by varying the fidelity variable z_j ∈ Z. Without loss of generality, let Z = [0, 1] be the fidelity space. Fidelities for each function vary in the amount of computational resources consumed and the accuracy of evaluation, where z_j = 0 and z_j = z_j^* = 1 refer to the lowest and highest fidelity, respectively. At the highest fidelity z_j^*, g_j(x, z_j^*) = f_j(x). Let C_j(x, z_j) be the cost of evaluating g_j(x, z_j), i.e., the runtime to perform training using ReSNA for the selected number of training epochs. Evaluation of each ReRAM design configuration x ∈ X with fidelity vector z = [z_1, z_2, z_3, z_4] generates the evaluation vector y ≡ [y_1, y_2, y_3, y_4], where y_j = g_j(x, z_j), and the normalized cost of evaluation is

C(x, z) = Σ_{j=1}^{4} C_j(x, z_j) / C_j(x, z_j^*).

Our goal is to approximate the Pareto set X* by minimizing the overall cost of evaluating candidate ReRAM designs.
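A small sketch of the continuous-fidelity bookkeeping defined above, under the assumption (ours) that ReSNA runtime scales roughly linearly with the epoch budget; epochs_for_fidelity and the linear cost model are illustrative stand-ins for measured runtimes:

```python
def epochs_for_fidelity(z, max_epochs=100):
    """Map a fidelity z in [0, 1] to a ReSNA epoch budget (z = 1 -> 100 epochs)."""
    return max(1, round(z * max_epochs))

def normalized_cost(z_vec, cost_fn):
    """C(x, z) = sum_j C_j(x, z_j) / C_j(x, z_j*): per-objective evaluation cost
    relative to its highest-fidelity cost. cost_fn(j, z) stands in for the
    measured runtime of evaluating objective j at fidelity z."""
    return sum(cost_fn(j, zj) / cost_fn(j, 1.0) for j, zj in enumerate(z_vec))

linear = lambda j, z: epochs_for_fidelity(z)          # assumed linear cost model
print(normalized_cost([0.2, 1.0, 1.0, 1.0], linear))  # -> 3.2
```

Evaluating the accuracy objective at z_1 = 0.2 (20 epochs) while keeping the cheap hardware metrics at full fidelity yields a normalized cost of 3.2 instead of 4.0, which is exactly the saving the acquisition function in Section VI-C trades against information gain.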
B. Overview of CF-MESMO
CF-MESMO learns a surrogate model using data obtained from past ReRAM design evaluations and then intelligently selects the next candidate ReRAM design and fidelity-of-ReSNA pair for evaluation by trading off exploration with exploitation to quickly direct the search towards Pareto-optimal solutions. We perform the following steps in each iteration of CF-MESMO, as shown in Algorithm 1: 1) Select the ReRAM design and fidelity of ReSNA for evaluation that maximizes the information gain per unit cost about the optimal Pareto front based on the current surrogate model. 2) Execute the hardware-aware training approach ReSNA to evaluate the objective functions with the selected ReRAM design and fidelity pair. 3) Employ the new training example in the form of ReRAM design configurations (i.e., input) and four objective function evaluations (i.e., output) to update the surrogate model. After convergence is achieved (i.e., the Pareto front solution does not change over several consecutive iterations), we compute the Pareto front from the aggregate set of objective evaluations and obtain the ReRAM design configurations and DNN weights corresponding to the Pareto front as the resulting solution.

Algorithm 1 (excerpt):
5: F*_s ← Solve cheap MOO over (f̃_1, ..., f̃_K)
6: Select ReRAM design and fidelity pair: (x_t, z_t) ← argmax_{x∈X, z∈Z} α_t(x, z, F*) [Equation (9)]
7: Perform ReSNA training of DNN π with ReRAM design and fidelity pair (x_t, z_t)
8: Evaluate objectives f_1, f_2, f_3, f_4 for the trained DNN on ReRAM design x_t
9: Update the total cost: C_t ← C_t + C(x_t, z_t)
10: Aggregate training data: D ← D ∪ {(x_t, y_t, z_t)}
11: Update surrogate statistical models GP_1, ..., GP_4
12: t ← t + 1
13: end
14: return Pareto set and Pareto front of objective functions f_1(x), ..., f_4(x)

Surrogate models for continuous fidelity. Surrogate models guide the selection of candidate ReRAM designs to quickly uncover high-quality Pareto fronts. Our training data D for the surrogate models after t iterations consists of t training examples of input-output pairs. We employ Gaussian processes (GPs) [45] as our choice of statistical model due to their superior uncertainty quantification ability. We learn four surrogate statistical models GP_1, ..., GP_4 from D, where each model GP_j corresponds to the j-th function g_j. Continuous-fidelity GPs (CF-GPs) are capable of modeling functions with continuous fidelities within a single model. Hence, we employ CF-GPs to build surrogate statistical models for each function [46]. A CF-GP is a random process defined over the input space and the fidelity space, characterized by a mean function μ : X × Z → R and a covariance or kernel function κ : (X × Z)^2 → R. We denote the posterior mean and standard deviation of g_j by μ_{g_j}(x, z_j) and σ_{g_j}(x, z_j). We denote the posterior mean and standard deviation of the highest-fidelity functions f_j(x) = g_j(x, z_j^*) by μ_{f_j}(x) = μ_{g_j}(x, z_j^*) and σ_{f_j}(x) = σ_{g_j}(x, z_j^*), respectively.

C. Selecting the ReRAM Design to Evaluate via Information Gain
The effectiveness of CF-MESMO critically depends on the reasoning mechanism used to select the candidate ReRAM design and fidelity-of-ReSNA pair for evaluation in each iteration. Therefore, we propose an information-theoretic approach to perform this selection. The key idea is to find the ReRAM design and fidelity pair {x_t, z_t} that maximizes the information gain (I) per unit cost about the Pareto front of the highest fidelities (denoted by F*), where {x_t, z_t} represents a candidate ReRAM design configuration x_t evaluated at fidelities z_t at iteration t. CF-MESMO performs the joint search over the input space X and the fidelity space Z:

(x_t, z_t) ← argmax_{x∈X, z∈Z} α_t(x, z), where α_t(x, z) = I({x, y, z}, F* | D) / C(x, z).   (2)

In this joint search, the computation cost C(x, z) is accounted for in Equation (2). The information gain in Equation (2) is the expected reduction in the entropy H(·) of the posterior distribution P(F* | D) due to the evaluation of the ReRAM design x at fidelity vector z.
According to the symmetric property, the information gain can be rewritten as follows:

I({x, y, z}, F* | D) = H(y | D, x, z) − E_{F*}[H(y | D, x, z, F*)]   (3)

The first term in Equation (3) is the entropy of a four-dimensional Gaussian distribution that can be computed as follows:

H(y | D, x, z) = 2 ln(2πe) + Σ_{j=1}^{4} ln(σ_{g_j}(x, z_j))   (4)

The second term in Equation (3) is an expectation over F* and can be approximated using Monte-Carlo sampling:

E_{F*}[H(y | D, x, z, F*)] ≈ (1/S) Σ_{s=1}^{S} H(y | D, x, z, F*_s)   (5)

where S denotes the number of samples, and F*_s denotes a sample Pareto front achieved over the highest-fidelity functions sampled from the surrogate models. To solve Equation (5), we provide solutions to construct the Pareto front samples F*_s and to compute the entropy of a given Pareto front sample F*_s.

Computation of Pareto front samples: We sample the highest-fidelity functions f̃_1, ..., f̃_4 from the posterior CF-GP models. Then, we solve a cheap MOO problem over the sampled functions with the NSGA-II algorithm [28] and compute the sample Pareto front F*_s.

Entropy computation for a given Pareto front sample: Let F*_s = {v^1, ..., v^l} be the sample Pareto front, where l denotes the size of the Pareto front and each element v^i = {v^i_1, ..., v^i_4} is evaluated at the sampled highest-fidelity functions. The following inequality holds for each component y_j of y in the entropy term H(y | D, x, z, F*_s):

y_j ≤ f_j^{*s}, where f_j^{*s} = max{v^1_j, ..., v^l_j}   (6)

Essentially, this inequality means that the j-th component of y is upper-bounded by the maximum of the j-th components of the sample Pareto front F*_s. The proof of Equation (6) falls into two cases. (For ease of notation, we drop the dependency on x and z; we use f_j to denote f_j(x) = g_j(x, z_j^*), the evaluation at the highest fidelity z_j^*, and y_j to denote g_j(x, z_j), the evaluation of g_j at a lower fidelity z_j ≠ z_j^*.) a) If y_j is evaluated at the highest fidelity (i.e., z_j = z_j^* and y_j = f_j), we prove by contradiction. Suppose there exists some component f_j of f such that f_j > f_j^{*s}. However, by definition, since no point dominates f in the j-th dimension, f is a non-dominated point. This results in f ∈ F*_s, which is a contradiction. Thus, Equation (6) holds. b) If y_j is evaluated at one of the lower fidelities (i.e., z_j ≠ z_j^*), we refer to the assumption that the value of an objective evaluated at lower fidelity is smaller than that evaluated at higher fidelity, i.e., y_j ≤ f_j ≤ f_j^{*s}. This assumption is true in our problem setting, where the DNN inferencing accuracy improves with more training epochs of ReSNA.

Following Equation (6) and the independence of the CF-GP models, we further decompose the entropy of a set of independent variables according to the entropy measure property [47]:

H(y | D, x, z, F*_s) ≤ Σ_{j=1}^{4} H(y_j | D, x, z_j, f_j^{*s})   (7)

Equation (7) requires the entropy computation of p(y_j | D, x, z_j, f_j^{*s}). This conditional distribution can be expressed as H(y_j | D, x, z_j, y_j ≤ f_j^{*s}). As Equation (6) states that y_j ≤ f_j^{*s} holds under all fidelities, the entropy of p(y_j | D, x, z_j, f_j^{*s}) can be approximated by the entropy of a truncated Gaussian distribution as:

H(y_j | D, x, z_j, f_j^{*s}) ≈ ln(sqrt(2πe) σ_{g_j}) + ln Φ(γ_s^{(g_j)}) − γ_s^{(g_j)} φ(γ_s^{(g_j)}) / (2 Φ(γ_s^{(g_j)}))   (8)

where γ_s^{(g_j)} = (f_j^{*s} − μ_{g_j}) / σ_{g_j}. Functions φ and Φ are the probability density function and the cumulative distribution function of the standard normal distribution, respectively. From Equations (4), (5), and (8), we get the expression shown below:

α_t(x, z) = (1 / C(x, z)) · (1/S) Σ_{s=1}^{S} Σ_{j=1}^{4} ( γ_s^{(g_j)} φ(γ_s^{(g_j)}) / (2 Φ(γ_s^{(g_j)})) − ln Φ(γ_s^{(g_j)}) )   (9)

Therefore, in Algorithm 1, we select the next ReRAM design and fidelity-of-ReSNA pair that maximizes the information gain per unit cost about the optimal Pareto front based on Equation (9).
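Putting Equations (8) and (9) together, the acquisition value for one candidate (x, z) reduces to a few vectorized operations over the S sampled Pareto maxima. The sketch below follows our reconstruction of Equation (9); in practice the posterior means/standard deviations and the sampled fronts would come from the CF-GP models:

```python
import numpy as np
from scipy.stats import norm

def acquisition(mu, sigma, pareto_max_samples, cost):
    """alpha_t(x, z) per Eq (9): average over S Pareto-front samples of the
    summed truncated-Gaussian entropy terms, divided by the evaluation cost.
    mu, sigma: posterior mean/std of each g_j at (x, z_j), shape (4,).
    pareto_max_samples: f_j^{*s} maxima of the s-th sampled front, shape (S, 4)."""
    gamma = (pareto_max_samples - mu) / sigma           # gamma_s^{(g_j)}
    phi, Phi = norm.pdf(gamma), np.clip(norm.cdf(gamma), 1e-12, None)
    info = gamma * phi / (2.0 * Phi) - np.log(Phi)      # per-sample, per-objective
    return info.sum(axis=1).mean() / cost               # average over S, per unit cost

mu, sigma = np.zeros(4), np.ones(4)
fronts = np.ones((10, 4))                               # 10 sampled Pareto maxima
print(acquisition(mu, sigma, fronts, cost=2.5))
```

The clip on Φ guards against log(0) when a sampled front maximum lies far below the posterior mean; this numerical safeguard is our own addition.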
VII. EXPERIMENTS AND RESULTS
In this section, we first explain the details of the experimental setup. Next, we evaluate the effectiveness of ReSNA in improving the inferencing accuracy. Finally, we show that CF-MESMO can achieve high-quality Pareto fronts for DNN inferencing on ReRAM crossbars, and we analyze the Pareto sets for different DNN models.

A. Experimental Setup
We evaluate ReSNA with five different DNNs (ResNet20, ResNet32, ResNet44 [48], VGG11, and VGG13 [49]) on the CIFAR-10 dataset [50]. The CIFAR-10 dataset contains 50,000 training images and 10,000 testing images, which belong to 10 classes. Furthermore, to validate the scalability of our method, we also evaluate the performance of ResNet18 [48] using the CIFAR-100 dataset [50]. The number of training and testing images in CIFAR-100 is the same as in CIFAR-10, but these images belong to 100 classes. The image size is 28 × 28 × 3, and the training and testing batch size is 64. Table III(a) summarizes the deep neural network configurations, including the numbers of channels in the Conv layers, the inferencing accuracy with unquantized weights and activations, and the inferencing accuracy for 8-bit weights and activations. Note that testing on diverse DNNs is more important for demonstrating the effectiveness of our approach. Hence, due to space constraints, we provide results on a limited set of datasets, noting that our methodology and findings are general.

We implement the ReSNA method on ReRAM crossbars with stochastic noise using the PytorX simulator [27]. ReSNA uses stochastic gradient descent with a learning rate of 0.001 and a momentum of 0.9. The maximum number of training epochs is 100. For ResNet18, we decay the learning rate by 0.2 every 20 epochs. Each inferencing test consists of 10 independent runs with stochastic noise, and we report the average inferencing accuracy. All the training and inferencing are conducted on an NVIDIA Titan RTX GPU with a memory of 24 GB and a memory bandwidth of 672 GB/s. We use the evaluation framework of [51] along with the 32 nm technology node parameters to evaluate the hardware area, execution time, and energy consumption. The ReRAM crossbar and peripheral configurations are adopted from [52]. For ReSNA with Voting, we duplicate the kernels of the classification layer. For the MOO problem, we run CF-MESMO for a maximum of 100 iterations with 10 available fidelity selections. The baselines for evaluating the efficiency of CF-MESMO are random search and NSGA-II. We utilize the NSGA-II implementation from the Platypus Python library.

B. ReSNA Results
Inferencing accuracy. We show representative results for ReSNA inferencing accuracy with multiple temperature and frequency settings with respect to the baseline. Recall that the baseline configuration considers training with no noise and performs inferencing in the presence of stochastic noise. Fig. 4(a)-(c) show the inferencing accuracy for ResNet20 with 8-bit cell resolution and 64×64 crossbars. On average, ReSNA without Voting outperforms the baseline by 1.62% over all the test conditions. ReSNA with Voting increases the overall inferencing accuracy by 2.57%. Considering the extreme design configuration, i.e., at 1000 MHz and 400 K, ReSNA with Voting outperforms the baseline by 5.47%. Note that this extreme case assumes that, in the future, ReRAM-based hardware will run at this high frequency. We also validate the performance of the ReSNA method with ResNet18 on CIFAR-100. With a frequency of 500 MHz and a temperature of 350 K, ReSNA achieves 70.80% inferencing accuracy compared with the baseline inferencing accuracy of 69.37%.
With a frequency of 1000 MHz and a temperature of 350 K, ReSNA achieves 68.23% inferencing accuracy compared with the baseline inferencing accuracy of 64.45%. In summary, ReSNA can achieve considerable inferencing accuracy improvements under stochastic noise with different ReRAM design configurations, DNNs, and datasets.

Design trade-offs considering different objectives. We further explore the design trade-offs considering the inferencing accuracy, hardware area, execution time, and energy consumption. As we have explored the effects of frequency and temperature on inferencing accuracy in the previous analysis, we focus on cell resolution and crossbar size in this discussion. Fig. 5 shows the impact of the cell resolution and the crossbar size on ResNet20 at 500 MHz and 350 K. Fig. 5(a) shows the inferencing accuracy using the ReSNA method. When the other settings remain the same, the inferencing accuracy steadily increases as the cell resolution is reduced, due to the improved noise margin. As discussed in Section III, the ReRAM crossbar array outputs are accumulated across the columns, and hence the crossbar size affects the noise accumulation. Therefore, a large crossbar with high cell resolution is not an optimal design choice from the inferencing accuracy perspective. From the area perspective, the 32×32 crossbar with 8-bit cell resolution is the best, while the 64×64 crossbar has the least area when the cell resolution is 4 bits. The area evaluation reveals a high correlation between the cell resolution and the crossbar size. Fig. 5(c) shows that the minimum latency is achieved by the 64×64 crossbar with a cell resolution of 2 bits, though this particular configuration incurs a relatively larger area overhead than the configuration with 4-bit cell resolution. Fig. 5(d) shows that the energy consumption with large cell resolution and small crossbar size is relatively modest.

C. Results on Using CF-MESMO to Optimize ReRAM Crossbars
Fig. 5 also indicates that different objectives have different optimal design configurations, and a globally optimal design configuration is not achievable. Note that the operational frequency and temperature considered above are discrete data points selected for initial performance evaluation. However, the temperature can take any value from 300 K to 400 K. Assuming the temperature resolution to be 0.1 K, we get 1,000 data points. Similarly, by considering the other inputs, we can estimate the number of all design configurations to be 1.485 × 10^7. While any MOO framework can search over this enormous space to achieve the Pareto front and establish suitable design trade-offs, the computation cost associated with this search is prohibitively high (e.g., it takes nearly 30 GPU days to run the ReSNA training on the PytorX simulator [27] for 100 configurations). In contrast, the proposed CF-MESMO framework does not traverse all the data points but can achieve a high-quality Pareto front with significantly reduced computation cost. We use the hypervolume, which measures the volume between the Pareto front and a reference point [53], to indicate the quality of the Pareto front.
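For reference, the hypervolume itself is easy to approximate. The sketch below is a brute-force Monte Carlo estimator under the maximization convention used in this paper (dedicated MOO libraries provide exact sweep algorithms; the two-objective front here is a made-up example):

```python
import numpy as np

def hypervolume_mc(front, ref, n=200_000, rng=np.random.default_rng(1)):
    """Monte Carlo hypervolume estimate (maximization convention): the
    fraction of the box [ref, front_max] dominated by at least one Pareto
    point, times the box volume."""
    front = np.asarray(front, dtype=float)
    hi = front.max(axis=0)
    pts = rng.uniform(ref, hi, size=(n, front.shape[1]))
    # A sampled point is dominated if some front point beats it in every objective.
    dominated = (pts[:, None, :] <= front[None, :, :]).all(axis=2).any(axis=1)
    return dominated.mean() * np.prod(hi - ref)

front = [[0.92, 0.80], [0.90, 0.90], [0.85, 0.95]]  # e.g., (accuracy, 1/latency)
print(hypervolume_mc(front, ref=np.array([0.0, 0.0])))
```

A larger hypervolume relative to the same reference point means a front that dominates more of the objective space, which is how the curves in Fig. 6 should be read.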
CF-MESMO vs. NSGA-II and random search. Fig. 6(a) illustrates the hypervolume results of CF-MESMO compared with NSGA-II and random search under the same computation cost. The unit cost is defined as the runtime for executing 10 training epochs of the ReSNA method (e.g., 42.5 minutes based on our setting for ResNet20). We observe that 1) CF-MESMO can achieve a higher-quality Pareto-optimal set for the same total computation cost for ReRAM design evaluation; 2) CF-MESMO can produce a higher-quality Pareto front using significantly lower cost compared to NSGA-II and random search; and 3) CF-MESMO achieves 90.91% and 91.21% reductions in computation cost to reach the same quality Pareto front as NSGA-II and random search, respectively.

Continuous fidelity vs. single maximum fidelity. We utilize a continuous-fidelity setting in CF-MESMO. As a comparison, we run our optimization method with the single maximum-fidelity setting (100 training epochs of ReSNA to evaluate each ReRAM design), denoted as MESMO. Fig. 6(b) compares the hypervolume of these two methods considering the computational cost. The continuous-fidelity setting in CF-MESMO can guarantee a higher-quality Pareto front with lower computation cost when compared to the single maximum-fidelity algorithm MESMO. Specifically, CF-MESMO lowers the computation cost by 78.18% to reach the same quality Pareto front as MESMO. Note that both CF-MESMO and MESMO outperform NSGA-II and random search, as we observe from Fig. 6(a)-(b). These results validate the effectiveness of the proposed CF-MESMO algorithm for ReRAM-based MOO.

Fig. 6(c) shows the fidelity index (a small index means ReRAM design evaluation using ReSNA with a small number of training iterations) selected over the iterations of CF-MESMO. The optimization starts with a large fidelity index, e.g., 7 and 9 at the 1st and 2nd iterations in the initialization. During the 3rd to 25th iterations, a small fidelity index is preferred to get a fast approximation of the Pareto front by identifying the promising areas of the ReRAM design space. After that, a larger fidelity index is selected to approach the optimal Pareto front by evaluating candidates from this set of promising ReRAM designs. By using this continuous-fidelity setting, we can exclude the non-promising configurations at an early stage. As shown in Equation (2), high fidelity is only utilized when the predicted information gain per unit cost is large.

Optimization results for different DNNs. We use the proposed CF-MESMO framework to achieve robust DNN inferencing with an efficient ReRAM-based hardware platform. For the ResNet20 network, CF-MESMO obtains 11 Pareto-optimal designs over 100 iterations. Fig. 7(a) represents these designs in the output space, including the inferencing accuracy, latency, energy consumption, and area overhead. Each data point is labeled with the corresponding iteration number. Although all 11 design instances appear on the Pareto front, some of them can be excluded due to efficiency constraints. For instance, designs '56' and '58' consume 113.3% and 105.2% more energy compared to the average of the other designs, while design '72' requires 57.4% more execution time compared to the average of the other designs. We mark the high-energy design points in red and the high-latency design point in magenta in Fig. 7. According to the input space shown in Fig. 7(b), a relatively low temperature (300 K-350 K) and small and medium crossbar sizes (32×32 and 64×64) are recommended for ResNet20. Fig. 8 shows the Pareto fronts for the ResNets and VGGs. Note that the area evaluation dimension is not included in the plot for ease of illustration, and latency is normalized under the same area constraint. It should be noted that ResNet32 and VGG13 achieve the best inferencing accuracy within each network class.
Comparing Fig. 8(a) with Fig. 8(b), we see that the Pareto fronts for ResNet and VGG have different distributions, while there is a large overlap among the various clusters within the same network class, e.g., the three clusters in Fig. 8(a). These results imply that the network structure is not the only factor determining hardware efficiency. In designing ReRAM-based accelerators, we should first set the expected inferencing accuracy and hardware efficiency targets and then choose the network using the Pareto front. Based on the Pareto set results from all the evaluations, we make the following observations: 1) Designs with high temperature can appear on the Pareto front, but the number of such cases is low. 2) For small DNN models, the channel number is relatively small. A large-sized crossbar results in a low utilization rate and thus is not an optimal choice. As the model size increases, a large-sized crossbar becomes a preferred choice. 3) As ReSNA improves the inferencing accuracy, high ReRAM cell resolution and high frequency become the preferred candidates for robust DNN inferencing.

VIII. CONCLUSIONS
We have presented a ReRAM-based accelerator design and optimization framework to achieve robust DNN inferencing in the presence of stochastic noise. The efficiency of this framework depends on uncovering the Pareto-optimal ReRAM designs to establish a suitable trade-off considering inferencing accuracy, area overhead, execution time, and energy consumption. We have solved this challenging multi-objective optimization (MOO) problem by introducing a Continuous-Fidelity Max-value Entropy Search-based MOO framework, called CF-MESMO. CF-MESMO is aided by a hardware-aware training method to handle stochastic noise, called ReSNA. The CF-MESMO framework provides a high-quality Pareto front for robust DNN inferencing on hardware-efficient ReRAM crossbars with stochastic noise. On average, ReSNA achieves 2.57% inferencing accuracy improvement for ResNet20 on the CIFAR-10 dataset with respect to the baseline configuration. Moreover, the CF-MESMO framework achieves 90.91% reduction in computation cost compared with the popular MOO framework NSGA-II to reach the same quality Pareto front as NSGA-II.
Vortex Formation by Interference of Multiple Trapped Bose-Einstein Condensates

We report observations of vortex formation as a result of merging together multiple ⁸⁷Rb Bose-Einstein condensates (BECs) in a confining potential. In this experiment, a trapping potential is partitioned into three sections by a barrier, enabling the simultaneous formation of three independent, uncorrelated condensates. The three condensates then merge together into one BEC, either by removal of the barrier, or during the final stages of evaporative cooling if the barrier energy is low enough; both processes can naturally produce vortices within the trapped BEC. We interpret the vortex formation mechanism as originating in interference between the initially independent condensates, with indeterminate relative phases between the three initial condensates and the condensate merging rate playing critical roles in the probability of observing vortices in the final, single BEC.

In a superfluid, long-range quantum phase coherence regulates the dynamics of quantized vortices [1,2] and provides routes to vortex formation that are inaccessible with classical fluids. For example, in dilute-gas Bose-Einstein condensates (BECs), quantized vortices can be created using quantum phase manipulation [3,4]. Vortices in BECs have also been created using methods more analogous to those in classical fluid dynamics [5], namely through rotating traps [6,7,8,9], turbulence [10], and dynamical instabilities [11,12]. Yet in contrast with classical fluid dynamics, to our knowledge vortex generation via the mixing of initially isolated superfluids remains an unexplored research area. Due to the availability and relative ease of microscopic manipulation and detection techniques, BECs are well suited to address open questions regarding superfluid mixing and associated vortex generation, along with the possible accompanying roles of phase coherence and matter-wave interference.

In this Letter, we describe experiments demonstrating that the mixing or merging together of multiple condensates in a trap can indeed lead to the formation of potentially long-lived quantized vortices in the resulting BEC. We ascribe the vortex generation mechanism to matter-wave interference between the initially spatially isolated but otherwise identical BECs, and show that vortex formation may be induced even for slow mixing time scales. While it is now well known that matter-wave interference may occur between BECs [13], our experiment demonstrates a physical link between interference and vortex generation, providing a new paradigm for vortex formation in superfluids. We emphasize that no stirring or phase engineering steps are involved in our work, nor are any other means for controllably nucleating vortices in the trapped atomic gas; the vortex formation process itself is stochastic and uncontrollable, and depends on relative condensate phases that are indeterminate prior to condensate mixing. The vortex formation mechanism identified here may be particularly relevant when defects or roughness are present in a trapping potential, or when multiple condensates are otherwise joined together. Our experiment may also illuminate aspects of vortex formation at site defects in other superfluids, for which microscopic studies may be exceedingly difficult and questions regarding vortex formation mechanisms are unresolved.
To illustrate the basic concept underlying our experiment, we first consider our atom trap, which is formed by the addition of a time-averaged orbiting potential (TOP) trap [14] and a central repulsive barrier created with blue-detuned laser light that is shaped to segment the harmonic oscillator potential well into three local potential minima. Figure 1(a) shows an example of potential energy contours in a horizontal slice through the center of our trap. We will assume throughout the ensuing descriptions that the energy of the central barrier is low enough that it has negligible effect on the thermal atom cloud; such is the case in our experiment. However, the central barrier does provide enough potential energy for an independent condensate to begin forming in each of the three local potential minima from the one thermal cloud. If the central barrier is weak enough, condensates with repulsive interatomic interactions will grow together during evaporative cooling; if the barrier is strong enough, the condensates will remain independent. In this latter case, the central barrier height may be lowered while keeping the condensed atoms held in the TOP trap. Overlap and interference between the heretofore independent condensates would then be established as the condensates merge together into one. We have examined both scenarios. Depending on the relative phases of the three interacting condensates and the rate at which the condensates merge together (via either process), the final merged BEC may have nonzero net angular momentum about the vertical trap axis, as we now describe.

We first recall that the relative phase between two independent superfluids is indeterminate until an interference measurement is made. However, when interference occurs, a directional mass current will be established between the superfluids. A relative phase can then be determined, but it will vary randomly upon repeated realizations of the experiment [15,16]. In our experiment, when the initial condensates merge together, fluid flow in the intervening overlap regions is established; a straightforward model of the mass current for two overlapping but otherwise uncorrelated states shows that the direction of fluid flow depends on the sine of the relative phase between the states [17]. When our three condensates are gradually merged together while remaining trapped, fluid flow that is simultaneously either clockwise or counter-clockwise across all three barrier arms may occur with finite probability. For ease of this discussion, and keeping in mind that only relative phases carry physical meaning, we imagine that the condensates formed in the three local minima of Fig. 1(a) can be labeled with phases φⱼ, where the indices j = 1, 2, and 3 identify the condensates in clockwise order. Upon merging of the three condensates, if it turns out, for example, that φ₂ − φ₁ = 0.7π and φ₃ − φ₂ = 0.8π (thus necessarily φ₁ − φ₃ = 0.5π, since each φⱼ must be single-valued), then some finite amount of clockwise-directed fluid flow will be established between each pair, hence also for the entire fluid. More generally, if the three merging condensates have relative phases φ₂ − φ₁, φ₃ − φ₂, and φ₁ − φ₃ that are each simultaneously between 0 and π, or each between π and 2π, the resulting BEC will have nonzero angular momentum, which will be manifest as a vortex within the BEC.
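The 25% probability quoted in the next paragraph follows directly from this phase argument and can be verified numerically. The following is a minimal Monte Carlo sketch of the phase-difference criterion, purely illustrative and not code from the experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 1_000_000

# Three independent condensate phases, uniform on [0, 2*pi).
phi = rng.uniform(0.0, 2.0 * np.pi, size=(n_trials, 3))

# Cyclic differences (phi2-phi1, phi3-phi2, phi1-phi3), each mapped to [0, 2*pi).
d = (np.roll(phi, -1, axis=1) - phi) % (2.0 * np.pi)

# Net flow (a vortex) requires all three differences on the same side of pi:
# all in (0, pi) for one circulation direction, all in (pi, 2*pi) for the other.
vortex = np.all(d < np.pi, axis=1) | np.all(d > np.pi, axis=1)
print(vortex.mean())  # converges to P_v = 0.25
```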
By examining the full range of phase-difference possibilities, the total probability P_v for a net fluid flow to be established in either azimuthal direction is determined to be P_v = 0.25, given statistically random phase differences for each experimental run. P_v is thus the probability for a vortex to form as the three condensates merge together. Absent from the above description is an analysis of the phase gradients of the three condensates during the merging process, which will lead to transient interference fringes in overlapping condensates. These fringes may decay to numerous vortices and antivortices, and possibly vortex rings, similar to instabilities of dark solitons in BECs [11,12,18,19]. For rapidly merged condensates, we may thus expect the observation of multiple vortex cores in a BEC image, or a value of P_v greater than 0.25. Yet as the condensates are merged together more and more slowly, the dependence of P_v on phase gradients becomes negligible and P_v should approach a limiting value of 0.25.

Our experiment is designed to study the presence of vortices in a |F=1, m_F=−1⟩ ⁸⁷Rb BEC subsequent to the merging of three condensates created in a three-segment potential well. To create just a single BEC in a trap without a central barrier, we first cool a thermal gas to just above the BEC critical temperature in an axially symmetric TOP trap with radial (horizontal) and axial (vertical) trapping frequencies of 40 Hz and 110 Hz, respectively. We then ramp the TOP trap magnetic fields such that the final trap oscillation frequencies are 7.4 Hz (radially) and 14.1 Hz (axially). A final 10-second stage of radio-frequency (RF) evaporative cooling produces condensates of ∼4×10⁵ atoms, with condensate fractions near 65% and thermal cloud temperatures of ∼22 nK. The chemical potential of such a BEC is k_B×8 nK, where k_B is Boltzmann's constant.

To create instead three isolated condensates in a segmented trap, we modify the above procedure by ramping on an optical barrier immediately before the final 10-s stage of evaporative cooling in the weak TOP trap. The barrier itself is formed by illuminating a binary mask, illustrated in Fig. 1(b), with a focused blue-detuned Gaussian laser beam of wavelength 660 nm. After passing through the mask, the beam enters our vacuum chamber along the vertical trap axis. The mask is imaged onto the center of the atom cloud with a single lens. Due to diffraction, the beam has an intensity profile as shown in Fig. 1(c), with a maximum intensity and thus barrier energy aligned with the center of the TOP trap. The barrier's potential energy decreases to zero over ∼35 µm along three arms separated by azimuthal angles of approximately 120°. With 170 µW in the beam, which corresponds to a maximum barrier height of k_B×26 nK for our beam, three condensates are created and do not merge together during their growth; a set of three BECs created under these conditions is shown in Fig. 1(d). With instead 45 µW in the beam, corresponding to a maximum barrier energy of k_B×7 nK, three independent condensates again initially form, but as the condensates grow in atom number, their repulsive interatomic interactions eventually provide enough energy for the three condensates to flow over the barrier arms. The three condensates thus naturally merge together into one BEC during evaporative cooling, as shown in Fig. 1(e).
In our first study, three spatially isolated condensates were created in the presence of a strong barrier of maximum potential energy k_B×26 nK, and were then merged together by ramping down the strength of the barrier to zero linearly over a time τ. Any vortex cores formed by this process in the resulting BEC have a size below our optical resolution limit, and are too small to be directly observed in the trapped BEC. We thus suddenly removed the trapping potential and observed the atom cloud using absorption imaging along the vertical axis after 56 ms of ballistic expansion. This entire process was repeated between 5 and 11 times for each of 6 different values of the barrier ramp-down time τ between 50 ms and 5 s. In a significant fraction of our experimental runs, we observed one or more vortex cores in our condensates, a clear indication that condensate merging can indeed induce vortex formation. Moreover, the observed spatial density distributions vary from shot to shot, as would be expected with indeterminate phase differences between the initial condensates. However, many images are absent of any vortices. Example images of expanded BECs in Fig. 2(a)-(d) show the presence of vortex cores after various barrier ramp-down times; an analysis of vortex observation statistics is given in Fig. 2(e) for the different values of τ examined.

We define a vortex observation fraction F_v as the fraction of images, for each value of τ, that show at least one vortex core [20]. The error bars reflect the ambiguity in our ability to determine whether or not an image shows at least one vortex. For example, core-like features at the edge of the BEC image, or core-like features obscured by imaging noise, may lead to uncertainty in our counting statistics and determination of F_v. As the plot shows, F_v reaches a maximum near 0.6 for the smaller τ values, and drops to ∼0.25 for long ramp-down times. We expect that with larger sample sizes, F_v should approximate P_v for each τ. Thus our results are consistent with our conceptual analysis, where P_v > 0.25 for fast merging times when interference fringes may occur, and P_v = 0.25 for long merging times.

For τ ≤ 1 s, multiple vortices were often observed in our images, as the images of Fig. 2 show, indicating that phase gradients are likely to play an important role in vortex formation if condensates are quickly merged together. Furthermore, observations of multiple vortex cores may indicate the presence of vortices and antivortices. Although we are unable to determine the direction of fluid circulation around our observed vortex cores, we performed a test in which the barrier was ramped off in 200 ms, thus forming multiple vortex cores with a high probability. We then inserted additional time to hold the final BEC in the unperturbed harmonic trap before our expansion imaging step. After such a sequence, the probability of observing multiple vortices dropped dramatically: for no extra hold time, we observed an average of 2.1 vortex cores per image, whereas this number dropped to 0.7 for an extra 100 ms hold time, suggestive of either vortex-antivortex combination or other dynamical processes.

In a second investigation, we differed from the above experiment by using a weaker barrier with a maximum energy of k_B×7 nK, such that the three condensates naturally merged together into one BEC during the evaporative cooling process.
We emphasize that this merging process is due solely to the increasing chemical potentials exceeding the potential energy of the barrier arms between the condensates; the barrier strength remained constant throughout the growth and merging of the condensates, when vortices may form. After evaporative cooling, we ramped off the optical barrier over 100 ms and released the atoms from the trap to observe the BEC after ballistic expansion. Under these conditions, our vortex observation fraction was F_v = 0.56±0.06 in a set of 16 images, with example images shown in Fig. 3(a) and (b). By adding an extra 500 ms of hold time after the final stage of BEC formation but before the start of the 100 ms barrier ramp-down and ballistic expansion, the vortex observation fraction decreased to F_v = 0.28±0.14. Again, this drop in probability may be due to vortex-antivortex combination during the extra hold time in the weakly perturbed harmonic trap. We thus conclude that with a maximum barrier energy of k_B×7 nK, vortices are formed during the BEC creation process rather than during the ramp-down of the weak barrier, consistent with our phase-contrast images of trapped BECs that show a ring-like rather than segmented final density distribution.

Barrier strengths between the two limits so far described also lead to vortex formation, either during BEC growth or during the barrier ramp-down. One example is shown in Fig. 3(g), where a "gash"-like feature may be a possible indicator of vortex-antivortex combination; similar features have been seen in related numerical simulations [18]. Often, however, no vortices are observed; an example with no vortex cores appearing is Fig. 3(h). For comparison, an expansion image taken after creating a condensate in a trap without a barrier is shown in Fig. 3(i).

Perhaps surprisingly, single vortex cores have also appeared in ∼10% of our expansion images taken in the absence of any central barrier. In other words, for our basic single-BEC creation procedure as outlined above, and without a segmenting barrier ever turned on, vortices occasionally form spontaneously and are observable in expansion images. An example image is shown in Fig. 3(j). These observations may be related to predictions of spontaneous vortex formation due to cooling a gas through the BEC transition [21]. We are currently investigating these intriguing observations further.

We finally note that to generate vortices by the mechanism described in this paper, it is important for two reasons that condensates merge and interfere while trapped, as opposed to during expansion. First, in a trapped BEC, the nonlinear dynamics due to interatomic interactions would play a key role in the structural decay of interference fringes, which may be responsible for the generation of multiple vortices and antivortices seen with fast merging times. In an expanding gas, the interactions become negligible as the gas density decreases. Second, by keeping condensates trapped during their mixing, we are able to study slow merging, where we believe that relative overall phases are primarily responsible for vortex generation. We conjecture that with slow merging, it may be possible to directly imprint relative phases onto three or more trapped, separated, and phase-correlated condensates to controllably engineer vortex states.

In summary, we have generated vortices by merging together isolated and initially uncorrelated condensates into one final BEC.
We have shown that our vortex observation statistics are consistent with a simple conceptual theory regarding indeterminate phase differences between the initial condensates; however, quantitative theoretical examination is needed for further analysis of our results. We have also demonstrated that condensates created in the presence of weak trapping potential defects or perturbations, such as our weak optical barrier, may naturally acquire vorticity and nonzero orbital angular momentum. This result challenges the common notion that a BEC necessarily forms with no angular momentum in the lowest energy state of a trapping potential; rather, the shape of a static confining potential may be sufficient to induce vortex formation during BEC growth, a concept that may be of relevance to other superfluid systems as well.

We thank Ewan Wright and Poul Jessen for helpful discussions, and Tom Milster for use of his Maskless Lithography Tool to create our optical barrier mask and phase plates for phase-contrast imaging. This work was funded by grants from the ARO and NSF.
Skeletal Muscle Lipid Droplets and the Athlete's Paradox

The lipid droplet (LD) is an organelle enveloped by a monolayer phospholipid membrane with a core of neutral lipids, which is conserved from bacteria to humans. The available evidence suggests that the LD is essential to maintaining lipid homeostasis in almost all organisms. As a consequence, LDs also play an important role in pathological metabolic processes involving the ectopic storage of neutral lipids, including type 2 diabetes mellitus (T2DM), atherosclerosis, steatosis, and obesity. The degree of insulin resistance in T2DM patients is positively correlated with the size of skeletal muscle LDs. Aerobic exercise can reduce the occurrence and development of various metabolic diseases. However, trained athletes accumulate lipids in their skeletal muscle, and LD size in their muscle tissue is positively correlated with insulin sensitivity. This phenomenon is called the athlete's paradox. This review will summarize previous studies on the relationship between LDs in skeletal muscle and metabolic diseases and will discuss the paradox at the level of LDs.

Introduction

The lipid droplet (LD) is an organelle that stores neutral lipids in cells and plays an important role in maintaining lipid homeostasis in almost all organisms [1]. There is abundant experimental evidence that LDs interact with other organelles, and that this is mediated by regulatory proteins and enzymes embedded in their surface. LD proteins are also responsible for regulating the size, shape, and stability of LDs, parameters that are associated with various physiological states [2]. Some of the interactions between LDs and other organelles and aspects of LD dynamics are diagrammed in Figure 1.

In the past several decades, there has been a worldwide increase in the incidence of lipid metabolic diseases. As high-calorie diets have become more affordable and lifestyles more sedentary, an increasing fraction of the world's population ingests lipids and other calories above their metabolic needs. A chronic positive energy balance results in elevated blood triacylglycerol (TAG) and free fatty acid (FFA) content. This, in turn, leads to the ectopic storage of neutral lipids in non-adipose tissues, such as skeletal muscle, liver, and heart [3]. Two major human adipose tissues are described: white adipose tissue (WAT) and brown adipose tissue (BAT). WAT is used mainly to store energy, while BAT is used mainly to produce heat [4]. Ectopic lipid storage [5,6] refers to lipids that cannot be consumed and stored in adipose tissues and are instead stored in non-adipose tissues, which, in turn, causes lipid metabolism disorders as well as lipid toxicity.

Excess ectopic lipid is thought to interfere with normal cellular functions in a process called lipid toxicity [7,8]. This is the hypothesis that excessive, atypical storage of lipids in non-adipose tissues influences the metabolic homeostasis of the impacted tissues and organs. Lipid toxicity leads to disturbances in cell signaling and the development of insulin resistance, which, in turn, can result in a series of related diseases including type 2 diabetes (T2DM) and non-alcoholic fatty liver disease (NAFLD) [9].
Due to its great tissue mass and large contribution to metabolic demand, skeletal muscle is a particularly consequential cell type for pathologies of lipid homeostasis [10,11]. Lipids are stored as TAG in LDs within skeletal muscle cells, called intramyocellular lipid (IMCL) [12,13]. To meet energy demands, IMCL can be hydrolyzed into FFAs, which are processed through β-oxidation in mitochondria to generate ATP and heat in skeletal muscle [12,14]. Physical training can induce an increase in IMCL pools, which is reflected in an increased LD number and accompanying morphological changes. Interestingly, while a diet-induced increase in IMCL is associated with insulin resistance, similar IMCL accumulation in response to exercise is not [15]. This apparent contradiction is referred to as the athlete's paradox [16]. A resolution of the paradox has yet to be fully achieved.

There have been advances in methodological approaches permitting more accurate measurements of IMCL and LDs in skeletal muscle, including the biochemical extraction of TAG, magnetic resonance spectrometry, histochemical staining with immunofluorescence microscopy, and transmission electron microscopy (TEM) [12,13]. Methods have also been recently established for the isolation of muscle LDs permitting proteomic analysis [17]. These new techniques provide more detailed observation of skeletal muscle LDs, which raises a new perspective for the study of the athlete's paradox.

Diabetes Mellitus

Diabetes mellitus (DM) is a complex, chronic metabolic disease with multiple causes. The most obvious feature of the disease is a sustained elevation of blood glucose levels, accompanied by long-term disorders in glucose, lipid, and protein metabolism caused by insufficient insulin secretion or insulin non-responsiveness [18]. The definition of DM contains several criteria, including fasting blood glucose above 7.0 mmol/L [19]. If DM is not controlled through proper lifestyle and medical intervention, the sequelae include organ failure and peripheral nerve necrosis [20].

There are three common types of DM. Type 1 diabetes mellitus (T1DM) was previously called insulin-dependent diabetes. This autoimmune disease is most common in children and adolescents; immunological destruction of the pancreatic beta cells results in insufficient insulin secretion to stimulate glucose uptake [21]. T2DM is a metabolic disease characterized mainly by a reduction in insulin sensitivity (insulin resistance), which in turn decreases insulin-stimulated uptake of blood glucose [22]. The third type is gestational diabetes mellitus (GDM). This type occurs in pregnant women and represents a serious complication of childbirth; the condition is specific to pregnant women who have not previously been diagnosed with diabetes [23].
T2DM is by far the most common form of the disease, accounting for over 90% of cases. The early stages of the disease are marked by increasing insulin resistance in the peripheral tissues including muscle, fat, and liver. At first, a compensatory increase in insulin secretion is able to maintain normal glycemic levels [24,25]. However, over time, the heavy demand leads to beta-cell exhaustion and apoptosis. The resulting decrease in insulin secretion along with increasing insulin resistance leads to a loss of glycemic control and dangerous elevations in blood glucose levels [26][27][28].

At the cellular level, lipid homeostasis includes the balance between the absorption of free fatty acids from the blood and the synthesis and hydrolysis of lipids in cells under normal conditions [29]. The ectopic accumulation of lipids in peripheral tissues can interfere with these cellular functions, disturbing lipid homeostasis at the cellular level. Ultimately, this can lead to diseases including, commonly, T2DM. Due to its increasing prevalence and mounting societal costs, T2DM has become the focus of global attention.

When lipids accumulate beyond the capacity of adipose tissue to efficiently store them, the rate of lipid hydrolysis exceeds that of esterification. This results in a sharp rise in blood FFA levels, which has two downstream consequences. First, it drives insulin resistance through fatty acid receptors on the cell surface. Second, FFAs are absorbed by the peripheral tissues, which further disturbs insulin signaling. Since skeletal muscle accounts for more than 70% of the body's blood glucose intake [30], the high concentration of free fatty acids in the blood of obese patients has the greatest impact on the insulin response of skeletal muscle [31]. The degree of IMCL is positively correlated with insulin resistance [32,33]. Furthermore, the increase in IMCL in skeletal muscle is accompanied by an accumulation of the metabolic intermediates diacylglycerol (DAG) [34] and ceramide [35], which have also been linked to insulin resistance. The levels of DAG accumulation in the skeletal muscle of endurance-trained and sedentary obese rats are similar [36]. In addition, the concentration of phosphatidylethanolamine molecular species containing palmitoleate is increased in endurance-trained rats, but the opposite is true in sedentary obese rats. These findings indicate that endurance exercise can affect the lipid composition of skeletal muscle. Unfortunately, the LD lipidome of skeletal muscle with or without endurance exercise remains unknown.

Skeletal Muscle Lipid Droplets and the Athlete's Paradox

The adoption of an exercise program can alleviate many chronic diseases [37]. An exercise regimen can accelerate metabolism, improve cardiovascular function, and enhance immunity [38]. Indeed, a short-term exercise intervention was found to alleviate induced insulin resistance but also to increase the expression of genes associated with IMCL synthesis, resulting in the accumulation of TAG in skeletal muscles [39]. Another observational study found that physically trained individuals had increased skeletal muscle TAG relative to lean, sedentary people [16]. This is consistent with the known role of IMCL in meeting the energy demand of skeletal muscle [12]. Other studies have found a positive correlation between IMCL levels and insulin sensitivity in those engaged in aerobic training [40,41].
However, this association is surprising since IMCL has also been positively correlated with the development of metabolic diseases and insulin resistance in the general population [11,42,43]. This is the athlete's paradox. It is likely that the assessment of IMCL levels is too crude a measure and that this simple metric belies important biochemical differences between diet-induced and exercise-induced accumulation of TAG in skeletal muscle. A more nuanced analysis is required to resolve the paradox. Improvements in biopsy and imaging techniques for studying muscle are allowing for more detailed analysis of the phenomenon linking LDs and insulin sensitivity in muscle tissue.

Subcellular Compartmentalization of Lipid Droplets

LDs are found in two distinct locations in muscle, just beneath the cell membrane (subsarcolemmal LDs) and between myofibrils (intermyofibrillar LDs) [44]. The total cellular volume of intermyofibrillar LDs greatly exceeds that of the subsarcolemmal pool [44]. There is a substantial drop in total IMCL following acute exercise, suggesting the use of IMCL as an energy reservoir for the muscle [11]. Multiple methodologies, including stable isotope tracing, magnetic resonance spectroscopy, and fluorescence and electron microscopy, support that conclusion [12]. There is evidence that LDs in both subcellular locations contribute to some degree to the energy needs of muscle tissue [11]. However, the pools appear to be biochemically distinct. An analysis of LDs in muscle tissue before and after exhaustive exercise found a measurable drop in the cellular fraction of the intermyofibrillar but not the subsarcolemmal LDs [44]. Similarly, in another study, a reduction in the intermyofibrillar lipid pools was seen in endurance athletes after moderate- and high-intensity exercise [45]. Thus, it is primarily the intermyofibrillar LDs that contribute energy to muscles during periods of high demand. Furthermore, LDs are more abundant in slow-twitch, type I fibers than in type II fibers, and this distribution is more pronounced in trained athletes [44,46]. LDs in the muscle of trained athletes are relatively small, are predominantly intermyofibrillar, and are more prevalent in slow-twitch, type I fibers. In contrast, T2DM patients accumulate larger, subsarcolemmal LDs that are more abundant in type II fibers [45]. A measure of total IMCL does not distinguish between these pools, which likely underlies the athlete's paradox.

Lipid Droplets and Mitochondria

LDs interact with mitochondria, and the distribution of LDs and mitochondria in skeletal muscle type I fibers has been observed by conventional light microscopy and confocal three-dimensional reconstruction; LDs were found to be distributed mainly where mitochondria aggregate [47]. Meanwhile, peridroplet mitochondria (PDM) in brown adipocytes support LD expansion, because Perilipin-5 induces mitochondrial recruitment to LDs and increases ATP-synthase-dependent synthesis of TAG [48]. It has also been reported that the protein content of each intermyofibrillar mitochondrion in rat skeletal muscle is nearly twice as high as that of subsarcolemmal mitochondria, which suggests that intermyofibrillar mitochondria have higher activity [49]. At the protein level, SNAP23 is one of the proteins found to regulate the interaction between LDs and mitochondria [50].
In skeletal muscle, SNAP23 is partially localized on the cell membrane and is involved in the translocation of insulin-sensitive glucose transporter 4 (GLUT4) to the cell membrane [51]. In fatty-acid-treated cells with increased LDs, SNAP23 localizes more to the surface of LDs, which enhances the interaction between LDs and mitochondria and reduces plasma membrane GLUT4, which, in turn, decreases glucose uptake [51]. Multiple lines of evidence have established that LDs can physically associate with mitochondria, suggesting a functional link in the mobilization and use of energy reserves [40]. In addition to the accumulation of intermyofibrillar LDs, long-term endurance training also increases the biogenesis of mitochondria [52]. There is evidence that mitochondrial dysfunction can lead to insulin resistance [53]. The activity of the electron transfer chain in submembranous mitochondria of patients with type 2 diabetes and obesity is significantly lower than that of lean volunteers, suggesting that mitochondrial dysfunction may also contribute to type 2 diabetes. Therefore, a more detailed understanding of the role of IMCL in health and disease will require the techniques of cell biology and biochemistry.

New Approaches to the Study of Muscle Physiology

The traditional techniques of total lipid extraction and microscopy of biopsies are complemented by LD isolation and spectroscopy. It remains technically difficult for many laboratories to purify LDs from muscle tissue. Our group established a method to isolate LDs as early as 2004 [54] and then improved the method [55]. Using this technique, we have successfully isolated LDs from mouse skeletal muscle and found a close association between LDs and mitochondria. The isolation method provides a means to carry out morphological, biochemical, and functional analyses of muscle LDs [56]. This technique may help illuminate the role of LD contact with mitochondria or other organelles in health and disease, along with molecular details of muscle LD physiology.

The application of magnetic resonance spectroscopy (MRS) to muscle physiology is a new approach permitting a non-invasive examination of muscle performance. The technique can be used to measure multiple parameters in a single experimental protocol, including acetylcarnitine, phosphocreatine, IMCL, and maximum oxidative capacity (Qmax). In one study, multiple parameters were measured in the quadriceps of the left leg of thirteen Ironman volunteers and ten normal volunteers using MRS. The athletes had a higher IMCL content than the normal volunteers, as well as a higher Qmax and faster phosphocreatine resynthesis and recovery [57]. However, both LD isolation and MRS are currently unable to distinguish between intermyofibrillar and subsarcolemmal LDs.

Conclusions and Prospect

There are two distinct LD populations in skeletal muscle cells. Intermyofibrillar LDs are highly metabolically active, serving as an energy reservoir during acute exercise, while subsarcolemmal LDs are fewer in number and are less active. T2DM patients accumulate lipids in large subsarcolemmal LDs. In contrast, athletes accumulate lipids in intermyofibrillar LDs, which are smaller and more numerous (Figure 2). Although these LDs are smaller in volume than the LDs in the skeletal muscle of T2DM patients, their high surface-area-to-volume ratio provides higher lipolysis activity and allows more efficient and rapid liberation of their energy stores than those of the subsarcolemma.
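The surface-area argument can be made concrete with simple sphere geometry: splitting a fixed lipid volume into N equal droplets multiplies the total surface area by N^(1/3). A minimal illustrative sketch (arbitrary units, not experimental data):

```python
import math

def total_surface_area(total_volume: float, n_droplets: int) -> float:
    """Total surface area when a fixed volume is split into n equal spheres."""
    radius = (3.0 * total_volume / (4.0 * math.pi * n_droplets)) ** (1.0 / 3.0)
    return n_droplets * 4.0 * math.pi * radius ** 2

v = 1.0                              # total lipid volume, arbitrary units
print(total_surface_area(v, 1))      # one large droplet
print(total_surface_area(v, 64))     # 64 small droplets: 64**(1/3) = 4x the area
```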
The LDs of endurance athletes may also make more contact with mitochondria than the LDs in the skeletal muscle of T2DM patients, providing the required energy for skeletal muscle with higher efficiency. It remains to be discovered how these two LD populations differ at the molecular level. It is possible that proteins specifically enriched in the intermyofibrillar LDs mediate a close association with mitochondria, facilitating energy use. There are likely other molecular differences that also explain why lipids can safely accumulate in intermyofibrillar LDs while diet-induced lipid storage in subsarcolemmal LDs leads to lipid toxicity and metabolic dysfunction. The mechanisms governing the differential storage of lipids in these two populations remain unknown. Further advances in spectroscopic, microscopic, and biochemical methods will be required to establish a deeper understanding of the molecular mechanisms underlying the athlete's paradox.

Author Contributions: X.L. and Z.L. wrote and revised the manuscript; they contributed equally to this work. M.Z. and Y.N. contributed to reviewing and editing the manuscript. The main review was performed by P.L., Y.Z. and X.Z. All authors read and approved the final manuscript.
Prevalence of Depression and Personality Disorders in the Beginning and End of Emergency Medicine Residency Program; a Prospective Cross Sectional Study.

Introduction Emergency medicine physicians are constantly under psychological trauma due to encountering critically ill patients, mortality, and violence, which can negatively affect their mental and physical health. The present study was performed with the aim of determining the rate of depression and personality disorders in first-year emergency medicine residents and comparing it with the time they reach the 3rd year. Methods In the present prospective cross-sectional study, emergency medicine residents working in multiple teaching hospitals were included via census method and evaluated regarding the rate of depression and personality disorders using the standard MMPI-2 questionnaire upon admission to the program and at graduation, and their status regarding the evaluated disorders was compared between the 2 phases of evaluation. Results 99 residents with the mean age of 33.93 ± 5.92 years were evaluated. 85 (85.85%) rated their interest in their discipline as moderate to high. The rates of stress (p = 0.020), anxiety (p < 0.001), and hypomania (p = 0.015) had significantly increased during the 3 years, and the psychasthenia rate had decreased significantly during this time (p = 0.002). Changes in the prevalence of other disorders in the third year compared to the year of admission to the emergency medicine program were not significant. Conclusion Considering the results of the present study, it seems that paying more attention to mental problems and decreasing environmental stressors of medical residents, especially emergency medicine residents, should be among the priorities of managers and policymakers of this discipline.

Introduction

A considerable part of each individual's life is spent in the workplace. Environmental factors such as noise, crowding, and improper light and sound; human factors like conflict with other individuals; and organizational factors such as work density, improper policy making, injustice, and many others are among the stressors of the workplace. If an individual is not able to effectively cope with these mental pressures, numerous physical, mental, and behavioral side effects will manifest, bringing about decreased effectiveness and job dissatisfaction (1). The rate of anxiety in those working in the field of healthcare is higher than in the general population, and this is related to long night shifts, low sleeping hours, and a high and exhausting workload (2). The emergency department is among the hospital environments with the highest tension. Physicians and other emergency staff are constantly under psychological trauma due to encountering critically ill patients, mortality, and violence, which can negatively affect their mental and physical health (3). Emergency physicians experience a high degree of job burnout throughout their career; this rate has been estimated at about 49% to 65% in emergency medicine residents (4)(5)(6). Studies have shown that medical residents experience higher degrees of depression compared to other students (7)(8)(9)(10)(11). These facts have received attention from the graduate accreditation association, and a movement has been initiated for improving physicians' mental health (12).
Evaluating the health condition of medical students and the effects of the work environment on their psychological balance seems necessary for better planning and for improving conditions by reducing preventable stressors. Therefore, the present study was performed with the aim of evaluating the rate of depression and personality disorders in first-year emergency medicine residents and comparing it with the time they reach the 3rd year.

Study design and setting

In the present prospective cross-sectional study, all of the first-year emergency medicine residents of Shahid Beheshti University of Medical Sciences, admitted in 2014-2015, were evaluated. The questionnaires used were filled out after obtaining informed consent and by keeping the data of the participants completely confidential, once when they entered the program (first year) and once at the time of graduation (third year). The study was approved by the ethics committee of Shahid Beheshti University of Medical Sciences.

Participants

Sampling was done using the census method, and all of the first-year residents working in teaching hospitals affiliated with the mentioned university were included without any age or sex limitation. Not giving consent for participation in the study, or dropping out of the program and not filling out the questionnaire in the third year, were exclusion criteria.

Data gathering

The tools used for gathering data in this study were a baseline characteristics questionnaire and the 71-question MMPI-2 questionnaire for evaluating the rate of depression and personality disorders. The Minnesota Multiphasic Personality Inventory (MMPI) is a standard questionnaire for eliciting and scoring a broad range of self-described characteristics, which gives a quantitative index of the individual's emotional state and their viewpoint on participating in the test (13). All first-year residents filled out the questionnaires in the first phase of the study and their data were recorded. Then, 2 years later, in the second phase of the study, the same residents, who had by then become third-year residents, filled out the questionnaires again. Finally, the data gathered in the first and third year were compared. In addition, important happenings affecting mental health (such as getting married, having babies, losing dear ones, acute problems in the family, acute disease of the residents themselves, ...) that had occurred during the 2 years between the 2 phases of the study were also recorded to eliminate their confounding effect. The person in charge of data gathering was an emergency medicine resident who personally gathered the data in the first and third year.

Statistical Analysis

Data were analyzed using SPSS software, version 18. To describe the data, mean and standard deviation or frequency and percentage of the variables were used. A before-after (paired) test was applied for comparing the condition of the personality assessment indices in the first and third year. P < 0.05 was considered the level of significance.

Results

99 residents with the mean age of 33.93 ± 5.92 (26-55) years were evaluated (56.6% female). Table 1 depicts the baseline characteristics of the studied residents. 85 (85.85%) residents rated their interest in their discipline as moderate to high, and only 20 (20.20%) had an income of more than 15 million Rials (1500 US dollars) a month.
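The study does not name the specific before-after test applied to each MMPI-2 scale. For paired binary outcomes such as "disorder present/absent" in the same residents at two time points, McNemar's test is one standard choice; the sketch below is illustrative only and uses hypothetical counts, not the study's data:

```python
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired counts for one scale (rows: first year absent/present,
# columns: third year absent/present) -- not the study's actual numbers.
table = [[40, 25],   # absent -> absent, absent -> present
         [10, 24]]   # present -> absent, present -> present

result = mcnemar(table, exact=True)  # exact binomial test on discordant pairs
print(result.statistic, result.pvalue)
```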
Table 2 compares the prevalence of depression and other personality disorders at the time the mentioned residents were enrolled in the emergency medicine program with their third year (the time of graduation). Based on the comparison, stress (p = 0.020), anxiety (p < 0.001), and hypomania (p = 0.015) had significantly increased during the 3 years, and the psychasthenia rate had decreased significantly during this time (p = 0.002). Changes in the prevalence of other disorders in the third year compared to the year of admission to the emergency medicine program were not significant.

Discussion

Based on the results of the present study, the rates of stress, anxiety, and hypomania in third-year emergency medicine residents had significantly increased compared to the time they were first admitted to the residency program, and the severity of psychasthenia had decreased. Changes in the rates of other disorders in the third year compared to the year of admission to the emergency medicine program were not significant.

Considering the nature of their job, physicians and healthcare team members are more exposed to stress and anxiety compared to other people in society (2). Therefore, paying attention to this matter in this group of people is very important. Studies that have been carried out in this regard have reported contradictory results regarding the rate of stress that medical staff members bear; some have reported high rates of stress in surgeons (14) and emergency physicians (15), while in other studies no significant difference was found regarding the stress rate of emergency physicians (16). In another study, results showed that the rate of cortisol measured in residents was not related to the year of the residency program they were in (17). However, in yet another study, the level of stress among professors of medicine had increased significantly after a few years had passed (18). This finding shows that the increase in cortisol production following stress does not significantly drop with gaining experience (17).

An increase in the rate of anxiety with increasing years of residency was another finding of the present study, which is in line with the study by Buddeberg-Fischer et al. (19). Cabrera et al. (2018) found that emergency medicine residents experience many times more stress and anxiety compared to the general population, and this increase in anxiety directly correlates with the increase in their duration of stay and shifts in the emergency department. In this study, no sex difference was observed between the residents regarding anxiety rate (17).

Overall, it should be noted that in addition to the problems that stress, anxiety, and depression, as factors of mental health, cause for residents during their education, they also interfere with their professional role and their future responsibility for society's health. Therefore, it seems that preventing stress, anxiety, and depression in residents and decreasing their mental pressure can play an important role in increasing their interest in working, protecting people's health in acute and critical situations, cooperating with the group, and feeling responsible. Considering the results of the present study, it seems that paying more attention to the mental problems of medical residents, especially emergency medicine residents, should be among the priorities of managers and policymakers of this discipline.
Limitation

One of the limitations of the present study is its small sample size, which was also seen in previous studies (15,20). Inability to control some confounding factors, such as the menstrual cycle and personal and family problems, was also among the limitations of this study. Another important point is that a control group was not available for performing more comparisons.

Conclusion

Based on the results of the present study, the rates of stress, anxiety, and hypomania in third-year emergency medicine residents had significantly increased compared to the time they were first admitted to the residency program, and the severity of psychasthenia had decreased. Changes in the rates of other disorders in the third year compared to the year of admission to the emergency medicine program were not significant.

Acknowledgements

All the residents and teaching staff members of hospitals affiliated with Shahid Beheshti University of Medical Sciences are thanked for their cooperation.
A lesson study to foster prospective teachers' disposition in STEM education

Fostering students' dispositions may start with their interest in the topic and then spread to other indicators such as persistence and contextual application. In many conditions, technology can attract students' interest and turn complex problems into much easier ones. Therefore, prospective teachers must first be equipped in the use of the latest technology. However, the use of technology in learning must be designed appropriately, since excessive use of technology may negate someone's thinking process, even though thinking is the main purpose of every learning. This research aims to design an appropriate integration between learning and technology to foster prospective teachers' mathematical disposition. POM-QM represents the technology embedded in the Science, Technology, Engineering, and Mathematics (STEM) learning process. This lesson study starts with planning a chapter design to integrate the technology into learning. During the learning implementations, several observers were assigned to each group of students to record every activity. From the reflection phases of both cycles, the chapter design was found to foster students' interest, contextual awareness, and appreciation of the topic. Continuous treatment is needed until the dispositions occur consistently.

Introduction

It is appropriate that every student has a positive disposition towards mathematics [1]. As Kilpatrick [2] pointed out, mathematical disposition is one of the supports of student success in learning. Likewise, the National Council of Teachers of Mathematics (NCTM) [3] states that students' disposition in dealing with mathematics and their beliefs can affect their achievement in mathematics. This positive disposition will encourage students to stay focused on learning [4]. Unfortunately, many students and parents trust scores, procedural ability, and problem-solving competence more than thinking processes and dispositions towards mathematics [2]. The emphasis on cognitive aspects can also have a negative impact on students' mathematical dispositions, especially for school-age students [5]. Worse, these negative dispositions may carry over into higher education or their future jobs. Therefore, mathematics learning must be specifically designed in such a way as to remove students' negative mathematical dispositions.

Designing a learning that can foster mathematical dispositions can start from things related to students' daily lives or future jobs, such as technology. Integrating technology, as in the Science, Technology, Engineering, and Mathematics (STEM) framework, makes learning more vital, connected, and relevant for students [6]. In many conditions, technology can turn complexity into something a lot easier and more enjoyable. However, because of its capability to find solutions to mathematical calculations, the use of computers in learning can take over students' thinking processes [7]. As an example, consider the use of Mathematica software to find the derivatives of complicated functions in a calculus course; or, in elementary school, the use of a calculator for simple calculations (e.g. 7 × 12). It will only negate the thinking process, which is more useful for future knowledge. However, this does not mean that the calculator does not help mathematics learning at all. In various calculations and more complex problems, e.g. numerical methods, the use of calculators greatly helps both the thinking and learning processes.
Therefore, the use of technology in learning must be designed appropriately; otherwise, misuse of the technology will only create a negative disposition in students' persistence and interest in mathematics. In order to teach their students in line with technological developments, prospective teachers must first be equipped in the use of the latest technology. In this research, we discuss appropriate technology integration in a linear programming course. Designing a preferable STEM learning requires precision in integrating technology into certain parts of the topic. Through the stages of lesson study, a chapter design will be developed and implemented in the classroom.

Methods

This research was conducted in a lesson study format with the purpose of designing a proper integration between technology and classroom learning. Students were divided into two parallel classes, which enables a minimal cycle of lesson study. The lesson study format in this study follows the idea of Yoshida [8]. The first stage of lesson study is planning the chapter design; in this paper, the design was planned for the first topic and was also reviewed by both a lesson study expert and a topic expert. The second stage of lesson study is to implement the chapter design in the classroom. Several observers were assigned to each group of students. The teaching session was also written down and recorded on video by a designated person, giving a more detailed description of events in the learning. In the reflection session, every observer gives feedback to revise the chapter design with respect to the mathematical disposition indicators to be achieved, and the lesson study is then repeated with the other parallel class. The final lesson plan and chapter design are expected to be an ideal outcome, besides the students' mathematical disposition.

Results

Since this was the first course meeting, the learning started with an introduction to linear programming. The introduction begins in a unique way: with a contextual problem in the first meeting. This is intended so that students explore and try to solve the problem with the knowledge they have. After the contextual problem set is shown, the instructor gives the students the opportunity to read and understand the problem set. Then the instructor goes around to check the students' understanding of the problem set. This is also an opportunity for the instructor to recall various knowledge about linear programming that the students had learned during high school. This recalling process is important as a warm-up so that they can be directly connected to the learning [9]. This process is done by repeating keywords in linear programming, then assessing students' responses and guiding them to recall and re-understand. As can be seen in Figure 1, some of the keywords repeated are vertex, maximize, minimize, and profit. These keywords are repeated continuously from the beginning to the end of the learning, the understanding is deepened, and the instructor ensures all students have an equivalent understanding. An understanding of vertices in linear programming is vital, because the vertices are the check points for the objective function. Therefore, the vertex keyword is the starting point for re-memorizing. After that, the profit keyword, along with the maximize keyword, becomes the most repeated, corresponding to the given problem set.

Figure 1. Graph of keyword repetition in the learning.
Figure 1. Graph of keyword repetition during the lesson.

This understanding and exploration of vertices and profits continued to be deepened until all students understood that the maximum profit is attained at one of the known vertices. Table 1 shows a snapshot of the conversation between the instructor and one of the students. Students were given time to explore and try to solve the given problem with their initial abilities. Even without much intervention, the students were able to solve the problem set by relying on the careful use of simple algebraic variables and the arithmetic calculations they had previously studied in secondary school; some students also used the linear programming method taught in high school. Some groups managed to answer with the right process and solution, while others at least succeeded in building the correct mathematical model. This shows that the connection process in the introduction phase went well. In the next phase, technology, namely the POM-QM software, was introduced. Before the lesson began, students had been asked to install the software. In this phase, the chapter design underwent a revision from the experts. In the first-cycle chapter design, the instructor gave a brief explanation of POM-QM and its use. After the reflection stage, however, the instructors left the students to explore and try POM-QM by themselves. Most POM-QM menus and functions are visible at a glance, which makes it easy for students to proceed by trial and error. As Hmelo-Silver [10] argues, experiential learning is more meaningful and lasts longer in students' knowledge. To foster students' persistence and appreciation of mathematics, they were then given a second problem set. This time they were asked to solve a problem with more than two variables, a kind of problem they had never encountered before. This problem leads to the use of the simplex method to find the solution. The simplex method had not yet been taught, but the students were expected to discover the limitations of their initial methods by themselves and to figure out that POM-QM is able to solve the problem. All students, even those who are not usually interested, tried to understand and participate in solving the problem. Interestingly, the students shouted cheerfully and seemed satisfied whenever an idea was confirmed to be true, and they paid careful attention when it was explained that an idea was invalid. These conditions indicate that the lesson as applied can foster perseverance, interest, and flexibility, all of which are mathematical disposition indicators [3]. To give a better picture of the lesson taught in class, the chapter design of this study is shown in Table 2 below. In Table 2, the predicted student responses do not reflect the students' real responses; that column is a prediction made in our design. However, the actual learning interactions did not deviate much from the designed lesson, since the instructor played his role very well in letting students explore and then returning them to the intended track.

Table 2. Chapter design of the lesson.
Phase — Introduction. The instructors present a simple contextual linear programming problem. This phase also warms up students' memories and understanding of the topic by repeating and exploring some keywords. Problem: A tailor has a supply of 16 m of silk fabric, 11 m of wool, and 15 m of cotton fabric. He plans to make two clothing models under the following conditions:
• Every model A garment requires 2 m of silk, 1 m of wool, and 1 m of cotton.
• Every model B garment requires 1 m of silk, 2 m of wool, and 3 m of cotton.
Let the profit of a model A garment be IDR 30,000/unit and the profit of a model B garment be IDR 50,000/unit. Determine the number of garments of each clothing model that must be made in order to obtain the maximum profit. Note: the mathematical model of this problem is required from every group. The instructor checks, and may assist, each group to confirm their responses.
Predicted student responses: conducting an experiment — understanding and inputting the constraints, understanding and inputting the variables, and trial and error in solving the given problem.
Phase — Core activities (Exploration). Instructions/anticipations: the instructor checks every group's understanding of POM-QM. If students do not take the initiative to try solving the problems given previously, they are asked to try them using POM-QM. Note: the solutions, and the ability to use POM-QM, are obligatory. The instructor checks, and may assist, each group to confirm their responses.
Phase — Core activities (Discussion). Students are asked to present their results for the initial linear programming problems. They share findings and interpret the experimental data in groups. After a solution is presented, all students focus their attention and discuss the solutions. Only if it is not initiated by the students does the instructor encourage them to explore all the menus and features of POM-QM, such as the graph panel, the dual panel, etc. The instructor extends the problem toward a deeper and more detailed understanding. Students are given critical questions about the problem, such as "Is it possible that the maximum point lies outside the shaded area? Why?" The instructors provide other problems to solve with POM-QM. These consist of a problem similar to the initial one that the students already solved and an extended problem that requires a good understanding of the POM-QM features. The problems also play a role in connecting to the next course meeting, the simplex method.

Discussion In this chapter design, the students were divided into six groups of four to five people. The groups were limited to five people to provide small-group learning, as it is more effective for maintaining focus [11]. During the lesson, this grouping proved effective in encouraging discussion, leading the more diligent students to take charge of their groups and explain the given problems. From the reflection we conclude that the lecturer still needs to check and confirm every group's idea, so that students gain the confidence and persistence to continue their idea, or so that a misleading idea can be realigned before students stray too far and eventually have to start again from the beginning [12]. Students were also asked to explain their findings in front of the class. Even though students volunteered to explain in front of the class, they still looked bashful and showed their unease, despite their explanations and solutions being correct. But not all: some students showed good confidence in explaining their solutions even when hit by a series of questions. Such activity is much needed to improve understanding, critical and creative thinking, and to practice the confidence in front of people that is required of a future teacher [13]. Such uneasy gestures should not be shown by a professional teacher, and this became our special concern in preparing our prospective teachers to be ready for their future professional jobs. The concern became revision material for the chapter design: before the core learning activities, students need to be given an understanding of how to speak their minds confidently.
Another revision is to give students another, more challenging problem using three or more variables, that is, one requiring the simplex method. Such problems can connect their understanding to the use of the simplex method.
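For readers who want to reproduce the tailor problem outside POM-QM, the same linear program can be solved with any general-purpose LP solver. The following is a minimal sketch in Python using SciPy's linprog; SciPy and the variable names are our own illustrative choices and are not part of the original lesson design, which used POM-QM.

```python
from scipy.optimize import linprog

# Tailor problem from the chapter design:
#   maximize 30000*xA + 50000*xB
#   subject to 2*xA + 1*xB <= 16  (silk)
#              1*xA + 2*xB <= 11  (wool)
#              1*xA + 3*xB <= 15  (cotton)
# linprog minimizes by default, so the objective is negated.
c = [-30000, -50000]
A_ub = [[2, 1], [1, 2], [1, 3]]
b_ub = [16, 11, 15]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # -> [7. 2.] 310000.0
```

The optimum of 7 model A and 2 model B garments (profit IDR 310,000) happens to be integral here; in general, a production problem like this would call for integer programming rather than plain LP.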
First-in-human phase I clinical trial of the NY-ESO-1 protein cancer vaccine with NOD2 and TLR9 stimulants in patients with NY-ESO-1-expressing refractory solid tumors Cholesteryl pullulan (CHP) is a novel antigen delivery system. CHP and New York esophageal squamous cell carcinoma 1 (NY-ESO-1) antigen complexes (CHP-NY-ESO-1) present multiple epitope peptides to the MHC class I and II pathways. Adjuvants are essential for cancer vaccines. MIS416 is a non-toxic microparticle that activates immunity via the nucleotide-binding oligomerization domain 2 (NOD2) and TLR9 pathways. However, no reports have explored MIS416 as a cancer vaccine adjuvant. We conducted a first-in-human clinical trial of CHP-NY-ESO-1 with MIS416 in patients with NY-ESO-1-expressing refractory solid tumors. CHP-NY-ESO-1/MIS416 (μg/μg) was administered at 100/200, 200/200, 200/400 or 200/600 (cohorts 1, 2, 3 and 4, respectively) every 2 weeks for a total of 6 doses (treatment phase) followed by one vaccination every 4 weeks until disease progression or unacceptable toxicity (maintenance phase). The primary endpoints were safety and tolerability, and the secondary endpoint was the immune response. In total, 26 patients were enrolled. Seven patients (38%) continued vaccination in the maintenance phase. Grade 3 drug-related adverse events (AEs) were observed in six patients (23%): anorexia and hypertension were observed in one and five patients, respectively. No grade 4–5 drug-related AEs were observed. Eight patients (31%) had stable disease (SD). Neither augmentation of the NY-ESO-1-specific IFN-γ-secreting CD8+ T cell response nor an increase in the level of anti-NY-ESO-1 IgG1 was observed as the dose of MIS416 was increased. In a preclinical study, adding anti-PD-1 monoclonal antibody to CHP-NY-ESO-1 and MIS416 induced significant tumor suppression. This combination therapy is a promising next step. Electronic supplementary material The online version of this article (10.1007/s00262-020-02483-1) contains supplementary material, which is available to authorized users. Introduction Peptide cancer vaccines have been evaluated in previous studies, but they have thus far exhibited limited efficacy against advanced cancers. Adjuvant selection is an important factor that affects the success of cancer vaccines. For example, substances that are retained at the injection site should be avoided as cancer vaccine adjuvants. Hailemichael et al. [1] reported that incomplete Freund's adjuvant-based vaccination caused T cells to accumulate at the vaccination site rather than at the tumor site and induced T cell apoptosis. Stimulants of pathogen recognition receptors, including TLRs and nucleotide-binding oligomerization domain (NOD)-like receptors, could activate antigen-presenting cells and may thereby overcome the immunosuppressive tumor microenvironment. TLR stimulants, such as OK-432, CpG or poly-ICLC, have been evaluated in conjunction with the New York esophageal squamous cell carcinoma 1 (NY-ESO-1) antigen-related cancer vaccine in clinical studies [2][3][4][5][6][7][8]. One of the most promising agents is a stimulant of TLR9. Muraoka et al. [9] reported that immunization with a peptide vaccine without an adjuvant increased apoptosis in vaccine-induced CD8 + T cells; in contrast, immunization with a peptide vaccine with a TLR9 stimulant reduced apoptosis in vaccine-induced CD8 + T cells and induced a significant anti-tumor effect in a mouse model. 
The stimulation of multiple innate immunity signaling pathways may greatly improve the efficacy of cancer vaccines over that achieved by TLR9 signaling alone. MIS416 is a nontoxic microparticle adjuvant derived from Propionibacterium acnes that activates the immune response via the NOD2 and TLR9 pathways. MIS416 acts as a Th1 response-skewing adjuvant by promoting the CD8 + T cell response and enhancing the anti-tumor activity of vaccines in a mouse model [10]. However, no clinical trials of cancer vaccines with MIS416 as an adjuvant have been reported. The NY-ESO-1 antigen, a cancer-testis antigen, was identified in esophageal cancer by serological expression cloning (SEREX) performed using serum obtained from patients with autologous esophageal squamous cell carcinoma [11,12]. The NY-ESO-1 antigen is expressed in a variety of cancers; for example, the NY-ESO-1 antigen is expressed in approximately 40% of refractory urothelial cancers [13][14][15], approximately 15-40% of advanced prostate cancers [16,17] and 49-75% of synovial cell sarcomas [18,19] but is not expressed in normal tissues with the exception of the testis and placenta. These findings suggest that the NY-ESO-1 antigen could be an ideal target for cancer immunotherapy against many malignant tumors and may have high cancer specificity and low toxicity. Cholesteryl pullulan (CHP) is a polysaccharide-based novel antigen delivery system for cancer vaccines. A complex of CHP and the NY-ESO-1 antigen (CHP-NY-ESO-1) was constructed that contains multiple MHC class I-and II-restricted epitopes and efficiently induces antigen-specific CD4 + and CD8 + T cell immunity [20][21][22][23][24][25]. We conducted single-center, open-label, dose-escalation studies of CHP-NY-ESO-1 with MIS416 as an adjuvant in patients with NY-ESO-1-expressing refractory urothelial cancer or castration-resistant prostate cancer and malignant solid tumors to evaluate its safety, tolerability, and immune response [26,27]. Patients and treatment Patients meeting the following criteria were included: histologically documented urothelial cancer, prostate cancer (clinical trial Registration Number UMIN000005246) or malignant solid tumors (UMIN000008006) that were refractory to standard therapy, age ≥ 20 years, an Eastern Cooperative Oncology Group (ECOG) performance status (PS) scale of 0-2, a life expectancy ≥ 3 months, adequate organ function and positive tumor expression of NY-ESO-1. Patients with a history of active autoimmune disease, the use of steroids (more than 20 mg equivalent of prednisolone/ day), the use of immunosuppressive drugs, uncontrolled infections or previous NY-ESO-1-related immunotherapy were excluded. Patients were enrolled from March 2011 to February 2017 and received CHP-NY-ESO-1 (0.5 mg/mL)/MIS416 (2 mg/mL) administered at 100 μg/200 μg, 200 μg/200 μg, 200 μg/400 μg or 200 μg/600 μg (cohorts 1, 2, 3 and 4, respectively) every 2 weeks for a total of 6 doses during the treatment phase (clinical trial registration number UMIN000005246 and UMIN000008006) followed by vaccination every 4 weeks (maintenance phase) until disease progression, patient refusal or unacceptable toxicity (UMIN000008007). CHP-NY-ESO-1 and MIS416 were manufactured according to good manufacturing practices and provided by ImmunoFrontier, Inc. (Tokyo, Japan) and Innate Therapeutics Ltd., respectively. CHP-NY-ESO-1 was subcutaneously (s.c.) injected into the chest, abdomen, upper arm or lower leg. MIS416 was injected s.c. 
at a site 2 cm away from the periphery of the CHP-NY-ESO-1 injection bulge. Mixing the 2 drugs under the skin was prohibited. The primary endpoints were safety and tolerability, and the secondary endpoints were the immune response and quality of life (QOL). Dose-limiting toxicity (DLT) was defined as grade 3 or higher injection site reaction, allergic reaction, pruritus, chills, or fever. The maximum tolerated dose (MTD) was the highest dose that caused DLT in no more than one of 6 patients. Patients assessable for dose escalation were those who were treated with more than 4 courses. If the number of assessable patients was lower than three, patients were added to the cohort. In cohort 4, 2 of the 4 patients had a total of 3 severe adverse events (SAEs), including 1 treatment-related case of anorexia, 1 nervous system disorder caused by cancer progression and 1 case of pancreatitis caused by alcohol intake. The data and safety committee recommended stopping further patient enrollment in cohort 4 because of the high frequency of SAEs. We decided to terminate the clinical trial considering the long time required for patient enrollment and because no further improvements in efficacy were expected. Assessment Treatment response was assessed at 12 weeks by computed tomography. Patients with prostate cancer were also assessed by bone scans and the measurement of serum PSA levels. Responses were assessed according to the Response Evaluation Criteria in Solid Tumors (RECIST) version 1.1 [30]. AEs were assessed according to the National Cancer Institute Common Terminology Criteria for Adverse Events version 4.0. Serial serum and PBMC samples were collected before and during treatment. QOL was assessed using the European Organization for Research and Treatment of Cancer (EORTC) QLQ-C30 Japanese version 3.0 every 4 weeks, as follows: at baseline and on treatment phase course 3 day 1, treatment phase course 5 day 1, and treatment phase course 6 day 15 (clinical trial registration number UMIN000005246). The EORTC QLQ-C30 is a questionnaire developed to assess the QOL of cancer patients. The QLQ-C30 contains 5 functional scales (physical, role, cognitive, emotional, and social), 3 symptom scales (fatigue, pain, and nausea and vomiting), and a global health and QOL scale [31]. Expression of the NY-ESO-1 antigen Archival or newly obtained tumor samples from patients were screened for NY-ESO-1 expression. Eligible patients were those with NY-ESO-1 expression in ≥ 1% of tumor cells according to immunohistochemical staining with the E978 monoclonal antibody, or with ≥ 1 copy of NY-ESO-1 per 10^4 copies of glyceraldehyde-3-phosphate dehydrogenase (GAPDH) according to quantitative real-time PCR (qRT-PCR) analysis. T cell response, NY-ESO-1-specific antibody response and cytokine kinetics Serum samples were obtained at baseline, at 6 h and 1 week after the 1st vaccination, and 2 weeks after each vaccination. PBMCs were obtained at baseline, before the 4th vaccination and 2 weeks after the 6th vaccination. All samples were stored at −80 °C until analyzed. The NY-ESO-1-specific antibody response was assessed by ELISA as previously described [24,25]. Briefly, recombinant NY-ESO-1 proteins (His-tag and GST-tag) were absorbed onto immunoplates (cat. no. 442404; Nunc, Roskilde, Denmark) at a concentration of 10 ng/50 μL/well at 4 °C. The collected serum samples were diluted from 1:400 to 1:6400. After washing and blocking the plate, the sera were added and incubated for 10 h.
After washing, goat anti-human IgG (H + L chain) antibody (MBL, Nagoya, Japan) conjugated with peroxidase was added. After adding the TMB substrate (Pierce, Rockford, IL), the plate was read using a Microplate Reader (model 550; Bio-Rad, Hercules, CA). Serum samples were obtained from 83 healthy volunteers and assayed by ELISA for the NY-ESO-1 IgG antibody, as described in another study performed at Mie University (Mie University approval number: 817) before the start of this clinical study of CHP-NY-ESO-1 and MIS416. The cutoff level for the anti-NY-ESO-1 IgG antibody was defined as the mean optical density (OD450–550) value of the healthy-donor sera plus 1.645 × the standard deviation, which equaled 0.254. Hence, an OD450 value of at least 0.254 was considered a positive reaction at a serum dilution of 1:400. For patients who were antibody-positive at baseline, their antibody titers were judged to be "augmented" if they changed by fourfold or more compared with baseline. The IgG subclass antibody response to the NY-ESO-1 protein was detected by ELISA using polyclonal sheep anti-human IgG1 (dilution, 1:25,600; cat. no. AP006), IgG2 (dilution, 1:12,800; cat. no. AP007) and IgG3 (dilution, 1:12,800; cat. no. AP008) (H + L chain) antibodies conjugated with HRP (The Binding Site Group Ltd., Birmingham, UK) as the secondary antibodies. Statistical analysis The combined data analyses of the two treatment-phase trials are described in the protocol. The Mann-Whitney U test was used to compare data obtained in two groups. Fisher's exact test was used to compare the IgG positivity rates of patients who received CHP-NY-ESO-1 at 100 µg versus those who received CHP-NY-ESO-1 at 200 µg. Kruskal-Wallis ANOVA was used to compare data obtained in three or more groups. Student's t test was used to assess changes in QLQ-C30 scores between the pretreatment and treatment phases. P-values below 0.05 were considered statistically significant. Calculations were performed with SPSS Statistics version 25 (IBM Japan, Ltd., Tokyo, Japan). Patient characteristics and treatment exposure In total, 26 patients were enrolled (13 with prostate cancer, 5 with urothelial cancer, 4 with synovial sarcoma and 4 with other cancers), as shown in Table 1 and Supplementary Table 1. The median age was 70 years (range, 36–84). All patients had received prior therapies (chemotherapy, radiation, and/or surgery). Eight patients, all of whom were prostate cancer patients, received systemic dexamethasone (DEX) (Supplementary Table 2). Nine patients were enrolled in cohort 1, 7 in cohort 2, 6 in cohort 3 and 4 in cohort 4. The median number of vaccinations was 6 (range, 1–66). Seven patients (38%) moved to the maintenance phase and received ≥ 7 doses of the vaccine. Drug-related adverse events The drug-related AEs reported in this study are shown in Table 2. Grade 1–2 injection site reactions were observed in all patients. Grade 3 drug-related AEs were observed in 6 patients (23%): 5 exhibited hypertension and 1 presented with anorexia. There was 1 case of grade 3 hypertension (11.1%) in cohort 1, 2 (28.6%) in cohort 2, 1 (16.7%) in cohort 3, and 1 (25.0%) in cohort 4 (Supplementary Table 3). The patients with these AEs were all prostate cancer patients and had grade 2 hypertension at baseline. One patient required an increase in the dose of antihypertensive medication; the other AEs resolved without medical modification. No grade 4–5 drug-related AEs were observed.
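To make the seropositivity rule defined in the ELISA methods above concrete, the sketch below computes the cutoff (mean OD plus 1.645 standard deviations, i.e. the one-sided 95% normal quantile) and the fourfold "augmented" criterion in Python. The OD values are synthetic placeholders, not the trial's healthy-donor data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic OD450 readings for healthy-donor sera at 1:400 dilution
# (placeholder values; the study measured 83 healthy volunteers).
healthy_od = rng.normal(loc=0.12, scale=0.08, size=83).clip(min=0)

# Cutoff = mean + 1.645 * SD (1.645 is the one-sided 95% z-quantile).
cutoff = healthy_od.mean() + 1.645 * healthy_od.std(ddof=1)

def seropositive(od450: float) -> bool:
    """Positive reaction at a 1:400 serum dilution."""
    return od450 >= cutoff

def augmented(baseline_titer: float, on_treatment_titer: float) -> bool:
    """'Augmented' response for baseline-positive patients: >= 4-fold change."""
    return on_treatment_titer >= 4 * baseline_titer
```

On the study's own 83 healthy-donor measurements, this computation is what yielded the reported cutoff of 0.254.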
Responses Eight patients (31%) had stable disease (SD) (Table 3). Among the 13 patients with prostate cancer, 5 had SD (38%), 7 had PD (54%) and 1 was N/A (8%) (Supplementary Table 2). Among these patients, 4 advanced to the maintenance phase. One patient with prostate cancer who was enrolled in cohort 2 received 60 vaccinations during the maintenance phase. He showed no disease progression or significant increase in serum PSA levels over 5 years. Cytokine analysis We hypothesized that the MIS416 adjuvant affected serum cytokine levels in the early phase. The changes in serum cytokine/chemokine levels from baseline to 6 h after the 1st vaccination were determined using a multiplex assay. Samples from sixteen patients were assessable. To exclude an effect of the NY-ESO-1 dose, we assessed samples from patients enrolled in cohorts 2–4. Compared with baseline values, the levels of IL-6, IL-9, IL-10, GM-CSF, PDGF-BB, β-NGF, SCF and SCGF-β were significantly higher at 6 h after the first vaccination (Fig. 2). In contrast, no cytokines significantly decreased. With regard to the MIS416 dose, a comparison of cohorts 2, 3 and 4 revealed that the level of IL-17 tended to increase as the MIS416 dose increased (Supplementary Figure 1). T cell response Patients whose PBMCs were available for assessing T cell responses are listed in Table 3. (Table 3 notes: the number of spots (target − control) in 5 × 10^4 cells assessed by an ELISPOT assay was graded as follows: −: < 5, ±: 5–10, 1+: 11–50, and 2+: > 50; §: patients who had no measurable lesion at baseline; DEX, dexamethasone; N/A, not assessed.) The T cell response was assessed in two patients per cohort, for a total of 8 patients. No increase in NY-ESO-1-specific IFN-γ-secreting CD8+ T cells was detected after vaccination. In contrast, the number of NY-ESO-1-specific IFN-γ-secreting CD4+ T cells increased in 2 patients after vaccination. One of these 2 patients, Pt ID UR-008, did not show progression for 5 years, as mentioned above, and did exhibit a NY-ESO-1-specific CD4+ T cell response (Table 3 and Supplementary Fig. 2a). Quality of life In total, 11 patients were assessed for QOL. There was no significant difference in any score, including the 5 functional scales, the 3 symptom scales, and the global health and QOL scale, during treatment (Supplementary Fig. 3). Discussion CHP-NY-ESO-1 is a safe and promising cancer vaccine [21–25]. We expected the addition of MIS416 to make CHP-NY-ESO-1 more efficient without compromising safety. In this study, CHP-NY-ESO-1 with the adjuvant MIS416 at 200–400 μg showed acceptable safety and tolerability. Grade 3 hypertension was observed in five patients (19%). In multiple sclerosis (MS) patients who were administered MIS416 intravenously, vascular disorders, including hypertension, were observed in 52.5% of the patients [34]. In the higher MIS416 dose group (500–600 μg), the frequency of vascular disorders was 83.3%; these included diastolic hypertension in 50.0%, hypertension in 50.0%, and systolic hypertension in 66.7% of the patients. These results indicate that MIS416 may cause vascular AEs. The exact mechanism has not yet been clarified. In our study, among the 5 patients with grade 3 hypertension, 4 had prostate cancer. A previous meta-analysis showed a relationship between hypertension and the risk of prostate cancer [35]. Because some cytokines can elevate blood pressure [36], prostate cancer patients with hypertension at baseline might be more sensitive to MIS416-induced cytokines.
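As an illustration of the kind of two-group comparison used in the cytokine analysis above, the following Python sketch applies the Mann-Whitney U test (the test named in the statistical methods) to synthetic baseline and 6-h IL-6 values; the numbers are invented for illustration and are not trial data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic serum IL-6 levels (pg/mL) at baseline and 6 h after the
# first vaccination for 16 assessable patients -- illustrative only.
baseline = rng.lognormal(mean=1.0, sigma=0.5, size=16)
six_hours = baseline * rng.lognormal(mean=0.8, sigma=0.3, size=16)

u_stat, p_value = stats.mannwhitneyu(baseline, six_hours,
                                     alternative='two-sided')
print(f"U = {u_stat:.1f}, p = {p_value:.4g}")
```

Since the baseline and 6-h samples come from the same patients, a paired test such as the Wilcoxon signed-rank test (scipy.stats.wilcoxon) would be a common alternative; the sketch simply follows the Mann-Whitney choice stated in the methods.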
In line with the results of a previous study [25], we found that the antibody response was stronger in patients who received CHP-NY-ESO-1 at 200 µg than in those who received CHP-NY-ESO-1 at 100 µg (Table 3). The antibody response rate and the number of cycles to seropositivity were similar to those in the report by Kageyama et al. [25]. One patient in cohort 3 of our study was vaccinated only twice and did not seroconvert, and this patient was excluded from the IgG analysis. CHP-NY-ESO-1 induced a prominent IgG1 response with increased IgG2 and IgG3 titers. However, adding MIS416 seemed to suppress the IgG1, IgG2 and IgG3 responses (Supplementary Fig. 4a). While some patients did use steroids, NY-ESO-1-specific IgG1 titers were not affected by steroid use in cohorts 2–4 (Supplementary Fig. 4c). The patients' QOL was well maintained during vaccination with CHP-NY-ESO-1 plus MIS416 (200 and 400 µg) (Supplementary Fig. 3). However, CHP-NY-ESO-1 vaccination with 600 μg MIS416 was not well tolerated, and tumor shrinkage was not observed in this group. Furthermore, the immune response stimulated by CHP-NY-ESO-1 was not enhanced by increasing the dose of MIS416. MIS416 skewed the Th1 response in a mouse model [10]. In vitro, MIS416 was readily internalized by human myeloid and plasmacytoid DCs, resulting in cytokine secretion and cell activation/maturation. In this study, the subcutaneous injection of MIS416 increased IL-6 and IL-10 levels at 6 h after the first vaccination (Fig. 2). CHP-NY-ESO-1 vaccination elicited CD4+ and CD8+ T cell responses in previous studies [22–24]. Unfortunately, in this study, we did not find that CD4+ Th1 cell and CD8+ T cell responses were enhanced. As mentioned above, we compared serum IgG1, IgG2 and IgG3 titers between samples obtained from patients in cohorts 2–4 (CHP-NY-ESO-1 200 μg with MIS416) and those obtained from 8 patients enrolled in a previous study by Kageyama et al., in which the patients received CHP-NY-ESO-1 200 μg without an adjuvant [25]. The anti-NY-ESO-1 IgG1 response was also attenuated as the dose of MIS416 increased (Supplementary Fig. 4b). The NY-ESO-1-specific IgG1 titer was significantly lower in patients who received MIS416 600 μg than in those who received the vaccine without MIS416. Two main causes might explain these unexpected results. One cause is species-specific differences [37]. Although MIS416 suppressed IL-17 in an MS mouse model [38], this result has not been confirmed in humans with MS [34]. In our study, IL-17 levels tended to increase as the dose of MIS416 increased (Supplementary Fig. 1). When the effects of different CHP-NY-ESO-1 doses were compared, no significant change in serum cytokine levels was observed, with the exception of IL-13 (Supplementary Fig. 5). Based on the findings in a mouse model, we could not predict whether the human innate immune system would be activated by the TLR9 and NOD2 signals delivered by MIS416, leading to an adaptive immune response. Another cause is the paradoxical effect of MIS416. IL-10 generally acts as an immunosuppressive cytokine but can enhance the CD8+ T cell response at higher doses [39,40]. Although IL-10 levels were increased in this study (Fig. 2), the change in absolute concentration was small (median +1.3 pg/mL, Supplementary Table 4). In vitro, MIS416 stimulated PBMCs, causing them to secrete IL-6, IL-10, IFN-γ, TNF-α and IL-1β, with an inverse dose-response to MIS416 at 5, 20 and 50 µg/mL [41].
In an MS mouse model, a high dose of MIS416 (100 µg/mouse) resulted in the systemic suppression of pro-inflammatory cytokine levels [38]. In addition, in humans, in a clinical trial in which a TLR9 stimulant was applied, the higher dose group had a lower response rate than the lower dose group (≤ 2 mg, 80%; 8 mg, 38%) [42]. These findings suggest that TLR stimulation may suppress the innate response in a dose-dependent manner. However, it is not possible to directly compare data from the preclinical and human studies of MIS416, since the route of administration differed between the present and previous studies, and this is an important factor. In a previous study of MS patients, MIS416 was administered intravenously with the expectation that it would move to the liver and act as an immunosuppressive agent; in contrast, in this study, MIS416 was administered subcutaneously with the expectation that it would exert an immune-stimulatory effect in a draining lymph node [10]. There are several limitations to this study. First, a variety of cancer types were included, and we cannot exclude differences in prior systemic therapy and in immunogenicity among different cancers. Second, the effect of steroids cannot be ruled out, as steroid use is an important issue in cancer immunotherapy. For example, in patients with prostate cancer, steroids may elicit anti-prostate cancer effects. Combined systemic therapy with steroids is commonly administered to patients with prostate cancer refractory to castration [43–46]. In this study, steroid use (≤ 20 mg equivalent of prednisolone/day) was allowed. Eight patients (31%) used DEX, with 1 mg/day used by 3 patients and ≤ 0.5 mg/day used by 5 patients. All 8 patients were prostate cancer patients who received DEX at enrollment and throughout this study. Among these 8 patients, 38% achieved SD. In the cytokine analysis, steroid use was found to suppress changes in serum G-CSF and CCL2 levels (Supplementary Fig. 6); however, the measured values showed no significant differences. Steroid use also did not affect NY-ESO-1-specific IgG1 titers (Supplementary Fig. 4c). Interestingly, one of the 8 patients, who was enrolled in cohort 2, experienced no progression for 5 years and showed a NY-ESO-1-specific CD4+ T cell response. Cancer vaccines are known to be effective in prostate cancer, as shown in studies using sipuleucel-T [47,48]. Yoshimura et al. [49] reported that patients who received a peptide vaccine plus DEX had longer PSA progression-free survival than patients who received DEX alone. Although we cannot deny that immune suppression was induced by steroids, these findings suggest that the induction of a NY-ESO-1-specific T cell response holds promise for prostate cancer patients, even among patients undergoing low-dose systemic steroid treatment. It has been assumed that the appropriate clinical use of an immune-stimulating adjuvant enhances the anti-tumor effects of cancer vaccines. Based on this idea, a number of cancer vaccine clinical trials have been conducted worldwide; however, almost all of them have produced disappointing results. Melanoma-associated antigen (MAGE)-A3, a cancer-testis antigen, is one of the most promising targets for cancer vaccines [50]. Recombinant MAGE-A3 vaccination with the AS15 immunostimulant, which contains CpG, a TLR9 stimulant, was assessed in patients with melanoma and non-small cell lung cancer but did not produce a survival benefit [51,52]. This finding suggests that cancer vaccines cannot be developed as monotherapies. In our study, no complete or partial response was observed. The combination of CHP-NY-ESO-1 + MIS416 + anti-PD-1 mAb exerted a significant tumor growth suppression effect in the mouse model (Fig. 3).
Figure 3. CHP-NY-ESO-1 (40 μg/mouse) and MIS416 (250 μg/mouse) were administered subcutaneously on days 1 and 7; the anti-PD-1 monoclonal antibody (clone RMP1-14, 150 μg/mouse) was administered intraperitoneally on days 1, 4, 7, 9, 13, 16 and 19. Compared with the no-treatment group, only the CHP-NY-ESO-1 + MIS416 + anti-PD-1 mAb group showed significant tumor growth suppression (p = 0.029).
The addition of the anti-PD-1 mAb activates not only NY-ESO-1-specific T cells but also other tumor antigen-reactive T cells. A combination therapy that includes a cancer vaccine with the proper adjuvant and an immune checkpoint inhibitor may confer clinical anti-tumor effects. In conclusion, CHP-NY-ESO-1 with MIS416 at 200–400 µg was safe and tolerable but did not induce adequate immune or clinical responses. As a next step, we plan to conduct a clinical trial of a combination therapy that includes a cancer vaccine, an adjuvant and an immune checkpoint inhibitor. Informed consent Written informed consent to participate in the study and for the use of clinical data for research and publication was obtained from all patients included in the studies (Mie University Approval Numbers 2201, 2203 and 2366). Blood from healthy volunteers was obtained by laboratory members with the approval of Mie University (Approval Number 817). All healthy donors provided written informed consent to the use of their specimens for research and publication. Availability of data and materials To protect patient information in the clinical trial database, the datasets generated and/or analyzed in the present study are not publicly available, but they are available from the corresponding author on request.
Automorphisms of real del Pezzo surfaces and the real plane Cremona group

We study automorphism groups of real del Pezzo surfaces, concentrating on finite groups acting minimally on them. As a result, we obtain a vast part of the classification of finite subgroups of the real plane Cremona group.

INTRODUCTION

1.1. The classification problem. This paper is devoted to the study of finite automorphism groups of real del Pezzo surfaces. Our main motivation is the classification of finite subgroups of the real plane Cremona group; hence this paper may be viewed as a follow-up to [Yas16]. Recall that the Cremona group Cr_n(k) = Bir(P^n_k) is the group of birational automorphisms of the n-dimensional projective space over a field k. The finite subgroups of Cr_1(k) ≅ PGL_2(k) have been known since Klein's time (see Lemma 2.5 and [Bea10]). By contrast, the complete classification of finite subgroups of Cr_2(k) for k = k̄ was obtained by I. Dolgachev and V. Iskovskikh only in 2009 and involves various hard techniques of modern birational geometry, such as Mori theory, equivariant resolution of singularities, etc. For the exposition of these results, as well as some historical notes, we refer the reader to the original papers [Bla09] (the case of abelian subgroups) and [DI09a]. Much less is known for algebraically non-closed fields or for n ≥ 3. The classification of finite subgroups of Cr_2(R) was initiated by the author in [Yas16], where subgroups of odd order were classified up to conjugacy. The goal of this paper is to extend these results much further and to classify all finite groups acting minimally on real del Pezzo surfaces (see below). As will be explained below, this gives a vast part of the classification of finite subgroups of Cr_2(R). As for the case n ≥ 3, k = C, the classification seems out of reach at the moment. There are some partial results, see e.g. [Pro12], [Pro15]. Alternatively, one can try looking at things from a different point of view, using the notion of the Jordan property introduced in [Pop11]. Recall that an abstract group Γ is called Jordan if there exists a positive integer m such that every finite subgroup G ⊂ Γ contains a normal abelian subgroup A ⊴ G of index at most m. The minimal such m is called the Jordan constant of Γ and is denoted by J(Γ). There is a remarkable result [PS16a]:

Theorem 1.1 (Yu. Prokhorov, C. Shramov). Let char k = 0. Then Cr_n(k) is Jordan for each n ≥ 1.

This theorem allows, at least theoretically, the classification of finite subgroups of Cremona groups «up to abelian subgroups». Indeed, we know that for each extension …

Throughout this paper G denotes a finite group. Let k be a perfect field. We use the standard language of G-varieties (see e.g. [DI09a] or [Yas16]). The modern approach to classification is based on the following observations:
• For any finite subgroup G ⊂ Cr_2(k) there exists a k-rational smooth projective surface X, an injective homomorphism ι : G → Aut_k(X) and a birational G-equivariant k-map ψ : X ⇢ P^2_k such that ψ ∘ ι(g) ∘ ψ^{−1} = g for all g ∈ G. This process of passing from a birational action of G on P^2_k to a regular action on X is usually called the regularization of the G-action. On the other hand, for a k-rational G-surface X, a birational map ψ : X ⇢ P^2_k yields an injective homomorphism G → Cr_2(k), g ↦ ψ ∘ g ∘ ψ^{−1}. Moreover, two subgroups of Cr_2(k) are conjugate if and only if the corresponding G-surfaces are birationally equivalent.
So, there is a natural bijection between the conjugacy classes of finite subgroups G ⊂ Cr_2(k) and birational isomorphism classes of smooth k-rational G-surfaces (X, G). (Theorem 1.1 was initially proved modulo the so-called Borisov–Alexeev–Borisov conjecture, which was settled in any dimension in [Bir16].)
• For any projective geometrically smooth G-surface X over k there exists a birational G-equivariant k-morphism X → X_min, where the G-surface X_min is G-minimal. The latter means that any birational G-equivariant k-morphism X_min → Z is an isomorphism. If the surface X is additionally k-rational, then one of the following holds [DI09b, Theorem 5]: (1) X_min admits a conic bundle structure with Pic(X_min)^G ≅ Z^2; (2) X_min is a del Pezzo surface with Pic(X_min)^G ≅ Z.
So, the classification of finite subgroups of Cr_2(k) is equivalent to the birational classification of the minimal pairs (X, G) described above. The goal of this paper is to describe all the minimal pairs (X, G) with X a real del Pezzo surface, i.e. to complete the study of the latter case of the previous dichotomy.

1.3. Some comments on the conic bundle case. The reader may wonder why we focus only on the case of del Pezzo surfaces in this paper. The following example can serve as a partial explanation (or rather an excuse). Namely, it shows that there exist infinitely many pairwise non-conjugate involutions in Cr_2(R) which are all conjugate over C. So, the classification of finite subgroups up to conjugacy in Cr_2(R) is a much more subtle question. For the philosophy of k-birational unboundedness of conic bundle quotients standing behind this example, see [Tre16]. Let Z_n be the surface given by

x^2 ∏_{k=1}^{2n} (t_0^2 + k^2 t_1^2) + y^2 t_0^{4n} + z^2 t_1^{4n} = 0

in Proj R[x, y, z] × Proj R[t_0, t_1] ≅ P^2_R × P^1_R. The projection to the P^1_R-factor defines a structure of a conic bundle π : Z_n → P^1. Its geometrically singular fibers lie over the points p_k = [ik : 1], p̄_k = [−ik : 1] (here i = √−1) and are given by y^2 + z^2 = 0. Let g_n ∈ Aut(P^1_R) be the involution [t_0 : t_1] ↦ [−t_0 : t_1]. The complex involution σ and the automorphism g_n act on Z_n as shown in Figure 1. Note that: (1) The irreducible components of all singular fibers of Z_n can be Γ-equivariantly contracted onto a conic bundle without singular fibers, hence Z_n is rational over R. In particular, g_n ∈ Cr_2(R). (2) Z_n is ⟨g_n⟩-minimal. On the other hand, Z_n ⊗ C is not ⟨g_n⟩-minimal over C, as we can equivariantly contract disjoint irreducible components of all singular fibers onto some Hirzebruch surface. Using elementary transformations between Hirzebruch surfaces (or just [Bla09, Theorem 1]), we observe that all the g_n are conjugate in Cr_2(C). (3) The surface X_n = Z_n/⟨g_n⟩ has a structure of a conic bundle with 2n singular fibers, and the irreducible components in each fiber are complex conjugate. In particular, X_n is R-minimal. Thus X_n is not rational over R when n > 3 (e.g. by Iskovskikh's rationality criterion, see [Isk96, §4]). Assuming that G_1 = ⟨g_n⟩ is conjugate to G_2 = ⟨g_m⟩ in Cr_2(R), there exists a common equivariant resolution Y → Y_1, Y → Y_2 of the corresponding surfaces such that the actions of G_1 and G_2 coincide on Y. Therefore, Y_1/G_1 is birational to Y_2/G_2. However, for distinct n, m > 3 the conic bundles X_n and X_m are not birational to each other (see e.g. [Isk67, Theorem 1.6] or [Kol97, Theorem 4.3]). Therefore, the involutions g_n and g_m are not conjugate in Cr_2(R). This paper is organised as follows.
Section 2 recalls some basic facts about del Pezzo surfaces, their topology and their relation to Weyl groups; it also gathers some auxiliary results about the Sarkisov program and classical linear groups that will be used later. The reader may skip this section and return to it later, if needed. In Sections 3–9 we study groups acting on real del Pezzo surfaces X with K_X^2 ≥ 3, K_X^2 ≠ 7, 9. The cases K_X^2 = 9 and K_X^2 = 7 are trivial. Indeed, a del Pezzo surface of degree 7 is never G-minimal, and a real del Pezzo surface X of degree 9 with X(R) ≠ ∅ is isomorphic to P^2_R, so finite groups acting on it are well known, see Lemma 2.5. In comparison with the case k = C, we have to deal with real forms of del Pezzo surfaces (i.e. non-isomorphic real surfaces that become isomorphic over C). Here we face an additional difficulty, since the complete classification of possible automorphism groups of del Pezzo surfaces is available only over the field of complex numbers; in fact, this classification was heavily used in the work of Dolgachev and Iskovskikh. So, in Sections 3–6 (i.e. for K_X^2 ≥ 4) we generally adopt the following strategy for classification: for each (R-rational) real form of a del Pezzo surface X, we study the group Aut(X) (giving its precise description in many cases), and then determine the possible finite groups G ⊂ Aut(X) that can act minimally on X. To find such groups G, we usually investigate the action of Gal(C/R) × G on X ⊗ C; for del Pezzo surfaces of high degree, we look directly at the intersection graph of (−1)-curves, which is easy to analyze in these cases. For surfaces of low degree (K_X^2 ≤ 3), our approach becomes more combinatorial. Both the real structure σ on X and the automorphisms G ⊂ Aut(X) can be considered as elements of the Weyl group W associated to X (see §2.1). Using the classification of conjugacy classes in W, we determine the possible pairs (σ, G) such that the action of ⟨σ⟩ × G on X ⊗ C is minimal. In many cases we work with explicit equations of X and G (for example, in Section 7 we adapt for our purposes Sylvester's classical approach to cubic surfaces). In Appendices A and B we focus on some special classes of finite subgroups of Cr_2(R) (being motivated by the study of those in [Tsy13], [Pro12], [Pro17]) and in particular classify non-solvable finite groups acting on real geometrically rational surfaces. Our goal is to demonstrate that: (1) this classification can be obtained independently of the "complete" classification of all finite subgroups, and (2) the corresponding list is considerably shorter than in the case k = C. Finally, for the reader's convenience, some technical information about real invariants of some finite groups is included in Appendix C.

1.4. Notation and conventions. We use the following notation and conventions.
• In this paper, we say that a real del Pezzo surface X is G-minimal, or simply that G is minimal (when it acts on X), if and only if any birational G-morphism X → Y of G-surfaces is an isomorphism. Further, we say that X is strongly G-minimal, or that G is strongly minimal, if and only if rk Pic(X)^G = 1. Clearly, strong G-minimality implies G-minimality, but not vice versa (consider e.g. X = P^1_R × P^1_R with G acting preserving the factors).
• Moreover, all del Pezzo surfaces are assumed to be R-rational (if not stated otherwise), and in particular their real loci X(R) are not empty.
The latter condition implies that Pic(X_C)^Γ = Pic(X), hence Pic(X_C)^{Γ×G} = Pic(X)^G, where X_C = X ⊗ C, and Γ is the Galois group Gal(C/R), generated by the involution σ. Therefore, a real del Pezzo surface X is strongly G-minimal if and only if X_C is strongly Γ×G-minimal.
• We denote by Q_{r,s} the smooth quadric hypersurface {[x_1 : ... : x_{r+s}] : x_1^2 + ... + x_r^2 − x_{r+1}^2 − ... − x_{r+s}^2 = 0} ⊂ P^{r+s−1}_R.
• For a real del Pezzo surface X, we denote by X(a, b) the blow-up of X at a real points and b pairs of complex conjugate points. We shall mostly use P^2_R, Q_{3,1} or Q_{2,2} as X.
• Z/n, or simply n, is a cyclic group of order n;
• D_n is a dihedral group of order 2n;
• BD_n = ⟨a, x | a^{2n} = 1, x^2 = a^n, xax^{−1} = a^{−1}⟩ is the binary dihedral group of order 4n;
• S_n is the symmetric group on n letters;
• A △_D B is the diagonal product of A and B over their common homomorphic image D, i.e. the subgroup of A × B of pairs (a, b) such that α(a) = β(b) for some epimorphisms α : A → D, β : B → D;
• A • B is an extension of B by A;
• When running the Sarkisov program (e.g. as in Proposition 3.4) we denote by D_d (resp. C_d) a del Pezzo surface of degree d (resp. a conic bundle with d = 8 − K_X^2 singular fibers);
• I, or I_n, denotes the identity matrix of size n × n.

Acknowledgments. The author would like to thank Andrey Trepalin for numerous useful discussions and for explaining the results of [Tre19], and the anonymous referee, whose suggestions helped to improve both the exposition and the results of this paper. The author is also grateful to Jérémy Blanc, Yuri Prokhorov and Constantin Shramov for their valuable comments. The author acknowledges support by the Swiss National Science Foundation Grant "Birational transformations of threefolds" 200020_178807.

2. SOME AUXILIARY RESULTS

2.1. A quick look at (real) del Pezzo surfaces. Let us briefly overview some important tools that shall be used in this paper. For a more comprehensive account see e.g. [Dol12] or [Man86]. For the Minimal Model Program over R and its relation to the topology of real rational surfaces, see [Kol97]. In this paper we are interested in the embedding of finite groups into Cr_2(R), hence we focus on R-rational surfaces in the first place. When X is a non-singular real projective algebraic surface, its set of real points X(R) will always be regarded as a compact two-dimensional C^∞-manifold with the usual Euclidean topology. The following characterization of R-rational del Pezzo surfaces will be useful for us.

Remark 2.2. In fact, for an R-rational del Pezzo surface X, its real locus X(R) is diffeomorphic to one of the following manifolds: (1) the sphere S^2 if X ≅ Q_{3,1}(0, b); (2) the torus T^2 if X ≅ Q_{2,2}(0, b); (3) the non-orientable surface #_g RP^2 if X ≅ P^2_R(a, b), where g = a + 1 and 1 ≤ g ≤ 9. See [Kol97] for details.

Another powerful tool for studying del Pezzo surfaces is the Weyl groups. Let X_C be a complex del Pezzo surface of degree d ≤ 6, obtained by blowing up P^2_C in r = 9 − d points. The group Pic(X_C) ≅ Z^{r+1} has a basis e_0, e_1, ..., e_r, where e_0 is the pull-back of the class of a line on P^2_C, and the e_i are the classes of the exceptional curves. Put ∆_r = {s ∈ Pic(X_C) : s^2 = −2, s · K_{X_C} = 0}. Denote by E_r the sublattice of Pic(X_C) generated by the root system ∆_r. For an element g* ∈ W(∆_r), denote by tr(g*) its trace on E_r. To determine whether a finite group Γ × G acts strongly minimally on X_C, we use the well-known formula from the character theory of finite groups:

(1) rk (E_r)^{Γ×G} = (1/|Γ × G|) Σ_{g ∈ Γ×G} tr(g*).

Thus the group Γ×G acts strongly minimally on X_C if and only if Σ_{g ∈ Γ×G} tr(g*) = 0.
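To illustrate formula (1), the following Python sketch applies the same trace-averaging to Pic(X_C) for a del Pezzo surface of degree 6, using the standard basis e_0, e_1, e_2, e_3 and the rotation of the hexagon of (−1)-curves (cf. Section 4). The encoding of the curve classes as integer vectors is our own bookkeeping, not notation from the paper.

```python
import numpy as np

# Classes of the six (-1)-curves on a del Pezzo surface of degree 6,
# written in the basis (e0, e1, e2, e3) of Pic(X_C), in the cyclic
# order of the "hexagon": e1, d12, e2, d23, e3, d13.
E1, D12, E2, D23, E3, D13 = ((0, 1, 0, 0), (1, -1, -1, 0), (0, 0, 1, 0),
                             (1, 0, -1, -1), (0, 0, 0, 1), (1, -1, 0, -1))
hexagon = [E1, D12, E2, D23, E3, D13]

# Rotation of the hexagon by one step, as a 4x4 matrix acting on Pic(X_C).
img = {hexagon[i]: hexagon[(i + 1) % 6] for i in range(6)}
r = np.zeros((4, 4), dtype=int)
for col, e in ((1, E1), (2, E2), (3, E3)):
    r[:, col] = img[e]
r[:, 0] = np.array(img[D12]) + r[:, 1] + r[:, 2]  # since e0 = d12 + e1 + e2

# rk Pic(X_C)^G = average of traces over the cyclic group generated by r.
group = [np.linalg.matrix_power(r, k) for k in range(6)]
rank_inv = sum(np.trace(g) for g in group) / len(group)
print(rank_inv)  # -> 1.0
```

The average trace equals 1, i.e. rk Pic(X_C)^G = 1, matching the analysis in Section 4 according to which the rotation r of the hexagon acts minimally on a degree-6 surface.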
On the other hand, by the Lefschetz fixed point formula, for any h ∈ G we have

(2) χ_top(X_C^h) = Σ_i (−1)^i tr(h* | H^i(X_C, R)) = 2 + tr(h* | Pic(X_C)),

where X_C^h denotes the fixed locus of h.

Remark 2.3. Note that a cyclic group always has a fixed point on a complex rational variety. This follows from the holomorphic Lefschetz fixed-point formula.

In this paper we shall use the known classification of conjugacy classes in the Weyl groups. These classes are indexed by Carter graphs, named e.g. A_1, A_1^2, etc. Here we follow the terminology of [Car72] (used in [DI09a]). Among other things, a Carter graph determines the characteristic polynomial of an element from a given class and its trace on K_{X_C}^⊥, see [DI09a, Table 2]. Another useful source of information about involutions in Weyl groups and real structures on del Pezzo surfaces is [Wall87]. Note that Wall labels the conjugacy classes by Dynkin diagrams; in situations where this could confuse the reader, we give the precise correspondence between the two notations (e.g. in Table 11).

2.2. Sarkisov links. The main tool for exploring conjugacy in Cremona groups is the Sarkisov program. Here we very briefly recall what this tool looks like. For details see [Isk96], [DI09a], or [Pol97] for the theory developed over R. We work in the category of G-surfaces over a perfect field k. Similarly to the classical case of trivial G, any birational G-map between two G-surfaces can be decomposed into a sequence of birational G-morphisms and their inverses. A birational G-morphism X → Y can be thought of as the blow-up of a G-invariant zero-dimensional subscheme p ⊂ Y. When p is reduced and consists of closed points y_1, ..., y_n with residue fields κ(y_i), one has deg p = Σ deg y_i with deg y_i = [κ(y_i) : k]. If p is G-invariant, then it is a union of G-orbits. So, over the field of reals one can blow up orbits of real points and pairs of complex conjugate points. In this paper we shall work with G-minimal del Pezzo surfaces and conic bundles (in the sense defined above). From the point of view of Mori theory, these are rational Fano–Mori G-fibrations of dimension two (extremal contractions π : X → C, where C is a point in the del Pezzo case, and C is a curve in the conic bundle case). A birational G-map f between Mori fibrations fits into the usual commutative diagram. Now, according to the Sarkisov program, every birational map f : X ⇢ X′ of rational minimal G-surfaces factorizes into a composition of elementary Sarkisov links of four types. For a complete description of all possible links we refer to [Isk96].

2.3. Topological bounds. For a finite group G ⊂ Aut(X_C), the representation G → W(∆_r) obviously restricts the order of G when K_X^2 < 6, which makes the classification of finite subgroups of Cr_2(k) possible. It seems curious to us that for real del Pezzo surfaces one can get some bounds on |G| independently of the Weyl groups. We shall not use the following result, but in our opinion it is worth mentioning.

Proposition 2.4. Let X be an R-rational del Pezzo surface of degree d and let G ⊂ Aut(X) be a finite group. Then one of the following holds: …

Proof. We may assume that G acts faithfully on X(R) by diffeomorphisms. Let X ≅ P^2_R(a, b). Then X(R) ≈ #_{a+1} RP^2. Denote its orientable double cover by Σ_a. By [Bre72, Corollary 9.4] we may assume that G acts faithfully on Σ_a by orientation-preserving diffeomorphisms. Take any Riemannian metric on Σ_a and average it with respect to the G-action. The resulting G-invariant metric gives a G-invariant complex structure on Σ_a, and G can be regarded as a group of automorphisms of a Riemann surface of genus a.
Therefore, for a = 0 the group G embeds into Aut(Σ_0) ≅ PSL_2(C). We recall its subgroups in Lemma 2.5 below. For a = 1 the claim follows from the well-known classification of automorphisms of elliptic curves. Finally, for a > 1 the Hurwitz theorem implies |G| ≤ 84(a − 1), so a + 2b = 9 − d gives the result. Let X ≅ Q_{3,1}(0, b) or X ≅ Q_{2,2}(0, b). Again, G acts faithfully by diffeomorphisms of X(R). Passing to an index 2 subgroup, we may assume that the action is orientation-preserving. Applying the same arguments as above, we finish the proof.

2.4. Classical linear groups. The next result is classical and will be used throughout the paper (see e.g. [Bli17] or [Bea10] for a modern treatment).

Lemma 2.5. The following assertions hold. (i) Any finite subgroup of PGL_2(C) is one of the following: Z/n, D_n (n ≥ 1), A_4, S_4, A_5.

Despite its simplicity, Lemma 2.5 has important consequences for the classification of finite subgroups of Cr_2(R) and, more generally, of groups acting on real geometrically rational surfaces. For example, it "kills" almost all simple finite subgroups of Cr_2(R), see Appendix A.

3. DEL PEZZO SURFACES OF DEGREE 8

In this section X denotes a real del Pezzo surface of degree 8. We shall assume that X_C ≅ P^1_C × P^1_C (the other surface of degree 8, the blow-up of P^2_R at one point, is never G-minimal), so either X ≅ Q_{3,1} or X ≅ Q_{2,2} [Kol97, Lemma 1.16]. We treat these two cases separately. Let X = Q_{3,1}. Since Q_{3,1} is R-minimal, any G ⊂ Aut(X) acts strongly minimally on X. On the other hand, Aut(Q_{3,1}) ≅ PO(3, 1) ≅ O(3, 1)↑, where O(3, 1)↑ is the subgroup of O(3, 1) preserving the future light cone. In particular, O(3, 1)↑ ≅ PO(3, 1) and we may identify subgroups of PO(3, 1) with subgroups of the Lorentz group O(3, 1). Finite subgroups of O(3, 1) were classified in [PSA80]. The authors also indicated the smallest of the five locally isomorphic Lorentz groups which contains each finite subgroup; the group O(3, 1)↑ was denoted O_1(3, 1) there. To list the finite subgroups of O(3, 1)↑ we then have to look at the finite subgroups belonging to O_1(3, 1) and DO(3, 1) in the notation of [PSA80]. It turns out that all our subgroups belong to class (i) in the cited paper, i.e. we may assume that they consist of elements of the form g ⊕ 1, where g ∈ O_3(R) and 1 is the identity acting on the time coordinate. The classification of finite subgroups of O_3(R) (or point groups in three dimensions) is a very classical topic and we do not give the whole list here (one can consult [CoSm03, II] or apply Goursat's lemma to O_3(R) = SO_3(R) × {±I}). For an explicit description of these groups by matrices we refer the reader to [PSA80].

Remark 3.1. One can give a topological explanation of the embedding G → O_3(R). Indeed, the group G acts faithfully by diffeomorphisms of Q_{3,1}(R) ≈ S^2. By the classical theorem of Brouwer–Kerékjártó–Eilenberg, every such action is equivalent (i.e. conjugate) to a linear one, see e.g. [Zim12, §2].

Proposition 3.2. Let G ⊂ Aut(Q_{2,2}) be a finite subgroup such that Pic(Q_{2,2})^G ≅ Z. Then G is isomorphic to one of the following groups (which are all strongly minimal): …

Proof. The group G′ = G ∩ (PGL_2(R) × PGL_2(R)) naturally acts on the factors of X = P^1_R × P^1_R, preserving them. Let G_1 and G_2 be the images of G′ under the projections of PGL_2(R) × PGL_2(R) onto its factors. By Goursat's lemma, G′ = G_1 △_D G_2 for some D.
As the Z/2-component of Aut(Q_{2,2}) acts on P^1_R × P^1_R by switching the factors, the groups G_1 and G_2 must be isomorphic: otherwise G = G′ and Pic(X)^G ≅ Z^2, a contradiction. Thus G′ ≅ H △_D H, where H is either cyclic or dihedral. Note that a subgroup of a direct product of two cyclic groups is itself a direct product of at most two cyclic groups. Thus for H cyclic one can also write G ≅ (Z/m × Z/k) • Z/2, m, k ≥ 1. For some isomorphic presentations of D_n △_D D_n see [DI09a, Theorem 4.9].

Remark 3.3. Let X be a real del Pezzo surface of degree 8 with X_C ≅ P^1_C × P^1_C, and let G ⊂ Aut(X). If G has a real fixed point p on X, then G is linearizable. Indeed, blowing up p and contracting the strict transforms of the lines passing through p, we conjugate G to a subgroup of Aut(P^2_R).

Proof. If G has a real fixed point on X, then G is linearizable by Remark 3.3. Assume there is a birational map f : (X, G) ⇢ (P^2_R, G) and run the Sarkisov program on X to decompose f into a product of Sarkisov links; in what follows we refer to [Isk96, Theorem 2.6] for the description of these links (including the group action in the picture is straightforward). The first link can connect D_8 either with some D_* (a link of type II) or with C_2 (a link of type I; recall that here 2 stands for the number of singular fibers). In the latter case we can continue making links in the class C (e.g. of type II or IV) without creating new singular fibers, but at some point we have to link a conic bundle with a del Pezzo surface S. The same theorem shows that S ∈ D_8. Since we do not want to return to D_8, we may assume that the first link was actually of type II. In the diagram below we list all the possibilities (regardless of the base field or the group action). We stop drawing arrows if we have to link our surface with some D_* which has already occurred in the diagram. The labels denote the degrees of the points which we blow up. So, we see that there are only two possibilities to connect (X, G) with some (P^2_R, G). The first one assumes that the link starts at a G-invariant point, which has to be real in our case. The second possibility is a combination of links of type II, namely D_8 → D_5 → D_9, with labels 5 and 1 respectively. In particular, G must have a real fixed point on D_5, and hence either G ≅ Z/5 or G ≅ D_5 (see Proposition 5.2). In the first case G must have a fixed point on X and X ≅ Q_{3,1} (see e.g. [Yas16, 4.4]). Let G ≅ D_5, and let the link D_8 → D_5 be as follows: f is the blow-up of a point η, deg η = 5, and g is a contraction to a point ξ; note that deg ξ = 2 by [Isk96, Theorem 2.6]. We now use the linearization argument given in Section 5 below (or [Yas16, §4.6]). If X ≅ Q_{3,1}, then g contracts two conjugate G-orbits, so ξ is a pair of conjugate G-fixed points, and we cannot proceed to P^2_R. If X ≅ Q_{2,2}, then g contracts two real G-orbits, so ξ is a pair of real G-fixed points. Such a group can indeed be further linearized. Finally, if G has a fixed point p ∈ X(R), then there is a faithful linear representation G → GL(T_p X), so G is either cyclic or dihedral by Lemma 2.5.

4. DEL PEZZO SURFACES OF DEGREE 6

Let X be a real del Pezzo surface of degree 6. Then X_C can be obtained by blowing up P^2_C in three non-collinear points p_1, p_2, p_3. The set of (−1)-curves on X_C consists of six curves: the exceptional divisors of the blow-up, e_i = π^{−1}(p_i), and the strict transforms d_ij of the lines passing through p_i, p_j. In the anticanonical embedding X_C ↪ P^6_C these exceptional curves form a "hexagon" Σ.
This yields a homomorphism ρ from Aut(X_C) to the symmetry group of this hexagon, Aut(Σ) ≅ D_6. Since the set of all (−1)-curves on X_C is defined over R, its complement T is isomorphic to a torus over C. But X(R) ≠ ∅, so T is in fact an algebraic R-torus. One can view it as the connected component of the identity of Aut(X). There exist only 4 real forms of R-rational del Pezzo surfaces of degree 6: P^2_R(3, 0), P^2_R(1, 1), Q_{3,1}(0, 1), and Q_{2,2}(0, 1). They correspond to the real forms of T described by V. E. Voskresenskii.

FIGURE 2. Action of Γ on Σ

Proposition 4.1. Let X be a real del Pezzo surface of degree 6 and G ⊂ Aut(X) be a finite group acting minimally on X. Then one of the following holds:

(i) The surface X is isomorphic to Q_{2,2}(2, 0) ≅ P^2_R(3, 0) and can be given by explicit equations. Its automorphism group fits into the short exact sequence 1 → Ker ρ → Aut(X) → ρ(Aut(X)) → 1. Here Ker ρ ≅ (R^*)^2 is the diagonal subgroup of PGL_3(R), and ρ(Aut(X)) ≅ D_6 is generated by the rotation r = ρ(α_1) and the reflection s = ρ(α_2) for suitable α_1, α_2 ∈ Aut(X). The group G is an extension of a subgroup of D_6 by a group H ⊂ Ker ρ isomorphic to a subgroup of Z/2 × Z/2.

(ii) The surface X is isomorphic to Q_{2,2}(0, 1) and can be given by explicit equations. Its automorphism group fits into the short exact sequence 1 → Ker ρ → Aut(X) → ρ(Aut(X)) → 1. Here Ker ρ ≅ SO_2(R) × SO_2(R), and ρ(Aut(X)) ≅ D_6 is generated by the rotation r = ρ(α_1) and the reflection s = ρ(α_2). The group G is an extension of a subgroup of D_6 by a group H ⊂ Ker ρ which is a direct product of at most 2 cyclic groups of arbitrarily large order.

All listed groups do act minimally on the corresponding real surfaces.

Case X = P^2_R(3, 0). All (−1)-curves on X are real. Thus a cyclic group ρ(G) ≅ ⟨r^k⟩ acts minimally on X if and only if k = 1 (otherwise one can G-equivariantly contract an orbit which consists of disjoint (−1)-curves and is defined over R). By the same argument, it is easy to check that in the dihedral case only ⟨r^2, s⟩, and hence ⟨r, s⟩, act minimally on X. As any nontrivial finite subgroup of R^* is isomorphic to Z/2, we get the result.

Case X = Q_{2,2}(0, 1). The action of Γ on the hexagon is shown in Figure 2. Examining the action of G on Σ, one easily checks that only the groups ⟨r⟩, ⟨r^2⟩, ⟨r^2, s⟩, ⟨r^2, rs⟩, ⟨r, s⟩ act minimally on X.

Proposition 4.2. Let X be a real del Pezzo surface of degree 6, and G ⊂ Aut(X) be a finite group acting minimally on X. Assume that G is linearizable. Then G is one of the following groups (in the notation of Proposition 4.1):
• isomorphic to S_4: (1b) and (2c), where H is a Klein 4-group;
• isomorphic to A_4: (2b), where H is a Klein 4-group;
• dihedral: as listed in the proof below.

Proof. This is elementary group theory. As A_5 is simple, none of the groups from Proposition 4.1 is isomorphic to A_5. Let G ≅ S_4. Note that S_4 has no normal subgroups H with G/H isomorphic to Z/3, Z/6 or D_6. If S_4/H ≅ S_3, then H = {e, (12)(34), (13)(24), (14)(23)} is a Klein group. Let G ≅ A_4. Note that A_4 has no normal subgroups H with quotient isomorphic to Z/6, S_3 or D_6. If A_4/H ≅ Z/3, then H is a Klein group. Let G ≅ D_n. We know that G has a normal subgroup H with G/H isomorphic to Z/3, Z/6, S_3 or D_6. In particular, H is cyclic (otherwise [G : H] ≤ 2). On the other hand, a quotient of a dihedral group is again dihedral. In the case (1) of Proposition 4.1 we get that for H = id the group G is D_3 (1b) or D_6 (1c), while for H ≅ Z/2 the group G is D_6 (1b) or D_12 (1c). In the case (2) the cyclic group H can be of any order k, so either G ≅ D_{3k} and is of type (2c), (2d), or G ≅ D_{6k} and is of type (2e).
Finally, let G ≅ Z/n. Then H is cyclic. In the case (1) of Proposition 4.1 one has |H| ≤ 2 and G/H ≅ Z/6; thus G ≅ Z/6 or Z/12. In the case (2) the order of H can be arbitrarily large, hence G is isomorphic to Z/3k or Z/6k.

Remark 4.3. As was shown in [Yas16, §4.5], there exist infinitely many non-linearizable subgroups of type (2b) acting minimally on Q_{2,2}(0, 1). Moreover, we exhibited two non-conjugate embeddings of G = (Z/3)^2 into Cr_2(R): one is a trivial extension of type (2b), and the other comes from the fiberwise G-action on the conic bundle X = Q_{2,2} ≅ P^1_R × P^1_R with rk Pic(X)^G = 2.

5. DEL PEZZO SURFACES OF DEGREE 5

Each real del Pezzo surface X of degree 5 is isomorphic to P^2_R(4, 0), P^2_R(2, 1) or P^2_R(0, 2). There are 10, 4 or 2 real lines on X, respectively. It is clear from the blow-up model of X that the configuration of Γ-orbits of exceptional curves is uniquely determined by the pair (a, b). The incidence graph of such a configuration is the colored Petersen graph, where the lines in one Γ-orbit have the same color (and we additionally label the real ones by *). We assume that X is the blow-up of P^2_R at four points p_1, p_2, p_3, p_4 in general position, e_i is the exceptional divisor over the point p_i and d_{ij} is the proper transform of the line passing through the points p_i and p_j, see Figure 3. Let us do some extra work and find all the possibilities for Aut(X).

Proposition 5.1. Let X be a real del Pezzo surface of degree 5. Then Aut(X) ≅ S_5, Z/2 × Z/2 or D_4 for X ≅ P^2_R(4, 0), P^2_R(2, 1) or P^2_R(0, 2), respectively.

Proof. The "split" case X ≅ P^2_R(4, 0) is classical and can be found e.g. in [Dol12, Theorem 8.5.8]. Denote by Π_{a,b} the colored incidence graph of the (−1)-curves on X_C = P^2_R(a, b) ⊗ C. As Aut(X) naturally acts on the exceptional lines preserving incidence relations, we have a homomorphism ψ: Aut(X) → Aut(Π_{a,b}). It is injective, as any automorphism of X_C which fixes all (−1)-curves comes from an automorphism of P^2_R that fixes 4 closed points p_i, so it must be trivial. Note that every ϕ ∈ Aut(Π_{a,b}) must preserve both the colors and the intersection indices of any two vertices v_1 and v_2. Define α, β ∈ Aut(Π_{2,1}) and ς, ε ∈ Aut(Π_{0,2}) by explicit permutations of the vertices (if a line is not indicated, then it is stabilized). Then

Aut(Π_{2,1}) = ⟨α⟩ × ⟨β⟩ ≅ Z/2 × Z/2, and Aut(Π_{0,2}) ≅ D_4.

Indeed, in the case of Π_{0,2} one can use that Aut(Π_{0,2}) acts on the set {e_1, e_2, e_3, e_4} and the kernel of this action is obviously trivial. On the other hand, Aut(Π_{0,2}) cannot be isomorphic to S_4, as any automorphism of order 3 would fix d_{12} (hence e_1 and e_2) and d_{34} (hence e_3 and e_4). Since Aut(Π_{0,2}) contains D_4, we get Aut(Π_{0,2}) ≅ D_4. The case of Π_{2,1} is easy as well. To show that ψ is surjective, we explicitly construct the corresponding geometric actions α′, β′, ς′ and ε′ on P^2_R; we may also assume (after applying a suitable transformation from PGL_3(R)) that the blown-up points are in a standard position. Then the lifts of α′, β′, ς′ and ε′ act as α, β, ς and ε, respectively, on the corresponding Π_{a,b}.

Proposition 5.2. Let X be a real del Pezzo surface of degree 5 and G ⊂ Aut(X) be a finite group acting minimally on X. Then X is isomorphic to P^2_R(4, 0), and the group G is one of the following: Z/5, D_5, Z/5 ⋊ Z/4, A_5, S_5. All listed groups do act minimally on X.

Proof. In the case X = P^2_R(4, 0) we argue exactly as if k = C, see [DI09a, Theorem 6.4]. Assume that X ≅ P^2_R(2, 1). Note that the curve d_{12} is the only line on X intersecting 3 real lines. Thus it is stabilized by Aut(X) and can be equivariantly contracted, implying that the pair (X, Aut(X)) is not minimal. Now let X ≅ P^2_R(0, 2).
Then every automorphism in Aut(X) preserves the set {e_1, e_2, e_3, e_4} consisting of 2 pairs of complex conjugate lines which are pairwise disjoint, and hence can be equivariantly contracted. So (X, Aut(X)) is not minimal.

Let S = P^2_R(4, 0). It follows from the classification of Sarkisov links that for G = S_5 or A_5 the pair (S, G) is superrigid, see [DI09a, Propositions 7.12, 7.13]. Let G = ⟨a⟩ ≅ Z/5. Then S(C)^a consists of 2 points, whose blow-up is a del Pezzo surface Y of degree 3 with two skew lines ℓ_1 and ℓ_2, either real or complex conjugate [Yas16, §4.6]. One can use the G-birational map to conjugate G to a group acting on a quadric surface Q. If σ(ℓ_1) = ℓ_2, then Q ≅ Q_{3,1}. Now suppose that G also contains an element b normalizing ⟨a⟩. Then S(C)^a consists of 2 points and this set is b-invariant; therefore, we can again use the same birational map as above to conjugate G to a group acting on a quadric surface (this is a Sarkisov link of type II).

6. DEL PEZZO SURFACES OF DEGREE 4

6.1. Topology and equations. Throughout this section X denotes a real del Pezzo surface of degree 4. It is well known that the linear system |−K_X| embeds X into P^4_R as a complete intersection of two quadrics, which we denote Q_0 and Q_∞. If no confusion arises, we denote by the same letter a quadric, the corresponding quadratic form and its matrix. Let Q be the pencil {λQ_0 + µQ_∞ : [λ : µ] ∈ P^1}. Its discriminant ∆(λ, µ) ≡ det(λQ_0 + µQ_∞) is a binary form of degree 5. Since we assume X smooth, the equation ∆ = 0 has five distinct roots [λ_i : µ_i], i = 1, ..., 5. Equivalently, the matrix Q_0^{−1}Q_∞ (we may suppose Q_0 nonsingular) has five distinct eigenvalues −λ_i/µ_i ∈ C. They correspond to the singular members of Q, which we denote by Q_i, i = 1, ..., 5. Note that eigenspaces corresponding to different eigenvalues are orthogonal with respect to both Q_0 and Q_∞. Over C, we can find a basis of eigenvectors, making both Q_0 and Q_∞ diagonal, so the pencil takes a diagonal form. The complex conjugation permutes the eigenspaces. In a Γ-invariant one, we can pick a real vector for our basis, so the corresponding part of the pencil's equation has real coefficients a_i and b_i. For two complex conjugate eigenspaces, we get a two-dimensional real subspace W orthogonal to the other eigenspaces. If we pick an orthogonal basis {w, w̄} in W ⊗ C, where w is an eigenvector with eigenvalue −b/a, then the corresponding part of the pencil acquires real coefficients in a real basis of W.

Let us summarize this discussion by stating the following classification result.

Proposition 6.1. A real del Pezzo surface of degree 4 can be reduced to one of three normal forms (I), (II), (III), corresponding to 5, 3 or 1 real roots of the discriminant, respectively.

Now let us describe how the topology of X(R) depends on the equation of X. Nonsingular real pencils of quadrics were classified by C. T. C. Wall in [Wall80] by an invariant called the characteristic. In the notation of Proposition 6.1, one associates to the pencil two families of points P_t and Q_t on a circle. These points can be grouped in blocks: as we proceed anticlockwise around the circle, we meet a block of m_1 points P_t, then a block of n_1 points Q_t, then a block of m_2 points P_t and so on. When we are half way round, we meet an opposite block of m_1 points Q_t, so one has m_1 = n_{g+1} for some g ≥ 0. This g is called the genus, and the sequence (m_1, ..., m_{2g+1}) in cyclic order the characteristic Ξ(Q) of our pencil. Below we list some information about the topology and the real lines on X, following [Wall80], [Wall87] and [Kol97] (we list only those surfaces which are rational over R). Using Proposition 6.1, for each real form we also indicate the type of the equation of X.
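For later use (the diagonal form reappears in §6.2 below), let us record the diagonalized pencil explicitly; this is a sketch with the coefficients a_i, b_i as above, and with ∆ written up to a nonzero constant factor:

    \lambda Q_0 + \mu Q_\infty = \sum_{i=1}^{5} (a_i \lambda + b_i \mu)\, x_i^2,
    \qquad \Delta(\lambda, \mu) = \prod_{i=1}^{5} (a_i \lambda + b_i \mu),

so the five roots of ∆ correspond exactly to the five singular members Q_i of the pencil.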
Remark 6.2. Note that the sum of the entries in Ξ(Q) equals the number of real eigenvalues of Q. In particular, there is no one-to-one correspondence between the number of real eigenvalues of Q and the real structures on X.

6.2. Automorphisms. Let v_i and Q_i^b denote the vertex and the base of the singular quadric Q_i, respectively. Since Γ acts on the set {v_1, v_2, v_3, v_4, v_5}, there can be 1, 3 or 5 real v_i's. As Q_i^b ⊗ C has two pencils of lines, each Q_i has two pencils of planes, whose intersections with X_C give two complementary pencils of conics C_i and C′_i on X_C. These pencils satisfy the conditions C_i · C′_i = 2, C_i · C_j = C′_i · C′_j = 1 for i ≠ j, and C_i + C′_i ∼ −K_X. Two complementary pencils define a double cover π_i: X_C → P^1_C × P^1_C, which coincides with the projection of X from v_i. Depending on the type of the real locus Q_i^b(R) (i.e. on the realness of the two pencils of lines on Q_i^b), one has either σ(C_i) = C_i or σ(C_i) = C′_i. The Galois involution of the double cover π_i induces an automorphism τ_i ∈ Aut(X_C). For real v_i both π_i and τ_i are defined over R. As was explained in the beginning of this section, in a suitable system of complex coordinates both Q_0 and Q_∞ can be brought to diagonal form, so the equations of X can be written in the form Σ a_i x_i^2 = Σ b_i x_i^2 = 0, and then τ_i is given by x_i ↦ −x_i. These five commuting involutions generate a normal abelian subgroup A ⊂ Aut(X_C) with A ≅ (Z/2)^4 (note that τ_1 ⋯ τ_5 = id). In what follows it will be convenient for us to use the following description of this group, see [Bla09, Lemma 9.11]:

A ≅ {(a_1, ..., a_5) ∈ (Z/2)^5 : a_1 + ⋯ + a_5 = 0},

where an element (a_1, ..., a_5) exchanges the two conic bundles C_i and C′_i if a_i = 1 and preserves each of them if a_i = 0. In this terminology, the automorphism a = (a_1, ..., a_5) corresponds to the projective transformation

x_j ↦ (−1)^{a_j} x_j,   (5)

(note that the vectors a and a + (1, 1, 1, 1, 1) give the same projective transformation), so τ_1 corresponds to (0, 1, 1, 1, 1), τ_2 corresponds to (1, 0, 1, 1, 1), etc.

Further, the groups Aut(X) and Aut(X_C) act on the pencil Q preserving the set of five degenerate quadrics or, equivalently, the set of pairs R_i = {C_i, C′_i}. Thus we have two homomorphisms, ρ_1 to the automorphism group of the pencil and ρ_2: Aut(X_C) → Sym{R_1, ..., R_5} ≅ S_5, with ker ρ_1 = ker ρ_2 = A. In fact, the exact sequence id → A → Aut(X_C) → Im ρ_2 → id splits, and Aut(X_C) ≅ A ⋊ Im ρ_2. One can easily see [DI09a, Section 6] that Aut(X_C)/A is one of the following groups: id, Z/2, Z/3, Z/4, Z/5, S_3, D_5. Denote by ρ the restriction of ρ_2 to the real automorphism group Aut(X). Set A° = A ∩ Aut(X) and A• = ρ(Aut(X)).

Convention. In this paragraph every permutation τ ∈ S_5 should be understood as a permutation of the set {R_i : i = 1, ..., 5}. For an automorphism (a, τ) ∈ Aut(X_C) we denote it simply by a if τ = id, and by τ if a = 0.

6.3. Groups acting minimally on real del Pezzo quartics. We now start to enumerate the groups acting minimally on real del Pezzo surfaces of degree 4. For each real form listed in Table 3, we first get some restrictions on the groups A° and A•, and then list the possible strongly minimal groups G ⊂ Aut(X). We describe a way to get an explicit group structure of G (e.g. to list all its elements (a, τ)), and do this job ourselves for the subgroups of A°. To shorten the exposition, we do not treat systematically the minimal groups of mixed type, i.e. those which are not contained in A° or A•, but demonstrate how one can get such a description in Proposition 6.4 (case G ≅ Z/4). It is straightforward to write down all these automorphisms in coordinates using (5). To write down the equation of X, one may use Proposition 6.1, choosing the coefficients in accordance with the characteristic Ξ(Q), see Table 3.
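All the minimality checks below reduce to the standard character-theoretic formula for the rank of the invariant part (we record it here for convenience; it is the same computation that appears explicitly in the proof of Proposition 6.4):

    \operatorname{rk} \operatorname{Pic}(X_{\mathbf C})^{\Gamma \times G}
    = \frac{1}{|\Gamma \times G|} \sum_{h \in \Gamma \times G} \operatorname{tr}(h^*),

so that G acts strongly minimally exactly when this average of traces equals 1.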
The split case X ≅ P^2_R(5, 0) immediately follows from the work of Dolgachev and Iskovskikh [DI09a], as σ* = id and the whole groups (Z/2)^4 and S_5 act by real transformations of P^4_R.

Proposition 6.3 ([DI09a, Theorem 6.9]). Let X = P^2_R(5, 0) be a real del Pezzo surface of degree 4, and G ⊂ Aut(X) be a group acting strongly minimally on X. Then G is isomorphic to one of the groups listed in [DI09a, Theorem 6.9] (among them non-abelian groups of order 16). Moreover, all listed groups act strongly minimally on X.

We now proceed with the non-trivial real forms of del Pezzo quartics. Let X = Q_{3,1}(0, 2) be the blow-up of Q_{3,1} at four points p, p̄, q, q̄. Denote by E_x the exceptional divisor over a point x ∈ Q_{3,1}, and by F the strict transform of a fiber. Then, in the notation above, the pairs R_i = {C_i, C′_i} can be written down explicitly in terms of these classes. In what follows we shall depict the action of σ on the five pairs of conic bundles by a diagram whose arrows indicate the interchanged bundles; no arrow means that the corresponding conic bundle is σ-invariant (we shall omit the bullets' labels in the future). This description applies to the pairs R_i = {C_i, C′_i} for X = Q_{3,1}(0, 2) given above. Now it is easy to see that A• ⊂ {id, (23), (45), (23)(45)} (this inclusion is strict, as one can see from the list (7)), and any element (a_1, ..., a_5) ∈ Ker ρ has a_4 = a_5, so A° embeds into (Z/2)^3 (here and below we often use the fact that Γ commutes with real automorphisms). Proposition 6.4 below lists the strongly minimal groups G ⊂ Aut(X); the first three of them lie in Ker ρ, all the listed groups indeed act strongly minimally on X, and more information about the structure of the last three groups is given in the proof.

Remark 6.5. The case when X ≅ Q_{3,1} and G is a group of prime order was investigated in [Rob15, §4.3]. It was shown there that (1) X can be given by explicit equations, and (2) A° is isomorphic to (Z/2)^3 and is generated by elements γ_1, γ_2 and γ_3. To save some space, we shall use these results below, referring to [Rob15] for their proofs.

Proof of Proposition 6.4. In the light of the previous remark, we may proceed with determining the minimal groups. In what follows we denote the elements of Ker ρ by a ≡ (a, id), where a = (a_1, ..., a_5) ∈ (Z/2)^5, and the elements of Im ρ by τ ≡ (0, τ), τ ∈ S_5. Below we shall use the following trivial observation several times. Assume that the elements of G
• either all have a_1 = 0;
• or all have a_4 = a_5 = 0.
Then G is not strongly minimal. Indeed, in the first case G fixes the σ-invariant C_1 and C′_1, hence rk Pic(X)^G > 1. In the second case G fixes F + F̄, which is not a multiple of K_X; hence rk Pic(X)^G > 1. For brevity, we will call either of the two conditions above a (★)-condition (it will always be clear from the context which one we actually mean).

In the cyclic case of mixed type, g is of order 4, and we may assume that we are in the first case. A simple calculation gives the action of g and σ on Pic(X_C). From this one easily gets that rk Pic(X_C)^{Γ×G} = (Σ_{h∈Γ×G} tr(h*))/|Γ × G| = 1, so G is strongly minimal.

Now suppose that G is not cyclic and set G_0 = G ∩ Ker ρ. We may assume that |G_0| equals 2 or 4, as otherwise G contains Ker ρ and hence is strongly minimal. First consider the case |G_0| = 2. Since we have already considered the cyclic case, we may assume that G = ⟨g⟩ × ⟨h⟩, where h = (0, τ) and g = (a, id). Both these elements indeed give minimal automorphisms, as was noticed at the very beginning of the proof. The case τ · a = a + b is obtained from the first one by switching the roles of a and b.
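For the bookkeeping in such case distinctions, recall the composition law in the semidirect product A ⋊ Im ρ_2 (a sketch of the standard rule, written in the (a, τ)-notation of the Convention above):

    (a, \tau) \cdot (a', \tau') = (a + \tau(a'),\ \tau\tau'),

where τ acts on the vectors a′ ∈ (Z/2)^5 by permuting their coordinates; in particular, expressions like τ · a above denote this permutation action.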
Finally, let us stress once again that all the groups of Proposition 6.4 are defined over R and are strongly minimal.

We next pass to the case when X ≅ P^2_R(a, b). For some of these real forms there are no strongly minimal groups G acting on X at all; the remaining case is covered by the following statement.

Proposition 6.7. Let X be the remaining real form P^2_R(a, b) of a del Pezzo surface of degree 4, and let G ⊂ Aut(X) be a strongly minimal group. Then A• is a subgroup of Sym{R_1, R_2, R_3} × Sym{R_4, R_5} isomorphic to id, Z/2, Z/3 or S_3. Moreover, each group acting strongly minimally on X is either contained in A° and isomorphic to (Z/2)^2 or (Z/2)^3, or is an extension of such a group by a subgroup of A• isomorphic to Z/2, Z/3 or S_3 (and every such group actually occurs as a strongly minimal group), or is a group of mixed type of order > 2.

Proof. One writes down the classes C_i explicitly in terms of the blow-up model, together with the action of the complex involution on them. It follows that for any element (a_1, ..., a_5) ∈ A° one has a_4 = a_5, and A° ⊂ (Z/2)^3 is the group described in the statement. The embedding of A• into D_6 indicated therein is clear as well; to exclude some possibilities for A• one consults (7). Now let G ⊂ Aut(X) be a strongly minimal group. Note that G cannot consist of elements of the form (0, τ) only: otherwise C_1 + C_2 + C_3 = 3L − E_1 − E_2 − E_3 is defined over R, G-invariant and not a multiple of −K_X. All these groups are in fact strongly minimal. To check this, it is convenient to assume that Pic(X_C) ⊗ R is spanned by e_0 = −K_X and e_i = C_i, i = 1, ..., 5. In this basis the actions of a = (a_1, ..., a_5) and σ ∘ a are given by explicit matrices, and a character computation shows that G is strongly minimal if and only if 2(δ_0 − δ_1) + (ε_0 − ε_1) = 0, where δ_i is the total number of i's occurring at the first three positions of all a ∈ G, and ε_i is the total number of i's occurring at the last two positions. It is now straightforward to check the latter condition for G_o^1, G_o^2 and G_o^3, so all the listed groups do act strongly minimally on X. Arguments similar to the ones in Proposition 6.4 show that there are no involutions of mixed type acting strongly minimally on X.

Proposition 6.8. Let X ≅ Q_{2,2}(0, 2) be a real del Pezzo surface of degree 4, and G ⊂ Aut(X) be a strongly minimal group. Then A• is a subgroup of Sym{R_1, R_2, R_3} × Sym{R_4, R_5} ≅ S_3 × Z/2 ≅ D_6 isomorphic to id, Z/2, Z/3 or S_3. Moreover, each strongly minimal group is either contained in A° and isomorphic to (Z/2)^k, k = 1, 2, 3, 4, or is an extension of such a group by a subgroup of A• isomorphic to Z/2, Z/3 or S_3 (and all listed groups indeed occur as strongly minimal groups), or is a group of mixed type. More information about the structure of G is given in the proof.

Proof. As above, denote by E_x the exceptional divisor over a point x ∈ Q_{2,2}, and by F_1 and F_2 the strict transforms of the fibers. One writes down the classes C_i and the action of the complex involution on them explicitly, and the same reasoning as in the proof of Proposition 6.7 applies to A• to get the restrictions on this group. We proceed by enumerating the minimal subgroups of A°. One can use the same basis for Pic(X_C) ⊗ R as in the proof of Proposition 6.7 and see that a* remains unchanged, while σ*a* sends e_i to (1 − a_i)e_0 + ..., implying that G is strongly minimal if and only if δ_0 = δ_1. We leave it to the interested reader to write down all the possibilities for such G.

7. DEL PEZZO SURFACES OF DEGREE 3

Proof (no minimal actions on a surface with two elliptic lines). According to Table 4, such a surface contains 2 elliptic lines. Assume that G acts minimally on X. Then it permutes the elliptic lines, which must intersect at a point. The plane passing through these lines intersects X in a third real line, which must be G-invariant, a contradiction.

Proof (minimal groups in the case σ* of type A_1^3 or A_1^4). If σ* is of type A_1^3 or A_1^4, then X has exactly 3 real lines. In both cases these lines form a triangle, possibly a degenerate one (i.e. all three lines meet at one Eckardt point).
Indeed, in the first case this is obvious, as X dominates P^2_R. In the second case X is non-rational over R, so it cannot contain two disjoint real lines. So the group G acts on the set of three real lines, say ℓ_1, ℓ_2, ℓ_3, and one has a homomorphism δ: G → S_3. The minimality condition implies that Im δ contains an element of order 3. The kernel Ker δ consists of automorphisms that preserve each ℓ_i, and in particular either fix the three real points ℓ_1 ∩ ℓ_2, ℓ_1 ∩ ℓ_3, ℓ_2 ∩ ℓ_3, or preserve the unique point p = ℓ_1 ∩ ℓ_2 ∩ ℓ_3. In both cases Ker δ embeds into GL(T_p X) ≅ GL_2(R) and acts on T_p X with two real eigenvectors, hence must be isomorphic to (Z/2)^k, k = 0, 1, 2. Now a simple exercise in group theory and Lemma 7.1 give the list of groups in the statement.

We are ready to state the main result of this section. Note that in the following theorem we do not classify all possible automorphism groups of real cubic surfaces. It is more convenient for us to go through the classification of the possible groups Aut(X_C) instead. The latter can be found in [Dol12, 9.5] and [DI09a]; see also [Seg42] and [Hos97]. For the reader's convenience, we collect this description in the table below, listing each group Aut(X_C) together with the corresponding equation. Note that Segre's classification is known to be incorrect in some places. For example, the class VII is missing in his classification.

Theorem 7.4. Let X be a smooth real R-rational cubic surface, and G ⊂ Aut(X) be a group acting minimally on X. Then, according to the type of X_C, one of the following cases holds.

Type I: X is a real form of the Fermat cubic surface x_0^3 + x_1^3 + x_2^3 + x_3^3 = 0.

Type II: X is the real Clebsch cubic surface (see Proposition 7.6). Moreover, σ* = id, Aut(X) ≅ S_5 and G is either S_4 or S_5 (both cases occur).

Types III and IV: X is a real form of the cyclic cubic surface x_0^3 + x_1^3 + x_2^3 + x_3^3 + a x_1 x_2 x_3 = 0, and G = Aut(X) ≅ S_3 acts minimally by permuting the coordinates x_1, x_2 and x_3. The real structure σ* is of type A_1^3.

Type V: X is the real cubic surface studied in paragraph 7.2.

Type VI: Aut(X) ≅ S_3 × Z/2, and X is a real form of the surface of Segre type (xiv) (see paragraph 7.5). The group G is one of the following: S_3, Z/6 ≅ Z/3 × Z/2, S_3 × Z/2 (all these groups act minimally). Possible types of σ* are id, A_1 and A_1^3 (more information is given in the proof).

Type VIII: Aut(X) ≅ S_3, G = Aut(X), and X is a real form of the surface of Segre type (xi) (see paragraph 7.5); Aut(X) acts minimally. Possible types of σ* are id, A_1 and A_1^3 (more information is given in the proof).

Proof. Here we give an overview of the proof, referring the reader to the subsequent paragraphs for details. First we notice that Lemma 7.1 implies that Types VII, IX, X and XI are not relevant for us, as G would be a p-group. Next we look at the surfaces with comparatively "large" automorphism groups. Cubics of Types II and V are studied in paragraphs 7.1 and 7.2, respectively. Type I is discussed in paragraph 7.3. Types III and IV are discussed in paragraph 7.4.

Now let us consider the case when X is a surface of type VIII, i.e. Aut(X_C) ≅ S_3. Then we must have G = Aut(X) ≅ S_3, and G is minimal by [DI09a, Theorem 6.14 (7)]. Let us find the possible real structures. Note that X has 3 real Eckardt points p_1, p_2 and p_3 (recall that there is a bijective correspondence between the set of Eckardt points on a smooth cubic surface and the set of its involutions whose fixed locus contains an isolated point). There are the following possibilities.

(1) p_1, p_2 are of 1st type and p_3 is of 2nd type. Then clearly G preserves R_3, hence is not minimal.

(2) p_1 and p_2 are of 2nd type and p_3 is of 1st type. We may assume that G permutes R_1 and R_2.
If R_1 ∩ R_2 = ∅, then G is not minimal. Otherwise the plane ⟨R_1, R_2⟩ intersects X in some real line which is G-invariant.

(3) All points are of 2nd type. Then G acts on the R_i. Note that these lines cannot intersect in one point, as it would be another Eckardt point. Therefore, the R_i form a triangle. Now, a surface containing a point of 2nd type can be of Segre types F_3, F_4 or F_5 [Seg42, p. 153]. In our situation, the R-rationality assumption and Lemma 7.2 imply that σ is of type A_1^3.

(4) Finally, if all points are of 1st type, then we have at least 9 real lines on X and hence σ* is of type id or A_1.

Finally, assume that X is a real cubic of type VI, i.e. Aut(X_C) ≅ S_3 × Z/2. Then X_C has 4 Eckardt points: 3 collinear points p_1, p_2, p_3, and a fourth point q. The lines q p_i, i = 1, 2, 3, lie on X_C (otherwise X_C ∩ q p_i = {q, p_i, r_i} with r_i being an Eckardt point by [Dol12, Proposition 9.1.26]). Since both Γ and G preserve collinearity, q is real and G-fixed. Assume that not all p_i's are real, and let p_1 be the only real point among them. Then the real line q p_1 is G-invariant for any G ⊂ Aut(X). Thus we may assume that all the Eckardt points on X are real. In particular, Aut(X) ≅ S_3 × Z/2 (otherwise we do not have enough real involutions), and then [DI09a, Theorem 6.14 (6)] shows that the minimal subgroups are S_3, Z/6 and the whole Aut(X). The same considerations as in the previous case show that all the p_i have the same type. One can show that σ can be of types id, A_1 and A_1^3, see [Seg42, §106].

Remark 7.5. The classification given in Theorem 7.4 can also be formulated in terms of elements of the Weyl group W(E_6). Let X be a real R-rational cubic surface, and G ⊂ Aut(X) be a group acting minimally on X. Recall that |W(E_6)| = 2^7 · 3^4 · 5. By Lemma 7.1 we may assume that G contains an element of order 3 or 5. We have the following cases:

• G has an element of order 5. Then X is isomorphic to the Clebsch diagonal cubic over R, see Proposition 7.6.

• G has an element of order 3. Let g ∈ G be an element of order 3. Then g* is of type A_2, A_2^2 or A_2^3 in W(E_6) (the classes 3C, 3D and 3A, respectively, in ATLAS notation). On the other hand, if g* is of type A_2^3, then tr g* = −3 on K_X^⊥; hence, by the topological Lefschetz fixed point formula, Eu(X(C)^g) = 3 + tr(g*|_{K_X^⊥}) = 0. This is possible if and only if X(C)^g consists of an elliptic curve, a section of X by a hyperplane fixed by g in P^3_C. But g ∈ PGL_4(R), so it cannot have such a hyperplane (as it would correspond to an eigenvalue of multiplicity 3). So, we may assume that g* is either of type A_2 or of type A_2^2 in W(E_6).

– G has an element of type A_2. As was shown in [Dol12, 9.5.1], in this case X_C is isomorphic to the Fermat cubic surface, whose real forms are studied in paragraph 7.3.

– G has an element of type A_2^2. Then X_C is isomorphic to the surface (9), whose complex automorphism group is S_3 for general values of the parameters a and b. For special values we get more automorphisms, which can be illustrated by a specialization diagram (the arrows denote specialization, and the numbers denote the type of the surface according to [DI09a]). Note that the types III, IV and I correspond to the situation when the surface (9) specializes to a cyclic cubic surface (defined later). Such surfaces are naturally divided into three distinct types: harmonic, equianharmonic and the rest, see below. The equianharmonic case, namely the Fermat cubic (I), is discussed in paragraph 7.3.
The types III and IV correspond to harmonic and neither-harmonic-nor-equianharmonic cubics, respectively, and are discussed in paragraph 7.4.

In the next few paragraphs we discuss cubic surfaces of types I–V. For this we first need to recall Sylvester's classical approach to cubic forms. If S = {F = 0} ⊂ P^3 is the cubic surface given by F, one can write F as a sum of cubes of five linear forms L_i. Let us further make the change of coordinates z_i = α_i y_i and assume that our surface is given by

λ_0 z_0^3 + λ_1 z_1^3 + λ_2 z_2^3 + λ_3 z_3^3 + λ_4 z_4^3 = 0,  z_0 + z_1 + z_2 + z_3 + z_4 = 0,   (11)

where λ_i = 1/α_i^3. These parameters are uniquely determined, up to permutation and common scaling, by the isomorphism class of the surface. The representation (11) is called the Sylvester form of a cubic surface. So, a general cubic surface admits a unique (in the sense just mentioned) Sylvester form. We call such surfaces Sylvester nondegenerate (and Sylvester degenerate otherwise). One can show that the automorphism group of any surface given by (11) is a subgroup of the group S_5 which acts by permuting the coordinates (or, equivalently, the sides of the Sylvester pentahedron) [DD17, Theorem 6.1]. Moreover, the equation must be transformed into itself under any such permutation τ, i.e. the constant ζ ∈ C by which this equation gets multiplied equals 1. Indeed, otherwise it is easy to see that ζ must be a primitive 5th root of unity and τ a cycle of length 5; the equation then necessarily reduces to one defining a Sylvester degenerate cubic surface. So, in order to have some nontrivial permutation of the z_i transforming (11) into itself, the parameters λ_i must not all be distinct. As was noticed already in [Seg42], the corresponding automorphism groups are generated by the permutations of the z_i with equal values of λ_i (e.g. if λ_0 = λ_1 = λ_2, then Aut(X_C) ≅ S_3 is the group of permutations of z_0, z_1 and z_2).

7.1. Clebsch diagonal cubic (see also Segre's account [Seg42, §102]).

Proposition 7.6. Let X be a real R-rational cubic surface with Aut(X_C) ≅ S_5, and G ⊂ Aut(X) be a group acting minimally on X. Then X is isomorphic to the Clebsch diagonal cubic

x_1^3 + ⋯ + x_5^3 = 0,  x_1 + ⋯ + x_5 = 0  in P^4_R,

Aut(X) ≅ S_5, and G is either S_5 or S_4 (both groups occur).

Proof. It is well known that X_C is C-isomorphic to the Clebsch cubic surface [Dol12, Theorem 9.5.8]. Note that S_5 acts on X_C by permuting the coordinates x_1, ..., x_5, and Γ = Gal(C/R) acts on S_5 trivially.

SYLVESTER DEGENERATE CUBIC SURFACES

We are now going to study those real cubic surfaces which either do not admit the Sylvester form at all, or for which this form is not unique. The latter ones are called cyclic surfaces. These are the surfaces for which four of the five L_i are linearly dependent, and after a suitable change of variables the equation takes the form x_0^3 + G_3(x_1, x_2, x_3) = 0, where G_3 is a ternary cubic form (so our surface is a Galois triple cover of P^2). Consider the cubic curve {G_3 = 0} ⊂ P^2.

Equianharmonic case (the Fermat cubic). Recall that Aut(X_C) ≅ (Z/3)^3 ⋊ S_4, where one can view (Z/3)^3 as the group

{ω = (ω_1, ω_2, ω_3, ω_4) ∈ C^4 : ω_i^3 = 1 for all i, and ω_1 ω_2 ω_3 ω_4 = 1}

with the obvious action ψ of S_4 on (Z/3)^3. The group Γ acts on Aut(X_C) by σ · ((ω_1, ω_2, ω_3, ω_4), τ) = ((ω̄_1, ω̄_2, ω̄_3, ω̄_4), τ). In particular, τ is either trivial or of order 2. If c ∼ c′, then τ and τ′ (corresponding to c(σ) and c′(σ)) are conjugate in S_4; thus we may assume that τ is one of the following: id, (12) or (12)(34). A slightly tedious computation shows that this indeed corresponds to a partition of the set of 1-cocycles into 3 conjugacy classes with representatives (1, id), (1, (12)) and (1, (12)(34)), so the Fermat cubic surface has 3 real forms.
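The count of real forms above is an instance of the standard Galois-cohomology principle (recorded here as a sketch, with Γ = Gal(C/R)):

    \{\text{real forms of } X_{\mathbf C}\}/\text{isomorphism} \;\longleftrightarrow\; H^1(\Gamma, \operatorname{Aut}(X_{\mathbf C})),

and the three classes of 1-cocycles with representatives (1, id), (1, (12)) and (1, (12)(34)) are exactly the three real forms named below.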
We refer to these cases as F_id, F_(12) and F_(12)(34), respectively. The 27 lines on X_C are given by explicit equations with parameters 0 ≤ j, k ≤ 2, where ω is a primitive 3rd root of unity. One can easily check that the (σ, g)-invariant lines (i.e. the real ones) are:
• α_00, β_00, γ_00 for g = (1, id);
• γ_00, γ_10, γ_20 for g = (1, (12));
• all γ_kj, together with α_00, β_00, α_12, β_11, α_21, β_22, for g = (1, (12)(34)).

We see that there are 3 real lines on F_id and F_(12), and 15 real lines on F_(12)(34). Note that the three real lines on F_id form a triangle, while on F_(12) they intersect at an Eckardt point. A real cubic surface with 15 real lines is always rational over R. To determine which of the forms F_id and F_(12) are R-rational, one can compute the number of real tritangent planes. These are given by explicit equations with parameters s < p and i, j, k, l ∈ Z/3. So, in each of the two cases the number of real planes is 7, which means that all real forms of the Fermat cubic are rational over R (see Table 4).

Here g_3(x, y) = x^3 − 3xy^2 denotes the absolute invariant of D_3 ≅ S_3. Note that a surface S whose equation is built from g_3 in this way is automatically a real form of the Fermat cubic, since only the automorphism group of the latter can contain a copy of S_3 × S_3 (see Table 6; the case of H_3(3) ⋊ Z/4 is easily excluded).

Non-equianharmonic case. A cyclic non-singular and non-equianharmonic cubic surface has the canonical equation (15) over C (see [Seg42, §88]), with λ(λ^3 + 8)(λ^3 − 1) ≠ 0. It corresponds to Segre types (viii) and (ix) [Seg42, §100] and to Types III–IV of [DI09a]. So, the equation (15) describes a cyclic cubic surface varying in a pencil whose real members correspond to λ ∈ R. There are only two real singular surfaces in this pencil, arising from λ = ∞ and λ = 1. It can be checked that σ* is always of type A_1^3 (one can pick specific values λ > 1 and λ < 1, calculate the number of real lines and tritangent planes, and then use Table 4); see also [Seg42, §104].

Let f be a homogeneous polynomial defining a hypersurface Z in P^n. Recall that the hypersurface Hess(Z) = {det Hess(f) = 0} is called the Hessian hypersurface of Z. The Hessian of a cyclic cubic surface is the union of a fundamental plane Π = {x_0 = 0} and the cone over a cubic curve. Thus each automorphism of X is a linear map operating separately on x_0 and on x_1, x_2, x_3. One can show that Aut(X) is isomorphic to a subgroup of S_3 [Seg42, §104]. So a minimal group G must be isomorphic to S_3; note that such a group indeed acts minimally on X (since it is already minimal over C, [DI09a, Theorem 6.14]).

Remark 7.7. Recall that the intersection C = Π ∩ X_C is a cubic curve, whose 9 inflection points correspond to the 9 Eckardt points of X_C. Obviously, in our case C is defined over R (as Π is Γ-invariant, being the only plane component of the Hessian). It is well known that a real cubic curve has exactly 3 real inflection points, and these points are collinear. In the terminology of the proof of Theorem 7.4, the corresponding Eckardt points on X are of type 2 (this automatically follows from the type of σ*, or can easily be seen from the explicit description of the lines on X, see [Dol12, Example 9.1.24]).

7.5. Non-cyclic Sylvester degenerate surfaces. A detailed description of the automorphism groups of such surfaces can be found in [Seg42, §100] (cases x–xvii). After excluding 2-groups, we are left with just two types, (xi) and (xiv), having (complex) automorphism groups S_3 and S_3 × Z/2, respectively. Such groups were already discussed in the proof of Theorem 7.4.
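As an illustration of how coinciding Sylvester parameters produce automorphisms, consider the extreme case (consistent with §7.1): taking all the λ_i in (11) equal gives

    z_0^3 + z_1^3 + z_2^3 + z_3^3 + z_4^3 = 0, \qquad z_0 + z_1 + z_2 + z_3 + z_4 = 0,

i.e. the Clebsch diagonal cubic, on which the whole group S_5 acts by permuting the sides of the Sylvester pentahedron.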
7.6. Conjugacy classes. The classification of links in [Isk96] shows that del Pezzo cubic surfaces are rigid, and hence the conjugacy class of G in Cr_2(R) is determined by the conjugacy class of G in Aut(X).

8. DEL PEZZO SURFACES OF DEGREE 2

Throughout this section X (or X_B^sgn, see below) denotes a real del Pezzo surface of degree 2. The anticanonical map φ_{|−K_X|}: X → P^2_R is a double cover branched over a smooth quartic B ⊂ P^2_R. The Galois involution γ of the double cover is called the Geiser involution. Note that B(R) divides RP^2 into connected open sets and only one of these is non-orientable. Choose an equation F(x, y, z) = 0 of B such that F is negative on that non-orientable set. One can then associate two different degree 2 del Pezzo surfaces X_B^+ and X_B^− to B. It is classically known that there are 6 topological types of smooth real plane curves of degree 4. Correspondingly, there are 12 topological types of degree 2 real del Pezzo surfaces. The following table lists only those X = X_B^sgn which are rational over R (see [Wall87] or [Kol97] for details).

The Geiser involution is contained in the center of Aut(X) and fits into a short exact sequence with quotient Aut(B). It is well known that this exact sequence splits, i.e. Aut(X) ≅ Aut(B) × ⟨γ⟩. In particular, we have the following possibilities for the group G.

• γ ∉ G. Then G is isomorphic to a subgroup G_B ⊂ Aut(B) ⊂ PGL_3(R). The possible automorphism groups of real algebraic curves of genus 3 (considered as Klein surfaces) were described in [BEG86] (in fact, for each automorphism group the authors even provide some restrictions on the number of real ovals). Excluding those which do not embed into PGL_3(R), we get the following list: Z/2, Z/2 × Z/2, D_3, D_4, D_6, S_4. Since our quartic lies in P^2_R, it is not difficult to obtain this classification using invariant theory; see Appendix C for a description of some invariants. This also shows that our curve cannot admit an automorphism of order 6: otherwise the equation of B reduces to the form z^4 + Az^2(x^2 + y^2) + B(x^2 + y^2)^2 = 0, which is singular. Further, by [Yas16, Theorem 1.2] there is no action of H = Z/3 on an R-rational del Pezzo surface of degree 2 with Pic(X)^H ≅ Z. Therefore, if G does not contain γ and acts strongly minimally on an R-rational del Pezzo surface of degree 2, then it is isomorphic to one of the following groups:

Z/2, Z/2 × Z/2, Z/4, D_3, D_4, A_4, S_4.   (16)

• γ ∈ G. Then G is of the form ⟨γ⟩ × G_B, where G_B is one of the groups listed in (16) (if nontrivial), or G is the group Z/6 containing γ. Recall that for every real del Pezzo surface of degree 2 we have Pic(X)^γ ≅ Z. Therefore, any group G ⊂ Aut(X) containing γ is automatically strongly minimal.

The main result of this section is the following statement.

(1) G is a cyclic group ⟨g⟩ of order n.
• n = 2:
(2+) g: [x : y : z : w] ↦ [x : y : −z : w]; g* has type A_1^4, and σ* is of type A_1^4, A_1^3 or (A_1^3)′. The equation of X has the form ±w^2 = z^4 + z^2 f_2(x, y) + f_4(x, y), where f_2 and f_4 are some binary forms of degrees 2 and 4, respectively, which are chosen in accordance with Table 10 (as well as the sign in front of w^2). In each case, except possibly 2+ with σ of type A_1^3, it is indeed possible to choose the coefficients in the equation of X such that G is strongly minimal.

(2) G is isomorphic to one of the groups (Z/2)^2, S_3, D_4, A_4 or S_4 and contains at least one of the elements described in (1). In particular, all the groups occur (but we do not find all the possibilities for compatible real structures).

In what follows we assume that γ ∉ G.
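Concretely, in weighted homogeneous coordinates the two surfaces attached to B can be written as follows; this is a sketch matching the notation X_B^± above (x, y, z of weight 1 and w of weight 2):

    X_B^{+} = \{\, w^2 = F(x, y, z) \,\} \subset \mathbf P(1, 1, 1, 2), \qquad
    X_B^{-} = \{\, w^2 = -F(x, y, z) \,\} \subset \mathbf P(1, 1, 1, 2).

Both are double covers of P^2_R branched over B; their real loci lie over {F ≥ 0} and {F ≤ 0}, respectively.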
8.1. Case G ≅ Z/2. Let g be an involution generating G. Without loss of generality we may assume that g acts on P^2_R as [x : y : z] ↦ [x : y : −z], and then the equation of X_C has the form ±w^2 = z^4 + z^2 f_2(x, y) + f_4(x, y), where f_4 has no multiple factors (since B is smooth). For the action on X we have two possible lifts of g,

(2+): [x : y : z : w] ↦ [x : y : −z : w]  and  (2−): [x : y : z : w] ↦ [x : y : −z : −w].

We consider these two cases separately. In the case (2−) the element g* is of type A_1^3 in W(E_7), and tr id* + tr g* + tr σ* = 8 + tr σ* ∈ {7, 9, 11, 13}. Since X_C is assumed to be Γ × G-minimal, we must have tr(σ*g*) ∈ {−7, −9, −11, −13}. The last three values are impossible in W(E_7), so we may assume that σ* is of type A_1^4 and σ* = γ* ∘ g*. We are going to show that this case does not occur. Indeed, run two H-equivariant minimal model programs on X_C, one with H = ⟨σ⟩ and the other with H = ⟨g ∘ γ⟩. Their common result will be some del Pezzo surface Z. Since g ∘ γ is of type (2+), it fixes an elliptic curve on X (see below), so we have K_Z^2 ≤ 4 (it is easy to check that a del Pezzo surface Z with K_Z^2 > 4 cannot contain a pointwise fixed elliptic curve). On the other hand, Z is minimal over R, hence is not R-rational. But then X is non-rational over R too, a contradiction. The first two possibilities do occur: the first one is considered in Example 8.2, and the second one is obtained by applying the same construction to Q_{2,2}.

Example 8.2. Consider a quadric surface Q = Q_{3,1} together with three pairs of complex conjugate points p_±, s_±, r_± and the real automorphism g of Q acting as a "rotation" of S^2 by 180°. Then g(p_+) = p_−, g(s_+) = s_−, g(r_+) = r_−. Denote by π: X → Q the blow-up of Q at our six points and by g̃ the lift of g to X. We claim that:
(1) X is a smooth real del Pezzo surface of degree 2;
(2) the involution σ* on X is of type A_1^4 (in particular, X(R) ≈ S^2);
(3) X is minimal with respect to g′ = γ ∘ g̃.

Let us assume (1) for a moment. Note that Pic(X_C) is generated by three pairs of complex conjugate exceptional divisors E_{p_±}, E_{s_±}, E_{r_±} and a pair of complex conjugate divisors F, F̄. Note that σ permutes the members of each pair (which implies (2)), while g̃ permutes the members of each pair of exceptional divisors and preserves F and F̄. So g̃* ∘ σ* acts with trace equal to 6 on Pic(X_C) ⊗ R, hence with trace equal to 5 on K_X^⊥ ⊗ R. Put g′ = g̃ ∘ γ. Since γ* acts as −id on K_X^⊥ ⊗ R, one has tr((γ ∘ g̃)* ∘ σ*) = −5, so X_C is ⟨g′⟩ × Γ-minimal.

Finally, let us prove (1). For convenience of calculation, let us make a linear change of coordinates (T_0 : T_1 : T_2 : T_3). Divisors of bidegree (1, 0) and (0, 1) on Q are the lines {T_1 = tT_0, T_3 = tT_2} and {T_2 = tT_0, T_3 = tT_1}. It can easily be checked that no two of the points above lie on such lines. Further, divisors of bidegree (1, 1) are hyperplane sections of Q, but our points do not simultaneously satisfy an equation αT_0 + βT_1 + γT_2 + δT_3 = 0. Next, assume that our six points lie on a curve C of bidegree (1, 2) (note that C is smooth). Then g(C) is a curve of bidegree (2, 1) still containing all six points. But C · g(C) = 5, a contradiction. Finally, assume that the six points lie on a curve E of bidegree (2, 2). Note that E has at most one ordinary double point and E^2 = 8, so the self-intersection of the strict transform of E after the blow-up is at least −1. Hence X contains no (−2)-curves and is indeed a del Pezzo surface.

8.2. Case G ≅ Z/4. Let g be a generator of G_B ⊂ PGL_3(R) ≅ SL_3(R). We may assume that g acts as a rotation by π/2 in suitable coordinates. There are two possible lifts of g to an automorphism of X, differing by the sign of w. We treat these two cases separately.
9. DEL PEZZO SURFACES OF DEGREE 1

Throughout this section X denotes a real del Pezzo surface of degree 1. When k = R one can make a change of variables of the form z ↦ z/∛A − B/(3∛(A^2)) and reduce the equation of X to the form

w^2 = z^3 + f_4(x, y) z + f_6(x, y),   (21)

where f_4 and f_6 are binary forms of degrees 4 and 6, and x, y, z, w are weighted coordinates of weights 1, 1, 2, 3. The linear system |−2K_X| has no base points and exhibits X as a double cover of a quadratic cone Q ⊂ P^3_R; the corresponding Galois involution β ∈ Aut(X) is called the Bertini involution. Its fixed point locus X^β is the union of a curve R ⊂ Q of genus 4 and a single point q. This point is the unique base point of the elliptic pencil |−K_X|, so in particular q ∈ X(R).

Remark 9.1. In Table 11 below we collect some information about real structures on del Pezzo surfaces of degree 1. This time we do not restrict ourselves to R-rational surfaces only because, as will become clear in §9.1, we should have a closer look at the conjugacy classes of involutions in W(E_8) and deal with the fact that sometimes the Carter graph does not determine an involution up to conjugacy. For an irreducible reflection group W acting on a vector space V, and an involution σ* ∈ W, define i(σ*) = dim V_−, where V = V_+ ⊕ V_− is the decomposition into eigenspaces. In the notation of Table 11, i(σ*) is the sum of the lower indices. Note that there is a central involution −id in W(E_8), which induces a correspondence of each σ* with σ*_t (called the Bertini twist of σ* in §9.1), where i(σ*_t) = 8 − i(σ*). It will be important for us that the two classes with i(σ*) = 4 are both self-corresponding under this twist; see [Wall87, §2] for details.

Since Aut(X) fixes q, we have a natural faithful representation Aut(X) → GL(T_q X) ≅ GL_2(R), so either Aut(X) ≅ Z/n or Aut(X) ≅ D_n. Let G ⊂ Aut(X). The action of G on the pencil |−K_X| induces an action on C = Proj R[x, y] ≅ P^1_R (recall that by construction {x, y} is a basis of H^0(X, −K_X)). This gives the natural homomorphism υ: G → Aut(C) = PGL_2(R). Note that rk Pic(X_C)^β = 1; hence, to classify the groups acting strongly minimally on X, we can focus only on those that do not contain the Bertini involution.

Proposition 9.2. Let X be a real R-rational del Pezzo surface of degree 1, and let G ⊂ Aut(X) act strongly minimally on X. Then G is one of the groups listed in Table 12. For the dihedral groups G = ⟨R_n, S | R_n^n = S^2 = 1, S R_n S^{−1} = R_n^{−1}⟩, the corresponding rows of Table 12 read:
• (Z/2)^2, generated by R_2, S: f_4 = a x^4 + b x^2 y^2 + c y^4, f_6 = a′x^6 + b′x^4 y^2 + c′x^2 y^4 + d′y^6;
• D_4, generated by R_4, S: f_4 = a x^4 + b x^2 y^2 + c y^4, f_6 = (x^2 + y^2)(a′x^4 + b′x^2 y^2 + a′y^4).

Proof. All the groups listed in Table 12 contain the Bertini involution, so they act strongly minimally on X. To write down the corresponding equations one should consult Appendix C. It remains to exclude the groups which do not contain the Bertini involution. Below we assume that β ∉ G and G ≇ S_3; the case G ≅ S_3 requires a more thorough analysis and is excluded in paragraph 9.1.

Case G = ⟨g⟩ ≅ Z/2. Minimality forces tr σ* = 0 and tr(σ ∘ g)* = −8 (see Table 11). The latter equality implies that σ* = g* ∘ β*. Now run two H-equivariant minimal model programs on X_C, one with H = ⟨σ⟩ and the other with H = ⟨g ∘ β⟩. Their common result will be some del Pezzo surface Z. Since g ∘ β fixes an elliptic curve on X (as does g), we have K_Z^2 ≤ 4 (it is easy to check that a del Pezzo surface Z with K_Z^2 > 4 cannot contain a pointwise fixed elliptic curve). On the other hand, Z is minimal over R, hence is not R-rational. But then X is non-rational over R too, a contradiction.

Case G = Z/2n, n ≥ 2. Let g generate G. As G does not contain the Bertini involution, we may assume that g^n acts as diag{−1, 1} on T_q X, so det g^n = −1. But each h ∈ GL_2(R) with 2 < ord h < ∞ has determinant equal to 1, a contradiction.

Case G = D_n, n ≥ 2. It is easy to see that G = D_2 ≅ (Z/2)^2 always contains the Bertini involution. So we assume that n > 2 and n is even. Then G contains an element of order n whose (n/2)-th power is not the Bertini involution.
The same argument as before shows that this is impossible.

9.1. Geometry of ✡-configurations and S_3-actions. We now apply the techniques of [Tre19] to analyze S_3-actions on real del Pezzo surfaces of degree 1. More precisely, we show that if β ∉ G ≅ S_3, then G cannot act on any R-rational del Pezzo surface of degree 1 with invariant Picard number equal to one (the R-rationality assumption is crucial). So, assume G = ⟨g, h | g^3 = h^2 = 1, gh = hg^{−1}⟩ and rk Pic(X_C)^{Γ×G} = 1. Since we suppose β ∉ G, all involutions in G have zero traces on K_X^⊥ (i.e. are of types A_1^4 or (A_1^4)′). All elements of order 3 in G are of type A_2^2, with trace equal to 2, as was shown in [Yas16, §5.4]. The formula (1) then imposes the corresponding condition on the traces of the elements of Γ × G.

Following the terminology of [Tre19], we say that six (−1)-curves H_1, ..., H_6 on a del Pezzo surface of degree 1 form a ✡-configuration if they satisfy certain incidence conditions (all subscripts being modulo 6). By [Tre18, Lemma 4.12], for every element g* of type A_2^2 there are twelve g-invariant (−1)-curves on X_C forming two ✡-configurations, and two g-invariant ✡-configurations on which g acts faithfully. These four configurations are pairwise asynchronized. Denote the first two configurations by A = {A_1, ..., A_6} and B = {B_1, ..., B_6}, and the last two (where g acts faithfully) by C = {C_1, ..., C_6} and D = {D_1, ..., D_6}. Let us choose the numbering in every 6-tuple so that the first two entries are disjoint (i.e. are neighbors in the graph). By the proof of [Tre18, Lemma 4.15], the classes a_i = A_i + K_X, b_i = B_i + K_X, c_i = C_i + K_X and d_i = D_i + K_X, i = 1, 2, form a basis of the vector space V = (Pic(X_C) ⊗ R) ∩ K_X^⊥. We may assume that g acts on C and D by rotating them (more precisely, the "triangles" △ and ▽) counterclockwise. Using the relations (24), one easily finds the matrix of g* in our basis. Now the involution h ∈ G acts on the ✡-configurations, and it is easy to see that condition (23) implies that there is a G-invariant ✡-configuration among our four (the incidence relations in the configuration together with gh = hg^{−1} show that h acts either trivially or as a central symmetry). We call this invariant configuration ✡_0. Similarly, the Γ × G-minimality of X_C implies that Γ acts by a central symmetry on ✡_0.

Denote by σ* the image of the complex involution of X in the Weyl group W(E_8), and assume that X is given by the equation (21). Changing the sign of w^2 in that equation gives another del Pezzo surface of degree 1, which we denote X[β] and call the Bertini twist of X. If σ_t is the complex involution on X[β], then its image in W(E_8) equals σ*_t = β* ∘ σ*. Note that β* acts as −id on K_X^⊥, and therefore tr(σ*_t) = −tr σ*; in particular, the types of σ* and σ*_t determine each other. Set Γ_t = ⟨σ_t⟩. The output of the G-Minimal Model Program on X[β] is a real G-minimal del Pezzo cubic surface Y. Note that now Γ_t stabilizes the vertices of ✡_0, so σ*_t has the same type in W(E_8) as the type of the complex involution on Y, i.e. id, A_1, A_1^2, A_1^3, or some lift of A_1^4 (see Section 7). Therefore, the original involution σ* is of type A_1^8, A_1^7, A_1^6, A_1^5, A_1^4 or (A_1^4)′. The first four correspond to non-R-rational del Pezzo surfaces. So we may assume that the complex involution on Y is of type A_1^4, and hence both Y and X[β] are not R-rational. As was noticed in Remark 9.1, both classes A_1^4 and (A_1^4)′ are self-dual under the Bertini twist, hence X is not rational over R either.
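In the coordinates of the normal form (21), the Bertini involution and the Bertini twist used above take a simple explicit shape; this is a sketch under that normal-form assumption:

    \beta \colon [x : y : z : w] \mapsto [x : y : z : -w], \qquad
    X[\beta] \colon\ -w^2 = z^3 + f_4(x, y)\, z + f_6(x, y),

i.e. β is the deck transformation of the double cover given by |−2K_X|, and X[β] is obtained by changing the sign in front of w^2.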
9.2. Embedding into Cr_2(R) and conjugacy classes. A priori it is not clear that one can choose the coefficients of f_4 and f_6 in Table 12 in such a way that the corresponding surfaces are R-rational. Here is one possible approach to this problem. Let X̃ denote the blow-up of X at q. By Proposition 2.1, the surface X is R-rational if and only if X̃(R) is connected. The surface X̃ is an elliptic fibration over P^1_R with a real section (coming from the exceptional divisor of the blow-up). As shown in [Wall87, §5], the set X̃(R) is connected if Eu(X̃(R)) < 0. Now Eu(X̃(R)) is the sum of the Euler characteristics of the singular fibers. Recall that every geometrically singular member of |−K_X| is an irreducible curve of arithmetic genus 1. Therefore, each singular fiber of the fibration X̃ → P^1_R is a rational curve with a unique singularity, which is either a node or a simple cusp. From the point of view of the Euler characteristic only the nodes matter: we have a contribution +1 from each acnode (a singularity which is equivalent to the singularity y^2 = x^3 − x^2 over R) and −1 from each crunode (those which are equivalent to y^2 = x^3 + x^2), see Figure 4. Now an easy calculation shows that the coefficients of the binary forms f_4(x, y) and f_6(x, y) from Table 12 can be chosen in such a way that the number of crunodal curves in (21) is greater than the number of acnodal ones, so Eu(X̃(R)) < 0 (for instance, two acnodal and five crunodal fibers would give Eu(X̃(R)) = −3). So, all the groups G from Table 12 do embed into Cr_2(R). Also note that any del Pezzo surface of degree 1 is G-superrigid (see [DI09a, Corollary 7.11] and [Isk96, Theorem 2.6]). In particular, none of the groups listed in Proposition 9.2 is linearizable.

APPENDIX A. SIMPLE SUBGROUPS AND p-SUBGROUPS OF Cr_2(R)

In this appendix we show that the classification of subgroups of some particular types in Cr_2(R) can be much simpler than the analogous question in the complex setting. Moreover, these results can be obtained directly, i.e. avoiding the complete classification. In this section X denotes a real smooth geometrically rational (not necessarily R-rational) surface. Let G ⊂ Bir(X) be a finite group. Then, applying the G-equivariant minimal model program to X, we can assume that X is either a real del Pezzo surface with Pic(X)^G ≅ Z, or a real surface with a G-equivariant conic bundle structure and Pic(X)^G ≅ Z^2 [DI09b, Theorem 5]. Our goal is to classify the simple groups acting on real geometrically rational surfaces. Let us first recall the situation in the case k = C. As a by-product of [DI09a] one has the following theorem.

Theorem A.1. Let G ⊂ Cr_2(C) be a finite non-abelian simple group. Then G is isomorphic to one of the following groups: A_5, A_6, PSL_2(F_7). More precisely, we have the following characterization of these groups.
• There are 2 conjugacy classes of subgroups isomorphic to PSL_2(F_7). First, PSL_2(F_7) embeds into PGL_3(C) and preserves the Klein quartic x^3 y + y^3 z + z^3 x = 0. Second, it embeds as a group of automorphisms of the double cover of P^2_C ramified along that Klein quartic (i.e. of a del Pezzo surface of degree 2).
• There are 3 embeddings of A_5 into Cr_2(C), up to conjugacy. The first is in PGL_2(C), the second is in PGL_3(C), and the third is in the group of automorphisms of a del Pezzo surface of degree 5.

Remark A.2. Although a complete classification of finite subgroups of Cr_3(C) seems to be out of reach, Yu. Prokhorov managed to find all finite simple non-abelian subgroups of Cr_3(C). Besides A_5, A_6 and PSL_2(F_7), there are three new simple groups: SL_2(F_8), A_7, PSp_4(F_3). By contrast, over R the following holds.
Theorem A.3. Let G ⊂ Cr_2(R) be a finite non-abelian simple group. Then G ≅ A_5.

Proof. Assume first that G is minimally regularized on a real conic bundle π: X → C. Since X(R) ≠ ∅, one has C(R) ≠ ∅, so C ≅ P^1_R. The homomorphism G → Aut(P^1_R) ≅ PGL_2(R) is either injective or trivial. In both cases G embeds into the automorphism group of some P^1_R (which is either the base or a fiber), hence must be cyclic or dihedral, a contradiction. Now assume that G is minimally regularized on a real del Pezzo surface X of degree d = K_X^2 ≠ 7 (a del Pezzo surface of degree 7 is never G-minimal). We consider each d separately.

d = 9: Then X is a Severi–Brauer variety. As X(R) ≠ ∅, we have X ≅ P^2_R and Aut(X) ≅ PGL_3(R), so G ≅ A_5 by Lemma 2.5 (ii). This is where the Valentiner group A_6 is excluded (it does not embed into PGL_3(R)).

d = 8: The surface P^2_R(1, 0) is never G-minimal. If X ≅ P^1_R × P^1_R, we argue as in the conic bundle case. If X ≅ Q_{3,1} = {x^2 + y^2 + z^2 = w^2}, then G ≅ A_5, realized as the automorphism group of an icosahedron inscribed in the sphere Q_{3,1}(R) ≈ S^2. The action is minimal since Pic(Q_{3,1}) ≅ Z.

d = 6: Then G is a subgroup of Aut(X_C) ≅ (C^*)^2 ⋊ D_6, so it maps isomorphically onto a subgroup of D_6. So this case does not occur.

d = 5: Then G is a subgroup of Aut(X_C) ≅ S_5, so G ≅ A_5. The action of this group can be defined over R and is always minimal, since G contains a minimal element of order 5, see [Yas16, 4.6] or Section 5.

d = 4: Then X = Q_1 ∩ Q_2 ⊂ P^4_R is an intersection of two quadrics, and G acts on the pencil Q = ⟨Q_1, Q_2⟩ ≅ P^1_R. Since a nonabelian simple group admits no nontrivial homomorphism to a cyclic or dihedral group, G acts trivially on Q and fixes the vertices of its singular members. But these vertices generate P^4_C, hence G is abelian, a contradiction.

d = 3: All the possible automorphism groups of complex cubic surfaces are well known, see [Dol12, Table 9.6] or Section 7. The only group to consider is G ≅ A_5. This group acts faithfully on H^0(X, −K_X) ≅ R^4. It is known that there exists only one real 4-dimensional irreducible representation of A_5. Thus there exists a unique A_5-invariant cubic surface in PH^0(X, −K_X); we may assume that it is the Clebsch diagonal cubic Σ x_i = Σ x_i^3 = 0 in Proj R[x_0, x_1, x_2, x_3, x_4], and the set S of (−1)-curves on X consists of 27 real lines (the real forms of the Clebsch cubic were described in §7.1). Moreover, S = S_6 ⊔ S′_6 ⊔ S_15, |S_k| = k, where the lines inside both S_6 and S′_6 are pairwise disjoint. Further, there exists a commutative diagram of birational A_5-morphisms π, π′: X → P^2_R such that π (resp. π′) contracts S_6 (resp. S′_6) to the unique A_5-orbit of length 6 in P^2_R. It follows that rk Pic(X)^G = 2, so X is not strongly G-minimal (in fact, it is not G-minimal either, as the conic bundle structures on a cubic surface are given by projections away from a line).

d = 2: Then G embeds into Aut(B) ⊂ PGL_3(R), where B is a smooth quartic curve. By Lemma 2.5 (ii), we need to consider only G ≅ A_5. But, as is well known, a genus 3 curve has no automorphisms of order 5, so this case does not occur.

d = 1: Note that any group G ⊂ Aut(X) fixes the unique base point p ∈ X(R) of the elliptic pencil |−K_X|. Thus we have a faithful representation G → GL(T_p X) ≅ GL_2(R), and G cannot be simple.

Recall that every quasisimple non-simple subgroup G ⊂ Cr_2(C) is isomorphic to 2·A_5 ≅ SL_2(F_5), and the embedding G ⊂ Cr_2(C) is induced by an action either on P^2 or on a conic bundle. In contrast with this situation, we have the following.

Proposition A.4. Every quasisimple subgroup of Cr_2(R) is simple (and is described in Theorem A.3).

Proof. Let G ⊂ Cr_2(R) be a finite quasisimple non-simple group.
As usual, we assume that G acts biregularly on some R-rational surface X. The simple group H = G/Z(G) acts on Y = X/Z(G). The surface Y is clearly unirational over C, hence is C-rational by Castelnuovo's theorem. Thus H ≅ A_5 by Theorem A.3. The same group-theoretic arguments as in [Pro17, Proposition 2.1] imply that Z(G) ≅ Z/2 and G is the binary icosahedral group 2·A_5.

Suppose that X is a G-equivariant conic bundle over B ≅ P^1_R. The kernel of the homomorphism G → Aut(B) coincides with Z(G) = Z/2, as this is the only proper normal subgroup of 2·A_5. Thus A_5 acts faithfully on the general fiber, which is impossible.

Now let X be a del Pezzo G-surface with rk Pic(X)^G = 1. We then argue as in the proof of Theorem A.3. Note that the image of every nontrivial homomorphism from 2·A_5 either contains A_5 or coincides with the whole group. This observation helps us to exclude all the cases except d = 3. (For d = 8 and X ≅ Q_{3,1} one can argue as follows: it follows from [PSA80] that PO(3, 1) does not contain 2·A_5; alternatively, assume that G = 2·A_5 ⊂ Aut(Q_{3,1}); as G has no index 2 subgroups, it acts faithfully by orientation-preserving diffeomorphisms of S^2, and hence embeds into SO_3(R), see Remark 3.1, which is impossible by Lemma 2.5.) It remains to notice that 2·A_5 does not act on any cubic surface [Dol12, Table 9.6].

A.1. p-subgroups in Cr_2(R). Recall that a p-group is a finite group of order p^k, where p is a prime. From the group-theoretic point of view, these groups are in a sense opposite to simple non-abelian groups. It follows from [Yas16] that for p ≥ 3, every p-subgroup G ⊂ Cr_2(R) is conjugate either to a direct product of at most two cyclic groups, regularized on X = P^1_R × P^1_R with rk Pic(X)^G = 2, or to a cyclic subgroup of PGL_3(R), or to (Z/3^k × Z/3^l) ⋊ (Z/3) acting on a del Pezzo surface of degree 6, or to Z/5 acting on a del Pezzo surface of degree 5 (with the invariant Picard numbers equal to one). As the reader can see from the present paper, the classification of 2-subgroups of Cr_2(R) is much more extensive. We leave it to the interested reader to extract this classification for del Pezzo surfaces. Instead, we give a bound on the number r of generators of an abelian p-subgroup G ⊂ Cr_2(R) in the spirit of [Bea07] (where this was done for k = C; note that a priori Beauville's bound might fail to be sharp over R): one has r ≤ 4 for p = 2 and r ≤ 3 for p = 3. If G is elementary, then r ≤ 2 for p ≥ 3. For any p, these bounds are attained by some abelian p-subgroups G ⊂ Cr_2(R).

Proof. If G is minimally regularized on a real conic bundle X → B, then G fits into the short exact sequence

1 → G_F → G → G_B → 1,   (25)

where G_B ⊂ Aut(B) ≅ PGL_2(R) and G_F acts by automorphisms of the generic fiber F. Since G is finite, G_F is a subgroup of PGL_2(R). So, both G_F and G_B are cyclic or dihedral, and hence G is generated by at most 4 elements. Note that if G ≅ (Z/p)^r and p > 2, then r = 1 or 2. The remaining cases follow directly from the results of this paper. Note that for p = 2 the value r = 4 is achieved on a del Pezzo quartic surface isomorphic to P^2_R(5, 0) or Q_{2,2}(0, 2). The bound r = 3 for p = 3 is attained on a del Pezzo surface of degree 6 isomorphic to Q_{2,2}(0, 1) (so G is a group of type 2b).

APPENDIX B. NONSOLVABLE GROUPS

Theorem B.1. Let X be a real geometrically rational surface with X(R) ≠ ∅, and let G be a finite nonsolvable group acting on it.
Then the pair (X, G) is isomorphic to one (and only one) of the following pairs:
• (P^2_R, A_5);
• (Q_{3,1}, A_5) or (Q_{3,1}, A_5 × Z/2);
• (P^2_R(4, 0), A_5) or (P^2_R(4, 0), S_5);
• (Y, S_5), where Y is the Clebsch diagonal cubic.

Proof. If G is minimally regularized on a conic bundle, then we again have the short exact sequence (25) with both G_F and G_B cyclic or dihedral. Thus G is solvable. So we may assume that G acts on a real del Pezzo surface X of degree d with Pic(X)^G ≅ Z. If d = 9, then G ≅ A_5. If d = 8 and X ≅ Q_{2,2}, then G ≅ H • (Z/2)^r, where r ∈ {0, 1} and H is a subgroup of H_1 × H_2 with H_1 and H_2 cyclic or dihedral. Clearly, G is solvable in this case. If d = 8 and X ≅ Q_{3,1}, we have G ⊂ PO(3, 1), so G ≅ A_5 or A_5 × Z/2; see [PSA80]. When d = 6, Proposition 4.1 tells us that G ≅ H • N, where H is an abelian group and N is a group of order at most 6, so G is solvable. For d = 5 we have either G ≅ A_5 or G ≅ S_5 by Proposition 5.2. Let d = 4. Then G is a subgroup of W(D_5) ≅ (Z/2)^4 ⋊ S_5, so G ≅ A • H, where A is an abelian group and H ⊂ S_5. In fact, it is known that |H| < 10 [Dol12, Theorem 8.6.8], so G is solvable. Let d = 3. Then [Dol12, Table 9.6] shows that G ≅ A_5 or S_5. Moreover, we already know (see the proof of Theorem A.3) that X is the Clebsch cubic and that it is not A_5-minimal. Further, in the notation of that proof, let ℓ_i, i = 1, …, 6, be the elements of S_6 (note that all the lines on the Clebsch cubic are real). It is known that the divisor classes of the ℓ_i and K_X span Pic(X) ⊗ R, so Pic(X)^{A_5} ⊗ R is spanned by K_X and the sum Σ ℓ_i. Since S_5 does not leave this sum invariant, the group S_5 acts minimally on X.

APPENDIX C. REAL INVARIANTS OF SOME FINITE GROUPS

In this appendix we collect some results concerning the invariant theory of finite groups over R. They should be known to experts, but we decided to include them because we do not know of proper references. Let V be a real m-dimensional vector space and x_1, …, x_m a standard dual basis of V*. Let ρ : G → GL(V) be a faithful linear representation of a finite group G and η : G → GL(V ⊗ C) some faithful complex representation equivalent to ρ, i.e. ρ(g) = T • η(g) • T^{-1} for each g ∈ G and some T ∈ GL_m(C). Recall that every finite subgroup of GL_2(R) is either cyclic, Z/n ≅ ⟨R_n⟩, or dihedral, D_n ≅ ⟨R_n, S | R_n^n = S^2 = 1, S R_n S^{-1} = R_n^{-1}⟩. In the sequel, by the standard representations of Z/n and D_n we mean ρ sending R_n to the rotation matrix

ρ(R_n) = ( cos(2π/n)  −sin(2π/n) ; sin(2π/n)  cos(2π/n) )

(and, for D_n, sending S to a reflection, e.g. S ↦ diag(1, −1)). In order to construct G-invariant del Pezzo surfaces in Sections 8 and 9, we need to know the G-invariant binary forms f_k(x, y) of degrees k = 2, 4 and 6. They are listed below for the different groups (in their standard representations).

Cyclic groups. Let G = Z/n. We claim that

R[x, y]^{ρ(Z/n)} = R[x^2 + y^2, Re(x + iy)^n, Im(x + iy)^n].

Denote by ω a primitive nth root of unity. The representation ρ is equivalent to η : R_n ↦ diag{ω, ω̄} via the map T : x ↦ z = x + iy, y ↦ w = x − iy. For each g ∈ G we have η(g)(T f) = Tρ(g)T^{-1}(T f) = T f whenever f is ρ(G)-invariant, so f is ρ(G)-invariant if and only if T f is η(G)-invariant.
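As a quick sanity check of the displayed claim, consider the case n = 4 (an illustration of ours, not one of the paper's listed forms). Expanding (x + iy)^4 gives the generating invariants explicitly:

\[
  x^2 + y^2, \qquad
  \operatorname{Re}(x+iy)^4 = x^4 - 6x^2y^2 + y^4, \qquad
  \operatorname{Im}(x+iy)^4 = 4x^3y - 4xy^3,
\]

each of which is visibly fixed by the rotation R_4 : (x, y) ↦ (−y, x), as the general claim predicts.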
Development and Evaluation of a Relational Agent to Assist with Screening and Intervention for Unhealthy Drinking in Primary Care

Screening, brief intervention, and referral for alcohol misuse during primary care appointments is recommended to address high rates of unhealthy alcohol use. However, implementation of screening and referral practices in primary care remains difficult. Computerized Relational Agents programmed to provide alcohol screening, brief intervention, and referral can effectively reduce the burden on clinical staff by increasing screening practices. As part of a larger clinical trial, we aimed to solicit input from patients about the design and development of a Relational Agent for alcohol brief intervention. We also solicited input from patients who interacted with the implemented version of the Relational Agent intervention after they finished the trial. A two-part development and evaluation study was conducted. First, a user-centered design approach was used to customize the intervention for the population served: a total of 19 participants shared their preferences on the appearance, setting, and characteristics of multiple Relational Agents through semi-structured interviews. Following completion of the study one interviews, a Relational Agent was chosen and refined for use in the intervention. In study two, twenty participants who had completed the clinical trial intervention were invited back for a semi-structured interview to provide feedback about their experiences interacting with the intervention. Study one results showed that participants preferred a female Relational Agent located in an office-like setting, but the mechanical and stiff movements of the Relational Agent decreased feelings of authenticity and human trustworthiness. After refinements to the Relational Agent, post-intervention results in study two showed that most participants (n = 17, 89%) felt comfortable interacting and discussing their drinking habits with the Relational Agent, and just over half (n = 10, 53%) believed that the intervention had a positive impact on the way they thought about drinking or on their actual drinking habits. Despite variability in participants' preferences during the development stage of the intervention, incorporating participant feedback throughout the design process optimized comfort levels for individuals interacting with the Relational Agent. clinicaltrials.gov, NCT02030288, https://clinicaltrials.gov/ct2/home

Background

Unhealthy alcohol use (Saitz, 2005) is currently the leading cause of preventable death in the USA (Center for Disease Control and Prevention, 2022) and is more prevalent among veterans than among non-veteran civilian populations (Teeters et al., 2017). These high rates of unhealthy alcohol use result in increased healthcare utilization to address drinking consequences and impose high cost burdens on hospital systems (Center for Disease Control and Prevention, 2022; Sacks et al., 2015). To address the rates of unhealthy alcohol use (i.e., problem drinking, alcohol abuse, and dependence), the US Preventive Services Task Force (USPSTF) (Curry et al., 2018) recommends regular alcohol screening in primary care for all adults over the age of 18, and the Veterans Health Administration (VHA) has joined the USPSTF by mandating that all veterans be screened for alcohol use annually during regular primary care visits (Teeters et al., 2017).
Screening, Brief Intervention, and Referral to Treatment (SBIRT) is an evidence-based approach used to identify and provide treatment to patients in a variety of clinical settings. SBIRT is conducted with the overall goal of immediate intervention and presentation of treatment options to individuals presenting with unhealthy levels of alcohol use (Babor et al., 2007). Primary care is often considered the ideal clinical setting for SBIRT because of the frequency with which patients present to primary care, the ready availability of treatment resources, and a higher rate of compliance among patients asked to participate in screening in primary care (McCance-Katz & Satterfield, 2012) than in other clinical settings.

Despite the recommendation by the USPSTF and VHA, effective implementation of unhealthy alcohol use screening in primary care settings is difficult, and screening rates in primary care clinics remain highly variable. The low screening rates are often attributed to implementation-related physician barriers, including limited physician availability, competing clinical demands during the primary care visit (McNeely et al., 2018; Rahm et al., 2015; Yarnall et al., 2003), the need to address other pressing medical issues, and limited overall knowledge of available follow-up referral care services when a patient screens positive (a key component of the SBIRT approach). However, low positive screening rates may also be exacerbated by patients' reluctance to disclose alcohol use, possibly due to feelings of discomfort or to fears of substance use-related stigmas or consequences (McNeely et al., 2018; Teeters et al., 2017; Williams et al., 2016).

The lack of overall knowledge of the SBIRT approach among primary care providers has kept referral rates persistently low nationwide, and despite the recommendation and incentive for primary care physicians to administer the screening, the quality of screening remains inconsistent. Upwards of 70% of veterans screening positive for past-year substance use disorders do not receive treatment referrals (Boden & Hoggatt, 2018; Golub et al., 2013), and the proportion of veterans with unmet alcohol use-related treatment needs is double the proportion with unmet treatment needs for serious mental health concerns (Golub et al., 2013). Identifying innovative ways to provide all components of SBIRT in routine care, and not just screening, is critical to combating this public health issue in both the veteran and general populations.
Technology-based advances in alcohol screening and intervention provide a unique opportunity to combat unhealthy alcohol use by offsetting both provider and patient barriers to conducting SBIRT (Harris & Knight, 2014). Relational Agents are computer-based programs that can simulate face-to-face counseling and have been developed and tested for a variety of health-related conditions, including weight loss (Watson et al., 2012), exercise (Bickmore et al., 2013; Fasola & Mataric, 2012; Sillice et al., 2018), diabetes management (Thompson et al., 2019), and medication adherence (Bickmore et al., 2010a, b). Relational Agents provide the opportunity to form therapeutic bonds with patients (Bickmore & Gruber, 2010) by fostering a setting of comfort and confidentiality, thereby offering patients an alternative way to disclose and discuss behaviors that may be uncomfortable to raise with their physicians face to face. Previous research has also shown that patient preferences for Relational Agents vary on numerous characteristics, including, but not limited to, gender (Esposito et al., 2021), animation (Parmar et al., 2022), and race (Bickmore et al., 2005a, b; Persky et al., 2013; Zhou et al., 2014). Tailoring a Relational Agent to the patient population's preferences can lead to increased trust and buy-in from patients. In addition, Relational Agents have been shown to effectively promote long-term behavior change. However, testing and development in the context of substance use are limited (Bickmore et al., 2020).

The Relational Agent design was based on research demonstrating the elements of effective interventions (FRAMES: feedback, responsibility, advice, menu, empathy, self-efficacy) (Bien et al., 1993) and used a motivational interviewing style (Miller & Rollnick, 2012). It was designed to administer the QDS (Quick Drinking Screen) (Sobell et al., 2003) and AUDIT-C (Alcohol Use Disorders Identification Test-Consumption) (Bush et al., 1998) to the participant, provide normative feedback about how their drinking compared with that of their same-gender peers, elicit concern about the participant's drinking habits, motivate and ask for a commitment to change, and refer the participant to treatment. Feedback was tailored based on participant responses to the AUDIT-C and QDS screening tools. The Relational Agent was designed to take roughly 15 to 20 minutes to complete.

This system was designed specifically to fit into primary care by providing screening, motivational feedback, and referral. The Relational Agent presented here is unique in that other computerized systems in the VHA provide only screening (i.e., eScreening, the My HealtheVet alcohol use screener), depend on trained interviewers (e.g., the Behavioral Health Lab (U.S. Department of Veterans Affairs, 2022), the Mental Health Assistant (U.S. Department of Veterans Affairs, 2012)), or are part of a multi-session system meant to provide treatment (i.e., VetChange (Brief et al., 2013)). Many systems outside the VHA are not optimized for primary care and tend to target college students. For instance, My Student Body is a program that provided screening and feedback using only social norms (Chiauzzi et al., 2005); Talk to Frank (The Home Office, United Kingdom) is informational only; and eCheckup to Go provided similar but much less extensive feedback to users (Moyer et al., 2004).
This report is part of a larger project to develop and implement an engaging and confidential method of delivering SBIRT to veterans in primary care. The goal of this study was to engage veterans in a novel user-centered design of a Relational Agent intervention for unhealthy alcohol use by optimizing the presentation, acceptability, and feasibility of the intervention and its ability to effectively engage end users. We aimed to solicit input on veteran impressions during the development process and following use of the final intervention in a randomized clinical trial. This was part of a multi-part Hybrid I Effectiveness-Implementation study (Curran et al., 2012), beginning with veteran patient input and user design implementation, followed by implementation and a randomized clinical trial (as reported in Rubin et al., 2022; Zhou et al., 2017), and then a follow-up assessment among selected veterans who used the final implemented version of the Relational Agent intervention in the RCT, soliciting qualitative feedback to understand the results and further improve the intervention. The importance of effectiveness-implementation hybrid designs (Landes et al., 2020) is that critical information about successfully implementing the practice in the real world is gathered at the same time as effectiveness is tested in a classic clinical trial. This iterative user-centered design provides an important model demonstrating how to customize an intervention for the target population and the setting in which it will be implemented. Here, we report on the qualitative implementation data gathered.

The intervention development and usability phases, together with the qualitative interview data collected among users in the trial (evaluation phase), are presented and discussed here. The development phase was designed to determine participants' preferences for the physical appearance and environmental setting of a Relational Agent and to understand participants' perceptions of the ease of use and perceived effectiveness of the Relational Agent intervention while it was still in development. The evaluation phase aimed to assess participants' overall perceptions of the implemented Relational Agent following completion of the clinical trial intervention (Rubin et al., 2022).

Design and Usability Phases

Recruitment, Eligibility, and Participants for Design and Usability Phases

Recruitment

Participants were recruited from two primary care clinic locations within the VA Boston Healthcare System via posters displayed around the clinics describing the study. Clinic staff and physicians were encouraged to refer patients to the study, and two research assistants were available to speak with interested participants. All veterans who received services through the VA primary care clinic were eligible to participate. No level of alcohol use involvement was specified as an eligibility requirement during these Relational Agent development phases, as the goal was to develop a Relational Agent that could be used as a screening and (as needed) intervention and referral tool for all incoming primary care patients, regardless of their drinking status.
Design Phase

Twenty participants agreed to participate in the semi-structured interviews. One participant was later excluded due to ineligibility, resulting in 19 participants included in the analysis. Participants were 79% (n = 15) male with a mean age of 56 ± 15.3 years; 68% (n = 13) were White. Most participants (n = 17, 89%) self-reported using a computer regularly, while two participants (11%) self-reported limited computer use (i.e., having only used a computer a few times).

Usability Phase

We invited the 19 participants who took part in the first interview to return for a second interview; however, only fifteen returned to complete the usability testing and second semi-structured interview. Participants were 80% (n = 14) male with a mean age of 57 ± 15 years; 73% (n = 12) were White.

Study Procedures

All study procedures and documents were approved by the VA Boston Healthcare System Institutional Review Board.

Design Phase

The Relational Agent was developed and rendered in the Unity3D game engine. The Relational Agent is able to simulate human conversational behavior, speaking with a speech synthesizer synchronized with nonverbal behaviors generated using BEAT (Cassell et al., 2001), including hand gestures, gaze behavior, and facial displays. The Relational Agent's language is modeled in a custom scripting language that represents dialogue using hierarchical transition networks with template-based text generation, enabling real-time tailoring and personalization of the counseling dialogue for each patient. Users converse with the Relational Agent by selecting multiple-choice utterance options displayed on the screen, updated at each turn of the conversation. The use of fully constrained user input ensures patient safety by validating all responses that the Relational Agent gives (Bickmore et al., 2018).

The content was developed using decision trees, based on the senior author's extensive experience developing such systems with various computer technologies to screen and intervene with people with substance use. A rule-based Relational Agent dialogue engine, as used here, allows the Relational Agent's responses to be designed and validated for every possible dialogue context and input. These decision trees, and the dialogue attached to them, were informed by and developed over several projects, in collaboration with colleagues who were part of the Motivational Interviewing Network of Trainers (motivationalinterviewing.org), and included insights from prior interactive voice response systems (Rubin, 2010; Rubin et al., 2006, 2007), web-based programs (Brief et al., 2013; Schreiner et al., 2021), and Relational Agent research (Bickmore et al., 2020; Rubin et al., 2015). In the end, this rule-based, closed, multiple-choice dialogue allowed for targeted and streamlined communication between the Relational Agent and the veterans, through verbal output from the Relational Agent and multiple-choice responses from the veteran, flowing from alcohol screening to brief intervention and, when indicated, referral to treatment.
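To make the dialogue architecture concrete, the sketch below shows one way a hierarchical transition network with template-based prompts and fully constrained (multiple-choice) input might be organized in Python. The state names, prompts, and option labels are our own illustrative placeholders; they are not the study's actual scripting language or counseling content.

# Minimal, hypothetical sketch of a rule-based dialogue engine with
# template-based prompts and fully constrained (multiple-choice) input.
# All states, prompts, and option labels are illustrative placeholders.

DIALOGUE = {
    "greet": {
        "prompt": "Hi {name}, I'd like to ask a few questions about your drinking. Ready?",
        "options": {"Sure, let's start": "screen", "Tell me more first": "explain"},
    },
    "explain": {
        "prompt": "I'll ask some short questions, then share feedback. It takes a few minutes.",
        "options": {"Okay, go ahead": "screen"},
    },
    "screen": {
        "prompt": "In a typical week, on how many days do you drink alcohol?",
        "options": {"0-1 days": "feedback_low", "2-4 days": "feedback_mid", "5+ days": "feedback_high"},
    },
    "feedback_low": {"prompt": "Thanks, {name}. Your drinking is within low-risk limits.", "options": {}},
    "feedback_mid": {"prompt": "Thanks, {name}. Let's look at how that compares with your peers.", "options": {}},
    "feedback_high": {"prompt": "Thanks for being open, {name}. Let's talk about some options.", "options": {}},
}

def run_dialogue(state, context):
    """Walk the transition network: render a templated prompt, offer only
    the authored choices, and follow the deterministic transition."""
    while state:
        node = DIALOGUE[state]
        print(node["prompt"].format(**context))   # template-based generation
        if not node["options"]:
            break                                 # terminal node
        choices = list(node["options"])
        for i, label in enumerate(choices, 1):
            print(f"  {i}. {label}")
        state = node["options"][choices[int(input("> ")) - 1]]

run_dialogue("greet", {"name": "Alex"})

Because every utterance and transition is authored in advance, the agent can never produce an unvalidated response; this is the safety property that fully constrained input is meant to guarantee.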
Participants first viewed 8-10 sixty-second video clips on a touchscreen tablet computer that displayed different Relational Agent characters varying by hair, facial features, clothing, background setting, and synthesized voice (see Table 4 for examples). The Relational Agent dialogue was exactly the same in each clip, with the agent greeting the participant, inquiring how the participant was feeling, asking if the participant wanted to discuss their alcohol use, and demonstrating how the program worked through a pre-recorded video.

Following each video clip, a brief semi-structured interview was conducted by a research assistant under the training and supervision of the senior author. The interview included questions about the Relational Agent's voice, setting, and appearance. Participants were asked to rate each Relational Agent feature on a scale from 0 ("Not at all comfortable") to 10 ("Extremely comfortable") based on their comfort level interacting with the Relational Agent. After viewing all the video clips, participants were asked to rank all Relational Agent characters in order of preference from most to least preferred. Participants were also queried about preferred titles for the Relational Agent (e.g., health advisor, health coach, counselor). At the end of the interview, participants were asked to suggest modifications to the characters as if they were designing the Relational Agent. Additional questions were asked about specific conversational items to gauge veterans' reactions; for instance, the Relational Agent engaged in social chat such as mentioning having friends who served and thanking participants for their service. Participants were offered $50 in compensation for their time.

Design Phase Qualitative Analysis Approach

All interviews were transcribed. Three research assistants performed a content analysis of the transcribed interviews, incorporating the principles of the immersion-crystallization method. This qualitative approach consists of repeated cycles of immersion into the collected data with subsequent emergence, after reflection, of an intuitive crystallization of the dominant themes (Borkan, 1999; Krueger, 1997; Malterud, 2001). The research assistants then met with two senior authors (AR and SRS) to discuss and agree on the final themes (see Table 1 for themes). Next, the interviewers coded the transcription of each interview separately and resolved any coding discrepancies until consensus was reached for all interviews. Qualitative analysis of the design phase data was conducted using Microsoft Excel.

We examined the mode and median comfort level and preference ranking for each Relational Agent across all participants. Any character with very low comfort or preference ratings was ruled out. The character with the highest median score was selected, consistent with methods used in previous Relational Agent research to select final characters (Bickmore et al., 2005a, b, 2008, 2009, 2010a, b). The research team reviewed the quantitative and qualitative analyses to select a final Relational Agent character and modified her appearance and setting based on the qualitative themes.
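The selection rule described above (rule out any character with very low ratings, then take the highest median) can be summarized in a few lines. The ratings and the rule-out cutoff below are made-up placeholders, not study data:

# Hypothetical sketch of the character-selection rule described above.
# Ratings and the rule-out cutoff are placeholders, not study data.
from statistics import median

comfort = {                      # comfort ratings on the 0-10 scale
    "female_1": [7, 8, 6, 9, 7],
    "female_2": [8, 7, 9, 6, 8],
    "male_1":   [4, 2, 5, 3, 6],
}

candidates = {c: r for c, r in comfort.items() if min(r) >= 4}  # drop very low-rated characters
best = max(candidates, key=lambda c: median(candidates[c]))
print(best, median(comfort[best]))                              # -> female_2 8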
Usability Phase

After the Relational Agent character and setting were developed, the rest of the dialogue for the assessment, intervention, and referral features was developed and programmed (Zhou et al., 2017). Before the entire system was finalized, it underwent further qualitative analysis. Participants from the design phase of the study were invited to return to test the Relational Agent's usability; fifteen returned for the usability phase. The fifteen participants were split into two groups: group 1 consisted of 8 participants who completed session 1 of the usability phase, and group 2 consisted of 7 participants who completed sessions 1 and 2. Both sessions in the usability phase were 30 min in duration. In the design phase of the study, participants could only view videos of the Relational Agent; in the usability phase, they were able to interact with it. The Relational Agent was programmed to conduct an alcohol screening using an SBIRT approach, just as a patient would receive from their primary care provider. This included a brief intervention based on motivational interviewing strategies and a request for a commitment to change (Miller & Rollnick, 2012). Participants responded to the Relational Agent by choosing buttons displayed on the screen containing different possible answer phrases, updated at each turn of the dialogue.

Participants were asked to provide verbal feedback, using a think-aloud protocol (Fonteyn et al., 1993), on the dialogue and behavior of the Relational Agent while using the system. To encourage consistency of response regardless of individual experience, participants were asked to imagine that they were a movie character who consumed a high level of alcohol and to report that character's level of drinking to the program, regardless of their own personal alcohol use. The interviewer sat next to the participant, periodically prompted the participant for their thoughts, and took handwritten notes throughout the interview.

After the completion of the think-aloud phase, a semi-structured interview was conducted. Participants were asked to rate their comfort level in speaking with the Relational Agent and were asked about their preferred name for the Relational Agent.

Examples of questions include the following:
• How did you feel about this computer advisor?
• What did you like or dislike about him or her?
• How comfortable would you be talking with this computer advisor about your alcohol use?
• Do you trust that the computer advisor is knowledgeable about alcohol use? Would you be open to talking with the computer advisor about your alcohol use?

Usability Phase Qualitative Analysis Approach

Interview transcripts were analyzed via an inductive content analysis approach described by Bradley et al.
(2007) and Borkan's (1999) immersion/crystallization technique. The usability phase qualitative analysis team consisted of two research assistants trained and supervised by the senior author. To begin, three transcripts were randomly selected, read, and analyzed independently by members of the qualitative study team for initial emergent themes. The qualitative study team then came back together to discuss the themes that emerged until reaching consensus. Two further transcripts were then read independently by the analysts to confirm that no new themes emerged. After concluding the review of the two additional transcripts, the qualitative study team felt they had reached thematic saturation and that the codebook was complete (see Table 2). The analysts proceeded to code all transcripts independently, including re-analyzing the transcripts previously reviewed during the codebook development stage.

If new themes were identified during the coding process, the qualitative analysis team came back together to discuss them for consensus before continuing with the remainder of the coding. All transcripts were independently coded by both qualitative analysts, and all discrepancies were discussed until consensus was reached. Coding of all transcripts was conducted in Microsoft Excel.

Recruitment, Eligibility, and Participants for Evaluation of Developed Relational Agent

Recruitment

Following completion of the design and usability phases, we conducted the larger clinical trial (reported elsewhere; see Rubin et al., 2022 for primary outcomes). Participants in the clinical trial who were randomized to use the Relational Agent and completed the intervention were recruited for the evaluation phase. Briefly, participants receiving primary care services through the VA who received a positive AUDIT-C screening (Bush et al., 1998) (a score of at least three for women and four for men within the last 3 months) and who reported drinking above the National Institute on Alcohol Abuse and Alcoholism (NIAAA, 2022) guidelines within the past 30 days, as collected using the QDS (i.e., more than 3 drinks per day and/or more than 7 drinks per week for women; more than 4 drinks per day and/or more than 14 drinks per week for men), were eligible to participate in the trial and, thus, in the evaluation phase of this study. Participants were ineligible for the RCT if they had received treatment for substance use in the past 30 days.

Participants

Twenty participants who had taken part in the Relational Agent clinical trial agreed to participate in a semi-structured interview about their experiences interacting with the Relational Agent during the trial. One participant was later removed due to audio data quality issues, leaving nineteen participants in the analysis.

Participants were 79% male with a mean age of 53 ± 15.2 years; 63% were White. The median AUDIT-C score for participants was 5 (range: 3-9). Most participants self-reported using a computer regularly (n = 16, 84%) or being an expert computer user (n = 2, 11%). Only one participant (5%) reported having used a computer only a few times.
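The alcohol-related eligibility gate described above reduces to two sex-specific checks, one on the AUDIT-C score and one on the NIAAA drinking limits. A minimal sketch, with function and variable names of our own choosing rather than anything from the study's software:

# Compact restatement of the trial's alcohol-related eligibility criteria
# (AUDIT-C cutoffs plus NIAAA limits) as given in the text above.
# Names are illustrative; this is not the study's actual code.

def alcohol_eligible(sex, audit_c, max_drinks_per_day, drinks_per_week):
    """True if both criteria hold: a positive AUDIT-C screen and
    past-30-day drinking above the NIAAA guidelines."""
    if sex == "female":
        positive_screen = audit_c >= 3
        above_limits = max_drinks_per_day > 3 or drinks_per_week > 7
    else:  # male
        positive_screen = audit_c >= 4
        above_limits = max_drinks_per_day > 4 or drinks_per_week > 14
    return positive_screen and above_limits

assert alcohol_eligible("male", audit_c=5, max_drinks_per_day=3, drinks_per_week=16)
assert not alcohol_eligible("female", audit_c=2, max_drinks_per_day=5, drinks_per_week=10)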
Study Procedures

Participants were interviewed within 1 month of completing the randomized clinical trial. No participants involved in the Relational Agent development and usability phases were included in the trial or in the subsequent evaluation of the Relational Agent intervention. Interview questions were developed under the guidance of a research physician (SRS) and a research psychologist (AR) with expertise in qualitative analysis and were designed to solicit feedback on aspects of the intervention that participants found helpful, on ease of use, and on areas that provided minimal value to the participant.

Examples of questions include the following ("Laura" is the name of the Relational Agent):
• Did "Laura" make you feel comfortable or uncomfortable? Why?
• Did you feel that "Laura" pushed you too hard to change your drinking, or not hard enough? Why?
• Was there anything in particular that "Laura" asked or said that you thought was really useful or effective? Harmful or counter-productive?

Participants were given the opportunity to view a brief sample of the intervention to help refresh their memory prior to the interview, which was then conducted by the senior author. Interviews were recorded with the permission of the participant and transcribed upon their completion. Participants received $20 in compensation for their time.

Qualitative Analysis Approach

The evaluation phase qualitative analysis team consisted of a bachelor's-level and a master's-level analyst, who were also trained and supervised by the senior author. The inductive content analysis approach was conducted as described in the design and usability phases (Bradley et al., 2007), involving regular meetings with members of the qualitative analysis team to ensure continued consensus throughout the coding process (see Table 3 for the coding categories used in the analysis). All coding in the evaluation phase was conducted using NVivo qualitative analysis software (NVivo, 2012).
Design Phase Results

Appearance

Participants commented on a wide range of features of the Relational Agents' appearance, including facial features, hair, and clothing. Nine participants reported a preference for a female character, seven reported no preference, and three preferred a male character. Participants stated they wanted the character to have natural hair, with five stating a preference for long hair (i.e., past shoulder length). Eight of the 19 participants expressed a desire for the character to look professional (e.g., a nice shirt with a blazer) as opposed to a tee shirt; however, five participants stated they did not want the character to wear a doctor's white coat.

Setting

Nine participants expressed opinions on the settings of the characters. Most preferred a professional setting with an environment that felt warm and inviting. Four participants shared that they felt more comfortable when there was a lot of space in the room and the room was open. Five participants suggested a background relating to military service (e.g., the Department of Veterans Affairs logo or the American flag). One participant preferred no windows in the background out of concern for confidentiality. Only two participants favored a casual or outdoor setting.

Animation

The majority of participants who provided answers perceived the characters' movements as robotic and mechanical. Five participants reported the characters' movements to be stiff and unnatural, noting that the characters' hand gestures often did not sync with their speech. Furthermore, participants commented on repetitive movements of the eyes and eyebrows that drew attention and felt distracting. Five participants reported the characters' voices to sound too robotic and/or too monotone. Notably, participants preferred normally shaded characters over toon-shaded ones (toon-shading being a rendering method that makes the character look more cartoonish).

Human Trustworthiness

Participants responded to queries about (1) their comfort in talking with the characters about alcohol use, (2) the characters' knowledgeability about alcohol use, and (3) whether the character seemed to belong in an outpatient setting. Participants stated that they preferred a character that was more familiar with their situation, similar to a real doctor with access to the patient's health information. Secondly, participants reported feeling that the Relational Agent could have only limited knowledge. Lastly, participants felt that the majority of the characters fit into an outpatient clinic setting. Some participants were comfortable talking with the agent, and some stated they would do so if a doctor was not available.
Usability Phase Results

Animation

Many participants reported negative reactions to the Relational Agent's animation. Participants reported that the graphics appeared "robotic" and "stiff." Participants also noted that the Relational Agent seemed unable to make human-like eye contact with them, with one participant reporting that the agent appeared to be looking at and reading off a teleprompter behind them. Participants (7/15, 47%) reported that the agent's body language was too stiff and her voice sounded "mechanical."

Comfort Level Interacting with the Relational Agent

Most comments made by participants regarding their comfort levels with the Relational Agent were positive. Fourteen participants commented on the Relational Agent stating she had "friends in the military" and "liked working with veterans," with most making positive comments (11/15, 73%). Many participants reported being comfortable with the computer character (10/15, 67%); the mean comfort rating during this phase was 8.6 out of 10. Some participants (3/15, 20%) commented that the agent was judgmental and disingenuous in her interactions, but most (12/15, 80%) reported that they would feel very comfortable speaking with the Relational Agent about their alcohol use. Despite the overall comfort level, eight participants (53%) thought that the phrasing of the questions was slightly abrasive or ambiguous, possibly leading to some confusion. However, only three participants (20%) noted that they would prefer to speak with a human being over the Relational Agent.

As part of the program, the Relational Agent thanks the veteran for their service. In general, participants reported positive feelings about this statement (9/15, 60%). Two Vietnam-era veterans were especially happy to hear the phrase, given the lack of gratitude they received during their initial homecoming following their Vietnam deployment. Although the majority of participants reported positive comments about the statement, some (5/15, 33%) reacted negatively, describing the phrase as "shallow" and "not genuine."

Final Selection of the Relational Agent

Overall scores were analyzed to select the final Relational Agent. Female 1 received the highest overall median score (7), and female 2 came in second (6.5), with female 2 scoring in the top two for both the average comfort and character preference categories (see Table 4). In addition, the lowest comfort score female 2 received was a four, while other characters scored as low as a two on this rating (see Table 4). Based on the overall ratings of comfort and preference, female 2 was chosen as the final Relational Agent character (see Fig. 1) and was subsequently named Laura for the evaluation phase (see the overall responses on the characters' comfort and preference in Table 4).
Evaluation Phase Results

Appearance and Animation

Participants overall reported favorable opinions of the appearance of the Relational Agent. Participants were free to comment on any aspect, so not all participants commented on each attribute; participants reported positive opinions regarding her overall attractiveness (6/19, 32%), professional clothing (10/19, 53%), and clear voice (5/19, 26%). One participant reported that the agent looked too professional and would have felt more comfortable interacting with her if she had worn more casual clothing. Similar to the feedback received during the development and usability phases of the study, the majority of participants (10/19, 53%) reported that the agent's movements felt slightly to moderately robotic. Comments included that the agent was "missing some body language and eye contact that would happen during a normal conversation [with a human]" and that the agent "moves like a robot but talks like a person." Only two participants (11%) reported that they would prefer to talk with a human being.

Setting

Nearly half of the participants (9/19, 47%) commented favorably on the professional office setting in which the Relational Agent was presented. Participants liked that the setting was similar to that of their providers. Two participants commented that having the Relational Agent standing in the office while they were sitting would decrease their overall comfort level.

Personality and Comfort Level

Nine participants commented that the Relational Agent's personality was professional, polite, and appropriate for the conversation. One participant noted, "she's very easy to talk to so she relates back and talks back in a way you'd expect a normal human being to." Some participants reported that the agent felt "programmed" (4/19, 21%) and lacked a dynamic personality (3/19, 16%). Despite some feelings of awkwardness when first interacting with the agent, participants were overwhelmingly comfortable with their overall interaction (17/19, 89%). Only four participants (21%) felt that the agent was judgmental when discussing personal drinking habits.

Change in Drinking Habits

Per self-report, the Relational Agent successfully helped ten participants (53%) decrease their drinking or stop drinking altogether. One participant commented, "[The Relational Agent] got me interested in [cutting] down the drinking. She helped me cut it down. She made me address the problem more openly." Another recounted, "It did change my outlook, my whole thought process on alcohol." Some participants felt that the agent was "pushy" at first when trying to change their drinking habits, but most found that, in the end, the agent's demeanor and approach to brief intervention were motivating. Interestingly, six participants (32%) thought about the agent while drinking or planning to drink, which subsequently changed their drinking habits for at least that moment.
Discussion

In this study, we provide an outline for iterative, user-centered, technology-based intervention development, which included providing various options, features, and designs for participants to choose from. Participant feedback was then used to refine and ultimately finalize the intervention for delivery of SBIRT within VA primary care clinics. In doing so, we helped ensure that the finished intervention incorporated the perspectives and preferences of our end users, who were co-creators of the finished intervention. We also provide data from participants who completed the randomized clinical trial (Rubin et al., 2022) and who weighed in not only on the finished product but also on the perceived effectiveness of the intervention, having used it as designed and implemented within real-world primary care clinics. As a whole, this project illustrates how technology-based interventions can be built with key end users in mind, and in a manner that appeals to the preferences of the majority of target patients. In addition, we provide insight into the pros and cons of automated intervention technologies, and Relational Agents in particular, for providing SBIRT relative to in-person providers. We found that opinions and preferences about the Relational Agent differed quite a bit, with participants offering preferences on everything from the physical features and movements of the Relational Agent to its setting and physical positioning (e.g., sitting vs. standing). Key themes emerged from participants' observations, including those indicating that the intervention appeared "mechanical" or noting its perceived limitations in providing a human connection. At the same time, the majority of participants interviewed from the RCT stated that they preferred the Relational Agent over in-person providers. Despite the observed limits of the technology, veterans provided generally positive ratings and felt comfortable discussing their drinking habits with Laura. Participants also believed that the intervention had a positive impact on their drinking, or at least on the way they think about drinking.

Participants overwhelmingly endorsed that the "stiff" movements and personality of the Relational Agent decreased feelings of authenticity. Although participants felt that the physical features of the Relational Agent were not human-like and were, at times, limiting, they preferred the Relational Agent without "toon-shading," supporting previous findings that, although characters with toon-shading were felt to be friendlier overall, human-like shading was preferred and felt to be more appropriate for conversations involving medical content (Ring et al., 2014). Prior research has found that the highest levels of persuasion by a Relational Agent occur when the degree of animation is minimized (Parmar et al., 2022), which aligns with the Relational Agent developed in this study. Participants' preference for a female Relational Agent follows the gender preference found in previous research (Esposito et al., 2021).
Importantly, we observed that participants' attitudes towards the Relational Agent improved over the course of the project as we worked to refine the agent to better meet patients' needs and preferences. At each stage, we incorporated participants' feedback into the changes being made (e.g., changing the setting or the level of animation of the character). The improved and overall positive feedback on the Relational Agent at the conclusion of the larger trial exemplifies the value of including patients in the development of such health technologies, given that these participants were the ones interacting with the Relational Agent as part of the clinical trial intervention.

The high variability in participants' preferences during the development phases suggests that the availability of multiple characters may allow for increased reach to broader audiences. Multiple Relational Agent character options would allow patients to choose the character they are most comfortable disclosing information to and interacting with, ideally leading to increased rapport between the patient and "provider." Research has shown that a strong therapeutic alliance can be formed between virtual agents and patients and that patients who perceive themselves to be more like the agent (e.g., in race and ethnicity) report higher therapeutic alliance (Bickmore et al., 2005a, b; Persky et al., 2013; Zhou et al., 2014) and greater trust in the Relational Agent.

Automated Relational Agent technology provides a promising means of bridging service gaps in alcohol screening and intervention without adding to provider or clinic burden. It is also possible that Relational Agent technology, while offloading tasks normally required of providers, could deliver these services with greater consistency and fidelity. For example, in the clinical trial results from this study, we found that rates of brief intervention and referral to treatment were substantially higher among patients assigned to the Relational Agent condition, relative to patients receiving primary care services as usual, for whom annual alcohol screening and, when indicated, brief intervention and referral to treatment are considered best practice. The potential for improving SBIRT fidelity relative to treatment as usual, coupled with the favorability ratings for the Relational Agent reported herein, supports ongoing research and development of similar intervention technology in healthcare settings, where digital health applications promote more honest disclosure as a result of decreased stigma and embarrassment (Berry et al., 2018; Olafsson et al., 2018, 2020). Furthermore, fully constrained user input (i.e., multiple choice) has been demonstrated to work effectively in several prior Relational Agent-based interventions, including several with individuals of low health literacy and low computer literacy (Bickmore et al., 2010a, b, 2015; Zhou et al., 2014), and research has also demonstrated that patients are equally comfortable using the multiple-choice modality compared with unconstrained speech input (Murali et al., 2019).
Relational Agents provide unique affordances relative to other health education media. Unlike text-only chatbots or static web pages, Relational Agents rely only minimally on text comprehension. The use of nonverbal conversational behaviors (such as hand gestures that convey specific information through pointing or through shape and motion) provides redundant channels for conveying semantic content that is also communicated in speech. The use of multiple communication channels enhances the likelihood of message comprehension, and Relational Agents can emphasize and enhance recall of critical information through nonverbal emphasis. Relational Agents provide a much more flexible and effective communication medium than taped content or even combined video segments. The use of synthetic speech makes it possible to tailor each utterance to the user (e.g., using their name, gender, age, and other personal information), to the context of a given conversation (e.g., what was just said, whether it is morning or evening), and to parameters that change over a series of conversations (e.g., gradually increasing self-efficacy). Importantly, we have shown that current commercially available conversational systems (e.g., Siri, Alexa) frequently make unsafe health recommendations (Bickmore et al., 2018).

The Relational Agent created for this study was made available through a VA laptop computer. This platform might be feasible in some but not all clinics, and the technology can be adapted for delivery on personal and handheld devices such as smartphones and tablets. With these flexible options, SBIRT delivery could take place in clinic waiting rooms, during or just prior to virtual care visits, or on the web as part of routine check-ins with existing patients separate from a scheduled office visit (i.e., for intermittent screening and prevention between appointments).

Limitations

A relative strength of the study is that we assessed a variety of domains and diverse aspects of the Relational Agent throughout the development phase, using both qualitative and self-report scale questions. Our phased approach to the design and usability of the Relational Agent also allowed for iterative refinement of the intervention before formally testing it in the trial. We also benefitted from the perspective of veterans drinking at unhealthy levels who participated in the trial, which provided critical feedback about the intervention from target end users. For the development of the Relational Agent, we included veterans regardless of their drinking, in order to create an intervention that could be used to screen all veterans coming into primary care clinics. On the other hand, one potential limitation of this approach is that we did not target veterans drinking above recommended limits to participate during the intervention creation phase. In addition, the limited patient sample consisted mainly of White males, reflecting much of the patient population of the VA healthcare system where the research was conducted. Future research might identify and incorporate feedback from a more diverse patient sample, including individuals drinking above the recommended limits, to ensure that the dialogue and functionality of the intervention are satisfactory to these veterans.
Conclusions

End-user involvement in the development of digital health tools and Relational Agents allows for customization of the tool for the population and setting it is aimed to serve. Through soliciting the feedback and perspectives of veterans in this study, we successfully developed a Relational Agent that met the needs of the veterans while optimizing participants' comfort levels when interacting with it. Continued development of the Relational Agent is necessary to overcome some of its characteristics (e.g., mechanical body movements and tone of voice) that may prevent patients from fully engaging with the agent. Despite the continued opportunity for additional refinement, participants overall found the agent to be trustworthy and effective at prompting them to think about their unhealthy drinking habits.

Table 1. Design phase: description of coding categories.

Table 2. Usability phase: description of coding categories.
• General/non-specific reactions referring to all aspects of program: general reactions to the program, and reactions at the beginning of the program, relating to all aspects of the program (i.e., technical and non-technical).
• Branch of military questions: participant's reactions to the Relational Agent asking which branch of the military they served in.
• Reactions of hypothetical personas: participants would often comment on the program as "their younger self" or as "someone who drinks a lot"; comments relating to the hypothetical reactions of these "made-up" personas throughout different parts of the program.
• Reactions to advice/comments about drinking that the advisor provides: participant's reactions to the advice and comments that the Relational Agent provides regarding the user's drinking habits.
• Reaction to reporting of consequences: participant's reactions to the computer advisor summarizing the consequences that the user reported during the AUDIT-C section.
• General reactions to pros and cons sections: participant's reactions to the pros and cons sections, specifically commenting on the options provided in each section and whether items should be added or removed.
• General reactions to commitment section: participant's reactions to the commitment options provided.
• "Thank you for your service": participant's opinions and reactions to having the Relational Agent thank them for their service.
• Addition of non-alcohol-related conversation: participant's opinions on whether additional, non-alcohol-related conversation topics should be included in the program.
• How do you feel about the advisor/how comfortable are you: participant's reports on how comfortable they are and how they feel about the Relational Agent, including suggestions on how to make the computer program more "approachable" or "comforting."

Table 3. Evaluation phase: description of coding categories.
• General likes or dislikes: general comments about liking or disliking Laura or the program.
• Ease of use of operating computer or Relational Agent: ease or difficulty of use experienced while interacting with the program regarding the audio, visual, and any technical difficulties, including using the mouse and/or laptop touchpad.

Table 4. Participants' overall rating and self-rated comfort level ratings for the Relational Agents evaluated.
a Participants ranked the Relational Agents in order of preference (1 = most preferred to 8 = least preferred).
b Participants used a ruler-based rating (0 = not at all comfortable to 10 = extremely comfortable) to rate their comfort level talking with the Relational Agent about alcohol use.

Author contributions: Julianne Brady: formal analysis, writing (original draft preparation); Nicholas Livingston: methodology, writing (original draft preparation); Molly Sawdy: data curation, writing (original draft preparation); Kate Yeksigian: data curation, formal analysis, writing (original draft preparation); Shou Zhou: conceptualization, methodology, writing (original draft preparation); Timothy Bickmore: conceptualization, methodology, writing (original draft preparation); Steven Simon: funding acquisition, conceptualization, methodology; Amy Rubin: funding acquisition, conceptualization, methodology, formal analysis, writing (original draft preparation).

Funding: This research was supported by the Department of Veterans Affairs, Veterans Health Administration, Health Services Research and Development (HSR&D) Service (IIR 11-3346; PI Simon).
What the study of spinal cord injured patients can tell us about the significance of the body in cognition

Although in the last three decades philosophers, psychologists and neuroscientists have produced numerous studies on human cognition, the debate concerning its nature is still heated and current views on the subject are somewhat antithetical. On the one hand, there are those who adhere to a view implying 'disembodiment', which suggests that cognition is based entirely on symbolic processes. On the other hand, a family of theories referred to as the Embodied Cognition Theories (ECT) postulate that the creation and maintenance of cognition depends, to varying degrees, on somatosensory and motor representations. Spinal cord injury induces a massive body-brain disconnection, with the loss of sensory and motor bodily functions below the lesion level but without directly affecting the brain. Thus, SCI may represent an optimal model for testing the role of the body in cognition. In this review, we describe post-lesional cognitive modifications in relation to body, space and action representations and various instances of ECT. We discuss the interaction between body-grounded and symbolic processes in adulthood, with relevant modifications after body-brain disconnection.

(Dis)Embodied approaches to cognition

The traditional occidental concept of the human mind seems to be essentially based on mind-body dualism deriving from the Cartesian distinction between the mind (res cogitans) and the body (res extensa). The mind-body dichotomy has been taken to imply not only that basic perceptual and motor functions are separated from higher order ones (Block, 1995), but also that the latter are exclusively based on the manipulation of abstract, amodal symbols and are largely independent of the former (Newell & Simon, 1972). In the last few decades, this radical view has been challenged by ever-increasing psychological and neuroscientific evidence that human cognition is profoundly influenced by basic sensorimotor processes and that even complex concepts, such as the abstract aspects of language, are largely grounded in body representations and their relations with the world. This is the central tenet of a group of theories included under the umbrella definition of 'Embodied Cognition Theories' (ECTs). According to these theories, all human experience is grounded in the body: not only perceptual and emotional processes and social interactions, but also the acquisition and creative use of language (e.g., the use of metaphors), judgment capacities and the creation of cultural artefacts (Gallagher, 2005). Since their original formulation (Glenberg, 1997), ECTs have attracted the interest of many disciplines, such as psychology, psychotherapy (Khoury et al., 2017; Tschacher et al., 2017), education (Pouw et al., 2014), philosophy, anthropology, robotics (Hoffmann et al., 2010), artificial intelligence (Shapiro, 2011) and, last but not least, neuroscience (Kiefer & Pulvermüller, 2012; Mahon & Caramazza, 2008). However, ECTs do not refer to a unitary construct, and each theory differs from the others in the way it conceives the reciprocal relations between the body, the mind and the environment and the modalities by means of which bodily representations affect cognition.
The various theories range from a general idea of an instrumental role of the body in information processing (Körner et al., 2015) to a more radical view asserting that "all cognitive processes are based on sensory, motor and emotional processes, which are themselves grounded in body morphology and physiology" (Glenberg, 2015, p. 166). Importantly, however, a sort of continuum is identifiable within these various theories (Fig. 1). At one extreme of this continuum, there is a hypothesis that presupposes the hierarchical organisation of cognition, with a symbolic system that is separated from the sensorimotor system and that can merely activate motor responses (Leshinskaya & Caramazza, 2016). At the other extreme is the idea that cognition emerges from a dynamic circle of interactions between the brain, the body, and the environment without the need for symbols (Brooks, 1991; van Gelder, 1998). What distinguishes these two perspectives regards the role that the body and its connection to objects play in cognition (Shapiro, 2019). The body may be considered to 'participate' in building cognition, since cognition may be altered depending on the shape, size and experiences of the body (Glenberg, 1997; Lakoff & Johnson, 1999; Varela et al., 1991). From a different perspective, the body can be considered to be 'constitutive' in the sense that cognition would not exist without it (e.g., the Perceptual Symbol theory; Barsalou, 2008; O'Regan & Noë, 2001). Objects are only taken into account in some of these theories, in which it is suggested that they participate in building cognition (e.g., the Extended mind theory, Clark, 2006; the Dynamical systems theory, van Gelder, 1998). An example is the act of writing and thinking at the same time, a task that gives a specific result due to the interaction between the brain and the body and thence to a pen and paper, and from there back again to the brain (Clark, 2006). Accordingly, if one changes either the gesture or the object, the final product will also be different. One might ask whether in this case the mind extends to the body (e.g., the Peripheral mind theory, Aranyosi, 2013) and also to the objects (Clark, 2006) or, alternatively, the mind incorporates the body and the objects it is interacting with (Borghi, 2005). This is a question that remains unanswered. Recent studies on the link between embodiment and higher order functions in people with sensory deprivation highlight the importance of both sensory and conceptual representations (Ostarek & Bottini, 2021). For example, anterior temporal lobe activation in colour-knowledge tasks turned out to be very similar in congenital and early blind subjects (Wang et al., 2020). In contrast, activation in the ventral occipito-temporal colour perception regions was found only in sighted controls. This pattern of results points to the existence of two forms of object representation in the human brain: a sensory-derived and a cognitive-derived form of knowledge (Wang et al., 2020), with the former being experience-dependent and the latter experience-independent (Ostarek & Bottini, 2021).

Fig. 1 The various models of embodied cognition theories, represented in a progression from Disembodied Cognition at one extreme to the other extreme positions inside the Embodied Cognition Theories. The co-existence of modal and amodal symbols in adulthood is suggested.
Crucially, the analyses of connectivity in Wang et al.'s study show that the two systems relating to colour knowledge are integrated and part of a widespread network (Wang et al., 2020). Thus, a crucial question concerns not only whether but also how the two levels interact, and if the sensory level is able to modulate and modify the conceptual level. If so, one can conclude that knowledge is embodied, although embodiment is not the only way the brain understands the world. While no single clinical condition makes it possible to distinguish between the various ECTs, alterations in the body may provide novel information on the different variables that play a role in these processes. Studies of amputees, for example, may highlight possible representational bodily changes that might, however, be due to multiple aspects, such as the visual appreciation of conspicuous changes in body shape as well as the somatosensory and motor disconnection between the body and the brain. In the following section, we focus on individuals suffering from spinal cord injury (SCI), in whom the general body shape is unchanged in spite of a massive somatosensory de-afferentation and motor de-efferentation. The specificity of this neurological model with respect to other clinical conditions will be analysed, then the changes in cognitive functions associated with SCIs are reviewed, starting from the representation of static and acting bodies, and continuing with an exploration of object and space representations. The potential contribution of these experimental data to the debate on embodied cognition will conclude the review.

Spinal cord injury (SCI) as a model for understanding the role of the body in cognition

Spinal cord injury (SCI) is a clinical condition in which a complete or incomplete lesion of the spinal cord induces a total or partial interruption of the bidirectional communication between the body and the brain, with the consequence that no somatosensory input from the body periphery below the lesion level (e.g., sensations of touch, pressure, the sense of limb position) is sent to the brain (de-afferentation) and no motor commands from the motor cortices can reach the muscles controlling the body parts below the lesion level, ultimately leading to paresis or paralysis (de-efferentation). The extent of deprivation depends on the lesion level. SCI at the cervical level brings about hypoesthesia/anaesthesia and tetraparesis/tetraplegia, a clinical condition with impaired/absent somatosensory and motor functions affecting both upper and lower limbs (Fig. 2). SCI below the first thoracic spinal cord segment leads to paraparesis/paraplegia (i.e., the deficits affect lower limbs but spare upper limbs, neck and head regions). Given the topographic organisation of the spinal cord, exploring patients with lesions at different levels makes it possible to investigate, in the same individual, the representations of the body parts that are de-afferented/de-efferented and those that are still connected to the brain. For example, in patients with high cervical lesions, the face and head regions are normally connected to the brain while the body regions below the neck are disconnected from the brain; in patients with lesions affecting the lumbar region, the deprivation exclusively involves the lower parts of the body. It is exactly this topography of damage that offers a unique opportunity to investigate the specificity of deprivation-related changes in cognition.

Fig. 2 Graphical representation of the somatosensory and motor deficits following a SCI. A) The various levels of the spinal cord are represented in the spinal column. The grey regions (B, C, D) indicate the body parts with altered processing of somatosensory and motor signals. B = cervical lesion with tetraplegia; C = thoracic lesion (T1) with paraplegia and partial deficit in the upper limbs; D = lower thoracic level (T12) with paraplegia.
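To make this topographic logic concrete, the following minimal sketch encodes the coarse mapping between lesion level and disconnected body parts described above. It is an illustrative simplification written for this review's line of argument (the part list and the level-to-deficit mapping are far coarser than real clinical charts), not a diagnostic tool.

# Illustrative sketch: how lesion level determines which body parts are
# de-afferented/de-efferented, and thus which within-subject contrasts a
# study can run. A deliberate simplification of the clinical picture above.
BODY_PARTS_TOP_DOWN = ["face", "neck", "arms", "hands", "trunk", "legs", "feet"]

# Coarse, assumed mapping from lesion level to the first disconnected part.
FIRST_DISCONNECTED = {"cervical": "arms",   # tetraplegia
                      "thoracic": "trunk",  # paraplegia
                      "lumbar": "legs"}

def split_by_lesion(level: str):
    """Return (still connected, disconnected) body parts for a lesion level."""
    cut = BODY_PARTS_TOP_DOWN.index(FIRST_DISCONNECTED[level])
    return BODY_PARTS_TOP_DOWN[:cut], BODY_PARTS_TOP_DOWN[cut:]

spared, deprived = split_by_lesion("thoracic")
print("spared:", spared)      # face, neck, arms, hands
print("deprived:", deprived)  # trunk, legs, feet
# A topographically specific effect is one found for the 'deprived' parts
# but not for the 'spared' parts in the same participant.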
Furthermore, a second aspect that makes the SCI model interesting, and contributes towards a better understanding of the relevance of the body in cognition, is that the lesion is confined to the spinal cord and does not affect the brain. In this way, any cognitive changes recorded after lesion onset cannot be attributed to brain damage, but are instead clearly the indirect effect of somatosensory and motor disconnections. To date, the link between the body and cognition has been investigated in patients suffering from brain damage (e.g., Canzano et al., 2014; Cocchini et al., 2010; D'Imperio et al.; Fossataro et al., 2018; Garbarini et al., 2015; Moro et al., 2011; Moro, Urgesi, et al., 2008b; Pazzaglia, Pizzamiglio, et al., 2008a; Tosi et al., 2018). However, the mere presence of brain damage makes it difficult to understand the potential specific role of body afferences and efferences in modulating cognitive functions. In fact, when analysing the results from tests carried out with brain injured participants, three limitations need to be taken into account. Firstly, it is always necessary to consider the extreme variability between patients with regard to their symptoms, even when they share the same diagnosis. Secondly, it may be difficult to distinguish the symptoms that directly depend on the lesion and the changes that are secondary to the somatosensory-motor disability. Thirdly, brain lesions may affect neural networks that extend beyond specific cortical regions and involve the white matter tracts, and thus the connections between distant, not directly damaged brain areas. This induces alterations which cannot be explained in terms of mere lesion site but rather in terms of a disconnection between cortical regions, even those which are remote from one another (Pacella et al., 2019, 2020; Thiebaut de Schotten et al., 2015). These limitations are overcome in cases of SCI, in whom there is no primary brain damage. The SCI model also has advantages over other clinical conditions which involve body de-afferentation and de-efferentation without direct, primary brain damage, but with clear signs of plastic remapping of sensorimotor and cognitive functions. For example, evidence of fast, profound somatotopic remapping processes regarding body and space representations has been discussed in studies on individuals who have undergone amputation (Aglioti et al., 1994; Aglioti et al., 1997; Canzoneri et al., 2013; Ramachandran et al., 1992). It is worth noting that, unlike spinal cord lesions, amputations involve the real loss of a body part and, as a consequence, a conspicuous change in the shape of the body. Thus, any changes in cognition observed in amputees may at least in part depend on other, non-somatosensory and motor factors, such as, for example, vision-mediated body representations. Conversely, in spinal lesions, the body shape is not altered, and any effects found in cognitive functions only depend on somatosensory-motor disconnection.
A condition of massive deprivation that may permit an exploration of how alterations in the link between central and peripheral systems modulate cognition is amyotrophic lateral sclerosis, a neurodegenerative condition characterised by selective damage to motor neurons. However, this pathological condition induces a selective de-efferentation as a consequence of degeneration of the motor neurons connecting the motor cortices and the spinal cord, as well as those connecting the spinal cord to the muscles. Importantly, the degenerative nature of the pathology makes it difficult to classify the cognitive changes which frequently occur, in particular with regard to executive functions (Gillingham et al., 2017) and language (Abrahams et al., 2014), as symptoms resulting directly from the disease or as a consequence of motor deficits. A form of body-brain disconnection that may turn out to be even more massive than that which can be seen in SCI regards patients with locked-in syndrome, in whom changes in body representation have been documented (Conson et al., 2008; Conson et al., 2010; Pistoia et al., 2010). However, patients with locked-in syndrome typically have brain-stem lesions with the possible involvement of complex central hodological pathways. Moreover, an in-depth investigation of cognitive changes in these patients is difficult due to the rarity of the syndrome and the problems of establishing appropriate communication modalities in the experimental context (Rousseaux et al., 2009; Schnakers et al., 2008). On the whole, these considerations indicate that SCI may be considered as a unique model for testing the strengths and limits of embodied approaches to cognition. In fact, if cognition is in some way grounded in the link between the brain, the body and the environment, changes in the somatosensory-motor capacity to act and perceive the world should also lead to changes in cognitive functions. Moreover, since the disconnection in SCI is topographically organised (i.e., it typically involves some body parts and not others), one may observe, at least in principle, that changes in cognitive functions may be contingent on functions associated with the disconnected body parts. This hypothesis has been explored in studies that investigate the visual discrimination of body part, space and action representations as well as motor imagery. These will be discussed in the following sections with reference to theories regarding embodied cognition and its implications for further research.

Sensorimotor pathways to mental body representations

Before discussing the role of the body in cognition, it may be useful to analyse the ways in which the SCI model can give us information about how the body is represented in the brain (Berlucchi & Aglioti, 2010). The relationship between the body and the brain is a difficult matter, as a basic ambiguity persists that seems to be hard to resolve: the brain is an organ of the body, but at the same time it is capable of representing the body (Berlucchi & Aglioti, 2010; Tsakiris & Haggard, 2010). As a result of this complexity, an important question becomes how top-down (from the brain to the body) and bottom-up (from the body to the brain) processes interact in the building of body representations, and how this balance may be altered in SCI.
According to the Perceptual Symbol Theory (Barsalou, 2008), cognitive symbols are actually simulations of sensorimotor and inner (e.g., interoceptive) states, thus suggesting that even seemingly ineffable constructs are built and maintained by means of bodily signals. In fact, experiences create 'sensorimotor contingencies', that is, a set of rules and regulations that relate sensory inputs to movements, postural and interoceptive changes and actions. Sensorimotor contingencies are independent from consciousness but crucially impact on it; in fact, with a contribution from inner states (i.e., emotions, memories, etc.), sensorimotor contingencies build perceptual states, which may be then simulated by the brain during cognitive activity (Barsalou, 2008). An apparent independence of sensorimotor contingencies from cognition may be found in procedural learning, namely, the implicit ability to learn motor sequences (e.g., driving a car or playing a musical instrument). In these processes, the contribution of an 'explicit', verbal cognition is very limited, if not downright confusing. Thus, procedural learning suggests that "what individuals are doing" is in some way separate from their knowledge of "how they do it". In other words, unlike declarative learning, procedural learning does not need any cognitive symbols or any manipulation of these symbols, suggesting that somewhat complex cognitive operations can be performed thanks to bodily signals. This independence of the body from cognitive representations is however a matter of debate. De Vignemont, for example, asserts that cognition "can be said to be embodied because it is affected not directly by the body but by the way the body is represented in the mind" (de Vignemont, 2011, p. 4), as in experiences of disownership of one's own limb (Jenkinson et al., 2018;Moro et al., 2016). In this condition, "there is not only the absence of the experience of ownership but also the experience of its absence" (de Vignemont, 2011, p. 23). This sensation of absence, according to De Vignemont, is mediated by the cognitive representations of the body. From this perspective, a stream of consciousness from the brain to the body seems to characterise the experience of the body (or its deficits) based on its representations. However, studies on SCI indicate that another parallel stream flows from the body to the brain and that sensorimotor peripheral changes can modify the cognitive representations of the body. Indeed, some studies suggest that afferences and efferences play a pivotal role in building and maintaining these representations of the body (Facchin et al., 2021). In the following two paragraphs, these aspects are discussed. In section 3.1, we present evidence regarding the effects of the lack of bottom-up sensory information in the multisensory visuo-tactile integration of information coming from the body (assessed by means of the rubber hand illusion). In section 3.2, we show how these effects expand beyond the sensorimotor domains toward cognitive functions such as complex body-related visual discriminations, indicating that sensorimotor variables modulate higher-order representations. 
The rubber hand illusion

People with SCI report experiences of corporeal illusions, in particular distortions related to their body (Conomy, 1973; Curt et al., 2011; Scandola, Aglioti, Avesani, et al., 2017a), such as, for example, disownership-like feelings (i.e., the feeling that some body parts do not belong to the self) and somatoparaphrenia-like sensations (e.g., the occurrence of delusional ideas relating to body part misidentification, personification or objectivation). Unlike brain injured patients, in whom these symptoms have long been described as a consequence of specific cerebral network alterations (for a review, see Jenkinson et al., 2018; Romano & Maravita, 2019; see also Moro et al., 2016) and are associated with delusional beliefs (i.e., convictions that are not amenable to change despite conflicting evidence), people with SCI recognise the irrationality of these body-generated feelings, which, however, they are unable to control. In apparent contrast with these corporeal illusions that are activated by a body-brain disconnection, it is worth noting that SCI individuals are less sensitive than healthy people to bodily illusions when these are induced by external stimuli (e.g., tendon-vibration; Fusco et al., 2016). In fact, when engaged in experimental paradigms, SCI patients respond to questions regarding their body by basing their reply on cognitive representation and semantic information rather than on information coming from their body. For example, when they are asked to estimate the spatial position of a body part that is hidden from view (e.g., their hand), they base their judgment on their cognitive knowledge of the usual position of that body part, and do not change their estimation after spatial manipulation or a drift of the very same body part. It seems as if the balance between body functions and their representation is lost, with the prevalence of a given component depending on the situation. The result is a type of imbalance between peripheral inputs and central representations that has been explored by means of the Rubber Hand Illusion (RHI). In the RHI paradigm, a visuo-tactile conflict is induced, leading to rapid changes in the sense of body ownership (Botvinick & Cohen, 1998). In RHI studies, healthy individuals were asked to look at a rubber hand that was being stroked by an examiner, synchronously or asynchronously with their own hand, which was hidden from view. It turned out that only during synchronous stimulation was the rubber hand perceived as part of the participants' own body (i.e., a subjective index of ownership over an artificial hand), and the position of the real hand was perceived as having shifted toward the rubber hand ('proprioceptive drift', an objective index of illusory perception of the body in space; Botvinick & Cohen, 1998; Ehrsson et al., 2005; Longo et al., 2008; Mohan et al., 2012; Schaefer et al., 2013; Tsakiris & Haggard, 2005). SCI participants with incomplete lesions tested in a RHI paradigm showed subjective indices of illusory hand ownership comparable to healthy participants (Lenggenhager et al., 2012). However, when the illusion was induced on their de-afferented legs, the SCI patients were less sensitive than the controls to multisensory stimulations (leg-illusion; Pozeg et al., 2017, see section 4.1). This supports the role of peripheral sensorimotor information in maintaining body representations.
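As a concrete illustration of the two RHI indices described above, the arithmetic of the proprioceptive-drift 'illusion effect' can be sketched as follows. All variable names and numbers are invented for illustration; they are not data from the cited studies.

# Hypothetical sketch: quantifying the objective RHI index described above.
import numpy as np

# Perceived hand position (cm along the axis toward the rubber hand),
# estimated before and after stroking, per participant (invented values).
pre_sync   = np.array([0.5, 0.2, 0.8, 0.4])
post_sync  = np.array([2.1, 1.8, 2.5, 1.9])
pre_async  = np.array([0.4, 0.3, 0.6, 0.5])
post_async = np.array([0.6, 0.5, 0.9, 0.6])

# Proprioceptive drift: shift of the felt hand position toward the rubber hand.
drift_sync  = post_sync - pre_sync
drift_async = post_async - pre_async

# The illusion-specific effect is the synchronous-minus-asynchronous
# difference; a positive value indicates illusory recalibration of position.
illusion_effect = drift_sync.mean() - drift_async.mean()
print(f"drift sync = {drift_sync.mean():.2f} cm, "
      f"drift async = {drift_async.mean():.2f} cm, "
      f"illusion effect = {illusion_effect:.2f} cm")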
Studies of the RHI in SCI patients are in line with reports of the plastic remapping of bodily representations in limb or finger amputees. In these studies, tactile stimuli delivered to the face ipsilaterally to the amputation side brought about the sensation of being touched not only on the face but also on the phantom hand (Ramachandran et al., 1992) or finger (Aglioti et al., 1997). Given the representational contiguity of the face and the hand in the cortical somatosensory system, the results have been interpreted as supporting the existence of perceptual correlates of post-ontogenetic plasticity. In a similar vein, a vertical version of the RHI paradigm has been used in paraplegic and tetraplegic patients, in whom synchronous and asynchronous stimuli are administered to a fake hand and to the subjects' ipsilateral hand or face. The results indicate that the tetraplegic (but not the paraplegic) participants reported the RHI when their ipsilateral face was stimulated. That only the patients who had lost the functionality of their hands were prone to the illusion suggests that the de-afferented hand region was prone to being driven by facial input, with the consequence of across-body-region remapping processes (Tidoni et al., 2014). Taken as a whole, these data demonstrate that the interruption of body-brain connections induces not only below-lesion sensory and motor deficits, as expected, but also changes in the brain networks involved in body representations, supporting the embodied cognition approach. Tellingly, these changes expand beyond the multisensory integration (as assessed in the presence of multisensory illusions) and involve higher-order processes, such as complex body-related visual discrimination, which will be discussed in the next section.

Visual discrimination of the human body

It is worth noting that until very recently, alterations in body representations were only discussed in relation to patients affected by brain damage, and they were interpreted as being the result of modifications in specific networks (Jenkinson et al., 2018; Romano & Maravita, 2019). Interestingly, SCI studies also hint at a topographical remapping of non-de-afferented/de-afferented body parts in domains other than somatosensory perception and motor control. Pernigo et al. (2012), for example, asked people with SCI to discriminate visual stimuli representing human bodies that differed both in shape and in the action they were performing. In a matching-to-sample task, the paraplegic participants looked at an image of a young man performing a movement and were asked to choose the identical image from among two images that were presented immediately after the first (Fig. 3). The differences between the two images were in the upper or lower body parts and regarded the body form (i.e., the choice was between two individuals performing an identical action) or the action (i.e., the choice was between two images of the same individual performing two different actions). The SCI participants performed worse than the healthy controls only when they were requested to identify differences in the lower body parts (which in fact corresponded to the paralysed parts of their own body).

Fig. 3 Lower line = changes in the lower body parts. In the two columns on the right the images differ in the form (i.e., identity) of the model but not in their action. B = experimental timeline. C = main results indicating a reduction in body representation of lower limbs in spinal cord injury.
In contrast, the patients' performance was similar to that of the controls for the healthy upper body parts (Pernigo et al., 2012). The same trend was found in tasks involving the discrimination of body form and action (see below for a discussion regarding action perception; Fig. 3). It is worth noting that this was a visual discrimination task and thus apparently did not involve the sensorimotor system. Interestingly, the only way to understand these results is to posit that a sensorimotor simulation is activated during the task, and thus that the body's sensory-motor system participates in a cognitive function such as visual discrimination. A deterioration in whole-body representation in people suffering from SCI was also found in a study in which participants were asked to judge the laterality of rotated images of feet, hands and whole bodies while they were in two different postures (with their hands and feet held either (i) straight or (ii) crossed). A posture-dependent modulation in reaction times in terms of the mental rotation of the body parts in the images was interpreted as the effect of afferent somatosensory information relating to body representation. In fact, this result was found for the control group, but not for the paraplegic group, for whom the effects of postural feet changes disappeared and the body representation progressively deteriorated in proportion to the degree of completeness of their SCI (Ionta et al., 2016; Scandola, Dodoni, et al., 2019b). These data confirm evidence found in a study involving a similar task carried out by patients suffering from focal hand dystonia (i.e., a group of movement disorders characterised by sustained or intermittent muscle contractions causing abnormal, often repetitive, movements, postures, or both; Albanese et al., 2013). Their responses were only slower when the body parts that they were requested to rotate corresponded to their own affected body parts (i.e., the dystonic hand), but not to their other hand or foot (Fiorio et al., 2006). As a whole, these results suggest that changes in the body-brain relationship impact not only somatosensory and motor processes but also higher-order functions. A theoretical explanation for these results is offered by the Perceptual Symbols Theory (PST; Barsalou, 2008), according to which all cognitive experiences are necessarily grounded in the sensory and motor contexts of their occurrence. During the experience, sensorimotor codes are recorded as multimodal perceptual, motor and interoceptive states. These codes shape conceptual representations. When similar representations are reactivated on other occasions, these are based on a new access to the sensorimotor information previously encoded (motor simulation). Thus, concepts are not an additional level of abstract, amodal representation that is separated from sensorimotor systems; rather, they use the same neural and cognitive processes used in sensorimotor processing (Barsalou, 2008; Barsalou, 2010). People with SCI are unable to simulate the states of the below-the-lesion body parts that are deprived of bodily outputs and somatosensory inputs. As a consequence of this deprivation, they fail in tasks requiring representations of those parts. Tellingly, the fact that changing sensory-motor potentialities alters cognition does not fit in with amodal theories of cognition. In fact, following an amodal approach, once built, symbolic representations should remain stable and not change due to 'peripheral' body changes.
A representation of the body that is exclusively symbolic is not expected to be sensitive to sensorimotor influences, as happens in the case of SCI.

The body in the world of people and objects

When considering the role of the body in cognition, one needs to take into account that the body acts and perceives in specific contexts. According to the embodied cognition approach, this means that information from the environment contributes towards shaping cognition by means of the mediation of the body (Clark, 1999). The first consequence of a relationship between the kind of body that an organism possesses and the kind of concepts that the organism can acquire is that "to conceive of the world as a human being does require having a body like a human being's" (Shapiro, 2011, p. 71). According to this view, the organism's understanding of the world and its ways of categorising experiences are determined by the properties of the body. From this perspective, even in the case of language, which might be considered to be the most amodal, symbolic human function, the basic concepts would derive from physical experiences (e.g., Lakoff & Johnson, 1980; Liuzza et al., 2011). In the following section, we discuss experimental results which indicate how changing sensorimotor bodily functions may alter the perception of the environment and objects. Two embodiment processes are analysed: the embodiment of artificial virtual agents (avatars) and the embodiment of objects that may or may not be in contact with the body and may or may not subserve adaptive navigational or motor functions. Furthermore, the effects of object embodiment in the representation of space will be discussed.

Embodying artificial agents

The investigation of experiences of 'mediated embodiment' (i.e., the technologically induced illusion of experiencing the body of an avatar as one's own, independently of the technology used to produce the illusion; Aymerich-Franch, 2018; Aymerich-Franch & Ganesh, 2016) provides a relevant contribution to the comprehension of how embodiment mechanisms impact cognition. Studies indicate that artificial agents, be they digital (i.e., an avatar) or physical (i.e., a robot), can be embodied to varying degrees. The sensations produced by this process are so strong that they may influence cognitive and social behaviour (Cangelosi & Stramandinoli, 2018; Wykowska et al., 2016). For example, embodying adults in the body of a 4-year-old child (Banakou et al., 2013) or a small mannequin (a 'Barbie doll'; van der Hoort et al., 2011) causes an overestimation of object sizes. In contrast, object size is underestimated when adults are embodied in a giant body (van der Hoort et al., 2011). Furthermore, white people embodied in a black virtual body (Banakou et al., 2013; Peck et al., 2013) or 'enfacing' a black face (Bufalari et al., 2014) exhibit a decrease in implicit racial bias. Embodiment of an avatar has been induced in patients with SCI by means of the Virtual Leg Illusion (Pozeg et al., 2017). Synchronous and asynchronous visuo-tactile stimulation was applied to the participants' back, above the lesion level, or at the shoulder (where afferences are spared, in the case of cervical lesions), while the virtual legs were seen on a head-mounted virtual-reality (VR) display in order to induce visuo-tactile integration.
People with SCI were less sensitive to the multisensory stimulations used to induce illusory ownership of the virtual legs than the controls, thus hinting at a specific role of body afferences and efferences in the illusion. Furthermore, the virtual-leg illusion (as well as the full-body illusion, in which people look at the full body and not only the legs) was associated with a mild effect of pain reduction (e.g., Pozeg et al., 2017). This effect was confirmed in patients suffering from complex regional pain syndrome and peripheral nerve injury (Matamala-Gomez et al., 2019). An in-depth knowledge of the processes involved in embodiment may have an enormous impact on future clinical practices, in particular on the possibility of creating brain-computer interfaces and robotic tools that can be useful in rehabilitation after SCI (Jarosiewicz et al., 2015; Osuagwu et al., 2016; Sørensen & Månum, 2019). In particular, a more profound knowledge of the mechanisms involved in embodiment might help clinicians to understand why some patients, for example amputees, express negative attributions towards their prosthesis (Senra et al., 2012), and may suggest a type of therapy that will facilitate acceptance (Holthe et al., 2018). The implications of these procedures in terms of cognition are to date unknown, and future research is needed (Gorgey, 2018; Lee et al., 2019). Two complementary areas of interest emerge from the literature on embodiment processes. The first regards the need to take into account data concerning the changes in body-related cognitive functions as a result of de-afferentation and de-efferentation in order to design optimal robotic devices. The second concerns the necessity to investigate the effects of the use of robotic aids on cognitive functions (Beckerle et al., 2019).

The embodiment of objects

Clinical studies indicate that objects may be incorporated. Aglioti et al. (1996), for example, reported on a woman with right brain damage affected by disownership of her left hand who also denied the ownership of the rings she wore on that hand. When the same objects were put on her right hand or were held by the examiner, the patient correctly recognised them as her own. Other personal objects unrelated to her left hand (e.g., pins, earrings, a comb) were always correctly recognised as being hers. This indicates that the mental representation of one's own body may include inanimate objects that have been in contact with or in close proximity to the body itself. This has been confirmed in subsequent studies on body representation and peripersonal space (PPS; i.e., the region of space within which objects can be grasped and manipulated without the need to move the trunk). For example, the extensive use of a computer mouse extends the peripersonal space around the hand, and this enlargement may include the screen monitor, at least during the time when the hand remains in contact with the mouse (Bassolino et al., 2010). Furthermore, in the case of blind people, the regular use of a cane to navigate extends their PPS to the full length of the cane (Serino et al., 2007), and the same occurs in amputees when they are wearing their prosthesis (Canzoneri et al., 2013). Similar plastic changes have been observed in healthy people after brief training sessions with a cane or two sticks, in particular when active movements with tools are requested (Maravita et al., 2002).
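Since much of what follows turns on measuring the extent of PPS, it may help to sketch one common estimation approach used in cross-modal paradigms of the kind cited above (e.g., Canzoneri et al., 2013): tactile reaction times are recorded while a task-irrelevant stimulus approaches the body, and the central point of a sigmoid fitted to RT as a function of stimulus distance is taken as the PPS boundary. The sketch below uses invented data; it is an assumption-laden illustration, not the analysis of any study reviewed here.

# Hedged sketch: estimating a PPS boundary from tactile RTs as a function
# of the distance of an approaching, task-irrelevant stimulus.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(d, ymin, ymax, d_c, slope):
    """RT as a function of stimulus distance d; d_c marks the PPS boundary."""
    return ymin + (ymax - ymin) / (1.0 + np.exp(-(d - d_c) / slope))

distance_cm = np.array([10, 20, 30, 40, 50, 60, 70, 80])    # stimulus distance
rt_ms = np.array([402, 405, 418, 447, 471, 480, 483, 485])  # mean tactile RTs

# RTs are faster near the body (multisensory facilitation inside PPS);
# the fitted central point d_c estimates where that facilitation fades.
params, _ = curve_fit(sigmoid, distance_cm, rt_ms, p0=[400, 490, 40, 5])
print(f"estimated PPS boundary at about {params[2]:.1f} cm")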
In the same way, if a limb is forced into immobilisation, a reduction in the extent of the relative PPS occurs (Bassolino et al., 2014), along with a decrease in excitability at the cortical level (Facchini et al., 2002). Using a cross-modal integration paradigm in an experiment with paraplegic participants, Scandola et al. (2016) found that the somatosensory and motor disconnection that characterises spinal cord lesions alters the representation of PPS in a specific way that impacts the space around the person's paralysed legs (but not the space around the hands; Fig. 4). In this case too, cognitive changes are not attributable to the generic adaptation processes that the individuals may have gone through in order to deal with their paralysis. On the contrary, the fact that these changes are somatotopographically specific suggests that sensory afferences and motor efferences do in fact play a causal role in building space representations and in rebuilding and adapting them to the changes which have occurred in the person's body-environment relationship. In keeping with this view, there is evidence that 15 min of passive mobilisation are enough to bring about a recovery of the representation of space around the feet in paraplegics (Scandola et al., 2016). In healthy people, congruency in the visuo-motor information coming from the avatar in a VR setting seems to be necessary for embodiment, and in fact the PPS turns out to shrink when this information is incongruent. In contrast, in SCI patients, passive movement expands the PPS even in the absence of visual stimuli, indicating that the residual above-lesion sensory information probably plays a crucial role in PPS recovery after passive mobilisation. It is thus evident that although top-down factors may partially modulate a patient's response, they also certainly interact with information (or lack of information) coming from the body. These studies support the theories suggesting that peripheral components involved in sensory experiences are not merely involved in the generation of experience, but are constitutive of experiences (O'Regan & Noë, 2001; Varela et al., 1991). A possibly extreme position is expressed in the Peripheral Mind Theory (Aranyosi, 2013), in which it is proposed that "there is a peripheral presence of the mind in every part of the body that gets innerved, and these peripheral parts of the mind are not less central than the processing center which is the brain" (page 11). According to this idea, the mind is multiply located or co-located. People feel that they are not in a body but are the body. Proprioception, interoception and touch all interact so as to create an "enminded body experience" (page 144). In this way, the mind extends beyond the brain to the body, in particular to the connections between the peripheral nervous system and the body. The Extended Mind Theory (Clark & Chalmers, 1998) goes even further by claiming that cognition emerges from the interaction of individuals with the objects they use during cognitive activities. Clark suggests that gestures are a special reasoning system useful for spatial reasoning (Clark, 2008). When a person writes what they are thinking at that moment, the paper provides a medium for thought, thus enabling the person to shape and build their ideas.
The same goes for tools that people frequently use as aids to cognitive functions: for example, the notebook that a patient suffering from Alzheimer's disease uses in order to remember information plays the role usually played by memory networks (Clark, 2008). So, the question now concerns whether it is the mind that extends out towards the world or the world that is embodied in the mind. The principle of 'Economy of Action' (Proffitt, 2006) has been used to study SCI in order to find an answer to this question. This principle considers perception to be embodied in an individual's states, skills, goals and emotions. For example, the perception people have of the space surrounding them varies not only with any variation in the visual stimuli, but also with the individual's intent to minimise the cost of their actions in that space in terms of energy, something that is mandatory for survival from an evolutionary point of view (i.e., Economy of Action). It has been shown that wearing a heavy backpack makes a distance appear farther (Proffitt et al., 2003) and the inclination of a slope appear steeper (Bhalla & Proffitt, 1999). The same happens when people feel fatigued, perhaps because they are physically unfit, elderly or in bad health (Bhalla & Proffitt, 1999). In contrast, expertise in physical exercise or sport influences and ameliorates a person's visual perception of any objects that have a key role in the sport they practise, for example, the hole in the course for golfers (Witt et al., 2008) or the ball for baseball players (Witt & Proffitt, 2005; but see Firestone, 2013, and Firestone & Scholl, 2016). In the case of people with SCI, an object of crucial importance for their autonomy in everyday life, and in general for their physical and social well-being, is the wheelchair.

Fig. 4 (Scandola et al., 2016) A = results in the comparison between spinal cord injury (SCI) and controls, indicating a reduction of PPS around the feet in SCI. B = recovery of PPS representation around the feet after mobilisation.

The interaction between the embodiment of a wheelchair and the Economy of Action principle was investigated with SCI patients (Scandola, Togni, et al., 2019c) to test the hypothesis that the greater the degree of embodiment of the wheelchair, the better the person would be at estimating distances in space. That is to say, if the body feels better due to embodiment, the person will be better able to judge distances. The Body View Enhancement Task was considered as a measure of wheelchair embodiment. In this task, the participants either sat in their own wheelchair or in one that they had never used before. They were asked to respond to flashing lights on their body parts, both above and below the level of the lesion, and on the wheelchair. Similar or slower reaction times (RTs) to stimuli on the body and the wheelchair indicated, respectively, the presence or absence of tool embodiment. In particular, if the RTs were similar between the body and the wheelchair, that would indicate embodiment. In contrast, if RTs were slower on the wheelchair compared to the body, that would indicate an absence of embodiment. The results indicated that the SCI participants embodied their own wheelchair but not the one they had never used before. Moreover, in keeping with Pozeg et al. (2017), the SCI participants displayed disownership of their lower limbs, which were treated as external objects (i.e., with responses slower than those given for the lights on the afferented body parts).
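The RT logic of the Body View Enhancement Task described above reduces to a simple contrast, sketched below with invented numbers. Note that inferring embodiment from a non-significant body-tool difference is a deliberate shorthand for the study's actual analyses, used here only to make the logic explicit.

# Illustrative sketch of the RT contrast described for the Body View
# Enhancement Task. All values are invented, not data from the cited study.
import numpy as np
from scipy import stats

rt_body      = np.array([350, 362, 341, 355, 348, 360])  # lights on body (ms)
rt_own_chair = np.array([352, 358, 349, 353, 351, 357])  # own wheelchair
rt_new_chair = np.array([401, 395, 410, 388, 405, 399])  # unfamiliar wheelchair

for label, rt_tool in [("own", rt_own_chair), ("unfamiliar", rt_new_chair)]:
    t, p = stats.ttest_ind(rt_body, rt_tool)
    verdict = ("comparable RTs, consistent with embodiment" if p > 0.05
               else "slower RTs on the tool, no embodiment")
    print(f"{label} wheelchair: p = {p:.3f} ({verdict})")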
Crucially for the aim of this review, embodiment of their wheelchair enabled the SCI participants to estimate physical distances in their extrapersonal space (as shown by means of a 3D virtual plastic scenario, see Fig. 5) as efficiently as the healthy controls (i.e., errors in estimation increased as the distance increased). This did not happen when they were in the unfamiliar wheelchair. In addition to demonstrating that the processes of tool embodiment impact on cognitive functions, the results support the hypothesis of a potential extension of body boundaries towards objects (i.e., embodiment) rather than an extension of the mind. In fact, the participants modulated their perception of space based on their actual body conditions in that moment, and wheelchair embodiment increased their perception of the space around them. According to Aranyosi (2013), "tools are more plausibly to be taken as part of the nervous system and hence the case is not of extended mind, but what I have dubbed 'contracted world', a case in which a previous autonomous part of the world gets 'captured' by a nervous system and so ceases to count as an external from then on" (page 118). From this point of view, the mind is in effect extended, but not beyond the body boundaries (or the PNS). Taken together, the results discussed in this section suggest that sensorimotor de-afferentation and de-efferentation extend their effects beyond one's own body, to the representation of objects and the environment. No cognitive changes at the level of the brain can explain these processes, which are totally mediated by the spinal cord lesion. The next step regards an investigation of potential changes in action representation and motor learning.

Fig. 5 (Scandola et al., 2019a, b, c) A = an example of the stimuli (a ramp with a flag) used in the experiment. B = the pattern of errors in perception of distances from the flag. As in healthy subjects, when SCI participants are seated in their own wheelchair, errors in estimation increase as the distance increases. This result suggests that coding the distance between a given point in space and one's own position is based not only on visual estimation but also on body-related cues. It is worth noting that no such effect was found in SCI sitting in an unfamiliar wheelchair, suggesting that visual cues only are used in the absence of the association between one's own body and an external object.

How residual motor skills following SCI impact cognition

Monitoring one's own performance is part of the interaction between the brain, the body and the environment. The ability to monitor skills is fundamentally important for regulating motor behaviour and learning. The complexity of the processes that underlie our daily-life decisions and actions in relation to objects and people may cause suboptimal performance in a variety of circumstances, leading to errors. Within the theoretical framework of the predictive brain (Friston, 2010), an error can be conceptualised as a deviation from one's expectation based on previous (prior) knowledge about the regularity of events in the environment and in the social world. Detecting a mismatch between internal expectations (i.e., the underlying behavioural intention) and the actual spatio-temporal deployment of an event (e.g., perceived motor acts) is fundamentally important for updating and generating new internal models. Thus, performance monitoring has a crucial, adaptive role in forming, updating and using internal models concerning important aspects of behaviour (e.g., strong priors about what happens next). Studying the development of children provides an important source of information for investigating the role of action in cognition. In fact, babies build some fundamental concepts and competences by means of their movements, actions and errors (Thelen & Smith, 1994): not only, for example, space and time representation, object and shape categorisation (Smith, 2005), abilities in mental rotation (Frick & Möhring, 2013) and the learning of foreign languages (Toumpaniari et al., 2015), but also the development of scientific concepts (Kontra et al., 2015), decision making and choice selection (Rivière & David, 2013).
At the opposite extreme, the cognitive deterioration and daily-life impairment associated with old age might be at least in part due to deficits in embodiment, which can in part be linked to neuronal degradation at the sensorimotor level (Kuehn et al., 2018). Experimental studies also support the hypothesis of a causative role played by action in cognition (Schubert, 2004; Wells & Petty, 2010). Neurological patients demonstrate that the inability to perform actions also has consequences for higher-order tasks that are not primarily motor. Furthermore, patients affected by hemiplegia may show deficits in the visual discrimination of body parts, both in terms of action and form recognition (Moro, Berlucchi, et al., 2008a), and apraxic patients may be unable to recognise gestures (Canzano et al., 2014; Scandola et al., 2020, 2021; Zadikoff & Lang, 2005). Furthermore, an impairment in performing actions is also associated with disorders in the comprehension and identification of sounds related to human actions. Crucially, these symptoms are topographically specific, as patients with deficits in performing limb and bucco-facial actions are impaired in matching limb and mouth action-related sounds, respectively (Pazzaglia, Smania, et al., 2008b). Thus, it seems that motor production modulates action recognition, no matter whether it is mediated through visual, auditory or multimodal sensory inputs. Studies on SCI support this notion. Although these patients often report that they walk in their dreams (Saurat et al., 2011), there is evidence indicating that paraplegic patients may suffer from a dramatic reduction in their motor imagery capacities (Alkadhi et al., 2005; Chen et al., 2016; Di Rienzo, Collet, et al., 2014a; Di Rienzo, Guillot, et al., 2014b; Hotz-Boendermaker et al., 2008; Scandola, Aglioti, Pozeg, et al., 2017b) and in the discrimination of biological motion (e.g., the direction of ambulation of a point-light walker; Arrighi et al., 2011), even if they are aware of their motor deficits (Manson et al., 2014). Again, these disorders in action representation may be topographically specific, involving actions that would be executed by the paralysed below-lesion body parts but not those performed by the above-lesion body parts (Pernigo et al., 2012; Scandola, Aglioti, et al., 2019a). Thus, impairment of motor simulation seems to be linked to failures in motor imagery.
Interestingly, studies indicate that people with SCI fail in motor imagery only when they are asked to carry out the task by assuming a first-person perspective (i.e., internal, first person visual imagery and kinesthetic mental imagery), while they do not differ from controls when the task is executed assuming a purely visual, third-person perspective (i.e., external motor imagery; Scandola, Aglioti, Pozeg, et al., 2017b), a condition in which they declare that they use strategies based on memory. This result should be taken into account when devising rehabilitation training focused on motor imagery as disorders in this ability may be present and impact on the efficacy of the training. To date, results from motor imagery interventions on pain severity are conflicting, while a certain degree of functional improvement has been found when mental imagery is combined with physical practice (for review, see Opsommer et al., 2020;Opsommer & Korogod, 2017). A specific deficit in learning implicit motor sequences was also observed in SCI individuals by Bloch and co-workers (2016), who tested healthy people and SCI paraplegic participants (with normal motor and sensory functions in their upper limbs) in a task where they were requested to press some buttons on a keyboard according to sequences indicated by cues shown on a computer screen. The order of the buttons was pre-arranged, and the same sequence was repeated for six blocks of stimuli. At the seventh block the sequence changed, and in the eighth block the first sequence was repeated. The RTs of the responses to the eighth block showed a learning effect related to the task in healthy but not in SCI participants. These results are interpreted by the authors as a deficit of SCI individuals in building a new motor expertise (Bloch et al., 2016). Particularly interesting for the aim of this review is the contrasting, complementary data coming from investigations of SCI patients that provide evidence that the motor abilities that paraplegics develop after lesion-onset and during rehabilitation can lead to new action discrimination skills. This was demonstrated in the study by Pernigo et al. (2012) in which paraplegics who regularly practise sports became particularly good at the visual discrimination of actions performed by the upper parts of their body. In other words, there is a correspondence between the body parts used in sports (e.g., in this case, the arms and the upper part of the trunk) and the actions that the participants were better able to perceive. The data were subsequently replicated (Scandola, Aglioti, et al., 2019a) by means of a Progressive Temporal Occlusion paradigm (Abernethy, 1987). A group of paraplegics were exposed to two series of videos showing a person in a wheelchair or on rollerblades who was trying to climb on a step. The participants were asked to predict how the video would end choosing from three alternatives in which the person in the video: (i) carried out the action successfully; (ii) was not able to get onto the step or (iii) fell to the ground. The performance of the SCI participants was compared to that of two other groups. One control group was composed of physiotherapists with experience in the rehabilitation of people with SCI, but who were inexperienced with rollerblades. 
The experimental paradigm allowed the experimenters to exclude the possibility that mere visual expertise or knowledge of the kinematics involved in the use of a wheelchair influenced the performance of the SCI participants in the videos involving a wheelchair. The other control group consisted of experienced rollerblade skaters who were, however, inexperienced with wheelchairs. This permitted the experimenters to compare the performance of the SCI group in the videos showing a rollerblader with that of a group of experts. The hypothesis was that if abilities relating to action anticipation are modulated by the person's sensorimotor experience, the paraplegic group would perform more accurately in the wheelchair videos and the group of skaters in the rollerblade videos. This was precisely what the main results of the study demonstrated (with the group of physiotherapists performing with average accuracy in both videos, Scandola, Aglioti, et al., 2019a). Differences were also found in an affordance-related reachability judgment task (Sedda et al., 2018), in which only the control group overestimated the range. In addition, while the controls were faster at making judgments on reachability when the objects were in their peripersonal (vs. extrapersonal) space, the SCI patients did not seem to have this advantage for objects that were close. Importantly, this finding was related to the patients' ability to perform everyday tasks. As a whole, these data indicate that de-afferentation and de-efferentation after spinal cord lesions impact body and action knowledge not only at the lower level of perception and execution, but also at the level of higher cognitive processes relating to representation, visual discrimination and mental imagery.

How can the study of SCI contribute to the debate on the embodied cognition approach?

In the previous sections, we have shown that de-afferentation and de-efferentation due to SCI do not impact only somatosensory and motor functions but extend to higher-order body- and space-related cognitive functions. In doing so, a progressive approach was followed, moving from the more expected changes in body perception towards modifications at representational levels relating to knowledge of objects and space. Experimental data also show that, although a SCI results in a reduction in functional autonomy, the changes in the body-cognition relationship may also lead to the learning of new abilities which are very specific to the patient's new post-lesional condition. It could be argued that these effects are not immediately obvious, and in fact specific experimental procedures are required in order for them to be seen. Indeed, the experiments carried out in this area confirm what SCI patients have spontaneously reported on the subject of their daily experiences and the efforts they need to make in order to deal with their new condition (Conomy, 1973; Murphy, 1990; Papadimitriou, 2008; Scandola, Aglioti, Avesani, et al., 2017a). Along the continuum of the different positions shown in Fig. 1, the results of tests with SCI patients support the ideas that cognition depends on the experience of having a body with sensorimotor capacities (Varela et al., 1991), and that cognition and representational processes are built on sensorimotor information (Barsalou, 2008; O'Regan & Noë, 2001).
The notion that in SCI patients this knowledge is at least partly experience-dependent (Ostarek & Bottini, 2021) is supported not only by the loss of abilities but also (and probably in a stronger way) by data on the post-lesional acquisition of new abilities (e.g., the capacity to discriminate wheelchair actions). Nevertheless, when considering the nature of human experiences, one needs to take into consideration not only sensorimotor but also cognitive and affective experiences. Thus, the role of the body is difficult to isolate. It is worth noting, however, that changes in body, space and action representation after SCI can be 'modality specific' (i.e., representations change as a consequence of sensorimotor deficits, without modifications in visual or other sensory systems, or in higher cognitive or affective functions). Furthermore, these changes are topographically organised, namely they only regard the de-afferented and de-efferented body parts. These specificities make it possible to conclude that, at least for these functions, cognition is embodied. De-afferentation and de-efferentation do not only change the perception of one's own body, but also impact the mental representation of the bodily self and the relationship that an individual has with objects and the environment. With respect to the body, there is a loss of the equilibrium between top-down and bottom-up processes, with either the latter or the former prevailing depending on the circumstances. With reference to the environment, SCI changes the individual's relationship with the surrounding space. Unfortunately, to date, the results coming from behavioural studies have not yet been supported by neurophysiological and neuroanatomical studies with respect to the possibly specific changes in brain networks. However, neuroimaging studies show that the neuroplastic reorganisation after SCI extends beyond the sensorimotor systems, and also affects neural networks involved in cognitive functions (Curt et al., 2002; Gustin et al., 2010; Solstrand Dahlberg et al., 2018; Vastano et al., 2021). Furthermore, although the results are still preliminary and need confirmation, a deterioration in cognitive functions such as attention, memory and executive functions has also been reported following SCI (Chiaravalloti et al., 2020; Guadagni et al., 2019; Molina et al., 2018). Crucially, in the absence of brain damage, the only possible explanation (although speculative at the moment) for these data is that bodily changes, and in particular sensorimotor deprivation, can modify the central networks involved in these cognitive functions. With a view to improving knowledge regarding the role of the body in cognition, it seems particularly important to investigate the role of sensory channels (e.g., sight and hearing) and interoceptive inputs coming from visceral organs (not affected by SCI) in the reorganisation of the body-cognition relationship in de-afferented/de-efferented people. This research area promises to contribute substantially to the debate on embodied cognition theories.

Conclusions and future research

Taken as a whole, the results from research in SCI indicate that sensorimotor functions have an important role in cognition, since a body-brain disconnection modifies not only the individual's mental representations of their body but also their knowledge of the environment around them. This may contribute towards a better understanding of the body-cognition relationship and further support the embodied approach to the study of cognition.
Further studies are necessary to understand whether these changes also involve symbolic functions such as language and social cognition. Moreover, future investigations should address the important issue of whether spared sensory systems, and in particular information coming from inside the body (i.e., interoception), may contribute towards the maintenance or rebuilding of the representation of one's own body in the environment.

Conflicts of interest

None.

Open Access

This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Divalent regulation and intersubunit interactions of human Connexin26 (Cx26) hemichannels

Control of plasma membrane connexin hemichannel opening is indispensable, and is achieved by physiological extracellular divalent ion concentrations. Here, we explore the differences between regulation by Ca2+ and Mg2+ of human connexin26 (hCx26) hemichannels and the role of a specific interaction in regulation by Ca2+. To effect hemichannel closure, the apparent affinity of Ca2+ (0.33 mM) is higher than for Mg2+ (1.8 mM). Hemichannel closure is accelerated by physiological Ca2+ concentrations, but non-physiological concentrations of extracellular Mg2+ are required for this effect. Our recent report provided evidence that extracellular Ca2+ facilitates hCx26 hemichannel closing by disrupting a salt bridge interaction between positions D50 and K61 that stabilizes the open state. New evidence from mutant cycle analysis indicates that D50 also interacts with Q48. We find that the D50-Q48 interaction contributes to stabilization of the open state, but that it is relatively insensitive to disruption by extracellular Ca2+ compared with the D50-K61 interaction.

Introduction

Connexins constitute a family of transmembrane proteins, with 20 members in humans, which are expressed in almost all cellular types.1 At the molecular level, connexins assemble as hexamers to form hemichannels. Hemichannels are transported to the plasma membrane, where most dock with hemichannels in apposed cells to form gap junction channels (GJCs).2 GJCs allow propagation of electrical and/or molecular signaling among neighboring cells by direct intercellular communication.3 Unpaired hemichannels, those not forming GJCs, seem to play an autocrine/paracrine role by releasing transmitters, such as glutamate and ATP, into the extracellular medium. Several connexin mutations that cause human disease result in dysfunction of hemichannel regulation by extracellular Ca2+.4,5 A molecular and mechanistic understanding of Ca2+ regulation of hemichannel gating will help to elucidate how these mutations produce connexin channelopathies.

Results and Discussion

We assessed connexin hemichannel activation and deactivation in oocytes expressing hCx26 protein using the 2-electrode voltage-clamp technique. The peak tail currents and their relaxation kinetics following a depolarizing pulse from -80 to 0 mV were examined. Using this protocol, we previously showed that the peak tail currents increase with reduction of external divalent ions. Figure 1A shows current traces obtained at 1.8, 0.5, and 0.01 mM extracellular Ca2+ and Mg2+ from 2 different oocytes expressing moderate levels of hCx26 currents. The peak tail currents are reduced with increased external Ca2+, showing that external Ca2+ inhibits activation of hCx26 hemichannels, as previously shown.6 By comparison, the same concentrations of extracellular Mg2+ are much less effective in reducing the tail currents. Consistent with this observation, the holding currents prior to depolarization are significantly decreased in the presence of Ca2+ compared with those in the presence of Mg2+. These results suggest that Mg2+ does not efficiently close hemichannels even at -80 mV. Figure 1B shows the dose-response relations for normalized hemichannel tail currents. The data are fit to a Hill equation of the form:

I/I_max = 1/[1 + ([X^2+]/K_D)^n] (Eq. 1),

where [X^2+] is the divalent ion concentration, I/I_max is the fractional current, I is the tail current at a particular divalent ion concentration, I_max is the maximal tail current activation at 0.01 mM divalent ion, K_D is the apparent affinity and n is the Hill coefficient. The calculated values of the apparent K_D for inhibition of hCx26 hemichannels by extracellular Ca2+ and Mg2+ are 0.33 mM and 1.8 mM, respectively. The best-fit parameter values for n are about 1.38 and 1.41 for extracellular Ca2+ and Mg2+, respectively. At physiological Ca2+ concentrations (1.0-2.0 mM), hCx26 hemichannels are ≤ 15% of the maximal activation, but at the corresponding Mg2+ concentrations they are ≥ 50% of the maximal activation. Figure 1C shows the deactivation time constants of the tail currents as a function of Ca2+ and Mg2+. Deactivation kinetics are dramatically accelerated by Ca2+ concentrations from 0.01 to 5.0 mM. In contrast, Mg2+ starts to accelerate the closure only above 0.5 mM, and the effect is much smaller. The steady-state and kinetic data support the idea that, under physiological ionic conditions, extracellular Ca2+, but not extracellular Mg2+, plays a major role in stabilizing the closed state and facilitating closing of Cx26 hemichannels. We previously showed that the Ca2+-dependent closing kinetics of hCx26 hemichannels are mostly mediated by disruption of an electrostatic interaction between positions D50 and K61 that stabilizes the open state.6 Double mutant cycle analyses support a thermodynamic linkage between Ca2+ and disruption of the D50-K61 interaction in open channels. In addition, single disease-causing mutations at position D50 (N/Y) eliminate Ca2+ sensitivity of the hemichannel currents and accelerate deactivation kinetics, supporting the idea that in open wild-type hemichannels D50 forms an electrostatic interaction with K61 that is disrupted by external Ca2+. Consistent with these data, we observed that substitution of D50 with an alanine (D50A mutation) accelerates hemichannel closure, as reflected in the deactivation time constants of tail currents in response to depolarizing pulses from -80 to 0 mV, and essentially eliminates their Ca2+ dependence (Fig. 2A, upper, and orange data points in Fig. 2B). This further supports the notion that the lack of a negative charge at position D50 mimics disruption of a D50-K61 salt bridge by Ca2+. Interestingly, a recently revised version of the hCx26 GJC crystal structure supports an inter-subunit interaction between positions D50 and K61 at a distance of 2.8 Å, and also suggests that D50 forms an intersubunit interaction with position Q48.7,8 A recent study using mutation and cysteine cross-linkages further supports a functional interaction between Q48 and D50,8 but the contribution of this interaction to the Ca2+ sensitivity of hCx26 hemichannel closure was not clear. For this reason, we tested the effect of this interaction on the Ca2+ dependence of the deactivation time constants by substituting an alanine residue at position 48 (Q48A mutation). Figure 2A (lower) shows current traces in response to depolarizing pulses from -80 to 0 mV in the presence of 0.1, 0.5, and 1.8 mM Ca2+ for an oocyte expressing Q48A. The holding currents and peak tail currents increase as extracellular Ca2+ is reduced. At 1.8 mM Ca2+, the deactivation kinetics display 2 components, a fast component of 1.6 ± 0.5 s and a slow component of 10 ± 0.4 s, the latter being similar to the wild-type behavior.
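As an aside, the Hill fit of Eq. 1 is easy to reproduce with standard curve-fitting tools. The sketch below is illustrative only: the data points are made-up stand-ins for normalized tail currents (not values from this study), chosen so that the fit lands near the reported K_D ≈ 0.33 mM and n ≈ 1.4 for Ca2+; SciPy is assumed to be available.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc_mM, K_D, n):
    """Eq. 1: fractional current I/I_max as a function of divalent concentration."""
    return 1.0 / (1.0 + (conc_mM / K_D) ** n)

# Hypothetical normalized tail currents vs. [Ca2+] (mM) -- placeholder data only.
ca_mM  = np.array([0.01, 0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0])
frac_I = np.array([0.99, 0.92, 0.81, 0.63, 0.36, 0.17, 0.07, 0.02])

popt, pcov = curve_fit(hill, ca_mM, frac_I, p0=[0.5, 1.0])
K_D, n = popt
perr = np.sqrt(np.diag(pcov))
print(f"K_D = {K_D:.2f} +/- {perr[0]:.2f} mM, n = {n:.2f} +/- {perr[1]:.2f}")
```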
Figure 2B (green) shows the Ca2+ sensitivity of the rate-limiting slow component. In contrast to the D50A mutant, the deactivation kinetics (hemichannel closure) of the Q48A hemichannels are only slightly increased at low Ca2+ concentrations (below 0.1 mM), but are markedly more rapid, compared with the wild-type channels, at higher Ca2+ concentrations. Thus, mutation at Q48 suggests that, in wild-type channels, Q48 plays a role in stabilizing the open state of hCx26 hemichannels, but its stabilizing effect is less than that of the charge at D50 (i.e., it is insensitive to the Ca2+ concentration below 0.2 mM, and above that level the deactivation is more rapid than in wild-type, but slower than for the D50 mutant). The implication is that the effect of Ca2+ that involves Q48 is a lesser component of the overall Ca2+ sensitivity than are the interactions of D50 with other residues (e.g., with K61). To investigate the Ca2+-dependent coupling energy between residues D50 and Q48, we performed double mutant cycle analysis using apparent affinities for Ca2+ from wild-type, Q48A and D50A single mutants, and D50A/Q48A double mutant channels. Figure 3A shows the [Ca2+] dose-response relations for wild-type and mutant channels. The calculated values of K_D are 0.33, 0.15, 0.75, and 1.7 mM for wild-type, Q48A and D50A single mutants, and D50A/Q48A double mutant channels, respectively. The pairwise interaction energy between positions D50 and Q48 was estimated using Equation 2 (see Methods). This analysis yielded a coupling energy (ΔΔG) of -0.63 kcal/mol (couplings with magnitude below 0.5 kcal/mol are considered non-specific). Strikingly, a similar coupling energy has been obtained by others using analysis of the voltage dependence of activation,8 rather than the Ca2+ dependence explored here. These results support the idea that the side chains of the D50 and Q48 residues, as suggested by the crystal structure, interact and are involved in open-state stabilization, but that this interaction does not play a major role in stabilizing the open conformation at zero or low extracellular Ca2+ concentrations.

Channel expression and molecular biology

cDNA for hCx26 was purchased from Origene. Wild-type Cx26 was subcloned into the pGEM-HA vector (Promega) for expression in Xenopus oocytes. Mutations of hCx26 were produced with QuikChange II Site-Directed Mutagenesis Kits (Agilent Technologies). DNA sequencing performed at the NJMS Molecular Resource Facility confirmed the amino acid substitutions. NheI-linearized hCx26 wild-type and mutant DNAs were transcribed in vitro to cRNAs using the T7 mMESSAGE mMACHINE Kit (Ambion).

Electrophysiology

Electrophysiological data were collected using the 2-electrode voltage clamp technique. All recordings were made at room temperature (20-22 °C). The recording solutions contained (in mM) 118 NaCl, 2 KCl, 5 HEPES (pH = 7.4), with divalent concentrations ranging from 0.01 to 20 mM. Currents from oocytes were recorded 1-3 d after cRNA injection using an OC-725C oocyte clamp (Warner Instruments). Currents were sampled at 2 kHz and low-pass filtered at 0.2 kHz. Microelectrode resistances were between 0.1 and 1.2 MΩ when filled with 3 M KCl. All recordings were performed using agar bridges connecting bath and ground chambers.

Measurement of Ca2+ dose-response curves

Endogenous Cx38 expression was reduced by injections of antisense oligonucleotide against Cx38 (1 mg/ml; using the sequence from Ebihara et al.9) 4 h after harvesting the oocytes.
[Figure 2 caption fragment: D50A (orange triangles) and Q48A (green triangles) mutants; the solid and dotted lines represent the best linear fits for wild-type and mutant channels, respectively; data represent mean ± SEM of at least 3 independent measurements.]

After 1 d, the same oocytes were coinjected with 18-50 nl of cRNA (0.5-1 mg/ml) coding for hCx26 or for hCx26 mutants plus the Cx38 (1 mg/ml) antisense. Ca2+ and Mg2+ dose-response measurements were obtained by assessing the tail current peaks after reaching current saturation during a depolarizing pulse from -80 to 0 mV. Tail current measurements include the steady-state "holding" currents due to the opening of hemichannels at -80 mV induced by reduction of extracellular divalent concentrations. Deactivation time constants were determined by fitting tail currents, up to 10 s after reaching steady state, to exponential functions using Clampfit 11 software.

Mutant cycle analysis

Mutant cycle analysis was performed using the apparent affinities for Ca2+ derived from steady-state currents. The coupling energy (ΔΔG) was calculated as:

ΔΔG = RT ln( (K_D[wild-type] × K_D[double mutant]) / (K_D[mutant 1] × K_D[mutant 2]) ) (Eq. 2)

where R is the ideal gas constant and T is the absolute temperature. Significant coupling is indicated by any value of magnitude above 0.5 kcal/mol.

Disclosure of Potential Conflicts of Interest

No potential conflicts of interest were disclosed.

[Figure 3A caption fragment: [Ca2+] dose-response relations for oocytes expressing Q48A (green triangles) and D50A (orange triangles) mutants, and D50A/Q48A (blue triangles) double mutant hemichannels; the solid and dotted lines represent the best fits to the Hill equation for wild-type (from Fig. 1B) and mutant channels, respectively; data represent mean ± SEM of at least 3 independent measurements.]
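For reference, Eq. 2 is simple enough to evaluate directly. The sketch below is a minimal illustration assuming T = 295 K and the parenthesized grouping of Eq. 2 shown above; note that with the rounded K_D values quoted in the text (0.33, 0.15, 0.75, 1.7 mM) it returns roughly +0.94 kcal/mol rather than the paper's -0.63 kcal/mol, since the published value comes from the unrounded fits and the sign depends on how the mutants are ordered in the ratio.

```python
import math

R = 1.987e-3  # ideal gas constant, kcal/(mol*K)

def coupling_energy(K_wt, K_mut1, K_mut2, K_double, T=295.0):
    """Eq. 2: mutant-cycle coupling energy (kcal/mol) from apparent affinities."""
    return R * T * math.log((K_wt * K_double) / (K_mut1 * K_mut2))

# Rounded apparent K_D values (mM) quoted in the text:
# wild-type, Q48A (mutant 1), D50A (mutant 2), D50A/Q48A (double)
ddG = coupling_energy(K_wt=0.33, K_mut1=0.15, K_mut2=0.75, K_double=1.7)
print(f"ddG = {ddG:+.2f} kcal/mol (|ddG| > 0.5 kcal/mol indicates significant coupling)")
```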
Dual-Conformal Regularization of Infrared Loop Divergences and the Chiral Box Expansion

We revisit the familiar construction of one-loop scattering amplitudes via generalized unitarity in light of the recently understood properties of loop integrands prior to their integration. We show how in any four-dimensional quantum field theory, the integrand-level factorization of infrared divergences leads to twice as many constraints on integral coefficients as are visible from the integrated expressions. In the case of planar, maximally supersymmetric Yang-Mills amplitudes, we demonstrate that these constraints are both sufficient and necessary to imply the finiteness and dual-conformal invariance of the ratios of scattering amplitudes. We present a novel regularization of the scalar box integrals which makes dual-conformal invariance of finite observables manifest term by term, and describe how this procedure can be generalized to higher loop-orders. Finally, we describe how the familiar scalar boxes at one-loop can be upgraded to 'chiral boxes', resulting in a manifestly infrared-factorized, box-like expansion for all one-loop integrands in planar, N=4 super Yang-Mills. Accompanying this note is a Mathematica package which implements our results, and allows for the efficient numerical evaluation of any one-loop amplitude or ratio function.

Introduction

One-loop amplitudes have been extensively studied in recent decades, leading to many important insights and discoveries about the structure of scattering amplitudes, and they frequently serve as an important source of theoretical 'data' with which to test new ideas [1-11]. A powerful approach to computing loop amplitudes in any quantum field theory is the unitarity-based method, in which the amplitude is expanded into a basis of standardized scalar Feynman integrals (regulated if necessary) with coefficients fixed by on-shell scattering processes. Although very familiar and reasonably well understood, the way this approach has been realized in terms of existing technology does not make manifest several recently discovered aspects of loop amplitudes, especially for the particularly rich case of scattering amplitudes in planar, N=4 super Yang-Mills (SYM). The two principal shortcomings of the way generalized unitarity has been realized in terms of existing tools (at least for N=4 SYM) are: (1) that it fails to reflect the rich symmetries observed in loop amplitudes prior to integration; and (2) that even those symmetries which survive integration, such as the dual-conformal invariance (DCI) of the ratios of scattering amplitudes (see e.g. [12]), are severely obfuscated in all existing regularization schema for infrared-divergent contributions. Because of this, manifestly-DCI expressions for ratio functions are known only in a few exceptionally simple cases (see e.g. [12-16]). In this note, we revisit this story and fully address both shortcomings, providing manifestly-DCI expressions for all one-loop ratio functions in N=4 SYM, and describing how the familiar box expansion can be upgraded to a chiral box expansion which matches all one-loop integrands. This paper is organized as follows. In section 2 we review how generalized unitarity can be used to reproduce integrated amplitudes in N=4 SYM, and in section 2.2 we (heuristically) derive 'DCI'-regularized expressions for all scalar box integrals, which are given in Table 1.
In section 2.3 we summarize the computation of scalar box coefficients using momentum-twistor variables, and write explicit formulae for all one-loop box coefficients in Table 3. In section 3 we explore the general features of the 'DCI'-regularization proposal. In section 3.1 we show that this proposal correctly reproduces all finite observables of any planar theory, thereby justifying its description as a 'regularization scheme'. In section 3.2 we describe how this scheme can be extended beyond one-loop, and compare it with existing approaches. The 'DCI'-regulator is closely related to (and motivated by) the way that infrared divergences arise at the level of the loop integrand. In section 3.3, we describe how the IR-divergences of loop amplitudes appear in terms of the 'DCI'-regularization scheme, and in section 3.5 we show how generalized unitarity realized at the integrand level can be used to generate more powerful identities than would be possible after integration. In section 3.6 we illustrate how these features persist beyond the planar level. In section 4 we return to the familiar box expansion, and describe how it can be made 'chiral' in a natural way, allowing us to match the full, chiral integrand of any one-loop scattering amplitude in N = 4. For the purposes of concreteness and completeness, we review the basic kinematical variables (momentum twistors) used in most of this paper in Appendix A; and in Appendix B, we use these variables to give a closed-form specialization of the BCFW recursion relations for all one-loop integrands in N = 4 SYM. And finally, we have implemented the results described in this paper in a Mathematica package called 'loop amplitudes', which is documented in Appendix C.

Revisiting Generalized Unitarity at One-Loop

A major triumph of the unitarity-based approach to quantum field theory was the discovery that any one-loop amplitude can be written as a linear combination of standardized, scalar integrals with coefficients expressed as on-shell diagrams, historically known as 'leading singularities' (for a comprehensive review, see [11]). Because of the good UV-behavior of N = 4 SYM, only box integrals contribute (those involving four loop-momentum propagators), giving rise to the familiar 'box expansion':

A_n^{(k),1} = Σ_{a<b<c<d} f_{a,b,c,d} I_{a,b,c,d}. (2.1)

(To be clear, throughout this paper we will refer to ∫d^4ℓ A_n^{(k),1} as the (integrated) one-loop N^k MHV amplitude, expressed in units of g^2 N_c/(16π^2).) The objects appearing in (2.1) will each be described in detail below. But let us briefly remark on the motivation underlying the box expansion. Loop amplitudes are obtained by integrating the loop integrand over a four-dimensional contour of real loop momenta, ℓ ∈ R^{3,1}. If this integrand were obtained from the Feynman expansion, for example, it would include many propagators for the internal, 'loop' particles. A co-dimension one residue of this integral 'enclosing' one propagator (a 'single-cut') would correspond to putting one internal particle on-shell. Because the loop integral is four-dimensional, the highest-degree residues would be co-dimension four; these are the so-called leading singularities or 'quad-cuts' of the integrand, and are computed in terms of on-shell diagrams of the form shown in (2.2). The scalar box integrals I_{a,b,c,d} are simply those involving precisely four Feynman propagators of a scalar field theory, normalized to have co-dimension four residues of unit magnitude.
As such, we should be able to represent any integrated loop amplitude in N = 4 by dressing each box integral with the actual co-dimension four residues 'enclosing' the corresponding propagators as for the full loop integrand. A slight subtlety is that the residues of scalar boxes come in parity conjugate pairs, so in order to agree with the complete integrand the scalar boxes should be supplemented by parity-odd integrals, [8]. Since these integrate to zero, they are often ignored. For the purpose of computing the integrated amplitude in this section-as opposed to the integrand -we will also ignore them here. This mismatch will be addressed in greater detail in section 4, where we show how to upgrade the box expansion (2.1) in a way which allows us to match the full amplitude prior to integration. Scalar Box Integrals and their Divergences Let us start our analysis with the generic, 'four-mass', scalar box integral [17,18]one for which all four corners are 'massive' (involving at least two massless momenta): [19] and where 1/( , a) denotes the standard propagator, When all the corners are massive, this integral is a transcendentality-two, completely finite, symmetric function of the dual-conformally invariant cross-ratios (u, v): Notice that this form is manifestly symmetric under the exchange u ↔ v as this exchange results only in α ↔ β, under which (2.6) is obviously symmetric. The equivalence of (2.6) to existing formulae in the literature is easily verified 2 . It is easy to see that this integral becomes divergent when any corner becomes massless-for example, identifying legs p a and p B results in: (2.8) This causes the cross ratio u to vanish, introducing a logarithmic singularity from the term − 1 2 log(u) log(v) in (2.6). It is worth noting that (2.6) simplifies considerably when u is taken to be parametrically small, This divergence can be regulated in a number of ways, including dimensional regularization (see e.g. [18] for standard formulae). Another canonical way to regulate such divergences uses the Higgs mechanism, [20]. In the simplest setup, this is used to give masses only to internal propagators, and leads to the mass-regularized formulae found in e.g. [21,22]. However, because such regularization schema are predicated on a dimensionful parameter, the regulated formulae that result severely break dualconformal invariance, obscuring the ultimate invariance of even absolutely convergent (and hence DCI) combinations of box integrals. (A complete basis involving only manifestly finite integrals for all convergent one-loop integrals was given in ref. [16].) We are therefore motivated to find some way to make all the singular limits of the general integral (2.6) as dual-conformally invariant as possible, regulating the divergence caused by u → 0 in a way which depends only on some dimensionless parameter, denoted , and dual-conformal cross-ratios. Such a regularization scheme is described in the next subsection. The 'DCI' Regularization of Scalar Box Integrals We propose the following regulator for one-loop integrals: render all external legs slightly off-shell by displacing the coordinates according to, ; (2.10) this transformation can be understood graphically as follows: We will prove that this regulator produces the correct result for all finite observables and discuss its implications, generalizations, and extensions in greater detail in section 3; but first, let us understand its consequences for the one-loop integrals appearing in the box expansion (2.1). 
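To make the role of the cross-ratios concrete, here is a small numerical sketch. It assumes one standard convention, u = (a,b)(c,d)/((a,c)(b,d)) and v = (b,c)(d,a)/((a,c)(b,d)), with (a,b) ≡ (x_a − x_b)^2 in mostly-minus signature; the dual points below are arbitrary test values, not data from the paper. It illustrates the behavior discussed above: when one corner of the box is made massless, u is driven to zero (quadratically in the small parameter), which is precisely the limit the ε-shift is designed to regulate.

```python
import numpy as np

ETA = np.diag([1.0, -1.0, -1.0, -1.0])  # mostly-minus Minkowski metric

def inv(x, y):
    """Dual-space invariant (a,b) = (x_a - x_b)^2."""
    d = x - y
    return float(d @ ETA @ d)

def cross_ratios(xa, xb, xc, xd):
    """Four-mass-box cross-ratios u, v (assumed standard conventions)."""
    u = inv(xa, xb) * inv(xc, xd) / (inv(xa, xc) * inv(xb, xd))
    v = inv(xb, xc) * inv(xd, xa) / (inv(xa, xc) * inv(xb, xd))
    return u, v

rng = np.random.default_rng(0)
xs = rng.normal(size=(4, 4))  # four generic dual (region-momentum) points
u, v = cross_ratios(*xs)
print(f"generic corners: u = {u:+.4f}, v = {v:+.4f}")

# Degenerate one corner: as x_b -> x_a, (a,b) -> 0 and hence u -> 0,
# producing the logarithmic divergence that the regulator must cure.
for eps in (1e-1, 1e-2, 1e-3):
    xs_deg = xs.copy()
    xs_deg[1] = xs[0] + eps * (xs[1] - xs[0])
    u, v = cross_ratios(*xs_deg)
    print(f"eps = {eps:.0e}: u = {u:+.3e}, v = {v:+.4f}")
```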
If the dimensionless parameter is small (the only regime in which we will be interested) then the invariants (a, b) which are already non-vanishing are modified by a negligible amount. However, when b = a+1 as in (2.8), for example, the invariants will be regularized by (2.10): This implies that the integrated expressions for all box integrals will be given by limits of the four-mass-box (2.6); in particular, it is not necessary for us to add any 'discontinuity functions' such as those discussed in e.g. [4]. Consider the leading, degenerate limit of the box function, the so-called 'threemass' integral. This corresponds to taking b = a+1 in (2.6), keeping all other external corners massive: (b, c), (c, d), (d, a) = 0. The cross-ratio v is unaffected by the regulator to O( ), while u transforms according to Using (2.9), the limit is easily seen to be given by All further singular limits of (2.6) are similarly regulated according to the shifts (2.10). This results in 'DCI-regularized' forms for all scalar box integrals, which we have listed in detail in Table 1. Importantly, these integrals depend only on the dimensionless parameter and dual-conformally invariant cross-ratios. One special case not given in Table1 is the so-called 'massless' scalar box-which is only relevant for n = 4. In this case, the shifts (2.10) are not strictly defined, but can be generalized in simple way so that u, v → 2 , resulting in the following, 'DCI'regulated massless box function: −I 1,2,3,4 = Li 2 (1) + 2 log( ) 2 + O( ) (only for n = 4). (2.14) Scalar Box Coefficients: One-Loop Leading Singularities Factorization dictates that the residues of the loop integrand-called leading singularities-are simply the products of tree-amplitudes, summed-over all the internal particles which can be exchanged, and integrated over the on-shell phase space of each. As mentioned above, we represent such functions graphically as on-shell diagrams of the form shown in (2.2). These are simply algebraic (super)functions of the external kinematical dataalmost always rational, and at one-loop involving at most the solution to a quadratic equation. Leading singularities have of course been known to the literature for quite some time, and can be computed in many ways. A comprehensive summary of these objects-their classification, evaluation, and relations-was described recently in ref. [11]. The physical content of any on-shell diagram (after blowing up all treeamplitudes at the vertices themselves into on-shell diagrams) is encoded by a permutation [11], and the permutations of the corners are simply 'glued' together to give the permutation of the 'one-loop' diagram: (2.15) Given the permutation labeling an on-shell diagram, it is trivial to construct an explicit formula for the corresponding on-shell function. This is done most directly in terms of an auxiliary Grassmannian integral as described in ref. [11]. (All the necessary tools involved in this story have been made available in a Mathematica package called 'positroids', which is documented in ref. [23].) We will not review these ideas here, but simply give the formulae that result. The most compact expressions for leading singularities are found when they are written in terms of the momentum-twistor variables introduced in ref. [24] since they simultaneously trivialize the two ubiquitous kinematical constraints-the onshell condition and momentum conservation. 
Momentum-twistors are simply the twistor-variables [25] associated to the region-momentum coordinates x a defined in section 2. (A brief introduction to momentum-twistor variables and an explanation of the notation used throughout this section is given in Appendix A.) Assuming a modicum of familiarity with momentum-twistors, let us now describe the form that leading singularities take. We start with the most general case: a leading singularity involving four massive corners. It turns out that this case is the only one we need to consider, as it will smoothly generate all the others in a very natural way. The most important data are the two solutions 1 , 2 to the kinematical constraints of putting all the internal lines on-shell, (2.16) Here, the lines (Aa), . . . , (Dd) in momentum-twistor space correspond to the regionmomenta x a , . . . , x d used in section 2.1; and the lines 1 and 2 correspond to the two solutions to the problem of putting four propagators on-shell. These 'quad-cuts' are the solution to a simple geometry problem in momentum-twistor space (viewed projectively as P 3 ): 1 and 2 are the two lines which simultaneously intersect the four generic lines (Aa), . . . , (Dd): (2.17) (The fact that there are two solutions to the problem of putting four-propagators on-shell is a classic result of the Schubert calculus-and continues to hold even when the four lines are non-generic; see ref. [16] for an exposition of these ideas.) We will give explicit formulae for the twistors α 1 , . . . , δ 1 and α 2 , . . . , δ 2 corresponding to the two lines 1 and 2 intersecting the lines (Aa), . . . , (Dd), respectively, in Table 2; but for now let us take it for granted that they are known. Given these twistors, it is easy to write the leading singularity for each particular quad-cut 1,2 in terms of momentum-twistors. In momentum-twistor space, we are dealing with MHV-stripped amplitudes (so (N k=0 )MHV tree-amplitudes are simply the identity), and polarization sums become simple multiplication of the corresponding MHV-stripped amplitudes. This allows us to 'peel-off' the tree-amplitudes at each corner from a standard on-shell graph involving only MHV amplitudes, [13]: where the on-shell graph on the right is the N k=2 MHV 'four-mass' function 3 [11], While simply replacing (α 1 , . . . , δ 1 ) → (α 2 , . . . , δ 2 ) in the formula above would give (minus) the other leading singularity, we will find it advantageous to use an alternate form of the four-mass function involving the 2 solution: In Table 2, we give the particular solutions to the quad-cuts represented graphically in (2.17)-where ∆ is defined as in equation (2.4). (The notation used here is fully defined in Appendix A.) The motivation for using two separate formulae for the four-mass leading singularities is that they separately evolve smoothly to all other cases. This is made possible by the fact that the multiplicative factors appearing in Table 3, which encode the shifts of the quad-cuts from each 'corner' of the box (see equation (2.17)), are all smooth and non-singular in limits where some of the legs are identified. (Notice that ϕ 1,2 → 1 in all limits where a pair of legs are identified.) As promised, the formulae given above for the four-mass leading singularities smoothly generate all one-loop leading singularities, which for the purposes of completeness and reference have been written explicitly in Table 3. 
The formulae given in Table 3 have been organized in order to highlight how each case descends smoothly from the general cases given above for the four-mass functions. The complete box coefficient for a given topology is given by the sum of the two corresponding on-shell diagrams, f a,b,c,d ≡ f 1 a,b,c,d +f 2 a,b,c,d , which involve the quad-cuts 1 and 2 , respectively: Here, each graph represents a sum over all such graphs with the same topologyinvolving any four amplitudes A n d such that k a + k b + k c + k d = k 2, and for which n a + n b + n c + n d = n+8, with 0 ≤ k ≤ n 4 for each (except when n = 3, for which A (−1) 3 is allowed). Table 2: Explicit solutions 1 , 2 to the Schubert problem involving four generic lines. (Cyclically-related leading singularities are related by Properties and Extensions of 'DCI' Regularization Using the 'DCI'-regularized box integrals I a,b,c,d and the box coefficients described in section 2, the scalar-box expansion becomes Let us now prove that this produces the correct result for all finite observables in planar N = 4 SYM by showing that the expressions given for I a,b,c,d can be obtained from a very concrete and simple regularization procedure. The 'DCI' Regularization Scheme Given any four-dimensional integrand I( ), we define its 'DCI'-regulated integral by deforming the integrand in the following way: The regulating factor R( ) suppresses all IR-divergent regions (for all integrands), making the result manifestly IR-finite. Ultimately, this works because all infrared singularities in a planar integral arise from predetermined integration regions-namely, It is trivial to see that the regulated expression (3.2) produces the correct result for all finite observables in the limit of → 0: since the factor R( ) ∼ 1+ O( ) everywhere except in the isolated regions responsible for infrared divergences (where it is O( )), it can be ignored for any convergent integral. We should emphasize a distinction we are making between convergent integrals and so-called "finite" integrals. Tautologically, a "convergent" integral is one which can be evaluated without regularization; this requires that it have no collinearlydivergent regions. This notion of convergence precludes the combinations of divergent integrals which happen to be "finite" in some particular regularization scheme (but for which integration does require regularization). The convergence of an integral can be tested as follows: multiply the integrand by any two adjacent propagators, and verify that the product vanishes in the corresponding collinear region: As pointed out in ref. [16] the integrand for the ratio function is in fact convergent. A similar integrand-level test of (partial) convergence for the logarithm of the 4-particle amplitude has been shown sufficient to completely fix the integrand through sevenloops, [27,28], and has also been used to find amplitudes involving more particles [29]. It remains for us to show that the regulated amplitude (3.2) is actually given by (3.1). To see this, consider the box expansion as an integrand-level statement: we can decompose any one-loop integrand into parity-even and parity-odd sectors: The scalar boxes form a complete basis for parity-even four-dimensional integrands (see e.g. [18,30,31]), and so the first term of (3.4) completely captures all parity-even contributions. The parity-odd contributions in (3.4) are often ignored because they vanish when integrated over the parity-even contour of R 3,1 . 
Importantly, all parity-odd integrals are not merely vanishing upon integration, but are in fact convergent in the sense described above, [16]. (This is not too surprising since the requirement for an integrand to vanish in the limit (3.3) is itself parity-invariant.) And because all parity-odd integrands are convergent, the regulator R( ) is 1+ O( ) everywhere, and can therefore be ignored. For the parity-even sector-that is, the scalar box expansion, (2.1) (but now understood at the integrand-level)-we only need to verify the following: This identity is proved by noting that all other factors in R( ) are approximately unity except in very small regions, where the unregulated box is not singular for lack of divergent propagators; with the explicit propagators regulated, the singular regions are all removed, resulting in precisely I a,b,c,d as described in section 2.1 and given in Table 1. This concludes our proof that (3.1), using the 'DCI'-regulated scalar box integrals, correctly reproduces all regulator-independent contributions to loop amplitudes, and therefore leads to correct formulae for all finite one-loop observables. Generalization of 'DCI'-Regularization to Higher Loop-Orders The failure of simple off-shell regularization beyond one-loop has been known for quite some time (see e.g. [19]). For example, for the two-loop 4-particle amplitude in N = 4 SYM, a simple off-shell prescription fails even to give the correct coefficient for the double-logarithmic divergence! Because the 'DCI'-regulator (3.2) is somewhat similar to an off-shell regulator at one-loop (but one involving non-uniform masses), this may seem like a bad omen for extending it beyond one-loop. However, the most natural generalization of (3.2) beyond one-loop cannot be interpreted as an off-shell regulator-which is good news, indeed! Let us first describe the generalization of the 'DCI'-regulator to higher loop-orders, and then illustrate how it differs from an of-shell regulator in the case of 4-particles. The integrand-level understanding of the 'DCI'-regulator, (3.2), provides an obvious generalization to higher loops-orders, We suspect that this regularization prescription will render all multi-loop integrands IR-finite; moreover, using the same arguments as in section 3.1, we expect that (3.6) will generate the correct result for all finite multi-loop observables. As one simple example of a regularized integral beyond one-loop-and as an illustration of the differences between (3.6) and off-shell regularization-consider the 'DCI'-regularized 4-particle double-box integrand, [32]: Because of the part of R( 1 )R( 2 ) which survives, this is clearly not an off-shell version of the double-box! In particular, the integrand (3.7) includes regions (both collinear and soft-collinear) where the regularization-factor cannot be approximated by unity. This is good news, since this had to happen for the regulator to have any chance of working beyond one-loop. It would be interesting to compute this integral explicitly, and verify for example that scheme-independent quantities such as the two-loop double-logarithmic divergences (the so called cusp anomalous dimension) are correctly reproduced; we leave this, however, to future work. The Infrared-Divergences of One-Loop Amplitudes Let us now consider how the IR-divergences of scattering amplitudes are organized in the 'DCI'-regulated box expansion of (3.1). 
These divergences arise from the parts of I a,b,c,d proportional to log( ) or log( ) 2 ; let us denote the combined coefficients of each of these divergences as follows: It is easy to identify the coefficient F 2 from Table 1: 4 the only integrals which include a factor of log( ) 2 are the so-called 'two-mass hard' and 'one-mass' boxes, which can be understood 5 as the 'BCF' representation of tree-amplitudes discovered in ref. [33] (see also [34]). Interestingly, the coefficient F 1 in (3.8) is also proportional to the tree-amplitude. (This follows from the ideas discussed in section 3.5.) Because the coefficient of log( ) must be of transcendentality-one, it should involve the logarithm of some dual-conformal cross ratio, which we denote Ω n ; ultimately, F 1 is found to be simply, a+3) . is simply the identity-we see that R (k),1 n is finite in the limit → 0. Therefore, any one-loop ratio function can be computed more simply as, Theories with Triangle Contributions We should briefly mention that the regulator (3.2) can be applied to any planar theory, not just N = 4 SYM. (Planarity being a consequence of the way the momenta p a are associated with region-momentum coordinates x a ; a possible generalization to the non-planar case will be discussed shortly.) In a more general planar field theory, one-loop amplitudes may also require contributions from 'triangle' and 'bubble' integrals in addition to the scalar boxes. These integrals manifestly break dual-conformal symmetry, but at least the triangles can be regulated in the same way as the boxes, (3.2). Indeed, regulated triangle integrals can be obtained from the 'DCI'-regulated scalar-box integrals without any additional work: one need only to send one of the points x a , . . . , x d to (space-like) 'infinity', denoted x ∞ . The correctness of this is easily seen from the geometry of the four-mass integral, (2.3). Thus, for example, the (no-longer 'DCI', but) -regulated two-mass triangle integral can be found by simply sending a point of the three-mass integral to infinity: where we simply take x d → x ∞ so that, Here, dual-conformal invariance is broken explicitly by the fact that 'd → ∞' picks out a preferred point-namely, x ∞ -in region-momentum space. The so-called 'bubble' and rational terms are unaffected by the infrared regulator. Our claim is that in any planar quantum field theory requiring triangle-contributions (which are absent for N = 4), these contributions are correctly reproduced by simply augmenting the box expansion with triangle integrals obtained from I a,b,c,d as described above, with coefficients fixed by on-shell diagrams. The inclusion of bubble and rational terms would be unchanged from their usual form (see e.g. [35]). Integrand-Level Infrared Equations and Residue Theorems The well-understood factorization structure of infrared divergences of one-loop amplitudes in gauge theory [36,37] leads to constraints on integral coefficients in the framework of generalized unitarity, resulting in the so-called 'infrared (IR) equations' (see refs. [33,34,38] for a few applications). These equations can be understood in terms of unitarity cuts, and in the planar case lead to n(n 3)/2 relations among the integral coefficients. By considering generalized unitarity at the integrand level, however, the factorization structure of soft-collinear divergences leads to more powerful identities. 
Recall that the box coefficients are so-called 'quad-cuts' (co-dimension four residues) of the loop integrand; let us now consider 'triple-cuts'-co-dimension three residues. Because the loop integral is four-dimensional, a triple-cut still depends on one integration variable; by applying Cauchy's residue theorem to the integral over this remaining variable, we find that the sum of all box coefficients sharing a triple-cut must vanish 6 . The richest of these residue theorems arise for triple-cuts involving at least one massless corner, as these separate into two distinct classes depending on whether the 3-particle amplitude at the massless corner is A (−1) 3 or A (0) 3 . Importantly, Cauchy's theorem applies to these two cases separately, leading to a pair of identities: where τ a,b,d = ±1 if d = b+ 1 or d = a 1, respectively, and vanishes otherwise. The sums appearing in the left-hand side of (3.14) are over all box coefficients sharing a particular (chiral) triple-cut, while the right-hand sides follow from the universal structure of soft-collinear divergences (and when τ = 0, (3.14) simply represents the famous 'BCF' formula for tree-amplitudes, [6,33]). It is easy to see that these constitute 2n(n 3) linearly-independent equations. Averaging the two lines of (3.14) results in n(n 3) relations among the (parityeven) box coefficients-which is twice as many as the "standard" IR-equations. This doubling of equations is a consequence of the fact that each collinear divergence of the integrated amplitude can arise from two distinct integration regions; and requiring factorization at the integrand level results in identities for each integration region separately. To better understand the nature of the vanishing terms appearing on the righthand side of (3.14), let us consider a loop-integrand in the neighborhood of a softcollinear divergence-where three consecutive propagators are simultaneously put on-shell. For the sake of concreteness, we may parameterize the loop integrand in such a region by writing → (L + , L − , ⊥ ) (see [39] for a similar discussion): We can take a residue in L − which puts the first propagator on-shell, resulting in (for the regime in which L + , ⊥ are both small). If we now write 2 ⊥ = z z and drop the condition that z is the complex conjugate of z, we see that we can take two more residues-for instance L + and then z-to localize to the triple-cut, and then finally take a residue involving the pole in z. Notice that this results in a 'quad-cut' (a co-dimension four residue) which only involves three propagators! This is a simple one-loop example of the phenomenon of 'composite leading singularities' discussed in refs. [8,40], and is the physical origin of the terms on the right-hand side of (3.14). It is not hard to show that the criterion for the convergence of an integral discussed in section 3.1 is equivalent to the requirement that all composite leading singularities vanish (in general, composite leading singularities are in one to one correspondence with IR-divergent triangle topologies, which are in correspondence with these equations); this implies that for ratio functions in N = 4 SYM the righthand sides of (3.14) must vanish. Moreover, it is possible to show that the difference between the 'DCI'-regularized boxes of (3.12), and more familiar-e.g. 
dimensionally-regulated-expressions for the box integrals is necessarily proportional to the right-hand sides of (3.14), which provides an alternate proof of the equivalence between the two regularization schema when applied to manifestly finite observables. We leave the details of this discussion as an exercise for the interested reader. In summary, the residue theorems (3.14) encode the physical fact that IR-factorization occurs at the integrand level, and constitute n(n−3) independent parity-even relations among the box coefficients, twice as many as the IR-equations arising after integration. These relations are general (that is, not specific to N = 4 SYM), provided that triangle coefficients are included where they are required.

Applications to Non-Planar Theories

Although we have largely focused our attention on planar theories, it is worth emphasizing that the increased number of IR equations upon considering integrands instead of integrals is completely unrelated to planarity: the same result holds for non-planar theories as well, including gravity. This motivates a generalization of the integrand-level regulator (3.2) to the non-planar case, at least at one-loop order. As a simple illustration, consider the 4-particle amplitude in N = 8 supergravity. By the no-triangle property, [41], it is a combination of at most three box functions. It is not hard to see that the absence of collinear divergences (the vanishing of the right-hand sides of the analogs of (3.14)) uniquely fixes their relative coefficients, leading to the following form for the amplitude:

∝ u F(p_1, p_2, p_3, p_4) + s F(p_1, p_3, p_2, p_4) + t F(p_1, p_3, p_4, p_2), (3.17)

where s, t, u are the usual Mandelstam invariants and the box integrals are labelled by the momenta coming into the vertices. It is not difficult to check that this matches the correct expression. As a less trivial example, consider the 6-graviton amplitude (with arbitrary helicities). In principle, this amplitude could involve any combination of 195 boxes; but the integrand-level IR-equations of the preceding subsection show that the amplitude can be expressed in terms of at most 120 combinations. That is, we find 75 linearly-independent constraints of the form (3.14), as opposed to the mere 25 constraints arising from the previously known IR-equations. It would be interesting to explore whether these relations could be used to obtain more compact analytic forms for one-loop graviton amplitudes.

The Chiral Box Expansion for One-Loop Integrands

As we have seen, while the familiar box expansion reproduces all one-loop amplitudes post-integration, it does not match the full structure of the actual loop integrand. This mismatch is easily understood in the case of MHV loop amplitudes, where the actual MHV loop integrand only has support on two-mass-easy boxes involving ℓ_1, with vanishing support on all quad-cuts involving ℓ_2. However, because the scalar box integrals are parity-even, their integrands always have unit-magnitude residues for both quad-cuts. This is easy to understand, as scattering amplitude integrands are generally chiral, while the scalar boxes are manifestly non-chiral. In this section, we describe a slight modification to the scalar-box integrands given above which leads to a fully-chiral generalization of the box expansion, allowing us to represent all one-loop integrands of N = 4 SYM. In fact, such a modification was discovered for MHV and NMHV one-loop integrands in ref.
[16], but the generalization to more complicated amplitudes was unclear. Here, by revisiting the special case of MHV, we will find that the underlying structure naturally generalizes to all N k MHV one-loop integrands, A Here, X is an arbitrary, auxiliary line in momentum-twistor space (of course, the integrand is ultimately, algebraically independent of X). We will mostly take this formula for granted here; but let us see what role is played by the auxiliary line X, and how we may generalize this to reproduce any one-loop integrand. Because a pentagon integral has five propagators, it has 2× 5 4 = 10 fourth-degree residues-from the two ways of cutting any four of the five propagators. But because one of its propagators involves this auxiliary line X, only two such residues are physically meaningful: those which cut the lines {(a 1 a), (a a+1), (c 1 c), (c c+1)}. These are obviously 'two-mass easy' quad-cuts; but because the numerator of the integrand in (4.1) is proportional to (a 1 a a+1) (c 1 c c+1) , the pentagon's residue on 2 vanishes, while the residue on 1 is unity (see Table 3). But this is perfect: the one-loop MHV integrand only has support on 1 ! Because of this, we choose to view the pentagon contributions to (4.1) as something like a 'chiralized' version of the scalar two-mass easy box: a 1 a a+1) (c 1 c c+1) X c a a 1a aa+1 c 1c cc+1 X . (4.2) We are motivated to draw this as a box because it has precisely one physicallymeaningful quad-cut (in this case 1 ), upon which it has residue 1. Although not relevant for MHV integrands, we could similarly define the parity-conjugate version, which has residue of 1 on the quad-cut 2 , and vanishing residue on 1 . Although it may seem like we're nearly done, we must step back to observe that not all the terms in (4.1) are pentagons! This is indeed a good thing, because as described in ref. [16], all such chiral pentagons are convergent, while the actual MHV one-loop amplitude is of course divergent! The easy-to-overlook, non-pentagon contributions to the one-loop MHV integrand of (4.1), come from the boundary terms when c = a+1: a 1 a a+1) (a a+1 a+2) X a+1 a a 1a aa+1 aa+1 a+1a+2 X , = aa+1 a 1aa+1a+2 X a+1 a a 1a aa+1 aa+1 a+1a+2 X , = a 1aa+1a+2 X a+1 a a 1a aa+1 a+1a+2 X ≡ (4.4) We are motivated to draw this as a triangle, because it has only three non-X propagators. In fact, in the particular case where X is taken to be the point at infinity, I div a becomes precisely a scalar triangle. Therefore, although somewhat less concise than (4.1) (which is deceptively so), we can write the MHV one-loop integrand in the somewhat more suggestive form, where the divergent part is expressed solely in terms of triangles. (Although we have not yet defined all chiral boxes I 1 a,b,c,d , the two-mass easy boxes given in (4.2) are the only ones relevant for MHV (k = 0).) From this simple example, it should be clear that if we have 'chiral' versions of all the scalar boxes-ones with precisely one physical quad-cut with residue 1 on either 1 and 2 -then would have a 'chiral' version of the box expansion: Notice that this expression will be valid before integration-unlike the more familiar scalar box expansion described in section 2. To see that this formula must be right, first observe that the chiral boxes with coefficients are specifically engineered to have precisely the same residues on all physical quad-cuts as the actual amplitude's integrand. 
However, as mentioned above, every chiral box integral I 1,2 a,b,c,d ≡ d 4 I 1,2 a,b,c,d is convergent-and so the chiral boxes alone cannot fully represent the amplitude. The remaining contributions must therefore encode the divergence of the one-loop amplitude, which from (4.5) together with integrand-level factorization is simply the sum of the divergent "triangles" I div a . Thus, by construction, (4.6) will have the correct leading singularities on all physical quad-cuts (those that do not involve X) and have the correct infrared divergences. In order for (4.6) to reproduce the integrand, it obviously must be Xindependent; and so it must have vanishing support on all quad-cuts involving X. The preceding arguments only fix the infrared-divergent part but not any potential four-mass integrals involving X. Thus, in order to uniquely fix I 1 a,b,c,d and I 2 a,b,c,d , we must also require that they are parity-odd on all four-mass quad-cuts involving the auxiliary line X-that is, they must have equal residues (in both sign and magnitude) on all 'spurious' quad-cuts (as any parity-even integrand must have opposite residues on parity-conjugate quad-cuts, this will ensure that no four-mass integrals involving X will survive integration over a parity-invariant contour). Given chiral box integrands I 1 a,b,c,d and I 2 a,b,c,d which satisfy the conditions described above-being infrared convergent, having only one nonvanishing physical quad-cut and being parity-odd on spurious quad-cuts-it is not hard to prove that (4.6) matches all physical residues of the actual loop integrand, that it is ultimately free of any quad-cuts involving X, and that it is moreover algebraically independent of X. And so, (4.6) must give the correct integrand for the amplitude. (Using the Mathematica package documented in Appendix C, it is easy to verify that (4.6) directly matches the one-loop integrands obtained using the BCFW recursion relations (which is described in Appendix B).) In terms of the chiral boxes, the ratio function becomes, Notice that the divergent integrands I div a are manifestly canceled in the ratio function, leading to an expression involving only manifestly convergent chiral boxes. This is remarkable, as it provides an analytic form of any one-loop ratio function for which no regularization is needed! However, although the complete integrand is of course independent of X, each chiral box individually depends on X. Nevertheless, although it is generally difficult (algebraically) to prove the X-independence of the integrated expressions (this can be seen to amount to the integrand-level IR equations (3.14)), this remarkable fact can easily be verified numerically-for example, using the Mathematica package 'loop amplitudes' which is made available with this note on the arXiv, and documented in Appendix C. a 1 a a+1 a+2 X a+1 a a 1 a a+1 a+2 X a+1 a a 1 a b 1 b c 1 c d 1 d a 1 a c 1 c b 1 b d 1 d and u[a b;c X] ≡ a 1 a b 1 b c 1 c (X) a 1 a c 1 c b 1 b (X) . Conclusions In this note, we have revisited the familiar story of using generalized unitarity to reconstruct one-loop amplitudes, especially for the case of planar, N = 4 SYM. In order to make manifest all the known symmetries of the theory, we reconsidered the regularization of IR-divergences, and found a new, 'DCI'-regularization scheme (3.2) which makes manifest-term-by-term-the dual-conformal invariance of all finite observables of N = 4 SYM at one-loop, including all N k MHV ratio functions, (3.12). 
The existence of such a regularization scheme was motivated by considering the remarkable properties of one-loop amplitudes prior-to integration. Such considerations also led to integrand-level IR-equations, (3.14)-giving novel constraints among box coefficients, and having applications beyond the planar limit. And in section 4, we found that the familiar box expansion could be upgraded to a chiral box expansion for one-loop integrands, reproducing both the parity-odd and parity-even contributions to scattering amplitudes, and making the factorization of IR-divergences manifest. For the sake of completeness and reference, we have comprehensively described all the ingredients required to compute any one-loop amplitude in planar N = 4 SYM: we gave explicit formulae for all 'DCI'-regularized scalar box integrals in Table 1, and we gave expressions for all one-loop box coefficients in Table 3. Remarkably, these tables are incredibly redundant: all degenerate cases of both tables follow smoothly from the generic case-from the four-mass integral, (2.6), and the four-mass functions, (2.18) and (2.19), respectively. The notation used throughout this paper is reviewed in Appendix A. In Appendix B, we use the BCFW recursion relations described in [15] to explicitly represent all oneloop integrands in N = 4 SYM. All the results described in this paper have been implemented in a Mathematica package, 'loop amplitudes'; instructions for obtaining this package, and complete documentation of the functions made available by it are described in Appendix C. Natural extensions of this work include applying the 'DCI'-regulator to scalar integrals beyond one-loop, finding chiral representations of higher-loop integrands, and using the integrand-level IR-equations to find better representations of one-loop amplitudes beyond the planar limit. A. Review of Momentum-Twistor Variables and Notation Momentum-twistor variables, introduced by Hodges in ref. [24], trivialize the two ubiquitous constraints imposed on the external kinematics for all scattering amplitudes: the on-shell condition, p 2 a = 0, and momentum conservation, a p a = 0. By this we mean that generic momentum twistor variables Z ≡ z 1 · · · z n ∈ G(4, n), with z a ∈ C 4 , always correspond to a set of momentum-conserving, on-shell external momenta. This makes them especially convenient for use in scattering amplitudes. In section 2, we used region-momentum variables x a to encode the external fourmomenta according to p a ≡ x a+1 −x a . Momentum-twistors are so-called because they represent points in the twistor-space [25] of region-momentum x-space: The x-space polygon, whose definition depends on a choice for the cyclic ordering of the external legs, encodes the external momenta in a simple way. Each line in twistor space-spanned, say by the twistors (z a−1 z a )-corresponds to a point in xspace-in this case, the point x a . This is why, for example, integration over a point x in region-momentum space translates to integration over a line ( ) ≡ ( A B ) in momentum-twistor space (see e.g. [11,15,16]). Given a set of momentum-twistors z a -viewed as columns of the (4×n)-matrix Z-it is easy to construct the corresponding set of four-momenta. If we decompose each momentum-twistor z a according to z a ≡ λ a µ a , (A.1) and define a (2×n)-matrix λ ≡ λ 1 · · · λ n ⊂ (λ ⊥ ) according to, where a b ≡ det{λ a , λ b }, then λ· λ = 0 because Q·λ = 0; as such, we may identify p a ≡ λ a λ a , and these (on-shell) four-momenta will automatically conserve momentum. 
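To make this map concrete, here is a small numerical sketch of the momentum-twistor-to-momentum dictionary. It uses the standard formula λ̃_a = (⟨a a+1⟩μ_{a−1} + ⟨a+1 a−1⟩μ_a + ⟨a−1 a⟩μ_{a+1}) / (⟨a−1 a⟩⟨a a+1⟩) from the momentum-twistor literature (presumably what the garbled display above corresponds to, in Hodges' conventions [24]); the final check verifies numerically that momentum conservation holds identically for arbitrary input twistors.

```python
import numpy as np

def ang(l1, l2):
    """Two-bracket <a b> = det(lambda_a, lambda_b)."""
    return l1[0] * l2[1] - l1[1] * l2[0]

def lambda_tilde(Z):
    """Map momentum-twistors Z (n x 4, rows z_a = (lambda_a, mu_a)) to lambda-tildes."""
    n = len(Z)
    lam, mu = Z[:, :2], Z[:, 2:]
    lt = np.zeros_like(mu)
    for a in range(n):
        m, p = (a - 1) % n, (a + 1) % n  # cyclic neighbors a-1 and a+1
        lt[a] = (ang(lam[a], lam[p]) * mu[m]
                 + ang(lam[p], lam[m]) * mu[a]
                 + ang(lam[m], lam[a]) * mu[p]) / (ang(lam[m], lam[a]) * ang(lam[a], lam[p]))
    return lam, lt

rng = np.random.default_rng(1)
Z = rng.normal(size=(6, 4))   # six generic momentum-twistors
lam, lt = lambda_tilde(Z)

# Each p_a = lambda_a (x) lambda-tilde_a is rank one, hence null; their sum must vanish.
total = sum(np.outer(lam[a], lt[a]) for a in range(len(Z)))
print("sum_a lambda_a lambda-tilde_a =\n", np.round(total, 12))  # ~ zero 2x2 matrix
```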
Conversely, given four-momenta written in terms of spinor-helicity variables according to $p_a \equiv \lambda_a \widetilde{\lambda}_a$, momentum twistors $z_a$ can be constructed by joining each $\lambda_a$ as in (A.1) with a corresponding $\mu_a$ given by the incidence relations, $\mu_a \equiv x_a\cdot\lambda_a$.

Supersymmetry is encoded by dressing each momentum-twistor $z_a$ with an anticommuting four-vector $\eta_a$, collected into a $(4\times n)$-matrix $\eta$ acted upon by the $SU(4)$ R-symmetry of N = 4 SYM. If we similarly define $\widetilde{\eta} \equiv \eta\cdot Q$, then the kinematical data specified by $\{\lambda, \widetilde{\lambda}, \widetilde{\eta}\}$ will automatically be supermomentum-conserving.

Dual-conformal transformations in region-momentum space translate to mere $SL(4)$-transformations in momentum-twistor space; hence, dual-conformal invariants are written in terms of simple determinants: $\langle a\,b\,c\,d\rangle \equiv \det\{z_a, z_b, z_c, z_d\}$. The simplest dual-superconformal invariant, however, involves five momentum-twistors, and is given by the familiar 5-bracket (sometimes called an 'R-invariant'), [11,12]:

$$[a\,b\,c\,d\,e] \equiv \frac{\delta^{1\times4}\!\big(\langle a\,b\,c\,d\rangle\,\eta_e + \langle b\,c\,d\,e\rangle\,\eta_a + \langle c\,d\,e\,a\rangle\,\eta_b + \langle d\,e\,a\,b\rangle\,\eta_c + \langle e\,a\,b\,c\rangle\,\eta_d\big)}{\langle a\,b\,c\,d\rangle\langle b\,c\,d\,e\rangle\langle c\,d\,e\,a\rangle\langle d\,e\,a\,b\rangle\langle e\,a\,b\,c\rangle}.$$

All one-loop leading singularities except the four-mass ones can be written directly as products of 5-brackets, as evidenced by Table 3 and the fact that BCFW recursion (see Appendix B) directly gives tree-amplitudes in terms of products of 5-brackets. These often involve geometrically-defined, auxiliary points in momentum-twistor space such as "$(a\,b)\!\cap\!(c\,d\,e)$", which represents "$\mathrm{span}\{z_a, z_b\} \cap \mathrm{span}\{z_c, z_d, z_e\}$". All such objects are trivially found via Cramer's rule, which represents the unique identity satisfied by any five generic four-vectors:

$$z_a\langle b\,c\,d\,e\rangle - z_b\langle a\,c\,d\,e\rangle + z_c\langle a\,b\,d\,e\rangle - z_d\langle a\,b\,c\,e\rangle + z_e\langle a\,b\,c\,d\rangle = 0.$$

Finally, we should recall that the Jacobian arising from the change of variables from momentum-space to momentum-twistor space is the full Parke-Taylor MHV super-amplitude [42],

$$\frac{\delta^{2\times4}(\lambda\cdot\eta)\,\delta^{2\times2}(\lambda\cdot\widetilde{\lambda})}{\langle1\,2\rangle\langle2\,3\rangle\cdots\langle n\,1\rangle}, \tag{A.8}$$

which explains why (when written in momentum-twistors) all MHV amplitudes are simply $A^{(0),0}_n = 1$, ensuring the dual-conformal symmetry of all amplitudes in planar N = 4 SYM, [12]; and so, throughout this paper, $A^{(k),\ell}_n$ should be understood as the color-ordered, single-trace contribution to the $\ell$-loop integrand for the n-point N$^k$MHV scattering amplitude divided by (A.8), and in units of $(g^2 N_c/(16\pi^2))^{\ell}$.

B. The BCFW Representation of One-Loop Integrands

As described in ref. [15] (see also [11]), all $\ell$-loop integrands for scattering amplitudes in planar N = 4 SYM can be found by the BCFW recursion relations. In terms of on-shell diagrams, the BCFW recursion relations correspond to a sum of 'bridge' terms together with a forward-limit contribution. Being more explicit about the ranges for the terms involved, the recursion becomes

$$A^{(k),\ell}_n = \sum_{\substack{n_L+n_R=n+2 \\ k_L+k_R=k-1 \\ \ell_L+\ell_R=\ell}} A^{(k_L),\ell_L}_{n_L} \otimes A^{(k_R),\ell_R}_{n_R} \;+\; \mathrm{FL}\!\left(A^{(k+1),\ell-1}_{n+2}\right). \tag{B.2}$$

Working in momentum-twistor space, the BCFW bridge operation corresponding to the shift $z_n \to z_n + \alpha\,z_{n-1}$, for $n_R > 3$, is given by:

$$A^{(k_L),\ell_L}_{n_L}(1, \ldots, a{-}1, \hat{a})\,[1\;a{-}1\;a\;n{-}1\;n]\,A^{(k_R),\ell_R}_{n_R}(\hat{a}, a, \ldots, n{-}1, \hat{n}),$$

where $\hat{a} \equiv (a\,a{-}1)\!\cap\!(n{-}1\,n\,1)$ and $\hat{n} \equiv (n\,n{-}1)\!\cap\!(a{-}1\,a\,1)$; when $n_R = 3$ and $n_L = n{-}1$, the bridge simply results in $A^{(k),\ell}_{n-1}(1, \ldots, n{-}1)$.

And so, the 'bridge' terms of (B.2) are fairly straightforward to compute in momentum-twistor space, and the operations involved are the same regardless of the loop-levels $\ell_L$ and $\ell_R$ of the amplitudes being bridged. (Of course, at tree-level, only the bridge terms contribute to the recursion; and so the discussion so far suffices to recursively compute all tree-amplitudes, $A^{(k),0}_n$.) It is easy to see that (B.2) gives rise to $\ell$ levels of nested forward-limits. As described in [11], determining which terms from the lower-loop amplitude are non-vanishing in the forward limit is generally difficult (even the number of terms which survive becomes scheme-dependent beyond one-loop).
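As an illustration of how the auxiliary points above are computed in practice, here is a small numerical sketch (ours, in numpy rather than the paper's Mathematica). Setting four of the five terms of the Cramer identity against the remaining one yields the intersection point $(a\,b)\cap(c\,d\,e) = z_a\langle b\,c\,d\,e\rangle - z_b\langle a\,c\,d\,e\rangle$, which manifestly lies on the line $(a\,b)$ and, by the five-term identity, on the plane $(c\,d\,e)$. The helper names are ours.

```python
# Sketch (ours): four-brackets and the line-plane intersection (a b) ∩ (c d e).
import numpy as np

def br4(za, zb, zc, zd):
    """<a b c d> = det{z_a, z_b, z_c, z_d}."""
    return np.linalg.det(np.column_stack([za, zb, zc, zd]))

def line_meet_plane(za, zb, zc, zd, ze):
    """(a b) ∩ (c d e) = z_a <b c d e> - z_b <a c d e>, via Cramer's rule."""
    return za * br4(zb, zc, zd, ze) - zb * br4(za, zc, zd, ze)

rng = np.random.default_rng(0)
za, zb, zc, zd, ze = rng.standard_normal((5, 4))
w = line_meet_plane(za, zb, zc, zd, ze)
# The intersection point indeed lies on the plane (c d e): <w c d e> = 0.
assert abs(br4(w, zc, zd, ze)) < 1e-8
```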
However, for one-loop amplitudes, we only need the forward-limits of trees; and as described in ref. [11], if the tree-amplitudes are obtained by BCFW-deforming the legs which are to be identified in the forward-limit, then the terms which vanish are precisely those involving three-particle amplitudes on either side of the bridge. Therefore, the only non-vanishing contributions are:

$$A^{(k_L),0}_{n_L}(\hat{B}, 1, \ldots, a{-}1, \hat{a})\,[\hat{A}\;\hat{B}\;a{-}1\;a\;n]\,A^{(k_R),0}_{n_R}(\hat{a}, a, \ldots, n, \hat{A}),$$

$$A^{(k_L),0}_{n_L}(\hat{\ell}, 1, \ldots, a{-}1, \hat{a})\,K[(1\,a{-}1\,a);(1\,n{-}1\,n)]\,A^{(k_R),0}_{n_R}(\hat{a}, a, \ldots, n{-}1, n, \hat{\ell}),$$

where $\hat{a} \equiv (a\,a{-}1)\!\cap\!(\hat{A}\,\hat{B}\,1)$, $\hat{n} \equiv (n\,n{-}1)\!\cap\!(\hat{A}\,\hat{B}\,1)$, and $\hat{\ell} \equiv (A\,B)\!\cap\!(n{-}1\,n\,1)$; and the "kermit" $K[(a\,b\,c);(d\,e\,f)]$, written in terms of the line $(\ell) \equiv (A\,B)$, is given by:

$$K[(a\,b\,c);(d\,e\,f)] \equiv \frac{\langle \ell\,(a\,b\,c)\!\cap\!(d\,e\,f)\rangle^{2}}{\langle \ell\,a\,b\rangle\langle \ell\,b\,c\rangle\langle \ell\,c\,a\rangle\,\langle \ell\,d\,e\rangle\langle \ell\,e\,f\rangle\langle \ell\,f\,d\rangle}.$$

Putting everything together, the one-loop integrand for any amplitude combines bridge contributions of the form

$$A^{(k_L),\ell_L}_{n_L}(1, \ldots, a{-}1, \hat{a})\,[1\;a{-}1\;a\;n{-}1\;n]\,A^{(k_R),\ell_R}_{n_R}(\hat{a}, a, \ldots, n{-}1, \hat{n}), \qquad \hat{a} \equiv (a\,a{-}1)\!\cap\!(n{-}1\,n\,1),\; \hat{n} \equiv (n\,n{-}1)\!\cap\!(a{-}1\,a\,1),$$

which are identical in form to the tree-level recursion, with the forward-limit terms given above. Notice the striking similarity of the roles of the 5-bracket $[1\;a{-}1\;a\;n{-}1\;n]$ in the bridge-terms and the 'kermit' $K[(1\,a{-}1\,a);(1\,n{-}1\,n)]$ in the forward-limit terms. Indeed, the forward-limit terms can be understood as unitarity-cuts which are "bridged" by the kermit. This analysis can of course be continued to higher loop-orders by repeatedly substituting the structure above into the forward-limit contributions appearing in (B.2); this results in higher-loop "kermits" which can similarly be understood as 'bridging' amplitudes across a unitarity cut. However, as our present work requires only one-loop integrands (and as the complexity involved in the higher-loop 'kermits' is considerable), we will leave a more general discussion to future work.

C. Mathematica Implementation of Results

In order to make the results described in this paper most useful to researchers, we have prepared a Mathematica package called 'loop amplitudes' which implements all of our results. In addition to providing fast numerical evaluation of loop amplitudes and ratio functions, the loop amplitudes package also serves as a reliable reference for the many results tabulated above (as any transcription error would obstruct numerical consistency checks). The package, together with a notebook illustrating much of its functionality, is included with the submission files for this paper on the arXiv, which can be obtained as follows. From the abstract page for this paper on the arXiv, look for the "download" options in the upper-right corner of the page, follow the link to "other formats" (below the option for "PDF"), and download the "source" files for the submission. The source will contain the primary package (loop amplitudes.m), together with a notebook (loop amplitudes demo.nb) which includes many detailed examples of the package's functionality. Upon obtaining the source files, one should open and evaluate the Mathematica notebook 'loop amplitudes demo.nb'; in addition to walking the user through many example computations, this notebook will copy the file loop amplitudes.m to the user's ApplicationDirectory[]; this will make the package available to run in any future notebook via the simple command "<<loop amplitudes.m":

• R[abcde]: represents an undefined object that is used internally by the package to represent the 5-bracket involving twistors given by the sequence abcde.
Analytic Expressions for Objects Involved in Box Expansions

• boxRs[legList]: returns the list {R_1, R_2} of the 5-bracket prefactors, for {ℓ_1, ℓ_2} respectively, obtained as an on-shell graph where all corners other than those involving …

Kinematical Specification and Numerical Evaluation

• evaluate[expression]: numerically evaluates all superfunctions occurring in expression for the kinematical data defined by the global (n×4) matrix Zs. If expression involves an auxiliary line (X) or a loop-variable (ℓ) ≡ (A B), these are taken to be given by the last four entries of the global matrix Zs; that is, Zs ≡ {z_1, …, z_n, z_X1, z_X2, z_A, z_B}.

• exampleTwistors[n]: it is sometimes convenient to evaluate analytic expressions using explicit kinematical data; under such circumstances, there are some conveniences afforded by using "well-chosen" kinematical data. Reasons for preferring one choice over another include: having all Lorentz invariants be integer-valued and relatively small; having all dual-conformal cross-ratios positive (so as to avoid branch-ambiguities when evaluating the polylogarithms that arise in scattering amplitudes at loop-level); and possibly to have all Lorentz-invariants be distinct (either to help reconstruct an analytic expression or to avoid 'accidental' cancellations). Of these, the following momentum-twistors meet the first two desires spectacularly:

$$\text{Zs} \equiv \begin{pmatrix} 1 & 1 & 1 & 1 & \cdots & \binom{n}{0} \\ 2 & 3 & 4 & 5 & \cdots & \binom{n+1}{1} \\ 3 & 6 & 10 & 15 & \cdots & \binom{n+2}{2} \\ 4 & 10 & 20 & 35 & \cdots & \binom{n+3}{3} \end{pmatrix}. \tag{C.2}$$

The function exampleTwistors[16] is evaluated when the loop amplitudes package is first loaded, allowing amplitudes involving as many as 16 particles to be evaluated without specific initialization.

• randomPositiveZs[n]: picks random kinematical data for which all cross-ratios are positive (the data Zs is positive when viewed as a four-plane: Zs ∈ G+(4, n)).

• setupUsingSpinors[lambdaList, lambdaBarList]: sets up the global variables Ls and Lbs for λ and λ̃, respectively, and defines the global (n×4) matrix Zs of momentum-twistors for use in numerical evaluation.

• setupUsingTwistors[twistorList]: sets up the global (n×4) matrix Zs encoding the momentum-twistor kinematical data, and defines the auxiliary variables Ls and Lbs for λ and λ̃, respectively.

• showTwistors: returns a formatted table illustrating the kinematical data currently used for evaluation by evaluate[].

• superComponent[component][superFunction]: in the loop amplitudes package, a superFunction is always represented by a pair {f, C}: an ordinary function f(1, …, n) of the kinematical variables times a fermionic δ-function of the form δ^{k×4}(C·η), where C is an (n×k)-matrix of ordinary functions and, for each a = 1, …, n, η_a is a fermionic (anti-commuting) variable. To be clear, we consider each particle as a Grassmann coherent state [39]; that is, the component-function involving the states |1^{r_1}⟩ ⋯ |n^{r_n}⟩.

General-Purpose Functions and Aesthetic Presentation

• nice[expression]: formats expression to display 'nicely' by making replacements such as ab[x···y] → ⟨x···y⟩, α[1] → α_1, etc., by writing any level-zero matrices in MatrixForm, and by drawing figures to represent abstract objects given by objects such as onShellGraph or scalarBox.
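As a quick illustration (ours, not part of the package) that the rational data in (C.2) is positive in the sense relevant to randomPositiveZs, one can check numerically that every ordered 4×4 minor ⟨a b c d⟩ of the binomial matrix is positive, i.e. that Zs defines a point of G+(4, n):

```python
# Sketch (ours): verify that the (C.2) twistors form a positive four-plane,
# i.e. every ordered maximal minor <a b c d> of Zs is positive.
import numpy as np
from itertools import combinations
from math import comb

n = 8
Zs = np.array([[comb(a + k, k) for a in range(1, n + 1)]
               for k in range(4)], dtype=float)
minors = [np.linalg.det(Zs[:, cols]) for cols in combinations(range(n), 4)]
assert all(m > 0 for m in minors)   # Zs is in G+(4, n)
```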
Neonatal Bloodstream Infection with Ceftazidime-Avibactam-Resistant blaKPC-2-Producing Klebsiella pneumoniae Carrying blaVEB-25

Background: Although ceftazidime/avibactam (CAZ/AVI) has become an important option for treating adults and children, no data or recommendations exist for neonates. We report a neonatal sepsis case due to CAZ/AVI-resistant blaKPC-2-harboring Klebsiella pneumoniae carrying blaVEB-25 and the use of a customized active surveillance program in conjunction with enhanced infection control measures. Methods: The index case was an extremely premature neonate hospitalized for 110 days that had been previously treated with multiple antibiotics. Customized molecular surveillance was implemented at the hospital level and enhanced infection control measures were taken for early recognition and prevention of an outbreak. Detection and identification of blaVEB-25 were performed using next-generation sequencing. Results: This was the first case of a bloodstream infection caused by KPC-producing K. pneumoniae that was resistant to CAZ/AVI without the presence of a metallo-β-lactamase in the multiplex PCR platform in a neonate. All 36 additional patients tested (12 in the same NICU and 24 from other hospital departments) carried wild-type blaVEB-1 but they did not harbor blaVEB-25. Conclusion: The emergence of blaVEB-25 is a signal for the horizontal transfer of plasmids at hospital facilities and it is of greatest concern for maintaining a sharp vigilance for the surveillance of novel resistance mechanisms. Molecular diagnostics can guide appropriate antimicrobial therapy and the early implementation of infection control measures against antimicrobial resistance.

Introduction

Antimicrobial resistance (AMR) is a public health threat facing humanity, as it tests the resilience of health systems worldwide [1,2]. Various genetic elements are associated with the development of resistance because they manage, via complex pathways, to be transmitted between bacteria [3]. In addition, other practices, such as delayed and/or incorrect diagnosis and the prescription of broad-spectrum antibiotics, reinforce the problem of AMR [4]. Advances and innovations in whole genome sequencing and the bioinformatics revolution contribute to the immediate detection of the causes of resistance and the taking of timely and effective control measures [5].

A decisive factor in the development of AMR in healthcare facilities, and especially in the intensive care units (ICU) of hospitals, is the spread of multiresistant Gram-negative bacteria. Enterobacterales are the most important, among which Klebsiella pneumoniae is the main representative. K. pneumoniae is the second most common Gram-negative opportunistic pathogen and one of the most prevalent causes of community- and hospital-acquired infections [6]. It is responsible for health-care-associated pneumonia [7] and bacterial neonatal sepsis in low- and middle-income countries [8]. A serious public health threat is the emergence and dissemination of carbapenem-resistant K.
pneumoniae (CRKP), which is associated with high morbidity and mortality, increased medical costs, and prolonged hospital stay [9]. In addition, CRKP infections affect disability-adjusted life years (DALYs) per 100,000 population, with a median value in the European Union of 11.5 years, while treatment options for these infections are limited [10,11]. CRKP isolates have a variety of mechanisms which may confer resistance to virtually all available β-lactam antibacterial drugs, including carbapenems. The main resistance molecular mechanism is the production of a range of carbapenemases, including KPC, NDM, VIM, and OXA-48-like carbapenemases [12,13]. KPC-producing CRKP strains display the most extensive global distribution and represent a significant challenge due to their limited therapeutic options [14].

Novel β-lactam/β-lactamase inhibitor (BL/BLI) combinations are effective against strains of non-metallo-β-lactamase-producing Enterobacterales (Ambler class A, class C, and some class D β-lactamases) [15,16]. Ceftazidime/avibactam (CAZ/AVI) [17] has become an important first-line option for treating adult and pediatric (>3 months of age) patients with serious infections caused by carbapenem-resistant organisms, but not yet for neonates (IDSA) [18]. It is indicated for the treatment of complicated intra-abdominal and urinary tract infections, and infections caused by carbapenem-resistant Enterobacterales (CRE) or carbapenem-resistant Pseudomonas aeruginosa, in patients with limited or no other treatment options [19].

Although KPC-producing Enterobacterales strains are generally considered susceptible to CAZ/AVI, isolates resistant to this antimicrobial agent have been documented without evidence of metallo-β-lactamases [20]. In 2018, a rapid risk assessment conducted by the ECDC identified CAZ/AVI resistance in CRE as a public health threat that merits careful monitoring [21]. CAZ/AVI resistance mechanisms include the increased expression of the blaKPC gene product (acquisition of resistance was mostly associated with isolates harboring the substitution D179Y in blaKPC-3 or in blaKPC-2) [22,23], the presence of other genetic determinants of resistance from ESBL-producing Enterobacterales (SHV-, CTX-M-, or VEB-type β-lactamases) [24,25], changes in cell permeability (i.e., non-functional porins OmpK35, OmpK36, and OmpK37) [26], and the expression of efflux pumps [27].

Herein, we report the successful treatment of a bloodstream infection associated with CAZ/AVI-resistant blaKPC-2-producing K. pneumoniae carrying blaVEB-25 in a preterm neonate hospitalized in the neonatal intensive care unit (NICU) of a tertiary hospital, and the use of a customized active surveillance program in conjunction with infection control measures for the early recognition and prevention of an outbreak.
Index Case

The index case was the first neonate of a twin pregnancy born to a 33-year-old healthy primigravida at a gestational age of 25w+5d (birth weight = 850 g, appropriate for gestational age) due to the premature rupture of membranes and the onset of labor. Postnatally, the patient presented respiratory distress syndrome, patent ductus arteriosus, severe bronchopulmonary dysplasia with a need for prolonged mechanical ventilation, posthemorrhagic ventricular dilation, gastro-oesophageal reflux disease, retinopathy of prematurity, and episodes of late-onset sepsis (LOS). The first LOS occurred on the fourth day of life due to carbapenem-resistant Acinetobacter baumannii, which was successfully treated. The patient was colonized with carbapenem-resistant A. baumannii and Providencia stuartii from Day 4 and Day 25, respectively. During that time, the neonate had been exposed to multiple antibiotic regimens for prolonged time periods, including meropenem, aminoglycosides, colistin, tigecycline, and CAZ/AVI, due to episodes of suspected LOS and colonization by carbapenem-resistant Gram-negative bacteria.

At Day 108, the neonate was on nasal continuous positive airway pressure due to chronic lung disease, and presented with fever and impaired peripheral perfusion. Empiric antibiotic treatment with colistin (300,000 IU/kg/day every 8 h), tigecycline (2 mg/kg/day every 12 h) and daptomycin (10 mg/kg/day once daily) was immediately initiated for suspected sepsis, given the previous administration of multiple antimicrobial regimens. Blood culture was positive for a Gram-negative rod within 24 h of the onset of symptoms. A multiplex PCR platform (Biofire® FilmArray®, bioMérieux, Marcy-l'Étoile, France) was used within an hour of the positive blood culture. A blaKPC-producing K. pneumoniae was detected, and CAZ/AVI at a reduced dose of 31 mg/kg/day every 8 h was added to the antimicrobial regimen while awaiting the Antimicrobial Susceptibility Testing (AST) results.

During the first 48 h of this sepsis episode, the neonate deteriorated, requiring mechanical ventilation, with high inflammatory indices (maximum CRP value of 394 mg/L) and thrombocytopenia. At Day 110, the AST displayed a high level of resistance to almost all antimicrobial agents, including piperacillin/tazobactam, cefepime, cefoxitin, ceftazidime, ceftriaxone, imipenem, meropenem (MIC ≥ 16 mg/L), amikacin, gentamicin, ampicillin/sulbactam, aztreonam, ciprofloxacin, levofloxacin, fosfomycin, and trimethoprim/sulfamethoxazole. It was also resistant to novel agents, like ceftolozane/tazobactam and CAZ/AVI, while it was only susceptible to tigecycline and colistin. The isolate displayed a positive phenyl boronic acid phenotypic test and lateral flow immunoassay, and the PCR method confirmed that the isolate carried blaKPC.

A favorable clinical and microbiological response was documented, including defervescence and a decrease in CRP within 48-72 h, the first negative blood culture within 4 days, and the discontinuation of invasive mechanical ventilation within 8 days of colistin and tigecycline initiation. The administration of both daptomycin and CAZ/AVI was discontinued, whereas ciprofloxacin was empirically added four days after the first positive blood culture for a total of 13 days. The neonate was successfully treated with colistin and tigecycline for a total of 18 days.
NGS Report

A variety of genes conferring resistance to antimicrobial agents and heavy metals, as well as genes related to virulence, capsule, efflux, and regulator systems, were detected (Table 1). Only one serine-carbapenemase was detected, which was the blaKPC-2 gene, and the isolate belonged to ST35. Another five β-lactamases (blaSHV-33, blaTEM-1B, blaVEB-25, blaDHA-1, and blaOXA-10) were co-detected, including blaVEB-25. The co-production of blaKPC-2 and blaVEB-25 in K. pneumoniae has been associated with CAZ/AVI resistance in the absence of metallo-β-lactamase [24].

Molecular and Phenotypic Surveillance within the NICU and the Hospital

Thirteen K. pneumoniae strains were isolated from stool samples of neonates hospitalized in the NICU within a period of 3 months upon the recognition of the index case. Among these isolates, only the index case was blaVEB-25 positive (Figure 1A), confirming the NGS result. Based on the AST results, 24 additional carbapenem-resistant K. pneumoniae strains collected from various hospital sites were also analyzed with targeted PCR; even though they contained blaVEB-1, they did not harbor blaVEB-25 (Figure 1B).

Based on the AST results of the 24 carbapenem-resistant K. pneumoniae strains collected from various hospital sites, half were characterized as pan-drug-resistant [PDR, non-susceptibility to all agents in all antimicrobial categories (i.e., bacterial isolates are not susceptible to any clinically available drug)], and the other half as extensively drug-resistant [XDR, non-susceptibility to at least one agent in all but two or fewer antimicrobial categories (i.e., bacterial isolates remain susceptible to only one or two antimicrobial categories)]. Therefore, all 24 CRKP isolates displayed high levels of resistance to almost all antimicrobials, including imipenem (MIC ≥ 16 mg/L), meropenem (MIC ≥ 16 mg/L), amikacin (MIC ≥ 16 mg/L), gentamicin (MIC ≥ 16 mg/L), ampicillin/sulbactam (MIC ≥ 32 mg/L), piperacillin/tazobactam (MIC ≥ 128 mg/L), aztreonam (MIC ≥ 64 mg/L), cefepime (MIC ≥ 64 mg/L), cefoxitin (MIC ≥ 64 mg/L), ceftazidime (MIC ≥ 64 mg/L), ceftriaxone (MIC ≥ 64 mg/L), ciprofloxacin (MIC ≥ 4 mg/L), levofloxacin (MIC ≥ 8 mg/L), fosfomycin (MIC ≥ 256 mg/L), and trimethoprim/sulfamethoxazole (MIC ≥ 320 mg/L). These isolates were also analyzed with targeted PCR; even though they contained blaVEB-1, they did not harbor blaVEB-25.

Overall Assessment

This index case was the last neonate that was infected with A. baumannii and colonized by P. stuartii within the NICU after the implementation of enhanced infection control measures targeting these two pathogens. Upon the recognition of the first K.
pneumoniae producing blaKPC-2 and blaVEB-25, and a combination of intensified and targeted infection control actions in the unit, there were no other cases within the NICU for the next 6 months.

Discussion

We report a neonatal case of a bloodstream infection caused by a K. pneumoniae strain co-producing blaKPC-2 and blaVEB-25 β-lactamases and emphasize the use of precision medicine to customize infection control measures. Treatment options for infections caused by carbapenem-resistant bacteria are extremely limited in neonates. The "off-label" use of either "last-line" antimicrobial agents (such as polymyxins and tigecycline) or the currently available newer β-lactam/β-lactamase inhibitor combinations, such as CAZ/AVI, meropenem-vaborbactam, and imipenem-cilastatin-relebactam, which are not yet licensed for neonates, for the empirical treatment of neonatal sepsis in areas endemic for CRKP is still questionable due to limited pharmacokinetic data and the local epidemiology of resistance genes [30].

One of the mechanisms that confers resistance to CAZ/AVI is the new blaKPC variants that are constantly appearing worldwide. Very recently, Shi et al. reported multiple novel variants in a K. pneumoniae strain carrying blaKPC-2 from two separate patients during their exposure to CAZ/AVI. In one patient, the blaKPC-2 mutated to blaKPC-35, blaKPC-78, and blaKPC-33 during the same period, while in the other patient it mutated to blaKPC-79 and blaKPC-76, thus enhancing the level of resistance [31]. ST258 K. pneumoniae is considered the most frequent type in the majority of blaKPC-associated infections resistant to CAZ/AVI [32].

The blaKPC-2-harboring K. pneumoniae isolated in our study belonged to Sequence Type ST35. To the best of our knowledge, this is the first report of ST35 CRKP bearing both blaKPC-2 and blaVEB-25 that confers resistance to CAZ/AVI. Findlay et al. identified two isolates, belonging to Sequence Types ST147 and ST258, harboring blaVEB-25 on a plasmid that confers resistance to CAZ/AVI [33].

To date, there are three reports of the emergence of CAZ/AVI-resistant KPC-producing K. pneumoniae in Greece, all in adults (six infected and five colonized patients) [24,34,35]. Notably, the first CAZ/AVI-resistant clinical isolate was detected in Greece before the introduction of CAZ/AVI into clinical practice. The resistance was due to the existence of blaKPC-23 (a variant that differed from blaKPC-3 by one amino acid substitution, V240A, and from blaKPC-2 by two, V240A and H274Y) [34]. CAZ/AVI resistance due to the harboring of blaVEB-25 has been reported in two additional cases (one isolate from blood and one from the lower respiratory tract) from patients without prior CAZ/AVI exposure [35]. Eight more CAZ/AVI-resistant CRKP isolates were detected in patients not previously exposed to CAZ/AVI (two patients with catheter-related bloodstream infections, one with ventilator-associated pneumonia, and five with colonization); the resistance was conferred by the harboring of blaVEB-25 and blaVEB-14 [24]. After intense epidemiological and microbiological surveillance in our NICU, as well as in the pediatric and adult departments within our general hospital (especially the pediatric and adult intensive care units), we could not find the source of this resistant organism. However, our index patient had been previously exposed to multiple courses of antimicrobial agents, including CAZ/AVI, and also had gut colonization with XDR Gram-negative bacteria, such as A. baumannii and P. stuartii.
This was the first premature neonate presenting with sepsis due to CAZ/AVI-resistant blaKPC-2-harboring K. pneumoniae carrying blaVEB-25 that was successfully treated with non-conventional, "off-label" antimicrobial agents. Currently available diagnostic platforms detect the presence of the most prevalent carbapenemases, such as KPC, VIM, NDM, and OXA. Neonatologists and infectious disease specialists should be cautious when interpreting the results from these molecular platforms for decision making in empiric and targeted treatment for neonatal sepsis. The mechanism of resistance, especially for the newer β-lactam/β-lactamase inhibitors, may differ over time and in different parts of the world, and even within the same institution [36]. In addition, the various mechanisms of CAZ/AVI resistance emphasize the need for the surveillance of CAZ/AVI-resistant pathogens, as well as for its judicious use.

Risk Assessment and Bundle of Actions Taken after the Index Case

This was the first case of a bloodstream infection caused by KPC-producing K. pneumoniae that was resistant to CAZ/AVI without the presence of a metallo-β-lactamase in the multiplex PCR platform in a neonate. The bundle of actions implemented is summarized in Figure 2 and included: (1) enhanced infection control measures, including strict isolation of the index case; (2) continuation of active surveillance for CRE, with tests for CAZ/AVI susceptibility reported for all isolates recovered from surveillance; (3) application of next-generation sequencing (NGS) and molecular testing for the index case to identify the probable mechanism(s) of CAZ/AVI resistance; (4) targeted PCR analysis of all CRE isolates from all neonates in the ICU, independently of CAZ/AVI susceptibility; and (5) targeted PCR analysis specifically for CAZ/AVI-resistant isolates from other departments of the hospital to identify potential sources and/or the burden of a potential outbreak.

Infection Control Measures

The NICU was already on strict infection control measures, including the cohorting of all neonates colonized/infected with an XDR A. baumannii strain. Upon recognition of this index case, extra measures were taken: isolation of the index case, a dedicated nurse for all shifts, universal application of contact precautions, written reports of active surveillance, and daily audits by the infection control team (with a dedicated infection control nurse and a dedicated pediatric infectious disease specialist).

Active Surveillance

Active surveillance was already in place, with twice-weekly colonization cultures. Specifically, stool samples were taken from the neonates in the NICU and cultured on MacConkey agar plates supplemented with 1 mg/L meropenem. AST was applied to all isolates as described in Section 4.2, including CAZ/AVI susceptibility. Active surveillance included not only gut and pharyngeal colonization but also environmental cultures.

Microbiological Methods, Antimicrobial Susceptibility Testing, and Phenotypic Analysis

CRKP was identified with a VITEK 2 automated system (bioMérieux, Marcy-l'Étoile, France) using the GN ID card according to the manufacturer's instructions. The AST of K.
pneumoniae was performed using the AST 376 and XN10 cards; the interpretation of results was according to the European Committee on Antimicrobial Susceptibility Testing (EUCAST) breakpoints of January 2022. Susceptibility testing to CAZ/AVI was performed using MIC test strips (Liofilchem srl, Roseto degli Abruzzi, Italy), while susceptibility testing to colistin was performed using the broth microdilution method (Liofilchem srl, Roseto degli Abruzzi, Italy). Tigecycline was evaluated using the susceptibility breakpoints approved by the US Food and Drug Administration (MIC ≤ 2 mg/L for susceptibility and ≥ 8 mg/L for resistance).

Next-Generation Sequencing (NGS)

DNA was extracted using the DNA extraction kit (Qiagen, Hilden, Germany). The Qubit double-strand DNA HS assay kit (Q32851, Life Technologies Corporation, Grand Island, NY, USA) was used for measuring the dsDNA concentration. All procedures regarding shearing, purification, ligation, barcoding, size selection, library amplification and quantitation, emulsion PCR, and enrichment were conducted according to the manufacturer's guidelines. After template enrichment, sequencing was performed on an Ion PGM™ semiconductor sequencer using a Hi-Q View Sequencing Kit and a 316 Chip V2 BC (Thermo Fisher Scientific, Waltham, MA, USA). The sequence reads were de novo assembled and annotated using Geneious Prime version 2021.2.1. The sequence of the K. pneumoniae NTUH-K2044 strain (accession number NC_012731) was used as reference.

Targeted PCR Analysis

Molecular surveillance at the NICU and hospital level: after the recognition of blaVEB-25 as the mechanism of CAZ/AVI resistance in KPC-producing K. pneumoniae, a targeted PCR protocol was initiated to investigate transmission within the NICU, but also among other carbapenem-resistant K. pneumoniae isolated from other pediatric and adult departments in the hospital (particularly the pediatric and adult intensive care units). A total of 37 K. pneumoniae strains were tested for the presence of blaVEB-1. Thirteen of them were isolated from stool samples collected from neonates in the NICU where the blaVEB-25 index case was identified, and twenty-four strains were isolated from different clinical sources (blood, urine, tracheal aspirate, trauma, and central venous catheter) collected from several departments of the hospital to investigate potential sites of an outbreak. Plasmid DNA was extracted using the alkaline lysis method, as described previously (H. C. Birnboim and J. Doly, NAR 7:1513-1523, 1979). For PCR amplification, VEB-F (5'-CGA CTT CCA TTT CCC GAT GC-3') and VEB-B (5'-GGA CTC TGC AAC AAA TAC GC-3') were used as diagnostic primers to amplify a 642 bp internal VEB-1 DNA segment, whereas the external primers VEBcas-F (5'-GTT AGC GGT AAT TTA ACC AGA TAG-3') and VEBcas-B (5'-CGG TTT GGG CTA TGG GCA G-3') were used to amplify the entire gene for DNA sequencing.
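For illustration only, the FDA tigecycline breakpoints quoted above translate into a trivial interpretation rule; the sketch below encodes them in Python. The function name and structure are ours, not from any clinical software, and the intermediate label for MICs that fall between the two breakpoints is our assumption:

```python
def tigecycline_category(mic_mg_per_l: float) -> str:
    """FDA breakpoints cited in the text: S if MIC <= 2 mg/L, R if MIC >= 8 mg/L."""
    if mic_mg_per_l <= 2:
        return "susceptible"
    if mic_mg_per_l >= 8:
        return "resistant"
    return "intermediate"  # assumption: e.g. a MIC of 4 mg/L falls between the breakpoints

assert tigecycline_category(2) == "susceptible"
assert tigecycline_category(8) == "resistant"
```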
Conclusions

Applying next-generation sequencing technology is crucial for guiding the prediction of underlying resistance mechanisms, facilitating the study of the evolution and molecular epidemiology of multidrug-resistant pathogens, especially in endemic areas. The emergence of blaVEB-25 is a warning for the horizontal transfer of plasmids at hospital facilities, and it is of greatest concern for maintaining a sharp vigilance for the surveillance of novel resistance mechanisms. The use of molecular diagnostics may guide appropriate antimicrobial therapy and the early implementation of strict infection control measures, and therefore could play an important role in the fight against antimicrobial resistance.

Author Contributions: C.Z.: investigation, formal analysis, review and editing, and writing the initial draft. E.I.: investigation, formal analysis, review and editing, and writing the initial draft. M.S.: formal analysis and review and editing. S.P.: investigation and review and editing. A.K.: provision of study materials and review and editing. E.R.: methodology, data curation, provision of study materials, formal analysis, review and editing, and supervision. A.P.: conceptualization, methodology, data curation, formal analysis, visualization, review and editing, supervision, and funding acquisition. C.Z., E.I., E.R. and A.P. contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding: This work was supported by the European Union's Horizon 2020 project VEO (grant number 874735).

Institutional Review Board Statement: This study was performed in line with the principles of the Declaration of Helsinki. This study was approved by the Ethics Committee of the Aristotle University Medical Faculty (no. of approval 5.160/18-12-19). Since this was a mainly microbiological retrospective analysis of the bacteria isolated from the index case and during surveillance from other hospitalized patients, according to the policy of the Infection Control and Prevention Committee of Hippokration General Hospital, there was no need for informed consent from the parents or the patients.

Informed Consent Statement: Informed consent for publication was signed by the father of the index patient and is available in the medical chart of the patient.

Figure 1. Agarose gel electrophoresis profile of the blaVEB-25 variant. Panel (A) shows the blaVEB-25-positive variant, whereas panel (B) shows the 642 bp amplified products of blaVEB-25-negative carbapenem-resistant K. pneumoniae strains. The amplified products of 1070 bp and 642 bp were produced using the external VEBcas-F/VEBcas-B (lane 2) and internal VEB-F/VEB-B primer pairs (lane 3). The amplified product containing the entire gene (1070 bp) was used to deduce the nucleotide sequence. The 100 bp DNA ladder with reference bands ranging from 100 bp to 1500 bp is indicated in lane 1.

Figure 2.
Summary of a bundle of actions followed in a premature neonate with a ceftazidime-avibactam-resistant KPC-2-producing Klebsiella pneumoniae bloodstream infection carrying the VEB-25 gene. LOS: late-onset sepsis, DOL: day of life, CAZ-AVI: ceftazidime-avibactam.

For each PCR reaction, 50-70 ng of K. pneumoniae plasmid DNA was used in a standard PCR reaction using KAPA HiFi DNA polymerase (KAPA Biosystems) with the following amplification program: 1 cycle of 95 °C for 3 min; 35 cycles of 20 s at 94 °C, 30 s at 55 °C, and 30 s at 72 °C; and a final extension step of 1 min at 72 °C. The PCR products were Sanger sequenced. Nucleotide sequence analysis and pairwise alignments were performed using the National Center for Biotechnology Information website (https://www.ncbi.nlm.nih.gov, accessed on 4 August 2023).

Table 1. Genetic characteristics of the neonatal blood K. pneumoniae isolate of the study via NGS.
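Purely as an illustration of the protocol just described, the cycling program and the diagnostic primers can be written down as plain data; the structure and names below are ours, and the GC-content helper is a generic primer sanity check, not part of the study's methods:

```python
# Illustrative encoding (ours) of the targeted-PCR protocol described above.
VEB_PRIMERS = {
    "VEB-F":    "CGACTTCCATTTCCCGATGC",      # internal pair: 642 bp product
    "VEB-B":    "GGACTCTGCAACAAATACGC",
    "VEBcas-F": "GTTAGCGGTAATTTAACCAGATAG",  # external pair: 1070 bp (entire gene)
    "VEBcas-B": "CGGTTTGGGCTATGGGCAG",
}

# (step, temperature in deg C, duration in seconds, number of cycles)
PCR_PROGRAM = [
    ("initial denaturation", 95, 180,  1),
    ("denaturation",         94,  20, 35),
    ("annealing",            55,  30, 35),
    ("extension",            72,  30, 35),
    ("final extension",      72,  60,  1),
]

def gc_content(primer: str) -> float:
    """Percent G+C of a primer sequence (routine QC, not from the paper)."""
    return 100 * sum(primer.count(base) for base in "GC") / len(primer)

for name, seq in VEB_PRIMERS.items():
    print(f"{name}: {len(seq)} nt, GC = {gc_content(seq):.0f}%")
```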
Antecedent access mechanisms in pronoun processing: evidence from the N400

ABSTRACT Previous cross-modal priming studies showed that lexical decisions to words after a pronoun were facilitated when these words were semantically related to the pronoun's antecedent. These studies suggested that semantic priming effectively measured antecedent retrieval during coreference. We examined whether these effects extended to implicit reading comprehension using the N400 response. The results of three experiments did not yield strong evidence of semantic facilitation due to coreference. Further, the comparison with two additional experiments showed that N400 facilitation effects were reduced in sentences (vs. word pair paradigms) and were modulated by the case morphology of the prime word. We propose that priming effects in cross-modal experiments may have resulted from task-related strategies. More generally, the impact of sentence context and morphological information on priming effects suggests that they may depend on the extent to which the upcoming input is predicted, rather than automatic spreading activation between semantically related words.

Introduction

Accurate sentence interpretation often depends on the correct treatment of pronouns. For example, speakers are likely to judge a sentence as semantically coherent if the preamble Susan always knew that the bachelor loved his … is followed by words like freedom or family, but certainly not wife. The unsuitability of wife can be detected because speakers link the pronoun to the antecedent the bachelor and retrieve its semantic features, which indicate that bachelors are unmarried individuals. Thus, the speed of antecedent retrieval will determine how quickly properties of the sentence and the discourse can be computed. The current work uses event-related potentials to investigate the mechanisms underlying this retrieval process in comprehension.

A major challenge in studying antecedent retrieval during coreference is to locate an early and implicit measurement that indicates whether an antecedent has been successfully retrieved. Previous studies have tried to diagnose antecedent retrieval through the presence of semantic relatedness or priming effects: evidence of facilitated processing for words that follow a pronoun and that are semantically associated with its antecedent. These effects could in principle arise due to several processing mechanisms. First, since the representation of the antecedent's discourse referent contains its conceptual features, its reactivation in speakers' working memory of the discourse could result in spreading activation to semantically related concepts (Cloitre & Bever, 1988; Lucas, Tanenhaus, & Carlson, 1990). Second, encountering a pronoun might additionally result in the reactivation of its lexical antecedent in speakers' long-term memory (e.g. the word bachelor in the lexicon). The reactivation of this word in the lexicon should result in spreading activation to semantically related words, according to accounts in which semantically associated words are stored close together or share heavily-weighted connections (Collins & Loftus, 1975; Forster, 1976; Levelt, Roelofs, & Meyer, 1999; Morton, 1979). Seminal work with cross-modal lexical decision in English proposed that semantic association effects occurred early and that they could be used as an effective measure of antecedent reactivation (Leiman, 1982; Nicol, 1988; Shillcock, 1982).
In these studies, participants would hear a sentence with a pronoun and would perform a visual lexical decision task (the asterisks below mark the points at which the visual probe appeared in different trials). The target of the lexical decision was a semantically related word (e.g. books), or a semantically unrelated word that was matched in length and frequency (e.g. trees):

The motorbike could not be returned to the author * as was originally planned, principally * because at the time he * was in the South of France.

Cross-modal studies reported that immediately after the pronoun's offset, participants made faster lexical decisions to semantically related words, consistent with processing facilitation. By contrast, when the pronoun was replaced by a non-coreferential form, such as it, lexical decision times did not differ between related and unrelated words. This suggested that the previously observed facilitation was not due to residual activation from the initial occurrence of the antecedent noun, but rather to its reactivation specifically due to coreference.

However, the conclusions that can be drawn from cross-modal priming paradigms have several limitations. First, cross-modal paradigms may engage conscious strategic computations, as detecting semantic relationships between words improves participants' performance in the lexical decision task, which might encourage them to focus on semantic relationships to perform better (Neely, 1991). As a result, priming effects in cross-modal studies may be partly attributable to task-related strategies, rather than automatic antecedent reactivation. Second, it is difficult to estimate the time course of antecedent reactivation from lexical decision responses, which measure keypress latencies to visual probes, typically around 600-800 ms post-probe onset. These latencies provide very rough timing measures, because they simultaneously reflect the effect of antecedent reactivation and the delay associated with manual motor responses.

More recently, work using eye-tracking during reading has cast some doubt on whether early semantic effects occur during implicit comprehension, at least in languages like English (Lago et al., 2017). Lago and colleagues examined whether priming effects varied as a function of the grammatical properties of a language. They compared German, a language with syntactic gender (masculine, feminine or neuter), with English, a language in which gender is only conceptual (male, female). They hypothesised that antecedent retrieval in German might include reactivation of both the discourse referent and the pronoun's antecedent in the lexicon, where grammatical gender is stored (Cacciari, Carreiras, & Barbolini-Cionini, 1997; Frazier, Henstra, & Flores d'Arcais, 1996; Garnham, Oakhill, Erlich, & Carreiras, 1995). By contrast, the reactivation of a lexical antecedent may not be needed in English, as the retrieval of a discourse referent should be sufficient to license the pronoun's features, such as conceptual gender, animacy and number. To address this hypothesis, the study implemented semantic relatedness paradigms using English and German passages with possessive pronouns. Crucially, the passages varied whether the antecedents were semantically related or unrelated to the word after the pronoun:

The maintenance men told the singer (RELATED) / deputy (UNRELATED) about a problem. They had broken his piano and would have to repair that first.
Across experiments, English and German speakers showed reading facilitation for the target word, piano, when it was preceded by a semantically related antecedent. But crucially, whereas both groups showed facilitation in late reading measures (i.e. re-reading and total reading times), only German speakers showed facilitation in early measures (i.e. single fixation and first fixation times) immediately after the pronoun. The authors proposed that the lack of early facilitation effects in English was due to the sole reactivation of the discourse referent, whereas in German both the referent and the lexical representation of the antecedent were reactivated in order to license the grammatical gender of the pronoun. They suggested that whereas lexical reactivation resulted in spreading activation to semantically associated words, the reactivation of the discourse referent did not, thus failing to pre-activate them. By contrast, facilitation effects in late measures, which were observed across languages, were proposed to index sentence-level integration processes that were likely to occur well after lexical and discourse antecedent reactivation.

If this previous account is correct, it suggests that semantic relatedness effects may not serve as a measure of antecedent reactivation cross-linguistically, because they result from a lexical retrieval process that is engaged only in languages with grammatical gender. However, there are several reasons for caution. First, the absence of early relatedness effects in English contrasts with the earlier cross-modal priming results (although as noted above, these could also be attributed to strategic processes not characteristic of normal comprehension). Second, semantic relatedness effects in previous English eye-tracking studies have often shown weak or unreliable effects on the eye-movement record, at least in studies that have compared them with discourse-level variables, such as information structure and plausibility (Camblin, Gordon, & Swaab, 2007; Morris, 1994; Traxler, Foss, Seely, Kaup, & Morris, 2000). Therefore, it is possible that the lack of relatedness effects in previous English eye-tracking work was partly due to properties of the paradigm used.

In sum, semantic relatedness effects could serve as a powerful tool for investigating the antecedent retrieval mechanisms that support pronoun interpretation, but prior evidence is mixed as to whether these effects can reliably reflect antecedent reactivation in languages like English, which lack grammatical gender. In the present study, we sought to resolve this conflict by using a different measure of comprehension, event-related potentials (ERPs). ERPs are ideally suited to examine this question because they have high temporal resolution, they are well-established to be sensitive to semantic relatedness, and they do not require participants to make conscious decisions about the semantic relationships under investigation.

The present study

The ERP experiments presented below examined the question of whether English pronouns rapidly reactivate a semantic representation of their antecedents during coreference. We constructed passages with possessive expressions that consisted of repeated noun phrases (henceforth repeated noun phrases or NPs, e.g. the prince's in 1a,b) and pronouns (his/her in 1c,d,e).
Similarly to previous studies, we varied the relationship between the word after the possessive expression, such that it could be semantically related (castle in 1a,c,e) or unrelated (estate in 1b,d) to the possessive's antecedent:

(1) Preamble: Genevieve travelled with the prince for many weeks.

In order to assess the presence of semantic facilitation effects, we measured brain responses to the word after the possessive (henceforth the target word) using an ERP component known as the N400. The N400 is a negative deflection that peaks around 400 ms and is typically maximal over centro-parietal electrodes (for review, see Kutas & Federmeier, 2011). Importantly, the N400 component is highly sensitive to semantic relatedness, such that when a word is preceded by a semantically related word in lexical decision tasks, its N400 response is reduced (less negative) than when it is preceded by a semantically unrelated word (Brown, Hagoort, & Chwilla, 2000; Holcomb, 1988; Kutas & Van Petten, 1988, 1994; Rugg, 1985).

Our experimental predictions were as follows. With repeated NPs (1a,b, the prince's), we expected the preceding possessive NP to prime the semantically related target word castle, yielding a smaller N400 response than for the unrelated word estate. In the critical conditions, the repeated NP was replaced by a coreferential pronoun (1c,d, his). We hypothesised that if comprehenders rapidly reactivated the semantic features of the antecedent upon encountering the pronoun, N400 responses to semantically related words should yield an N400 modulation similar to that in the repeated NP conditions. In addition, we sought to ensure that facilitation to the target word castle was not due to residual activation from having read the NP the prince in the context sentence. This was achieved by the coreference control condition (1e, her), which kept the context containing the related target word constant but used a pronoun that no longer referred to the semantically related antecedent, but to the proper name antecedent (e.g. Genevieve). We predicted that if semantic facilitation was specifically due to coreference rather than a more general context-priming effect, then responses in the coreference control condition (1e) should not be facilitated as compared with (1c).

The ERP data was analyzed using Bayesian models, because they offer several advantages over frequentist models (Gelman et al., 2014; McElreath, 2015). One limitation of frequentist statistics is that they do not provide direct information about the alternative to the null hypothesis (which is the hypothesis that a given effect is indeed absent), even though this alternative is often the hypothesis of interest. Rather, a frequentist p-value describes the probability of the observed data given the null hypothesis. When the observed data is deemed very unlikely, the null hypothesis can be rejected and thus the alternative hypothesis is indirectly supported. Notably, the rejection of the null hypothesis does not afford any conclusions about the size of the effects of interest. But in modern psycholinguistics, obtaining realistic estimates of effect sizes is crucial, as computational models need not only qualitative information about the presence or absence of an effect, but also accurate quantitative estimates that allow the evaluation of the model's predictions, such that theory building, modelling and empirical work can mutually inform one another.
For the present study, the importance of precise effect size estimates was particularly relevant given previous eye-tracking findings that priming effects due to antecedent reactivation were very small in English comprehension (Lago et al., 2017). The Bayesian paradigm provided the statistical tools for the calculation of precise effect estimates and the uncertainty associated with them. There are three notions that are central in this paradigm: the posterior, likelihood and prior. The posterior represents a probability distribution over different model parameters given the data and the model. Thus, the marginalised posterior distribution of an effect can be intuitively understood as the probability of different effect sizes given the data. In contrast, the likelihood defines the distribution that is believed to underlie the generation of the data, with the parameters of this distribution representing the entities of interest to be estimated. Finally, the prior distribution reflects the researcher's prior belief about how likely different parameter values are. Priors can be either uninformative or regularising. Uninformative priors make all parameter values equally likely, and thus the prior does not strongly influence the posterior. By contrast, regularising priors are more informative and they do make assumptions about the distribution of the parameter values, thus ensuring that whenever there is not enough data to support an effect, the posterior tends towards a conservative estimate.

The computation of Bayesian statistics was performed with hierarchical linear models, also known as linear mixed models. We used random intercepts and slopes to account for the variance between items and participants. This approach allowed us to avoid the aggregation of ERP data over items or subjects, thus increasing statistical power. For research using hierarchical linear models for the analysis of ERP data within the frequentist paradigm, see previous work by Frank and colleagues (Frank, Otten, Galli, & Vigliocco, 2015) and Dambacher and colleagues (Dambacher, Kliegl, Hofmann, & Jacobs, 2006). Bayesian hierarchical models have been used to analyze ERP data in recent work by Nieuwland and colleagues (Nieuwland et al., 2018).

Overview of the studies

The experiments were organised as follows. Experiment 1 tested materials such as the ones shown in (1). Experiments 2 and 3 were direct replications of Experiment 1 but split the 5-condition design into two separate studies to increase the number of trials per participant and condition. Experiment 2 focused on the repeated NP conditions (1a,b), and Experiment 3 focused on the pronoun conditions (1c,d,e). Experiments 4 and 5 addressed the existence of priming effects in single-word paradigms. Experiment 4 presented the word pairs as bare nouns (prince-castle vs. prince-estate) and Experiment 5 presented the prime words with possessive case markers, rendering them more similar to the phrasal stimuli in the sentence experiments (prince's-castle vs. prince's-estate).

Experiment 1

Experiment 1 tested 5-condition item sets such as (1). Based on previous findings that N400 responses to a word are facilitated when it is preceded by a semantically related word (Brown et al., 2000; Holcomb, 1988; Kutas & Van Petten, 1994; Rugg, 1985), we expected priming for words that were preceded by a semantically related possessive NP (1a vs. 1b).
Our research question was whether the same priming effect would occur when the preceding NP was replaced by a coreferential pronoun (1c vs. 1d). Further, if the priming effect was specifically due to coreference, rather than to residual activation from the NP the prince in the previous context, then responses in the coreference control condition (1e) should not be facilitated as compared with (1c).

Participants

Participants in this and the following ERP studies were right-handed adult native speakers of American English recruited from the University of Maryland. Participants received monetary compensation or course credit for their contribution and gave informed consent in accordance with the Institutional Review Board of the University of Maryland. Thirty-seven subjects participated in Experiment 1, but seven were excluded from analysis due to excessive artifacts (more than 40% of trials rejected). Thirty participants were included in the final analysis (mean age = 22 years, age range = 18-30 years, 19 females).

Materials

We created 150 experimental passages distributed across five conditions, as shown in (1). The first sentence in the passage was identical across conditions and introduced two characters as the subject and object of the main verb. The object was always a noun with a clear gender bias (e.g. prince), while the subject was a proper name biased towards a different gender (e.g. Genevieve). The two characters always differed in gender to ensure that only one was a suitable antecedent for the possessive expression in the following sentence. The second sentence manipulated whether the possessive expression was a repeated noun phrase or a pronoun, and whether the word immediately after the possessive expression was semantically related or unrelated to the target antecedent. In the remaining condition, the pronoun differed in gender from the antecedent noun, thus referring unambiguously to the proper noun.

To select the antecedent nouns, we began by choosing nouns describing strongly gender-biased occupations or roles, such as tailor or prince. These nouns were normed for gender bias by twenty participants recruited on Amazon Mechanical Turk using a 1-7 scale, counterbalanced by list and standardised for analysis (for details, see Chow, Lewis, & Phillips, 2014). Antecedent nouns had a clear gender bias (i.e. rated lower than 3 or greater than 5) or were combined with a stereotyped modifier to bias the reader towards the intended gender (e.g. the pregnant instructor, the bearded manager). We note that more of the occupations were gender-stereotyped towards male than female referents (86% vs. 14%), and so the proportions of male and female pronouns in the pronoun conditions were correspondingly asymmetrical. For each antecedent noun, we chose a semantically related word (e.g. castle) using the South Florida Free Word Association Norms as a guide (Nelson, McEvoy, & Schreiber, 1998). We avoided target words that overlapped orthographically and phonologically with the antecedent noun, as well as highly frequent words that could elicit floor effects in the N400 response (Van Petten & Kutas, 1990).
For each semantically related target word, we then used the English Lexicon Project database (Balota et al., 2007) to select an unrelated word that was matched to the related noun on word length (related: mean = 6.30, SD = 1.95; unrelated: mean = 6.36, SD = 1.87) and log word frequency (related: mean = 2.86, SD = 0.66; unrelated: mean = 2.85, SD = 0.66), as determined by the SUBTLEXus database (Brysbaert & New, 2009). Finally, the semantic association between related and unrelated target words and the antecedent nouns was quantified by forty-two participants from the University of Maryland using a 1-5 ("not related"-"very related") scale. The results indicated a large difference in ratings between conditions: related pairs had a mean rating of 4.54 (SD = 0.46), whereas unrelated pairs had a mean rating of 2.05 (SD = 0.74).

In addition to the semantic association norming, we also measured the cloze probability of related and unrelated target words, as it is known that predictability can impact the N400 response (e.g. Federmeier, Wlotko, De Ochoa-Dewald, & Kutas, 2007; Kutas, 1993). Eighty participants recruited on Amazon Mechanical Turk were asked to complete the experimental passages truncated after the possessive expression (split evenly between full NP possessives and the corresponding possessive pronoun). Each presentation list contained half of the items, resulting in 40 completion responses per item. The results indicated that the sentence contexts did not strongly or consistently predict related targets. The mean cloze value for related target words after gender-matching possessive expressions was 5% both for words after pronouns (SD = 12%) and for words after repeated NPs (SD = 12%). By contrast, the mean cloze value for unrelated target words after gender-matching possessive expressions was 0.7% for words after pronouns (SD = 2%) and 0.6% for words after repeated NPs (SD = 2%). Approximately half the items had a cloze of 0.

Experimental items were distributed across 5 lists in a Latin Square design, such that each list contained exactly one version of each item and 30 items per condition. Each participant saw exactly one version of each item. The experiment also contained 72 two-sentence filler items of comparable length and complexity. Filler items contained other kinds of referential expressions and anaphors, particularly feminine pronouns to balance the larger proportion of masculine pronouns in the experimental items.

Procedure

Participants were seated in a dimly lit room. Stimuli were visually presented one word at a time on a computer monitor in white 24-point Arial font on a black background. Each word appeared for 300 ms with an interstimulus interval of 200 ms (SOA = 500 ms). The experimental session was divided into 5 blocks separated by rest intervals. A third of the trials was followed by yes/no comprehension questions, which ensured that participants were attending to the stimuli. The questions never alluded to the referential dependency, to avoid focusing participants' attention on this relationship. Half of the questions asked about the first sentence and half asked about the second sentence. Target yes/no answers were in a 1:1 ratio. The order of experimental and filler items was randomised for each list. Experimental sessions lasted approximately 60 min, in addition to preparation and clean-up time.
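As an illustration of the length- and frequency-matching step described above, such a selection could be sketched in R as follows. This is a sketch only: the table and column names are hypothetical, and the actual selection relied on the English Lexicon Project and SUBTLEXus databases together with the additional exclusion criteria described above.

```r
library(dplyr)

# Hypothetical lexicon table: one row per candidate word, with a `word`
# column and a SUBTLEXus-style `log_freq` column.
# lexicon <- data.frame(word = ..., log_freq = ...)

# For a given related target, sample one unrelated candidate matched on
# length (within 1 character) and log frequency (within 0.2 log units).
pick_unrelated <- function(related_word, lexicon,
                           len_tol = 1, freq_tol = 0.2) {
  target_freq <- lexicon$log_freq[lexicon$word == related_word]
  lexicon %>%
    filter(word != related_word,
           abs(nchar(word) - nchar(related_word)) <= len_tol,
           abs(log_freq - target_freq) <= freq_tol) %>%
    slice_sample(n = 1) %>%
    pull(word)
}
```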
Electrophysiological recording

Twenty-nine tin electrodes were held on the scalp by an elastic cap (Electro-Cap International, Inc., Eaton, OH) in a 10-20 configuration (O1, Oz, O2, P7, P3, Pz, P4, P8, TP7, CP3, CPz, CP4, TP8, T7, C3, Cz, C4, T8, FT7, FC3, FCz, FC4, FT8, F7, F3, Fz, F4, F8, FP1). Bipolar electrodes were placed above and below the left eye and at the outer canthus of the right and left eyes to monitor vertical and horizontal eye movements. Additional electrodes were placed over the left and right mastoids. Scalp electrodes were referenced online to the left mastoid and re-referenced offline to the average of the left and right mastoids. Impedances were maintained at less than 10 kΩ for all scalp electrode sites and less than 5 kΩ for mastoid sites and ocular electrodes. The EEG signal was amplified by a NeuroScan SynAmps® Model 5083 (NeuroScan, Inc., Charlotte, NC) with a bandpass of 0.05-100 Hz and was continuously sampled at 500 Hz by an analog-to-digital converter.

Analysis

Averaged ERPs time-locked to the possessive expression were formed offline from trials free of ocular and muscular artifact, using preprocessing routines from the EEGLAB (Delorme & Makeig, 2004) and ERPLAB (Lopez-Calderon & Luck, 2014) toolboxes. Overall, 15.39% of trials were rejected due to excessive artifacts (rejection range: 1%-34%). Epochs with artifacts were excluded, and channels with a disproportionate number of epochs containing peak-to-peak fluctuations of 100 μV or more were interpolated. For the nine channels of interest, this procedure affected only one channel in one participant. A 100-ms pre-stimulus baseline was subtracted from all waveforms before statistical analysis, and a 40-Hz low-pass filter was applied to the ERPs offline. ERP data for this and the following experiments, as well as all experimental materials, are publicly available in the Open Science Framework repository (https://osf.io/).

All analyses were conducted on mean ERP amplitudes for the target words in the 300-500 ms time-window, across a set of 9 centro-parietal electrodes in which the N400 effect is typically maximal: P3, Pz, P4, CP3, CPz, CP4, C3, Cz, C4. The time-window and electrodes of interest were kept constant across experiments to avoid selection bias. Statistical analyses were conducted with R (R Development Core Team, 2018). Bayesian analyses were performed with the package rstan (Stan Development Team, 2017a), which makes use of the probabilistic programming language Stan (Carpenter et al., 2017), and with the package brms (Bürkner, 2016). After aggregating the data over the electrodes of interest, we fit hierarchical linear models with a full random effects structure, which included varying intercepts and slopes and full variance-covariance matrices for the random effects (Barr, Levy, Scheepers, & Tily, 2013).

The model specifications used in all analyses were as follows. A hierarchical linear model assumes that each data point (i.e. the mean ERP amplitude in the time window of interest) $a_i$ is generated from a normal distribution with mean $m_i$ and standard deviation $\sigma$:

$$a_i \sim N(m_i, \sigma)$$

where $m_i$ is defined as a hierarchical linear regression:

$$m_i = x_i^{\top} \left( b + b_{\mathrm{subj}[i]} + b_{\mathrm{item}[i]} \right)$$

where $x_i$ is a vector containing the contrast coding of the fixed effects (including the intercept), $b$ is a vector containing the estimates of the respective fixed effects, and $b_{\mathrm{subj}[i]}$ and $b_{\mathrm{item}[i]}$ contain the random effects estimates of the subject or item that produced the $i$th data point. This assumption about the generative process underlying the data is referred to as the likelihood.
We estimated the parameters of this likelihood from the data: the fixed effect estimates $b$, the random effects estimates $b_{\mathrm{subj}}$ and $b_{\mathrm{item}}$, and the standard deviation $\sigma$. The fixed effect estimates $b$ were the parameters of theoretical interest for our research questions. For each parameter we assumed some prior distribution, which specified the a priori probability of different parameter values. We used weakly informative priors, which have a regularising effect on the parameter estimates, meaning that more conservative effect estimates are preferred (for discussion, see Gelman et al., 2014). Specifically, as a prior for the fixed effect intercept parameter $b_0$ we used a normal distribution with a mean of zero and a standard deviation of ten, $N(0, 10)$. The value of 10 for the standard deviation was intended to allow for a wide range of possible estimates, as the overall magnitude of the N400 varies considerably across studies depending on multiple experimental factors (e.g. SOA, presentation modality). Given this standard deviation, 95% of the probability mass of the prior covered the interval ranging from −20 to 20 µV. As priors for all other fixed effects we used a standard normal distribution, $N(0, 1)$. The standard deviation of these priors was smaller than that of the intercept prior, such that 95% of the probability mass covered a range from −2 to 2 µV. This was the range of semantic priming effect sizes that we deemed likely given previous studies. As a prior for $\sigma$ we used a standard normal distribution truncated at zero, $N_{+}(0, 1)$, in order to ensure positive values for $\sigma$. As a prior for the random effects we used a multivariate normal distribution $MVN(0, \mathrm{Cov})$ with a mean of 0 and a covariance matrix Cov, where Cov can be expressed in terms of a diagonal matrix Var and a correlation matrix $R$ containing the variances and correlations of the random effects parameters: $\mathrm{Cov} = \mathrm{Var} \times R \times \mathrm{Var}$ (for details, see McElreath, 2015). As priors for the variance of each random effect we used a truncated standard normal distribution $N_{+}(0, 1)$, and as a prior for the correlation matrix $R$ we used a so-called LKJ prior, i.e. a distribution over possible correlation matrices (Joe, 2006; Lewandowski, Kurowicka, & Joe, 2009). The LKJ distribution has one parameter, $\eta$, which controls how much probability mass is assigned to extreme correlations (e.g. setting $\eta$ to 1 means that all correlation matrices are equally likely; for more detail, see Stan Development Team, 2017b). In all our analyses, we set $\eta$ to 2 in order to slightly disfavour extreme correlations, following recent recommendations for Bayesian analyses in psycholinguistic experiments (Sorensen, Hohenstein, & Vasishth, 2016).

To address the research question, we fit two models with different fixed effect structures. First, we fit a model with a main effect of possessive type (NP/pronoun) and two pairwise comparisons that estimated semantic priming (or relatedness) effects for the possessive NP and pronoun conditions separately. The conditions with possessive NPs before the target words (1a, b) were coded as 0.5 and the conditions with possessive pronouns before the target words (1c, d) were coded as −0.5. In this and all following analyses, the unrelated conditions were coded as 0.5 and the related conditions as −0.5, with facilitatory priming always being indicated by negative effects.
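For concreteness, the model and prior specification described above might be expressed with brms roughly as follows. This is a sketch rather than the authors' actual code: the data frame `erp` and its column names are hypothetical, and the coreference control condition (1e) would enter only the separate second model described below.

```r
library(brms)

# Hypothetical data frame `erp`: one row per trial, with `amplitude` being
# the mean 300-500 ms amplitude aggregated over the nine electrodes.
# Nested contrast coding as in the text: unrelated = 0.5, related = -0.5,
# applied separately within the NP and the pronoun conditions.
erp$poss_type  <- ifelse(erp$prime_type == "np", 0.5, -0.5)
erp$prime_np   <- with(erp, ifelse(prime_type == "np",
                                   ifelse(related, -0.5, 0.5), 0))
erp$prime_pron <- with(erp, ifelse(prime_type == "pron",
                                   ifelse(related, -0.5, 0.5), 0))

priors <- c(
  prior(normal(0, 10), class = "Intercept"),  # wide prior for overall N400 level
  prior(normal(0, 1),  class = "b"),          # regularising prior for effects
  prior(normal(0, 1),  class = "sigma"),      # brms truncates sigma at zero
  prior(normal(0, 1),  class = "sd"),         # random-effect SDs, truncated at zero
  prior(lkj(2),        class = "cor")         # LKJ prior on correlation matrices
)

fit <- brm(
  amplitude ~ poss_type + prime_np + prime_pron +
    (1 + poss_type + prime_np + prime_pron | subject) +
    (1 + poss_type + prime_np + prime_pron | item),
  data = erp, prior = priors, chains = 4, cores = 4
)

# Posterior probability of a facilitatory (negative) NP priming effect
# (as_draws_df requires a recent brms/posterior installation):
draws <- as_draws_df(fit)
mean(draws$b_prime_np < 0)
```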
Note that the main effect of possessive type was not of theoretical interest, as we did not have any predictions as to whether N400 responses to target words overall (collapsing across related and unrelated conditions) would vary depending on whether they were preceded by an NP or a pronoun, which differ on several lexical variables such as length, frequency and imageability. Rather, this effect was included to account for the variance in the data associated with this factor. The research questions addressed by the first model were whether there was a priming effect for NPs and whether there was a priming effect for pronouns. Second, to address the question of whether the pronoun related condition differed from the coreference control condition, we fit a model that directly compared the related pronoun condition, coded as −0.5, with the coreference control condition, coded as 0.5. To facilitate comparison with previous work that used by-subject ANOVAs and a null-hypothesis testing approach, all experiments were also reanalysed using Type III SS repeated-measures ANOVAs. These results are presented in the Supplementary Materials.

Results

Mean accuracy in the comprehension questions was 91% (SD = 5.1%). For the Bayesian analyses, we calculated the marginalised posterior distribution of each fixed effect, reflecting the probability of different effect sizes given the statistical model and the observed data. We present the mean of these posterior distributions and 95% credible intervals (CrI), which represent the interval where it is 95% certain that the true effect lies given the data. The analysis is considered to provide strong evidence for the presence of an effect when the 95% CrI of the effect of interest does not include 0. Finally, we present the posterior probability of there being a facilitatory priming effect, which for the relatedness factor represents the posterior probability of a less negative N400 response for related than unrelated target words. In the remainder of the manuscript, the shorter label probability of a priming effect is used for conciseness.

ERP responses to target words are shown in Figure 1. The first model showed only some indication of priming: the estimated effects were small for both the repeated NP conditions (mean of the posterior distribution = −0.35 µV) and the pronoun conditions (−0.37 µV), and both credible intervals included 0 (see Discussion). Crucially, the second model showed some evidence of a difference between related pronouns and the coreference control condition, as evidenced by a more negative N400 response in the coreference control condition. The mean of the posterior distribution was −0.72 µV with a credible interval of [−1.54, 0.12] and a probability of a negative effect of 0.954.

Discussion

The results of Experiment 1 were consistent with the experimental predictions. In the repeated NP conditions, words semantically related to the preceding NP showed facilitated N400 responses as compared with unrelated words. Moreover, quantitatively similar priming effects were obtained with coreferential pronouns. This supports the claim that upon encountering a coreferential pronoun such as his, readers rapidly reactivate the lexico-semantic features of its antecedent (e.g. the prince), which in turn preactivate semantically related words and thus reduce processing cost when one of these words is later encountered. Critically, the priming effect with coreferential pronouns was unlikely to result from residual activation of the NP the prince in the sentence context, as the comparison between the related pronoun and coreference control conditions showed that semantically related target words no longer displayed facilitated N400 responses when the pronoun referred to the proper name antecedent.
Therefore, as the context sentence was identical between these two conditions, priming effects were more likely attributable to coreference than to general context-priming effects. However, although these results were consistent with the experimental predictions, they did not yield strong evidence of priming. This is clear in the credible intervals for both repeated NPs and pronouns, which included 0. Particularly surprising was the small magnitude of the priming effect in the repeated NP conditions, a comparison that was designed to provide a baseline measure of semantic relatedness. In order to account for these relatively small priming effects, we considered two explanations. The first was that semantic priming effects may indeed be very small in sentence contexts, as suggested by previous work in English (Camblin et al., 2007; Morris, 1994; Traxler et al., 2000). Although most of this previous work was done with eye-tracking, if small priming effects also affect ERP measures, this might have reduced experimental power in Experiment 1, limiting our ability to detect priming effects. Alternatively, it is possible that the semantic relationship between the critical word pairs (e.g. prince-castle) was not strong enough to differentially modulate participants' brain responses, despite having produced clear differences in their behavioural ratings (see Materials). The following studies addressed these possibilities.

Experiment 2

Experiments 2 and 3 were designed to address one of the possible explanations for the small priming effects in Experiment 1, namely that this experiment did not have enough trials to detect priming effects. With this goal, we divided the five experimental conditions into two separate experiments, allowing us to increase the number of trials per condition. We also modified the presentation of the preamble sentence and increased the difficulty of the comprehension questions to encourage careful reading. Experiment 2 focused on addressing the reliability of priming effects within the repeated NP conditions.

Participants

Thirty-two volunteers participated, with no exclusions, and all were entered in the analysis (mean age = 20 years, age range = 18-29 years, 18 females).

Materials

Experiments 2 and 3 used a subset of 90 passages from the 150 in Experiment 1, focusing on those whose NP antecedent was strongly gender-biased and markedly related to the related target word. The two conditions with repeated NP possessives were tested in Experiment 2, and the three pronoun conditions were tested in Experiment 3. This resulted in 45 trials per condition per participant in Experiment 2, and 30 trials per condition per participant in Experiment 3. In both experiments, no item was shown in more than one condition to the same participant. Only minor changes were made to the item sets, in order to improve the felicity of the discourse scenario. We also balanced the plurality of the target word, after discovering that target words in Experiment 1 were more often plural in unrelated conditions than in related conditions. Further, the number of filler items was reduced to 60 and the fillers were revised to be more engaging. Lastly, the difficulty of the comprehension questions was increased to encourage depth of processing (Love & McKoon, 2011; Sanford & Sturt, 2002; Stewart, Holler, & Kidd, 2007). As in Experiment 1, experimental and filler items were combined and randomised.

Procedure and analysis

Experiment 2 followed the same procedure and analysis as Experiment 1, with a few exceptions.
First, the entire preamble sentence appeared on screen and participants read it at their own pace, pressing a button to initiate the RSVP presentation of the second sentence. Informal beta testing revealed that participants found this method of presentation more pleasant and less tiring than full RSVP of the two-sentence text. Second, participants were given feedback on their comprehension question accuracy at the end of each block, to encourage attention throughout the task. Materials were divided into five blocks that lasted approximately 6-8 min each, with breaks in between. An entire experimental session lasted approximately 40 min. The equipment was identical to Experiment 1, except that new caps from Electro-Cap International were used for this and all following ERP experiments. The electrode configuration for these caps used a second frontal electrode (FP2) instead of the central occipital electrode (Oz). Approximately 8.8% of the trials were rejected due to excessive artifacts (range 0.3%-36.1%). Channel interpolation did not affect any of the electrodes of interest. The analysis was similar to Experiment 1, with ERP responses being averaged across the same nine electrodes 300-500 ms after the presentation of the target word. Bayesian hierarchical linear models were fit with the same random effects structure and the same priors as in Experiment 1. In order to address the research question of whether there was a priming effect with possessive NPs, the model included a fixed effect of priming within the NP conditions.

Results

Mean accuracy for the comprehension questions was 93.9% (SD = 5.1%). ERP responses to target words are shown in Figure 2 (left panel). As in Experiment 1, the analysis showed only some indication of priming in the repeated NP conditions, with a mean effect estimate of −0.39 µV and a credible interval that included 0.

Discussion

The results of Experiment 2 were consistent with those of Experiment 1: target nouns after semantically related possessive NPs showed reduced N400 amplitudes relative to semantically unrelated words. However, although Experiment 2 attempted to increase the number of trials by focusing on the possessive NP conditions, the statistical results showed only some indication of priming. In fact, the mean effect size estimates of priming in Experiments 1 and 2 were remarkably similar (−0.35 µV and −0.39 µV, respectively). This suggests that small priming effects with possessive NPs may indeed result from a reduced influence of semantic priming in sentence contexts, rather than from an insufficient number of trials in Experiment 1. Experiment 3 addressed whether this conclusion extended to priming effects with coreferential pronouns, by focusing only on conditions (1c, d, e) of Experiment 1.

Experiment 3

Participants

Fifty-seven volunteers participated in Experiment 3, but nine were excluded due to a technical error in the experiment presentation. Of the remaining 48 participants, six were excluded due to excessive artifacts, such that forty-two participants were included in the analysis (mean age = 21 years, age range = 18-27 years, 24 females).

Materials, procedure and analysis

Experiment 3 used the same 90 item sets and procedure as Experiment 2, but it only presented the three pronoun conditions. Approximately 8.9% of trials were rejected due to excessive artifacts (range 0%-34.4%). Channel interpolation did not affect any of the electrodes of interest. The item sets were distributed across three lists in a Latin Square design, combined with the sixty filler items and randomised. Bayesian analyses were performed using maximal random-effects structures and the same weakly informative priors as in previous experiments.
The first statistical model included a fixed effect of priming, and the second model directly compared the related pronoun and coreference control conditions. As in Experiment 1, the related pronoun condition was coded as −0.5 and the coreference control condition as 0.5, such that a negative effect indicates more facilitation for the related pronoun condition than the coreference control condition.

Main analyses

Mean accuracy in the comprehension questions was 90.1% (SD = 4.9%). ERP responses to target words are shown in Figure 2 (right panel). The results in the coreferential pronoun conditions were consistent with Experiment 1 and revealed only some indication of priming, with the mean of the posterior distribution of the priming effect being −0.51 µV, a credible interval of [−1.27, 0.25] and a probability of a priming effect of 0.901. The results of the second model, however, differed from those of Experiment 1. The analysis revealed no evidence of a difference between the coreference control and the related pronoun condition. The mean of the effect's posterior distribution was positive, with a value of 0.31, a credible interval of [−0.47, 1.05] and a posterior probability of a negative effect of 0.218.

Supplementary analyses

The results of Experiment 3 differed from those of Experiment 1 in the behaviour of the coreference control condition. In Experiment 1, the pronoun related condition showed priming when compared to the coreference control condition, but in Experiment 3 there was no evidence of such a difference, with the numeric pattern being the opposite. One explanation for the variable behaviour of the coreference control condition is that the gender marking of the pronoun did not render it fully referentially unambiguous, such that even in this condition, readers may have still reactivated the occupation antecedent for some items, yielding semantic priming and thus reducing or obscuring the N400 contrast between the pronoun related and the coreference control conditions. In other words, it is possible that in the coreference control condition of some items, such as "Nicole stopped the president in the hallway before the broadcast. She feared that her administration …", readers still reactivated (at least partially or temporarily) the antecedent "the president", despite the mismatch between the feminine gender of the pronoun and the male bias of the occupation antecedent. Note that although the gender bias of the occupation antecedents was verified in a separate norming study (see Experiment 1), we did not measure its downstream consequences in our sentence contexts. To address this shortcoming, we conducted an additional rating study to quantify the referential ambiguity of the coreference control condition, and we used the ratings as a by-item predictor of the ERP responses to see if they could explain some of the variability in the N400 differences between the pronoun related and coreference control conditions in Experiments 1 and 3.

The format of the rating experiment followed Kehler, Kertz, Rohde, and Elman (2008: Experiment 1). One hundred and fifty participants (mean age = 32 years, age range = 19-56 years, 67 females) read sentence fragments in rapid serial visual presentation (SOA = 500 ms) until the target word, and then answered a forced-choice question targeting the referent of the pronoun: e.g. "Genevieve traveled with the prince for many weeks. She wished that his/her castle …"; "Who did the castle belong to?" (Genevieve / the prince).
Participants saw each item in either the pronoun related or the coreference control condition (e.g. "his castle" vs. "her castle"). The results of the norming study revealed strong evidence of a contrast between the two conditions: the occupation antecedent was chosen as a referent on 92% of the trials in the pronoun related condition but only on 30% of the trials in the coreference control condition (mean of the posterior distribution = 4.67 log odds, CrI = [3.88, 5.65], probability of a positive effect = 1.00). However, there were differences across items in the referential ambiguity of the coreference control condition: whereas in some items the occupation antecedent was never selected, in other items it was selected on 50% or more of the trials. Thus, to assess whether this variability modulated the N400 differences between the coreference control and the pronoun related conditions, we entered the mean by-item ratings as an additional centered predictor in the model that targeted the contrast between these conditions (i.e. the second model presented in the Main analyses section). However, there was no statistical evidence of an interaction between the ratings and the difference between the coreference control and pronoun related conditions in Experiment 1 (mean of the posterior distribution = −0.68 µV, CrI = [−2.38, 0.96], probability of a negative interaction = 0.771) or Experiment 3 (mean of the posterior distribution = −0.19 µV, CrI = [−1.87, 1.50], probability of a negative interaction = 0.595). Therefore, we did not observe evidence that differential degrees of referential ambiguity in the control condition could explain the lack of a reliable difference between this condition and the pronoun related condition across experiments.

Discussion

The results of Experiment 3 only partially replicated those of Experiment 1. First, target words semantically related to the pronoun's antecedent showed facilitated N400 responses compared to unrelated words. This priming effect replicated the one in Experiment 1 and had a larger mean amplitude (−0.51 µV, vs. −0.37 µV in Experiment 1). However, this result is unlikely to reflect facilitation due to antecedent reactivation, because the coreference control condition, which was intended to rule out a general effect of context-priming, did not show a larger N400 than the related pronoun condition: in fact, N400 responses were as facilitated in the coreference control condition as in the related pronoun condition. Therefore, it cannot be ruled out that the reduced N400 to words like castle resulted from residual activation from reading the NP the prince in the sentence context, rather than from its retrieval specifically due to coreference. Before addressing the implications of this finding, we present two final experiments, which aimed to rule out a potential problem in the experimental materials.

Experiment 4

Experiments 1 and 2 showed surprisingly small N400 facilitation effects, even when possessive repeated NPs were used immediately before the target word. This observation limits the conclusions that can be drawn from the failure to find strong priming effects in the pronoun conditions, because it suggests that the "base" relatedness effect that the antecedent retrieval hypothesis would predict would have been quite small to begin with. However, it is not obvious why the N400 relatedness effect was so small in the NP conditions, given that the behavioural norming showed clear differences in relatedness. Experiment 4 was designed to shed more light on this puzzle.
We conducted an ERP experiment using a "word pair" paradigm typical of classic semantic priming manipulations. Our goal was to determine whether semantically related words also showed small N400 priming effects in a word pair paradigm.

Participants

Thirty-two participants took part in the experiment, but three were excluded due to excessive artifacts and five were excluded due to poor accuracy in the memory test (see Procedure). Twenty-four participants were included in the final analysis (mean age = 20 years, age range = 18-27 years, 20 females).

Materials

Stimuli consisted of the 150 related (e.g. prince-castle) and unrelated (e.g. prince-estate) noun pairs from Experiment 1. The pairs were divided into two lists and intermixed with thirty fillers consisting of ten strongly related pairs (e.g. migraine-headache), ten unrelated pairs (e.g. physician-flower), and ten weakly related pairs (e.g. helicopter-bicycle). The order of experimental and filler items was randomised for each list.

Procedure and analysis

Word pairs were presented using the same stimulus-onset asynchrony (500 ms) as in the sentence experiments. To ensure continued attention during the experiment, participants performed a memory test after the experiment. The test consisted of ten pairs of nouns that had appeared together during the experiment and ten pairs of nouns that had appeared previously but not as a pair. Participants were told about the memory test prior to the experiment and its format was described to them. They were advised that although all the words in the test had been displayed previously, the critical question was whether they had been presented together as a pair. Participants who were less than 60% accurate in the memory test were excluded from analysis. The experimental session lasted 10-15 min, with two rest intervals. As the overall length of the experimental session was short, Experiments 4 and 5 were preceded by another experimental session. The majority of the participants in each group participated in an unrelated word pair experiment (75% of participants in Experiment 4, 85% in Experiment 5), and the remaining participants took part in an unrelated sentence experiment. Data was recorded and preprocessed as in the preceding experiments. Approximately 15.5% of the trials were rejected due to artifacts (range 0%-40%). Channel interpolation did not affect any of the electrodes of interest. We performed a similar Bayesian analysis as for the preceding experiments. As before, we aggregated the data over the nine electrodes of interest and fit a hierarchical linear model with a fixed effect of priming, a full random effects structure and the same weakly informative priors as in previous analyses.

Results

Mean accuracy in the memory test was 75.9% (SD = 7.7%). ERP responses to target words are shown in Figure 3 (left panel). The analysis of the ERP responses revealed strong evidence of priming in the N400 time-window, and numerically, the effect appeared to onset even earlier (see Note 1). The mean of the priming effect's posterior distribution was −1.36 µV with a credible interval of [−1.95, −0.77] and a probability of a priming effect of 1.000.

Discussion

Experiment 4 showed a large N400 priming effect for the pairs that served as the antecedent and target word in the sentence experiments. These results replicate prior work that has reported a reduction in N400 amplitude for words following semantically related primes (Brown et al., 2000; Holcomb, 1988; Kutas & Van Petten, 1994; Rugg, 1985).
Notably, the priming effect was estimated to be larger than that observed for exactly the same pairs in Experiments 1 and 2, which used the same stimulus-onset asynchrony between the "prime" and "target" words. There are at least two explanations for this difference across experiments. First, prior literature has sometimes reported that N400 effects of semantic relatedness are smaller in sentence contexts than in word pair paradigms (Coulson, Federmeier, Van Petten, & Kutas, 2005; Ledoux, Camblin, Swaab, & Gordon, 2006). Therefore, the smaller effect in Experiments 1 and 2 may be due to the fact that these studies displayed the critical words within sentence contexts. This would suggest that coreference paradigms involving sentences inevitably yield small N400 relatedness effects. But a second possibility relates to the fact that prime words in the sentence experiments were marked with possessive morphology (e.g. prince's), in contrast to the word pair experiment, in which prime words appeared without case morphology (e.g. prince). Specifically, it is possible that the possessive marker on the prime word altered its effect on the target word. This could occur if possessively-marked nouns are costlier to process and thus delay or interfere with semantic priming, or if the possessive marker on the prime word alters the pre-activation process, resulting in different words being preactivated due to the occurrence of possessive vs. bare case primes. To address this possibility, Experiment 5 used a word pair paradigm identical to Experiment 4 but added possessive morphology to the prime word.

Experiment 5

Experiment 5 was designed to examine the presence of N400 facilitation effects in word pairs where the prime carried possessive morphology. Therefore, the same word pair paradigm as in Experiment 4 was used, but the genitive marker 's was added to the prime word.

Participants

Twenty-nine volunteers participated, but nine were excluded from analysis, two due to excessive artifacts and seven due to accuracy below 60% in the behavioural task. Twenty participants were included in the final analysis (mean age = 20 years, age range = 18-23 years, 15 females).

Materials, procedure and analysis

Materials were identical to those presented in Experiment 4, with the addition of the possessive marker to the prime words. Approximately 13.43% of the trials were rejected due to artifacts (range: 2.67%-38.67%). Channel interpolation only affected one channel of interest (participant "5-17", channel C3). The same Bayesian model as in Experiment 4 was fit to the data.

Results

Mean accuracy in the memory test was 73.0% (SD = 9.8%). ERP responses to target words are shown in Figure 3 (right panel). The analysis again indicated a facilitatory priming effect, but with a smaller mean estimate than in Experiment 4 (−0.76 µV; see Discussion).

Discussion

Experiment 5 examined whether possessively-marked primes facilitated N400 responses to semantically related following words. The results supported this hypothesis, but they also showed that the mean size of the N400 relatedness effect was reduced compared to the bare primes used in Experiment 4 (−0.76 µV vs. −1.36 µV, respectively). However, a potential caveat about the comparison between Experiments 4 and 5 is the fact that the numerical divergence between the related and unrelated conditions was notably earlier in Experiment 4. One might be concerned that this earlier divergence reflects a distinct effect that overlaps with the N400 time-window, creating the impression of a larger N400 effect size.
Although we cannot rule out this possibility, we believe it is unlikely, both because large N400 effects often onset prior to 300 ms in the literature (e.g. Federmeier & Kutas, 1999; Holcomb, 1988; Lau, Holcomb, & Kuperberg, 2013), and because the centro-parietal distribution of the effects in both the 200-300 ms window and the 300-500 ms window is similar to that standardly reported for N400 effects. A second potential caveat about the comparison between Experiments 4 and 5 is that it was based only on the numeric estimates of the priming effects. Therefore, in order to provide a more direct comparison and to precisely estimate the size of the N400 difference between Experiments 4 and 5, we conducted an analysis of the data pooled across experiments, as described below.

Pooled analysis

To address the variability of the findings across experiments, we conducted an analysis of their pooled data. Our goal was to achieve higher statistical power and hence to obtain more precise estimates of the experimental effects. Our questions of interest were: (1) whether the five pooled experiments offered strong evidence of a priming effect for full NPs; (2) whether there was strong evidence of priming for coreferential pronouns; and (3) whether the coreference control condition reliably differed from the pronoun related condition, consistent with the priming effect for pronouns being specifically due to coreference, rather than a more general context-priming effect. The pooled analysis also allowed us to address two additional observations that emerged during the experiments about the role of the experimental paradigms and of the case morphology of the prime words. Specifically, we wanted to examine: (4) whether priming effects with possessive NP primes were reliably smaller within sentences than in word pair paradigms, and (5) whether priming effects with NP primes were stronger with bare than with genitive primes. Pooling the data allowed us to directly assess these questions by examining whether there were interactions between priming and paradigm, on the one hand, and between priming and case, on the other.

In order to address the five questions outlined above, the following models were specified. The first model included as fixed effects a priming effect within full NPs (relevant for question 1), a priming effect within pronouns (relevant for question 2), an interaction between paradigm and priming (relevant for question 4), and an interaction between case and priming (relevant for question 5). Note that the two interactions were only computed over trials that contained NP primes, as only these were presented in both sentence and word pair paradigms and manipulated for case. Furthermore, the interaction between paradigm and priming was only computed over the possessive trials, since having bare case primes in sentences would have rendered them ungrammatical (e.g. She wished that the *prince castle wasn't so far away). In addition, three other fixed effects were used: a main effect of possessive type (NP/pronoun), a main effect of paradigm (sentence/word pair) and a main effect of case (bare/possessive). These effects were not of theoretical interest because they were orthogonal to the research questions. For example, the main effect of case quantified whether trials with bare case primes elicited more negative N400 responses than trials with possessive primes (regardless of the factor relatedness).
Since our research question was whether case morphology enhanced (or diminished) priming effects specifically, overall effects of case (or paradigm or possessive type) were not of interest. However, these effects were included to reflect the experimental design, and thus to account for the variance associated with these manipulations, and because the interpretation of the critical interactions would not be possible otherwise. Fixed and random effects were coded consistently with the individual experiments. For the factor priming (or relatedness), related trials were coded as −0.5 and unrelated trials as 0.5, such that a negative estimate reflected facilitatory priming. For the factor possessive type, trials with coreferential pronouns as primes were coded as −0.5 and trials with NP primes as 0.5. For the factor paradigm, sentence trials were coded as −0.5 and word pair trials as 0.5. For the factor case, trials with possessive primes were coded as −0.5 and trials with bare case primes as 0.5. The random-effects structure of the model included the effects of priming (for both NPs and pronouns) and the effect of possessive type as within-subject comparisons. All other fixed effects were between-subject comparisons, as they had been manipulated across subjects. The pooled analyses used the same weakly informative priors as the individual experiments. In order to follow up on the interactions between priming and case and between priming and paradigm, two additional models were fit, which replaced the interactions with the corresponding pairwise comparisons. Lastly, to address question (3), regarding the difference between the related pronoun and the coreference control conditions, the data was subset to include only these conditions and their difference was coded as a fixed effect together with a full random effects structure, with the same weakly informative priors as in the previous models.

Results

The results of the pooled analysis are summarised in Table 1 and Figure 4.

[Table 1 near here. Statistical results from the pooled analysis. Each effect is presented together with the mean of its posterior distribution (µV), a 95% credible interval (µV), and the probability of there being a facilitatory effect. Effects whose credible interval excludes zero are bolded and are described in the text as providing strong evidence for the effect of interest.]

Note that only the effects of theoretical interest are discussed in the text, although the table presents all estimates for completeness. As with the individual analyses, the term probability of a priming effect is used as a shorthand for the posterior probability of there being a facilitatory priming effect, representing the probability of a less negative N400 response for related than unrelated target words. The pooled analysis showed strong evidence of priming when target words were preceded by related NP primes. There was also some indication of priming in the coreferential pronoun conditions, although this priming effect was smaller than for the NP conditions, with a broader posterior distribution and a 95% credible interval that included 0. Furthermore, the model comparing the coreference control condition with the related pronoun condition did not support a reliable difference between conditions. Taken together, these results do not provide strong evidence that semantic facilitation with pronouns is driven specifically by coreference. Interestingly, there was some evidence for an interaction between priming and paradigm. Pairwise comparisons showed that this interaction was due to a larger priming effect when the target stimuli were presented as word pairs as compared to within sentences.
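To make the coding scheme described above concrete, the predictors for the pooled model could be constructed along these lines. This is a sketch over a hypothetical pooled data frame (the column names are invented); the zeroed-out cells implement the restriction of the interaction terms to the relevant trials described in the text.

```r
# Hypothetical pooled data frame with one row per trial across all experiments.
pooled$priming   <- ifelse(pooled$related, -0.5, 0.5)                 # negative = facilitation
pooled$poss_type <- ifelse(pooled$prime_type == "pronoun", -0.5, 0.5)
pooled$paradigm  <- ifelse(pooled$paradigm_type == "sentence", -0.5, 0.5)
pooled$case      <- ifelse(pooled$prime_case == "possessive", -0.5, 0.5)

# Priming effects estimated separately within NP and pronoun trials:
pooled$priming_np   <- with(pooled, ifelse(prime_type == "np", priming, 0))
pooled$priming_pron <- with(pooled, ifelse(prime_type == "pronoun", priming, 0))

# Interactions restricted to NP-prime trials (and, for paradigm, to possessive
# primes, since bare primes never occurred in sentences):
pooled$case_x_priming     <- with(pooled, ifelse(prime_type == "np",
                                                 case * priming, 0))
pooled$paradigm_x_priming <- with(pooled, ifelse(prime_type == "np" &
                                                 prime_case == "possessive",
                                                 paradigm * priming, 0))
```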
Furthermore, there was strong evidence of an interaction between case and priming, confirming the numeric contrast between Experiments 4 and 5: priming effects were reduced when the prime word was presented with possessive case, compared with a bare case presentation. Overall, these results show that semantic priming effects are reduced by both sentence context and possessive case morphology.

General discussion

This study examined the mechanisms supporting pronoun interpretation during coreference. We asked whether the retrieval of the antecedent of a pronoun could be indexed by semantic priming effects, as suggested by previous cross-modal studies (Leiman, 1982; Nicol, 1988; Shillcock, 1982). According to these studies, if readers rapidly reactivate the lexical representation of an antecedent upon encountering a pronoun, then words that are semantically associated to the antecedent should become preactivated, resulting in processing facilitation when one of these words is later encountered. However, cross-modal priming measures rely on an explicit decision task and are thus potentially prone to task-related strategies. In the current study, the N400 effect was used to provide an implicit, time-sensitive diagnostic of (the consequences of) antecedent retrieval, in the absence of an overt decision component.

Overall, our results did not provide reliable evidence of semantic facilitation specifically due to coreference. First, the analysis of the combined experiments only yielded weak evidence of N400 reductions for words semantically related to the pronoun's antecedent (e.g. his castle vs. estate). Second, N400 responses in the semantically related condition (e.g. his castle) did not reliably differ from the condition in which the pronoun no longer referred to the target antecedent (her castle, the coreference control condition). The coreference control condition was designed to ensure that readers' facilitated processing of the target words was specifically due to coreference, rather than to having read the semantically associated noun prince in the prior context. Therefore, the absence of a clear contrast between the pronoun related and the coreference control conditions suggests that the weak facilitation effects for the former may have been due more to general context-priming effects than to antecedent reactivation. In what follows, we discuss the implications of our findings for psycholinguistic accounts of pronoun processing, focusing on the role of antecedent retrieval during coreference. Then, we turn to some practical limitations of the semantic facilitation paradigm as a tool to study coreference, as well as the additional findings that it yielded about the influence of context and case morphology on semantic priming effects.

What is accessed during pronoun processing?

Our study was motivated by previous cross-modal work, which had reported semantic priming effects immediately after the presentation of a pronoun. However, a longstanding concern about the cross-modal paradigm is that it is prone to strategic effects, as detecting semantic relationships between pronouns and their antecedents can help participants improve their task performance. The current results are consistent with our failure in prior work to observe antecedent relatedness effects in eye-tracking-while-reading in English (Lago et al., 2017). Similarly, another recent ERP study failed to show effects of antecedent concreteness at either the pronoun or the content words following it (Smith & Federmeier, 2015).
Therefore, the fact that antecedent relatedness effects were not reliably observed in studies using implicit measures suggests that cross-modal priming effects in previous work may have partly resulted from task-related strategies, rather than from the automatic retrieval of the pronoun's antecedent. Under this interpretation, two outstanding questions are why coreference in English might not elicit automatic semantic priming effects and what this entails for models of pronoun processing in comprehension.

One explanation for the lack of strong evidence of priming effects is that pronoun interpretation in English requires contact with the referent of the pronoun in speakers' discourse model but does not involve the retrieval of the antecedent's representation from long-term memory. Specifically, a previous eye-tracking study by Lago et al. (2017) showed evidence of semantic facilitation effects in German (a language in which pronouns carry grammatical gender) but not in English. To account for the cross-linguistic contrast, it was proposed that the retrieval of discourse referents and linguistic antecedents during coreference was conditioned by the grammatical properties of each language. In languages without grammatical gender, like English, the features necessary to identify an antecedent, which include conceptual gender, would all be bound to the discourse representation of the pronoun's referent (Cloitre & Bever, 1988; Lucas et al., 1990), thus obviating the need for long-term memory retrieval and preventing the occurrence of spreading activation between semantically-related lexical items. This is because the process of spreading activation is assumed to rely on differentially weighted connections between long-term memory units, and thus it may not happen without these units' reactivation (Collins & Loftus, 1975; Forster, 1976; Levelt et al., 1999; Morton, 1979). By contrast, in languages with grammatical gender, like German, the syntactic gender of an antecedent often has no conceptual correlates and might be stored solely in the lexicon. In this case, the reactivation of the antecedent representation in long-term lexical memory would also be necessary to identify an appropriate antecedent and to license the agreement features of the pronoun.

Under this account, the lack of reliable semantic priming effects occurred because the English pronouns of the current study did not reactivate antecedent representations in comprehenders' long-term memory. Rather, identifying an appropriate referent for an English pronoun could be achieved by probing the most salient discourse referents for their conceptual gender. Critically, discourse referents are either short-term memory representations or "recently constructed" long-term memory representations (e.g. the prince in the current discourse may have idiosyncratic properties distinct from the more general concept of prince). Therefore, there is no reason to think that discourse referents would have the long-term connections weighted by past experience that would yield spreading activation effects (e.g. Smith & Federmeier, 2015).
This idea is consistent with prior work suggesting that even full noun phrase references to a discourse entity do not continue to activate the features associated with the lexical item that introduced them initially. Most famously, when the prior discourse establishes that a peanut is romantically entangled, further references to the peanut drive rapid facilitation for predicates like "in love" relative to lexically associated predicates like "salted" (Nieuwland & Van Berkum, 2006). Therefore, if nouns themselves often do not elicit reactivation of the full set of lexical features associated with an antecedent noun in long-term memory, it is unclear why pronouns should do so, unless as a byproduct of a grammatical requirement (as proposed for languages with grammatical gender).

The current results leave open interesting questions about the nature of the memory operations needed to link a pronoun with a discourse referent. These operations depend on the type of memory architecture involved in language processing. Early accounts proposed a multistore system, which distinguished between long- and short-term memory stores (Baddeley, 1986, 1992, 2000; James, 1890; Repovš & Baddeley, 2006). Within this type of multistore architecture, current discourse referents might be represented in short-term memory, and coreference in English would involve selecting among a set of referents on the basis of morpho-syntactic information, world knowledge, and discourse structure (e.g. Garrod & Sanford, 1982). Alternatively, more recent memory accounts do away with the distinction between long- and short-term stores. Instead, these accounts propose that all items must be retrieved from a single memory store, with the exception of a very limited set of items that are currently under processing, which are assumed to be held in speakers' focus of attention (Anderson et al., 2004; Cowan, 1988, 2000; McElree, 2001; Oberauer, 2002).

In these architectures, one could capture the presence or absence of semantic facilitation effects in several ways. One way would be to propose that certain classes of memory representations are organised into associative networks that yield spreading activation (i.e. words) whereas others are not (i.e. the conceptual features bound to discourse referents). Whereas this proposal is logically possible, we are not aware of any independent functional or neurobiological motivation for it. Alternatively, it could be proposed that retrieving the memory representation of a discourse referent does not automatically trigger retrieval of the conceptual features that are bound to it. Rather, the retrieved object may be a discourse ID or label, such as "discourse referent 2", and the information bound to that referent (i.e. its conceptual features and predicated attributes) may be retrieved only when other parts of the sentence prompt an enriched interpretation. For example, awareness of the unusual state of affairs described by the sentence The farmer worked in a skyscraper might not arise automatically, but rather because "working in a skyscraper" triggers a new interpretive goal of inferring the kind of work done by the farmer, which subsequently prompts re-access of the conceptual features of the discourse referent that is the agent of the working event.

Finally, it is worth noting that an assumption underlying the current work and much prior work is that antecedent access occurs immediately at the pronoun.
However, this assumption can be challenged by findings that have shown pronoun interpretation to be sometimes temporally delayed or even skipped altogether (Carpenter & Just, 1977; Greene, McKoon, & Ratcliff, 1992; Love & McKoon, 2011; see also discussion by Sanford & Garrod, 1989). In addition, recent work has argued that pronoun interpretation can begin predictively, before a pronoun is even encountered. For example, comprehenders often have expectations about which discourse entity will be referred to next, and these expectations can rapidly impact pronoun interpretation (Arnold, 2010; Rohde & Kehler, 2014). Although more work is needed to identify the variables that can affect the time course of coreference, these findings question the need for a retrieval process time-locked to the appearance of a pronoun, thus providing a possible explanation for the lack of immediate priming effects in the present study.

Semantic facilitation effects in sentence processing: beyond pronouns

Beyond the pronoun conditions that were of interest for the current study, we found that N400 facilitation effects were relatively small in sentence contexts, even when the semantic relationships occurred between two adjacent noun phrases (the prince's castle vs. estate). This resulted in smaller N400 priming effects in sentences (Experiments 1, 2 and 3) than in isolated word pair paradigms (Experiments 4 and 5). These results replicate previous eye-tracking and ERP studies, which have found smaller and/or delayed facilitation effects in sentences compared to word pairs (Camblin et al., 2007; Carroll & Slowiaczek, 1986; Coulson et al., 2005; Morris, 1994; Traxler et al., 2000; for review see Boudewyn, Gordon, Long, Polse, & Swaab, 2012). However, Experiment 5 showed that N400 reductions could also be partially attributed to the use of possessive morphology in sentence contexts, as genitive-marked primes clearly reduced the size of the N400 relatedness effects even in word pairs. Therefore, an important question is why possessive morphology and sentence context would reduce the size of priming effects.

With regard to the role of possessive morphology, one possibility is that processing the additional possessive marker was costlier and that this cost interfered with (or delayed) access to the lexical or conceptual features of the prime, in turn reducing the impact of word associations on the target. This hypothesis may seem to conflict with recent ERP findings suggesting that the case information of nouns (e.g. accusative vs. nominative marking) takes longer to affect verb predictions than their lexico-semantic properties (e.g. Chow, Wang, Lau, & Phillips, 2018; Momma, Sakai, & Phillips, submitted). Nonetheless, case marking may play different roles (or affect the time-course of predictions differently) when implemented on nouns vs. pronouns (or when informing verb vs. noun predictions). Therefore, future work on the role of case marking in coreference is needed, for example by systematically manipulating the amount of time allocated to the presentation of a possessive pronoun, in order to give participants more time to process case morphology. Alternatively, the possessive marker on the prime word may have affected how participants created expectations about following words, by generating predictions that did not necessarily consist of semantic associates. An advantage of this explanation is that it could potentially explain why the N400 effect may have been affected by both possessive morphology and sentence context.
This explanation relies on classic theories of semantic priming, which proposed that priming effects in word pair paradigms have at least two different sources: automatic spreading activation and prediction (Neely, 1991; Posner & Snyder, 1975). Automatic spreading activation was proposed to drive much of the priming effect at shorter prime-target delays and/or when the prime was masked from consciousness. By contrast, more "controlled" processes like prediction would dominate at longer delays. Importantly, subsequent word-pair ERP studies demonstrated that a large part of N400 semantic facilitation effects at long delays was due to predictive processes, indicating that participants could rapidly (and perhaps implicitly) recognise the semantic associations between word pairs, and then use these relationships to predict which target would follow the prime (Brown et al., 2000; Lau et al., 2013). Under this account, the use of bare-case primes in Experiment 4 may have forced participants to rely mainly on semantic association, which was the only source of information available to them. However, the use of possessive case, which entered the prime and target words into a structural relationship, as well as their embedding in sentence contexts, may have made other sources of information more salient, reducing participants' reliance on semantic association and thus the role of spreading activation mechanisms.

The broader implication of this explanation for the pronoun interpretation literature is that manipulations based on semantic association are unlikely to yield robust effects in sentence contexts, where predictions will be dominated primarily by structural and discourse information rather than by associations between individual lexical items. Therefore, studies aiming to use this approach in the future should exercise caution. It is also worth noting that in order to conclude that semantic facilitation effects on the target word reflect retrieval of information at the pronoun rather than prediction of the target itself, such designs must ensure that the contexts do not themselves predict the critical targets. On the other hand, if the question of interest is about the speed of antecedent identification rather than the types of representations involved, then two more promising paradigms are visual world eye-tracking, given its ability to detect rapid attentional changes, and ERP designs in which pronoun resolution shifts predictions about upcoming words, given the broadly observed effects of prediction in N400 sentence studies (DeLong, Urbach, & Kutas, 2005; Van Berkum, Brown, Zwitserlood, Kooijman, & Hagoort, 2005; Wicha, Moreno, & Kutas, 2004; but see Nieuwland et al., 2018).

Conclusion

Using an implicit and time-sensitive measure, the N400, we did not observe strong evidence of semantic facilitation effects specifically due to coreference in English. We believe that these data raise questions about the origins of such effects in cross-modal work, and they raise the possibility that semantic relatedness effects in comprehension are not a cross-linguistically effective tool to probe for antecedent retrieval. Furthermore, the impact of linguistic information and sentence context on the size of N400 priming effects suggests that these effects may largely reflect the extent to which the upcoming input is predicted, rather than automatic spreading activation between lexical representations.
We have also argued that prior work and broader theoretical considerations make it unlikely that English pronouns elicit the retrieval of the long-term memory representations that would trigger spreading activation to semantic associates of the antecedent. Rather, we suggest that pronoun interpretation involves either linking a pronoun with a discourse referent in working memory or retrieving a discourse referent without immediately reactivating its bound conceptual features.

Note

1. Although we had no prior hypotheses about priming effects prior to 300 ms, we conducted an exploratory analysis examining whether the mean ERP amplitude in the 200-300 ms time window differed between conditions. This analysis showed strong evidence that the unrelated condition was more negative than the related condition: mean of the posterior distribution = −0.82 µV, CrI = [−1.38, −0.24], probability of a negative effect = 0.998. The topography of the effect was similar to that observed in the subsequent N400 time window (see Supplementary Materials).
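For readers who want to reproduce this style of exploratory time-window analysis on their own data, the sketch below computes the related/unrelated amplitude difference in the 200-300 ms window. It assumes epoched EEG data in MNE-Python; the file name and the "related"/"unrelated" condition labels are hypothetical, and the Bayesian model behind the reported posterior mean and credible interval is not reproduced here.

```python
# Minimal sketch of an exploratory ERP time-window analysis, assuming
# epoched EEG data in MNE-Python; "related"/"unrelated" are hypothetical
# condition labels, not the authors' actual event codes.
import mne

epochs = mne.read_epochs("experiment1-epo.fif")  # hypothetical file name

def mean_window_amplitude(epochs, condition, tmin=0.2, tmax=0.3):
    """Mean amplitude (in microvolts) across trials, channels, and
    samples within the tmin-tmax window for one condition."""
    data = epochs[condition].copy().crop(tmin=tmin, tmax=tmax).get_data()
    return data.mean() * 1e6  # MNE stores EEG in volts

diff = (mean_window_amplitude(epochs, "unrelated")
        - mean_window_amplitude(epochs, "related"))
print(f"Unrelated minus related, 200-300 ms: {diff:.2f} uV")
```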
Dietary assessment in UK Biobank: an evaluation of the performance of the touchscreen dietary questionnaire

UK Biobank is an open access prospective cohort of 500 000 men and women. Information on the frequency of consumption of main foods was collected at recruitment with a touchscreen questionnaire; prior to examining the associations between diet and disease, it is essential to evaluate the performance of the dietary touchscreen questionnaire. The objectives of the present paper are to: describe the repeatability of the touchscreen questionnaire in participants (n 20 348) who repeated the assessment centre visit approximately 4 years after recruitment, and compare the dietary touchscreen variables with mean intakes from participants (n 140 080) who completed at least one of the four web-based 24-h dietary assessments post-recruitment. For fish and meat items, 90 % or more of participants reported the same or adjacent category of intake at the repeat assessment visit; for vegetables and fruit, and for a derived partial fibre score (in fifths), 70 % or more of participants were classified into the same or adjacent category of intake (κ_weighted > 0·50 for all). Participants were also categorised based on their responses to the dietary touchscreen questionnaire at recruitment, and within each category the group mean intake of the same food group or nutrient from participants who had completed at least one web-based 24-h dietary assessment was calculated. The comparison showed that the dietary touchscreen variables, available on the full cohort, reliably rank participants according to intakes of the main food groups.

UK Biobank is a prospective cohort of half a million men and women from across the UK. Information on a broad range of exposures, including diet, was collected from the participants at an assessment centre, and linkage to cancer and death registries, as well as other medical records, enables many hypotheses to be examined(1). The UK Biobank dataset is an open access resource; any bona fide researcher can apply to use the data for health-related research that is in the public interest(1).

At recruitment, the touchscreen questionnaire used in UK Biobank asked twenty-nine questions about diet, most of which gathered information about the average frequency of consumption of main foods and food groups over the past year. Prior to using the data from the touchscreen questionnaire in diet-disease analyses, it is important to examine the reproducibility of the dietary questions. Future studies in UK Biobank may rank participants according to dietary intake from the touchscreen questionnaire and assess relative risk of disease across categories of intakes; misclassification in the ranking of participants according to dietary intakes will be expected to underestimate associations between diet and disease risk(2). Using a subsample of approximately 20 000 participants who completed a repeat of the assessment centre visit(3), about 4 years after recruitment, enables an examination of the combination of the variation in response to the questionnaire as well as true changes in intake over time, both of which contribute to misclassification of long-term dietary intakes(4). The agreement between the touchscreen questionnaire, which asked about frequency of consumption, and a more detailed dietary assessment method which gathers information on actual amounts of food consumed, can be used to further evaluate the touchscreen dietary data.
A web-based 24-h dietary assessment tool(5) was also used in UK Biobank to gather additional information on dietary intakes; over 200 000 participants completed at least one 24-h dietary assessment, and the mean intakes from the 24-h dietary assessments can be used for this purpose. The objectives of the present paper are to describe the reproducibility of the touchscreen questions, using the subsample of participants who repeated the assessment centre visit, and to examine the agreement between the dietary touchscreen variables and the group mean intakes from the web-based 24-h dietary assessments conducted on a large subsample of participants.

UK Biobank

UK Biobank is a prospective cohort of half a million middle-aged men and women recruited from the UK in 2006 (pilot phase) and 2007-2010 (main phase). People aged 40-69 years who lived within reasonable travelling distance (25 km) of one of the twenty-two assessment centres in England, Scotland and Wales were identified from National Health Service patient registers and invited to attend an assessment centre. Permission for access to patient records for recruitment was approved by the Patient Information Advisory Group (subsequently replaced by the National Information Governance Board for Health and Social Care) in England and Wales, and the Community Health Index Advisory Group in Scotland. At the UK Biobank assessment centres, a touchscreen questionnaire was used to collect information on sociodemographic characteristics, diet and other lifestyle exposures, general health, and medical history. Physical measurements were also taken and participants provided blood and urine samples. Participants are followed up via linkage to cancer and death registries, as well as other health records(1). This study was conducted according to the guidelines laid down in the Declaration of Helsinki and all procedures involving human subjects/patients were approved by the North West Multi-centre Research Ethics Committee. At the touchscreen station, all participants gave informed consent to participate in UK Biobank and be followed up, using a signature capture device. The UK Biobank protocol is available online (http://www.ukbiobank.ac.uk/wp-content/uploads/2011/11/UK-Biobank-Protocol.pdf). The touchscreen questionnaire and other resources are available on the UK Biobank website (http://www.ukbiobank.ac.uk/resources/).

Dietary assessment

Touchscreen questionnaire. The touchscreen questionnaire used in the main study contained twenty-nine questions about diet and eighteen questions about alcohol. The touchscreen questionnaire asked about the frequency of consumption over the past year of the following food groups: cooked vegetables, salad/raw vegetables, fresh fruit, dried fruit, oily fish, other fish, processed meats, poultry, beef, lamb, pork, cheese, salt added to food, tea, and water, as well as questions on the type of milk most commonly consumed, type of spread most commonly consumed, number of slices and type of bread most commonly consumed, number of bowls and type of breakfast cereal most commonly consumed, cups of coffee and type most commonly consumed, as well as questions on the avoidance of specific foods and food groups (eggs, dairy products, wheat, sugar), age last ate meat (for participants who reported never consuming processed meats, poultry, beef, lamb or pork), temperature preference of hot drinks, changes in diet in the past 5 years, and variation in diet.
Four of the dietary questions used in the pilot study were altered slightly for the main phase: these were the questions on avoiding specific foods and food groups; spread type; bread type; and variation in diet. A total of 3776 participants completed only the pilot version of the touchscreen; for analyses on these questions the participants answering only the pilot version were excluded. Details of the possible answers for each dietary touchscreen question are given in the Supplementary Methods(6,7). We also generated a partial fibre score from the touchscreen questionnaire using the questions on fresh fruit, dried fruit, raw vegetables, cooked vegetables, bread type and bread intake, and breakfast cereal type and breakfast cereal intake. Further detail on how we generated the partial score is given in the Supplementary Methods and Supplementary Table S1.

Web-based 24-h dietary assessments. In early 2009, the main study protocol was modified to include a number of enhancements to the assessment centre visit(8). These enhancements included the Oxford WebQ, a web-based 24-h dietary assessment tool, which asks about the consumption of up to 206 types of foods and thirty-two types of drinks during the previous 24 h. The mean daily intakes of nutrients were calculated by multiplying the frequency of consumption of each food or drink by a standard portion size and the nutrient composition of that particular item. The web-based 24-h dietary assessment has been compared with an interviewer-administered 24-h recall completed on the same day, with Spearman's correlation coefficients for the majority of nutrients calculated from the WebQ ranging between 0·5 and 0·9 (mean of 0·6)(5). Participants who were recruited between April 2009 and September 2010 completed the 24-h dietary assessment at the assessment centre. In addition, after the recruitment period closed, an email was also sent out in four cycles to participants who had provided an email address at recruitment, inviting them to complete the Oxford WebQ online using their own computer. The email invitations were sent on variable days of the week, and participants were given 3 d to complete it for cycles 1 and 2; this was extended to 14 d for cycles 3 and 4, after which the link expired. For all analyses, we excluded 24-h dietary assessments where the energy intakes were greater than 20 000 kJ for men (1758 records from a total of 203 955 (0·86 %)) and 18 000 kJ for women (1736 records from a total of 254 798 (0·68 %)). In a sensitivity analysis, we also excluded 24-h dietary assessments where participants specified that their diet for that day was not typical because of fasting or illness; this did not have a large effect on the results, so all results are reported with these participants included.

Repeatability of the touchscreen questionnaire. Approximately 20 000 participants who resided in the area surrounding UK Biobank's coordinating centre in Stockport undertook a full repeat of the assessment centre visit, between August 2012 and June 2013, approximately 4 years after recruitment(3). To assess the long-term repeatability of the dietary questions on the touchscreen questionnaire, as well as the new partial fibre score, we used the subsample of participants who had completed the repeat assessment centre visit and examined the agreement between participants' responses to the dietary questions on the touchscreen questionnaire completed at baseline and the repeat visit.
For this analysis, for questions where the possible responses were categorical, i.e. questions on fish, meat, cheese, types of milk, spread, bread, cereal, salt added to food, temperature of hot drinks, major changes to diet, and variation in diet, we cross-tabulated the answers as recorded. For questions that used direct entry responses, we truncated or collapsed answers into categories to enable cross-tabulation as follows: for servings of fruit and vegetables we used 0, 1, 2, 3, 4, ≥5; for the derived partial fibre score we categorised participants into fifths based on the whole cohort. For age last ate meat we used 0-10, 11-20, 21-30, 31-40, 41-50, 51-60, ≥61 years; for slices of bread we used 1-5, 6-10, 11-15, 16-20, 21-25, 26-30, ≥31; for bowls of breakfast cereal we used 0, 1, 2, 3, 4, 5, 6, 7, ≥8; for cups of tea and coffee and glasses of water we used 0, 1, 2, 3, 4, 5, ≥6. For all questions, participants selecting 'do not know', 'prefer not to answer' or 'less than one' were assigned to separate categories, except for number of bread slices, where 'less than one' was combined with '0' because of very low numbers for both of these groups. For the question on foods avoided we created binary variables for each food item, e.g. consumers/non-consumers of dairy products.

After excluding participants who answered 'do not know' or 'prefer not to answer' at either baseline or the repeat visit, we also assessed agreement using the κ coefficient. Bootstrapping with 10 000 replications was used to calculate CI around the κ coefficient. In a separate analysis, we also further excluded participants who reported, at the repeat visit, making a major change to their diet in the past 5 years. We also examined κ coefficients by sex, age (<55 years, ≥55 years) and BMI (<25 kg/m², ≥25 kg/m²). For most of the dietary touchscreen questions, the categories of responses to the dietary questions were ordinal, ranging from least frequently eaten to most frequently eaten; therefore for these questions the κ coefficient with quadratic weighting was used, which is equivalent to the intra-class correlation coefficient and allows for the fact that a change from category 1 to category 2 reflects closer agreement over time than, for example, a change from category 1 to category 4(9). κ values >0·80 indicate excellent agreement, values between 0·61-0·80 substantial agreement, 0·41-0·60 moderate agreement, 0·21-0·40 fair agreement, and ≤0·20 poor agreement(10). For questions where the responses were not ordinal, e.g. bread or spread type mainly used, only the percentage in the same category is given; κ values were not calculated.
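A minimal sketch of this agreement statistic is given below, assuming two arrays of ordinal category codes. The data are simulated; scikit-learn's cohen_kappa_score with quadratic weights stands in for the weighted κ described above, with a simple percentile bootstrap for the CI.

```python
# Minimal sketch: quadratic-weighted kappa between baseline and repeat
# responses, with a bootstrap CI. The response arrays are hypothetical
# ordinal categories (0..5), not UK Biobank data.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(42)
baseline = rng.integers(0, 6, size=2000)                      # categories 0..5
repeat = np.clip(baseline + rng.integers(-1, 2, size=2000), 0, 5)

kappa = cohen_kappa_score(baseline, repeat, weights="quadratic")

boot = []
for _ in range(10_000):  # 10 000 replications, as in the paper
    idx = rng.integers(0, len(baseline), len(baseline))
    boot.append(cohen_kappa_score(baseline[idx], repeat[idx],
                                  weights="quadratic"))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"weighted kappa = {kappa:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```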
Agreement between the intakes of foods and food groups and the partial fibre score from the touchscreen dietary questions and group mean intakes from the 24-h dietary assessment. We created new variables for the weight (in g) from the 24-h dietary assessments of the food groups that were included in the touchscreen questionnaire: total vegetables; total fresh fruit; total dried fruit; oily fish; other fish; processed meat; poultry; beef; lamb; pork; cheese; tea; caffeinated coffee; decaffeinated coffee; water; white sliced bread; granary, brown or mixed-flour sliced bread; wholemeal sliced bread; bran cereal; wholewheat cereals; porridge; muesli; and plain cereals, sweetened oat crunch-type cereals, other sweetened cereals, and other cereals. This was done by using the steering file of the Oxford WebQ, which contains the serving size of each food item listed in the 24-h dietary assessment, in g (data not shown). To estimate daily intake, the serving size in g was multiplied by the frequency reported in the 24-h dietary assessment. The top frequency category was open ended and differed by food group; these were coded so that 3+ = 3, 4+ = 4, 5+ = 5, 6+ = 6. 'Less than one' was coded as 0·5. Dietary variables from the 24-h dietary assessments were chosen to match the touchscreen food groups as closely as possible; details of the individual dietary variables from the 24-h dietary assessment that formed each food group are given in Supplementary Table S2.

For the purposes of comparing the touchscreen dietary variables with the 24-h dietary assessments, we grouped participants into categories for each food group based on the touchscreen questionnaire (for the main food groups of meat, fruit and vegetables we typically used four categories per food group), and these were compared with the 24-h dietary assessments in two ways. Firstly, within each category we calculated the mean intake (in g) of the same food group from participants who completed the 24-h dietary assessment tool at the assessment centre; this first analysis shows how well the touchscreen and 24-h dietary assessment tool agree when they are completed on the same day. Secondly, in each category we calculated the group mean intake of the corresponding food group from 24-h dietary assessments from participants who had completed one or more online 24-h dietary assessments. For participants who completed more than one online 24-h dietary assessment, we first averaged the values from all of their completed online dietary assessments. In this second analysis, we excluded the 24-h dietary assessments that were completed at the assessment centre on the same day as the touchscreen questionnaire; we used only the online 24-h dietary assessments because the aim was to take into account both change over time and variation in day-to-day intakes, and therefore we wanted a lag time of at least a few months between the touchscreen questionnaire and the 24-h dietary assessment.

To rank the participants by weekly red meat consumption based on the touchscreen, we summed the frequencies for beef, pork and lamb/mutton, using the following coding: 'never' = 0, 'less than once per week' = 0·5, 'once per week' = 1, '2-4 times per week' = 3, '5-6 times per week' = 5·5, 'once or more daily' = 7. The same approach was used for total meat consumption based on the touchscreen, which was the sum of processed meat, poultry, beef, pork and lamb/mutton; the coding sketch below illustrates this scoring.
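A minimal sketch of the red meat scoring just described, assuming the touchscreen responses are available as the category strings listed above.

```python
# Minimal sketch of the frequency coding described above: touchscreen
# categories are mapped to times per week and summed over beef, pork and
# lamb/mutton to rank participants by red meat consumption.
FREQ_PER_WEEK = {
    "never": 0.0,
    "less than once per week": 0.5,
    "once per week": 1.0,
    "2-4 times per week": 3.0,
    "5-6 times per week": 5.5,
    "once or more daily": 7.0,
}

def red_meat_score(beef: str, pork: str, lamb: str) -> float:
    """Weekly red meat frequency from the three touchscreen responses."""
    return sum(FREQ_PER_WEEK[item] for item in (beef, pork, lamb))

print(red_meat_score("once per week", "less than once per week",
                     "2-4 times per week"))  # 4.5 times per week
```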
Results

The initial dataset for the present study consisted of 502 640 participants who completed the touchscreen questionnaire at the recruitment visit. Of the participants, 20 348 repeated the touchscreen questions at the repeat assessment centre visit; the median time between administrations was 4·4 (25th-75th percentile 3·7-5·0) years. In total, 210 128 participants completed at least one 24-h dietary assessment with plausible energy intakes and 126 096 completed at least two 24-h dietary assessments; 70 046 participants completed a 24-h dietary assessment at the recruitment centre, and about half of these participants (n 35 322) also completed one or more online 24-h dietary assessments (Fig. 1 and Table 1).

Basic participant characteristics are shown for the whole cohort, the subsample that completed a repeat assessment centre visit, and the subsample that completed at least one 24-h dietary assessment in Table 2. Participants who completed the repeat assessment centre visit or at least one 24-h dietary assessment were more likely to have a university degree or vocational qualification and slightly less likely to smoke, compared with the full cohort.

Repeatability of the touchscreen questionnaire

Table 3 shows the responses to the fruit and vegetable questions and the categorisation of participants into fifths based on the new partial fibre score, from the touchscreen questionnaire completed at baseline and at the repeat visit, for the subsample of participants who completed a repeat visit. Table 4 shows the results for the meat and fish questions. Table 5 shows the agreement and the κ coefficient with quadratic weighting (κ_weighted) for these questions. Generally there was good agreement between reported consumption at recruitment and at the repeat assessment centre visit, approximately 4 years later. After excluding participants who selected 'prefer not to answer' or 'do not know' at either the recruitment or repeat assessment centre visit, the percentage of participants who were classified into the same or adjacent categories (same category only, total number of categories) was 82 % (42 %, seven categories) for cooked vegetables, 72 % (37 %, seven categories) for raw vegetables, 82 % (43 %, seven categories) for fresh fruit, 72 % (51 %, seven categories) for dried fruit, and above 95 % (above 55 %, six categories for each item) for all fish and meat items, except for processed meat, which was 90 % (52 %, six categories). The weighted κ coefficient showed substantial agreement for fresh fruit, oily fish, processed meat, poultry, beef, and lamb, and moderate agreement for cooked vegetables, raw/salad vegetables, dried fruit, the partial fibre score, other types of fish (non-oily) and pork. After excluding participants who reported at the repeat visit that they made a major change to their diet in the past 5 years, the weighted κ coefficient increased slightly for all items, and fibre and pork then showed substantial agreement. The κ coefficients were similar for men and women, and for younger and older participants. Participants with a BMI < 25 kg/m² had higher κ coefficients than participants with BMI ≥ 25 kg/m² (Supplementary Table S3). The agreement for the other touchscreen dietary variables is shown in Supplementary Table S4.

Agreement between the intakes of food groups and partial fibre score estimated from the touchscreen dietary questions and group mean intakes from the 24-h dietary assessment

After averaging the values from all 24-h dietary assessments from participants who completed more than one 24-h dietary assessment, the mean daily intakes were 217 g for vegetables, 202 g for fresh fruit, 16·3 g for fibre, 11 g for oily fish, 16 g for white fish, 31 g for poultry, 58 g for red and processed meat, and 92 g for total meat. For women the mean intakes were 237 g for vegetables, 213 g for fresh fruit, 16·1 g for fibre, 12 g for oily fish, 15 g for other fish, 31 g for poultry, 50 g for red and processed meat, and 84 g for total meat. For men they were 191 g for vegetables, 189 g for fresh fruit, 16·6 g for fibre, 11 g for oily fish, 16 g for white fish, 31 g for poultry, 67 g for red and processed meat, and 102 g for total meat.
For all foods and food groups, the comparisons with the 24-h dietary assessments that were completed at the assessment centre showed good agreement, with slight regression to the mean (i.e. a narrower range of intakes from the low to high categories). The comparison with the online 24-h dietary assessments showed greater regression to the mean (Table 6 and Supplementary Table S5).

Discussion

Other studies have found some differences in the reproducibility of an FFQ between normal-weight and overweight participants for foods(12) and nutrients(13), but the differences were inconsistent and the reproducibility was not systematically worse among overweight participants. The poorer reproducibility of the dietary touchscreen questions among overweight participants should be considered in future UK Biobank studies.

A 4-year period between administrations of the touchscreen questionnaire allows us to examine the long-term reproducibility of the questionnaire; any changes will be due to a combination of variability in response to the questions and true dietary changes over time, both of which contribute to misclassification of long-term dietary intakes(3). Examining this longer-term reproducibility is vital for future prospective work from UK Biobank on diet and disease development, and we have shown that over 4 years of follow-up the vast majority (>70 %) of participants report the same or adjacent category of consumption for the main food groups of fruit, vegetables, meat and fish, as well as for our derived partial fibre score (in fifths).

The mean daily intakes of the main food groups from all 24-h dietary assessments in UK Biobank were similar to or slightly higher than those of the same food groups from the UK National Diet and Nutrition Survey (NDNS) for adults aged 19 years or older, which is not unexpected given the known under-reporting of energy intakes in NDNS, by a magnitude of approximately 30 %(14), and the non-representativeness of the UK Biobank cohort(15). In addition, the age categories in NDNS (19-64 years, and 65 years or older) are wider than the age range of participants in UK Biobank (40-69 years). The mean intake of vegetables from the 24-h dietary assessments in UK Biobank was 217 g compared with 183-186 g for adults aged 19 years or older in NDNS; for fresh fruit it was 202 g in UK Biobank and 96-127 g for fresh/canned fruit in NDNS; for oily fish it was 11 g compared with 8-12 g; for white fish it was 16 g compared with 12-16 g; for poultry it was 31 g compared with 23-38 g; for red and processed meat it was 58 g compared with 63-71 g; and for total meat it was 92 g compared with 86-109 g(14).

The comparison of the touchscreen dietary variables with the group mean intakes from the 24-h dietary assessments showed that the touchscreen dietary questions discriminate between low and high intakes of main food groups. The comparison also showed classic regression to the mean. This occurs because participants will randomly over- and under-report on the touchscreen questionnaire; when participants are categorised based on the answers to the touchscreen questions, the lowest category will include a disproportionate number of people who reported an intake lower than their true intake, and the top category will include a disproportionate number of people who reported an intake higher than their true intake, thus the mean intakes from the 24-h dietary assessment re-measurement for each category will be closer together(16).
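The toy simulation below illustrates this mechanism with synthetic data: participants are categorised by one noisy report and the category means are recomputed from a second, independent noisy measurement, which pulls the category means together. All distributions are hypothetical.

```python
# Toy simulation of the regression-to-the-mean effect described above.
import numpy as np

rng = np.random.default_rng(0)
true_intake = rng.normal(200, 50, 100_000)               # hypothetical g/day
touchscreen = true_intake + rng.normal(0, 40, true_intake.size)
recall_24h = true_intake + rng.normal(0, 40, true_intake.size)

# Categorise by the touchscreen report, then recompute category means
# from the independent 24-h measurement: the second set of means spans
# a narrower range than the first.
quartile = np.digitize(touchscreen, np.quantile(touchscreen, [0.25, 0.5, 0.75]))
for q in range(4):
    sel = quartile == q
    print(f"Q{q + 1}: touchscreen mean {touchscreen[sel].mean():6.1f} g, "
          f"24-h recall mean {recall_24h[sel].mean():6.1f} g")
```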
For the comparison with the group mean intakes from the 24-h dietary assessment completed at the recruitment centre, as expected, there is less regression to the mean because the two measures were completed on the same day. The new variables that we generated, of food groups in weight amounts from the 24-h dietary assessments, can be used to correct for regression dilution bias in diet-disease analyses. This can be done using the approach we have shown in this paper, by grouping participants according to baseline intakes reported at the touchscreen and calculating the mean intakes from the 24-h dietary assessments within each group (sketched in code below). The relative risk of disease can be reported for each category of intake, and the data from the 24-h dietary assessments can be used to generate the trends in risk per increment (in g/d) in dietary intake.

The touchscreen questionnaire included questions on fruit, vegetables, bread and breakfast cereals, and from this we were able to estimate a partial fibre score for the whole cohort. According to the NDNS, the food groups in the touchscreen questionnaire that were used to estimate the fibre score contribute 54-60 % of the total fibre intakes for this age group. The other categories in NDNS that were major sources of fibre but were not asked about in the touchscreen were: potatoes; pasta, rice, pizza and other miscellaneous cereals; biscuits; and buns, cakes, pastries and fruit pies; which together contributed another 22-25 % to fibre intakes for adults aged 19 years and over(14). Therefore, our estimated partial fibre score from the touchscreen is not a complete estimate of fibre intake. For epidemiological studies that investigate the associations between dietary intakes and health outcomes it is not necessary to determine food and nutrient intakes with absolute accuracy, but it is important to demonstrate that the questionnaire can discriminate between people with low and high intakes; the comparison of the touchscreen partial fibre score with the group mean intakes from the 24-h dietary assessments confirmed that the touchscreen partial fibre score that we derived does separate UK Biobank participants with low and high intakes of dietary fibre. This variable will be returned to UK Biobank, and there will now be an estimated partial fibre score for the whole cohort, which can be used to assess the relationships between fibre and disease. The partial fibre score should not be regarded as a measure of absolute fibre intake and therefore it should not be used for direct comparison with recommended intakes or intakes in other populations.

This work has shown that the main dietary touchscreen variables, including the new partial fibre score, show moderate to substantial reproducibility over a 4-year period, and comparison with the mean intakes from the 24-h dietary assessments showed that the touchscreen variables reliably rank participants according to the intake of main foods and food groups. This work underlies future research examining diet-disease associations in UK Biobank.
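A minimal sketch of that grouping step, assuming a pandas DataFrame with hypothetical column names for the touchscreen category and the averaged online 24-h intake.

```python
# Minimal sketch of the grouping approach described above. The column
# names and values are hypothetical: a touchscreen frequency category
# and the mean intake (g/day) from the online 24-h assessments.
import pandas as pd

df = pd.DataFrame({
    "touchscreen_category": ["<1/wk", "1/wk", "2-4/wk", ">=5/wk"] * 3,
    "intake_24h_g": [20, 45, 80, 120, 25, 50, 85, 110, 15, 40, 90, 130],
})

# Group mean 24-h intake within each touchscreen category; these means
# can serve as the category-level exposure when estimating trends in
# risk per g/day increment (a correction for regression dilution bias).
category_means = df.groupby("touchscreen_category")["intake_24h_g"].mean()
print(category_means)
```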
Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/jns.2017.66

Table footnotes:
* Group mean intakes from the 24-h dietary assessments completed online.
† One piece of fresh fruit is equal to one serving of fruit, and two pieces of dried fruit are equal to one serving of fruit.
‡ Total red meat is the sum of beef, lamb and pork from the touchscreen questions.
§ Total meat is the sum of processed meat, poultry, beef, lamb and pork from the touchscreen questions.
Proceedings to the 7th Annual Conference of the Particle Therapy Cooperative Group North America (PTCOG-NA)

Purpose: Cancer cells produce innate immune signals following detection of radiation-induced cytosolic DNA via signaling pathways such as cGAS-STING. High linear energy transfer (LET) radiations induce more DNA double-strand breaks (DSBs) per unit dose than low-LET radiations, potentially enhancing immunogenic effects. This work explores the in vitro dose response characteristics of the pro-immunogenic interferon-beta (IFNβ) and the cGAS-STING antagonist three-prime repair exonuclease 1 (TREX1) from varying-LET radiations. Methods: IFNβ and TREX1 expression were measured in MCC13 cells irradiated with graded doses of x-rays or fast neutrons (comparable LET to carbon-12) via ELISA, immunofluorescence, and qPCR assays. Laboratory measurement of the RBE for IFNβ production (RBE_IFNβ) and TREX1 upregulation (RBE_TREX1) was compared to the modeled RBE for DSB induction (RBE_DSB) from Monte Carlo DNA damage simulations. RBE_IFNβ models were applied to radiation transport simulations to quantify the potential secretion of IFNβ from representative proton, helium-4, and carbon-12 beams. Results: Maximum IFNβ secretions occurred at 5.7 Gy and 14.0 Gy for neutrons and x-rays, respectively (RBE_IFNβ of 2.5). TREX1 signal increased linearly, with a four-fold higher upregulation per unit dose for fast neutrons (RBE_TREX1 of 4.0). Monte Carlo modeling suggests an enhanced Bragg peak-to-entrance ratio for IFNβ production in charged particle beams. Conclusion: High-LET radiation initiates larger IFNβ and TREX1 responses per unit dose than low-LET radiations. RBE_IFNβ is comparable to published values for RBE_DSB, whereas RBE_TREX1 is roughly twofold higher. Therapeutic advantages of high-LET versus low-LET radiation remain unclear. Potential TREX1-targeted interventions may enable IFNβ-mediated immunogenic responses at lower doses of high-LET radiations.

Aim: To implement lattice radiotherapy using proton pencil beam scanning, and demonstrate treatments that are spatially fractionated in physical dose (PD), with significant escalation of biologic dose (BD) and dose-averaged linear energy transfer (LET_d) in the vicinity of the high-PD regions. Method: For 5 patients with bulky tumors, spatial proton dose fractionation inside the GTV was achieved using proton lattice radiotherapy (pLRT). This involves a 3D lattice of 1.5-cm diameter spherical dose regions separated by 3 cm on average. pLRT plans were created with Eclipse (Varian Medical Systems). Two fields with an opening angle of at least 40 degrees were used to reduce skin dose at entrance. Dose valleys between spheres were kept below 40% of the peak PD. The resulting LET_d distributions were calculated with an in-house GPU-based Monte Carlo simulation. BD was estimated from LET_d and PD by using published formulae that are based on the linear-quadratic model, as well as a simpler model that assumes a linear relationship between BD and the product of LET_d (in keV/µm) and PD: BD = 1.1 × PD × (0.08 × LET_d + 0.88). Results: Within the high-dose spheres, peak BD values in excess of 140% of the prescription dose were observed (see figures). LET_d values in the spheres reached values greater than 4 keV/µm. This was achieved without using any explicit LET_d optimization technique, and is a direct consequence of end-of-range energy deposition within the spheres. Conclusion: Besides spatial fractionation, a feature of pLRT is BD escalation. This can be advantageous for debulking radioresistant or hypoxic tumors.
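To make the simplified model concrete, the short function below evaluates the linear BD formula quoted in the abstract above; the example dose and LET_d values are hypothetical, chosen only to be of the same order as the reported in-sphere values.

```python
# Worked example of the simplified linear biologic-dose model quoted
# above: BD = 1.1 * PD * (0.08 * LET_d + 0.88), with LET_d in keV/um
# and PD the physical dose in Gy.
def biologic_dose(physical_dose_gy: float, let_d_kev_um: float) -> float:
    """Biologic dose (RBE-weighted Gy) from physical dose and LET_d."""
    return 1.1 * physical_dose_gy * (0.08 * let_d_kev_um + 0.88)

# For a hypothetical 10 Gy peak with LET_d of 4 keV/um:
print(biologic_dose(10.0, 4.0))  # 13.2 Gy, i.e. 132% of the physical dose
```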
Background: This study investigates the radiosensitizing effect of Ganetespib for proton irradiation at a proximal and a distal position in a SOBP, in comparison to photon irradiation. Rad51, a key protein of homologous recombination repair (HRR), is downregulated by the HSP90 inhibitor Ganetespib, which provides a promising rationale for a specifically proton-sensitizing approach. Methods and Materials: A549 and FaDu cells were treated with low-dose Ganetespib and irradiated with 200 kV photons or with protons at a proximal, low linear energy transfer (LET, 2.1 keV/µm) and a distal, higher-LET (4.5 keV/µm) position within a SOBP. Cellular survival was determined by clonogenic assay, cell cycle distribution by flow cytometry, Rad51 protein levels by western blotting, and γH2AX foci by immunofluorescence microscopy. Results: Ganetespib reduced clonogenicity in both cancer cell lines exclusively in response to proton irradiation of both investigated LETs. Upon proton irradiation, a more pronounced accumulation of cells in S/G2/M phase became evident, with Ganetespib reducing this population. Rad51 protein levels were more extensively and more persistently elevated in proton- than in photon-irradiated cells, and were suppressed by Ganetespib at each investigated time point. Immunofluorescence staining demonstrated a similar induction and removal of γH2AX foci independent of Ganetespib, which suggests compensation by more error-prone, Rad51-independent repair pathways. Conclusion: Low-dose Ganetespib significantly sensitized the cancer cells to proton irradiation. Hence, this study supports pursuing research on the combination of Ganetespib with proton radiotherapy for prospective clinical exploitation.

Purpose: The normal tissue sparing effects of ultra-high dose rate radiation (FLASH) remain poorly understood. We present preliminary results of mouse FLASH proton irradiation from a low-energy proton system (50 MeV) optimized for small-animal radiobiological research. Methods: We irradiated 6-7-week-old female C57BL/6 mice with whole-lung radiation using the plateau region of a cyclotron-generated 50 MeV preclinical proton beam, transmitting through the whole mouse lung, with beam shaping via customized vertical and horizontal collimators. Mice were stratified into 3 groups: 1) control/sham radiation; 2) conventional dose rate (17 Gy at ~0.5 Gy/s); and 3) FLASH (16-18 Gy at 42-70 Gy/s). Mice were observed for dermatitis. Lung tissue was harvested post-radiation (1 hour, 5 days, 1 month, 3 months, 6 months). H&E and immunohistochemistry (IHC) were performed for γH2AX, cleaved caspase-3, and trichrome. Results: Radiation dermatitis differed between the FLASH and conventional groups: FLASH (grade 0-1: ~90%, grade 2: ~10%); conventional (grade 0-1: ~40%; grade 2-3: ~60%) (Figure 1). One hour post-radiation, lower cleaved caspase-3 IHC staining was seen in the FLASH group versus the conventional group, while γH2AX staining was similar in both groups (Figure 2). More lung airspace disease (fluid and inflammatory cells) was seen in the conventional group at 6 months. Conclusion: Preliminary results of mouse FLASH proton irradiation from a 50 MeV beam suggest FLASH proton radiation leads to less normal tissue toxicity than conventional dose rate radiation. More studies are ongoing.
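As a quick illustration of how different these dose-rate regimes are, the arithmetic below converts the reported doses and dose rates into approximate beam-on times; the midpoint FLASH rate is an assumption for illustration only.

```python
# Back-of-the-envelope beam-on times for the two irradiation arms
# described above, using the reported conventional values and the
# midpoint of the reported FLASH dose-rate range (an assumption).
conventional_s = 17.0 / 0.5    # 17 Gy at 0.5 Gy/s  -> 34 s
flash_s = 17.0 / 56.0          # ~17 Gy at ~56 Gy/s -> ~0.3 s
print(f"conventional: {conventional_s:.0f} s, FLASH: {flash_s:.2f} s")
```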
Experimental setup: The HollandPTC R&D room is equipped with a fixed horizontal beam line providing beam from 70 up to 240 MeV, and intensities from 1 to 800 nA. The room can provide a single pencil beam and large fields with 98% beam uniformity, and a Spread-Out Bragg Peak (SOBP) produced with 2D passive modulators. Recently, the maximum energy of 250 MeV has been released in the R&D room for FLASH applications. The full beam characterisation has been performed together with absolute dose measurements. Results: A 43% transmission efficiency of the ProBeam cyclotron is achieved at 250 MeV. This resulted in a current of around 300 nA at the target position. The beam spot size has a standard deviation of 3.6 mm. The fluence rate was found to be 8 × 10⁶ protons/cm² per second, more than a factor of 100 higher than conventional beams. To further characterise the 250 MeV proton beam at maximum beam current, a specific integral monitor chamber is currently under commissioning in collaboration with the company DE.TEC.TOR. Different cutting-edge solutions are adopted for the ionisation chambers to cope with FLASH intensities and minimise recombination effects. The device is also equipped with X-Y strip ionisation chambers to measure beam size and position.

Purpose: To compare out-of-field dosimetry in proton, neutron, and photon radiotherapy with a 3D-printed anthropomorphic phantom created using a non-ionizing surface scan. Methods: We used a 3D-printed phantom and a tissue-equivalent chamber to measure absorbed dose in a phantom constructed from surface imaging of a female volunteer. Absorbed dose was measured in locations approximating the isocenter, thyroid, pacemaker, esophagus, and fetus positions. Square intracranial fields ranging from 2.8 cm² to 12.8 cm² were delivered using 6 MV flattened and flattening-filter-free (FFF) photon therapy, magnetically scanned layered proton therapy, and 50.5 MeV proton-generated fast neutron therapy. Results: Out-of-field dose was small but measurable for each field, with the exception of the esophagus and fetus positions in proton therapy, for which the measured dose was not distinguishable from background. FFF photon fields produced lower out-of-field dose than conventional flattened fields, and out-of-field dose decreased with distance from the field edge in all locations. Conclusion: Out-of-field absorbed dose is reduced in magnetically scanned proton therapy more than in photon therapy, and is highest in neutron radiotherapy. In each modality, the distance from the field edge determined the magnitude of the out-of-field dose.

Purpose: The purpose of this study was to investigate the impact of range uncertainty in conjunction with setup errors on the dose-averaged linear energy transfer (LET_d) distribution in robustly optimized pencil beam scanning (PBS) proton lung plans. Additionally, the variability of the LET_d distribution in different breathing phases of a 4DCT data set was evaluated. Methods: In this study, we utilized the 4DCT data set of an anonymized lung patient. The tumor motion was approximately 6 mm. A PBS lung plan was generated in RayStation using a robust optimization technique (range uncertainty: ±3.5% and setup errors: ±5 mm) on the CTV for a total dose of 7000 cGy(RBE) in 35 fractions. The average RBE was 1.1. The LET_d distributions were calculated for the nominal plan, 12 plan robustness scenarios (range uncertainty (±3.5%) in conjunction with setup errors (±5 mm)), and ten different breathing phases of the 4DCT data set. Results: For the nominal plan, the mean LET_d was: CTV, 2.22 keV/µm; heart, 5.94 keV/µm; normal lung, 3.40 keV/µm. For the plan robustness scenarios, the mean LET_d was: CTV, 2.26 ± 0.22 keV/µm; heart, 5.88 ± 0.50 keV/µm; normal lung, 3.40 ± 0.28 keV/µm.
For the ten breathing phases, the mean LET_d was: CTV, 2.22 ± 0.04 keV/µm; heart, 5.89 ± 0.17 keV/µm; normal lung, 3.37 ± 0.13 keV/µm. Conclusion: The maximum difference in mean LET_d among the plan robustness scenarios was higher than that among the ten breathing phases. For our 4DCT data set, breathing motion has little effect on the LET_d distribution in the CTV and organs at risk for a PBS plan robustly optimized for setup and range errors.

Purpose: To evaluate the magnitudes of both inter- and intrafraction prostate motion relative to PTV margins, and the impact of an IGRT surveillance program. Materials and Method: A total of 85 patients (all with implanted gold markers in the prostate) treated between January 2019 and January 2021 were selected for this study. Daily marker-based target alignment was performed for each patient. The relative position differences between marker and bony alignment were recorded and compared to the PTV margins of the pelvic nodes. Patients with consistent prostate motion close to or exceeding the pelvic node PTV margin expansion were identified and re-simulated/re-planned to mitigate the impact of systematic errors from interfraction prostate motion. Post-treatment orthogonal radiographs were also obtained to evaluate prostate intrafraction motion. Patients with intrafraction motion exceeding the prostate PTV margins were identified and replanned with a larger PTV expansion. Results: A total of 4 patients were identified with large systematic errors in interfraction prostate motion (Figure 1). After these patients were re-CT-simulated and replanned, the interfraction prostate motion error was largely removed. Four different patients were identified due to off-margin intrafraction motion magnitude (Figure 2) and were replanned with larger prostate PTV margins. Conclusion: The IGRT surveillance program identified 4 patients for re-simulation and re-planning due to large systematic interfraction prostate motion errors, and 4 different patients for re-planning due to large intrafraction errors.

Aim: Proton therapy (PT) is still a limited resource, mainly because current facilities are bulky and costly. We explore the potential of a new design for PT which may facilitate proton treatments in conventional bunkers and allow the widespread use of protons. Methods: The treatment room consists of a linac, a motorized couch for treatments in lying position, and a horizontal proton beamline equipped with beam scanning. When proton plans are suboptimal due to limitations in the available beam directions, high-quality treatment plans may be obtained by delivering protons and photons in the same fraction. We demonstrate this concept for a nasopharyngeal cancer case. Treatment planning is performed by simultaneously optimizing IMRT and IMPT plans based on their cumulative physical dose. Stochastic optimization is applied to mitigate systematic setup and proton range uncertainties. Results: The combined treatment uses photons to improve dose conformity while protons allow reducing the integral dose in normal tissues (Figure 1). The combined treatment improves on single-modality IMRT and IMPT plans for the main organs at risk (Figure 2a). The lower doses that can be obtained with the combined treatment translate into a 10%, 6%, and 4% lower risk for oral mucositis, xerostomia, and dysphagia, respectively, compared to the pure IMRT plan in the nominal scenario. Stochastic optimization yields robust plans although protons and photons deliver inhomogeneous dose contributions (Figure 2b).
Conclusions: Compact and affordable PT systems will likely include a fixed beamline rather than a gantry. When proton-only plans are suboptimal, proton-photon combinations may retain high treatment quality.

Background: This work describes our treatment planning strategies to reduce the effect of range and dose calculation uncertainties in proton therapy plans for targets in the spine with high-Z material implants. Method and Materials: Treatment planning was carried out with the Eclipse TPS from Varian Medical Systems. The CT numbers of the high-Z material and of artifacts were overridden to the values corresponding to the proton RSP of the material and to surrounding tissue, respectively. A posterior and two posterior-oblique passively scattered proton fields were used to minimize the effect of dose calculation and range uncertainties. The accuracy of dose calculation in Eclipse was evaluated by recalculating the dose using Monte Carlo simulation (MCS). The robustness of target coverage under 5 to 10% range uncertainties was evaluated to ensure acceptable target coverage and sparing of organs at risk under worst-case scenarios. Results: The use of three fields was found to reduce the effect of range uncertainties and produces a more homogeneous dose distribution compared to that for a single PA field. The 3D maximum dose difference between calculated dose distributions from Eclipse and MCS for three-field plans was found to be 4.1%. Conclusions: The use of three fields is found to produce a proton therapy plan with acceptable dose distributions under dose calculation and range uncertainties. Even with the larger uncertainties due to the presence of high-Z material, the proton therapy plan was preferred over the photon plan due to normal tissue sparing.

Purpose: To evaluate a knowledge-based model in prostate proton treatment planning. Methods: The knowledge-based RapidPlan module in the Varian Eclipse TPS (v16.1) was used for prostate proton treatment planning. A model was created and trained using 40 patients for a prescription of 70.2 Gy to the prostate and seminal vesicles in 26 fractions. The model was evaluated by analyzing the goodness-of-fit summary statistics and identifying possible outliers. The established model was then tested on five additional prostate patients. The model-generated plans were compared to the clinically used plans to determine the accuracy and performance of the model in target coverage and organs-at-risk (OAR) sparing. Results: The chi-squared values of the RapidPlan training result were 1.077, 1.127, 1.208, and 1.119 for bladder, left femur, right femur, and rectum, respectively. The average optimization time was less than 10 minutes in a single run. CTV D99% was >99% for all RapidPlan plans. OAR sparing was superior with RapidPlan, e.g. ΔD_mean = −1.12 Gy (bladder, 13.07 Gy vs. 14.20 Gy). Conclusion: The RapidPlan model saved 66% of the planning time and produced comparable CTV coverage and superior or equivalent OAR sparing for prostate patients.

Purpose: To compare proton ranges calculated in the treatment planning system (TPS) based on images of dual-layer CT (DLCT) and single-energy CT (SECT) with those from Monte Carlo simulations. Methods and Materials: Electron density (ED), atomic number (Z), and conventional CT images were acquired on a Philips IQon spectral CT for nine animal tissues in 19.5 cm × 9 cm × 19 cm acrylic boxes. The DLCT method directly utilized stopping power ratios (SPR) calculated from ED and Z. The SECT method mapped HU to SPR. Treatment plans of 150.3 MeV and 221.3 MeV pristine energy layers were individually created in Varian Eclipse for both methods. The SECT plan was simulated with a fully commissioned TOPAS-based Monte Carlo model of a synchrotron proton beam delivery system (PROBEAT-V). The proton ranges, R90, determined by DLCT and SECT were comparatively evaluated against the MC model. Results: For 221.3 MeV, the deviation of the proton range in the TPS from the Monte Carlo simulation reduced from 1.9-4.6 mm for SECT to 0.2-1.3 mm with DLCT (Table 1). The largest deviations occurred in fresh and frozen lung tissues. For 150 MeV, differences were less dramatic. However, DLCT continued to agree better with the Monte Carlo simulations than SECT for lung tissues. Conclusions: The DLCT-based range calculation in the TPS agreed with the Monte Carlo simulation to within 1.3 mm of 30.96 cm water-equivalent length for all tested animal tissues, and within 0.5 mm for lung tissues. This finding supports the adoption of DLCT for dose calculation in the TPS.
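For readers unfamiliar with the R90 metric used in this comparison, the sketch below shows one common way to extract it from a sampled depth-dose curve by interpolating on the distal falloff; the depth grid and Gaussian peak shape are hypothetical stand-ins for measured or simulated data.

```python
# Minimal sketch of extracting R90 (the distal depth where dose falls to
# 90% of its maximum) from a sampled depth-dose curve.
import numpy as np

def r90(depth_cm: np.ndarray, dose: np.ndarray) -> float:
    """Distal depth at which dose drops to 90% of the maximum."""
    i_max = int(np.argmax(dose))
    distal_depth = depth_cm[i_max:]
    distal_dose = dose[i_max:]
    target = 0.9 * dose[i_max]
    # The distal dose falls monotonically, so interpolate on the
    # reversed arrays (np.interp needs ascending x values).
    return float(np.interp(target, distal_dose[::-1], distal_depth[::-1]))

depth = np.linspace(0, 32, 321)
dose = np.exp(-0.5 * ((depth - 30.0) / 0.4) ** 2)  # toy Bragg-peak shape
print(f"R90 = {r90(depth, dose):.2f} cm")
```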
Objective: This study aims to examine the usefulness of a PTV created by a uniform expansion of the CTV to improve CTV coverage in robustly optimized intensity modulated proton therapy (IMPT) treatment plans. Method & Materials: IMPT treatment plans were optimized in the Eclipse TPS from Varian Medical Systems using the Nonlinear Universal Proton Optimizer (NUPO) for targets in the brain and other regions of the head and neck. The CTV was selected for robust optimization under eight uncertainty scenarios corresponding to ±3.5% range uncertainty and ±3 mm setup uncertainty. Two plans for each patient were created, one with and one without a non-robust minimum target coverage objective for the PTV. The PTV was customized to avoid overlap with OARs. The robustness of the CTV coverage under the same eight uncertainty scenarios was analyzed for both plans. The D95 for the CTV in the uncertainty-band DVH for the worst-case scenario was used to judge the usefulness of the PTV in improving the robustness of the IMPT plans. Results: The IMPT plans with the minimum PTV target coverage objective were found to have a better D95 for the CTV compared to the IMPT plans without this objective, with comparable DVHs for the OARs. Conclusion: The PTV was found to be useful in improving the robustness of the IMPT plans for target coverage under range and setup uncertainties.

Several models of variable RBE may be implemented in clinical and research-based treatment planning systems for carbon radiotherapy, including the microdosimetric kinetic model (MKM), the stochastic MKM (SMKM), and the Local Effect Model I (LEM), which have not been thoroughly compared. This work compares how the models handle carbon beam fragmentation, providing insight into where model differences arise. Geant4 Monte Carlo was used to simulate clinically realistic monoenergetic and SOBP carbon beams incident on a phantom. Using these, input parameters for each RBE model (microdosimetric spectra, double-strand break yield, kinetic energy spectra, dose) were calculated for the relevant fragment species (H, He, Li, Be, B, and C). Spectra for each fragment were used to calculate the linear and quadratic portions of each RBE model, which were combined with reference values and physical dose to calculate RBE. Calculations found that secondary fragment contributions could exceed 20% of the total physical dose (Figure). When calculated using identical beam parameters, RBE magnitude varied greatly across models and was typically lowest using the MKM.
When compared across fragments, RBE decreased with atomic number for Z < 3 and increased for Z ≥ 3 for RBE(MKM) and RBE(SMKM) (Table). RBE(LEM) increased with Z until dropping sharply at Z = 6. Trends of RBE by fragment varied by LET region for the microdosimetric models only. This study demonstrated that secondary fragments can contribute notably to physical and estimated biological dose, indicating that fragmentation is an important factor in treatment delivery. Similar trends were seen in RBE fluctuations by atomic number for the microdosimetric models, which differed from those of RBE(LEM).

We conducted a phase II clinical trial of chemoradiation for LA-NSCLC examining response-adaptive radiation dose escalation, and compared the proton and photon patient cohorts on this trial. PET non-responders received dose-escalated radiation, whereas PET responders received 60 Gy in 30 fractions. Differences between cohorts were evaluated by Mann-Whitney U-tests and log-rank tests. Results: The cohort of patients treated with proton radiation had significantly lower dose to the lungs and heart with similar PTV volume (Table 1). Median follow-up was 19 months. There was no statistically significant difference in overall survival (2y 61% vs. 40%, p = 0.10), progression-free survival (1y 66% vs. 43%, p = 0.28), locoregional control (1y 89% vs. 78%, p = 0.10), or pneumonitis rate (1y 35% vs. 45%, p = 0.65) between the two patient cohorts (Figure 1). Conclusion: Our cohort of patients treated with proton therapy had lower radiation dose to the lungs and heart, although there was no significant difference in clinical outcomes. Our results may be limited by our sample size, and we await results from larger randomized trials.

In patients with AMNHL, sparing of organs at risk is paramount. We report outcomes of patients with AMNHL receiving chemotherapy followed by either pencil beam scanning (PBS) or double-scattered (DS) PRT. Methods: We retrospectively analyzed data from a convenience sample of 24 patients with AMNHL treated between 2012 and 2019 at our institution. Median age was 36 (range 21-74), and the disease stage distribution was 10 stage I, 10 stage II, 2 stage III, and 2 stage IV. Patients received either R-CHOP (18/24) or R-CHP-BV (6/24). Bulky disease (≥7.5 cm) was present in 22/24, and 10/24 had B symptoms. Radiation toxicity was graded by CTCAE v5. Results: Median follow-up was 38.8 months (range 8.3-91.5 months) and the median PRT dose was 30.6 Gy (range 30-39.6 Gy). DS was used in 13/24, PBS in 11/24, and both in 6/24. Deep inspiration breath hold (DIBH) was used in 10/24. The median mean lung dose was 4.6 Gy (range 2.4-7.4 Gy) for PBS and 6.8 Gy (range 3-16.7 Gy) for DS, while the median mean heart dose was 10.5 Gy (range 5.9-16.4 Gy) for PBS and 7.3 Gy (range 0.01-16.9 Gy) for DS. Grade 1 (G1) toxicities occurred in 16 patients and G2 toxicities in 4. G1 radiation pneumonitis occurred in one patient, with no ≥G2 pneumonitis. A complete metabolic response was observed in 87.5% of patients (Figure 1). Conclusion: Consolidative PRT following chemotherapy for AMNHL resulted in favorable outcomes without high-grade toxicities.

Purpose: To systematically review all dosimetric studies investigating the impact of deep inspiration breath hold (DIBH) compared with free breathing (FB) in mediastinal lymphoma patients treated with proton therapy, as compared to IMRT-DIBH.
Materials and Methods: A systematic search in PubMed was done to identify studies of mediastinal lymphoma patients with dosimetric comparisons of proton-FB and/or proton-DIBH with IMRT-DIBH, including mean heart, lung, and breast doses (MHD, MLD, and MBD, respectively). Case reports were excluded. As of December 2020, eight studies fit these criteria. Results: The trends in dose are summarized in the table. MHD was reduced (n = 2), similar (<1 Gy difference, n = 2), or worse (2.5 Gy worse, n = 1) for proton-FB compared with IMRT-DIBH. MLD and MBD in all studies were reduced for proton-FB compared with IMRT-DIBH. Proton-DIBH led to lower MHD (2.3-7.4 Gy difference) and MLD (0.7-1.1 Gy difference) compared to proton-FB, while MBD remained within 0.3 Gy in all studies. Compared with IMRT-DIBH, proton-DIBH reduced the MHD (1.5-10.1 Gy, n = 7) or was similar (n = 1). MLD (1.7-3.9 Gy) and MBD (1.5-7.8 Gy) were reduced with proton-DIBH in all studies. Integral dose was similar between proton-FB and proton-DIBH, and both were substantially lower than IMRT-DIBH. Conclusion: Accounting for heart, lung, breast, and integral dose, proton therapy (FB or DIBH) was superior to IMRT-DIBH. Proton-DIBH can lower the dose to the lungs and heart even further compared with proton-FB.

Purpose: To report the dosimetric impact of rotation of temporary tissue expanders (TEs) in patients receiving post-mastectomy intensity modulated proton radiotherapy (IMPT). Methods: Between 2017 and 2020, we identified consecutive patients in an internal registry as having a rotated TE during treatment. TE rotations were identified on daily setup kV imaging or CT scans. Clinical target volumes (CTV) and organs at risk (OAR) were contoured on post-rotation CT scans. Analysis of pre- and post-rotation dosimetry was completed in ProKnow (Elekta, Stockholm, Sweden). Results: Thirty-five patients with TE reconstruction undergoing IMPT were identified as having 47 instances of TE rotation with post-rotation CT scans in treatment position available for analysis. 46/47 pre-rotation plans met CTV (range, 4005-5000 cGy) coverage of D95% > 95%, while 16/47 met this constraint post-rotation. All pre- and post-rotation plans met coverage of D90% > 90%. 12/14 pre- and 7/14 post-rotation plans met a boost CTV (range, 5625-6000 cGy) coverage of D90% > 90%. D0.01cc [Gy] to the left anterior descending (LAD) artery increased 1.5-fold, from an average of 13.6 Gy to 19.6 Gy. D0.01cc [Gy] to the right coronary artery (RCA) increased 1.4-fold, from an average of 12.3 Gy to 16.2 Gy. LAD and RCA mean doses increased 2.4-fold and 1.5-fold, respectively. Mean heart dose increased 1.4-fold for right- and left-sided plans, from an average of 0.88 Gy to 1.19 Gy for left-sided and 0.50 Gy to 0.68 Gy for right-sided plans. Conclusions: Tissue expanders can rotate during breast IMPT, potentially impacting both CTV coverage and dose to OARs. Awareness of the potential for TE rotation during daily imaging is warranted. A replan is usually indicated.

Purpose/Objective(s): Studies have suggested greater skin toxicity with post-mastectomy proton pencil-beam scanning radiotherapy (PRT) than with photon radiotherapy (XRT). We aim to compare the target coverage and skin sparing of a large cohort of patients treated with post-mastectomy PRT and XRT at a tertiary cancer center. Materials/Methods: Consecutive women with unilateral, non-inflammatory breast cancer treated with 50 Gy (RBE) to the chest wall and regional lymph nodes between 2015 and 2019 were included.
PRT was administered with a median of two multifield optimized fields (intensity modulated proton therapy). The chest wall skin was defined as the first 3 mm from the external body surface. PRT and XRT planning objectives with respect to skin were to achieve microscopic disease target coverage while limiting surface hot spots. For XRT, 3-5 mm daily bolus was generally employed. PRT planning objectives for skin were V90% > 90% and D1cc < 105%. Results: One hundred seventy-nine women were included, 96 receiving PRT and 83 receiving XRT (95% 3D conformal radiotherapy with a wide tangent technique, 5% intensity modulated radiotherapy). Bolus was utilized in 93% of XRT patients. There was no significant difference in clinical characteristics between the groups (Table 1). Clinical target volume coverage with 47.5 Gy was excellent with PRT and XRT. The median skin doses to 0.01 cc, 1 cc, and 10 cc were all lower with PRT. Conclusions: Post-mastectomy PRT administered with our skin-sparing technique is associated with lower skin dose than XRT, while maintaining excellent target coverage.

Purpose/Objectives: We present a single-institution retrospective study on the clinical outcomes of Western patients with large hepatocellular carcinomas (HCCs) treated with proton beam therapy (PBT). Materials/Methods: Fifty-one HCC patients with tumors ≥5 cm and ineligible for other liver-directed therapies were treated with PBT between 2014-2019 with a 15-fraction regimen of 45.0-67.5 Gy(RBE). Non-classic radiation-induced liver disease (ncRILD) was defined by a Child-Pugh (CP) score increase of 2+ and/or RTOG grade 3 enzyme elevation. Overall survival (OS), progression-free survival (PFS), and local control (LC) were calculated using the Kaplan-Meier method, and univariate predictors of OS by Cox regression analysis. Results: Patients represented a high-risk cohort: 45% with BCLC stage C, 18% with CP-B/C cirrhosis, and a median gross tumor volume (GTV) diameter of 11.1 cm. Pencil beam scanning was used in 67% of patients. A simultaneous integrated boost technique was employed in 78% to achieve an average GTV mean BED of 87.0 Gy(RBE). Median follow-up for all patients was 10 months. 1-year OS and PFS were 57% (95% CI 42-70%) and 32% (95% CI 19-48%), respectively. 1-year LC was 92% (95% CI 78-98%), with three isolated local failures. Out-of-field liver recurrences were the dominant pattern of failure. Six patients (14%) experienced ncRILD; all but one had baseline CP-B liver function. Conclusions: In this largest series to date of Western HCC patients with high-risk large tumors, moderately dose-escalated PBT results in excellent local control rates and acceptable toxicities. Out-of-field and distant failures remain problematic.

Purpose: To compare PRO-CTCAE in patients with endometrial cancer receiving adjuvant pelvic radiotherapy with proton beam therapy (PBT) vs. intensity modulated radiotherapy (IMRT). Materials and Methods: Patients with uterine cancer treated with curative intent who received either adjuvant PBT or IMRT between 2014-2020 were identified. Patients were enrolled on a prospective registry using a gynecologic-specific subset of PRO-CTCAE designed to assess symptom impact on daily living. Gastrointestinal questions included symptoms of diarrhea, flatulence, bowel incontinence, and constipation. Symptom-based questions were on a 0-4-point scale, with grade 3+ symptoms occurring frequently or almost always.
Patient-reported toxicity was analyzed at baseline, end of treatment (EOT), and at 3, 6, 9, and 12 months after treatment. Unequal-variance t-tests were used to determine whether treatment type was a significant factor in baseline-adjusted PRO-CTCAE. Results: Sixty-seven patients met inclusion criteria. Twenty-two received PBT and 45 received IMRT. Brachytherapy boost was delivered in 73% of patients. Median external beam dose was 45 Gy for both PBT and IMRT (range: 45-58.8 Gy). When comparing PRO-CTCAE, PBT was associated with less diarrhea at EOT (p = 0.01) and at 12 months (p = 0.24) compared to IMRT. Loss of bowel control at 12 months was more common in IMRT patients (p = 0.15). Any patient-reported grade 3+ GI toxicity was noted more frequently with IMRT (31% vs. 9%, p = 0.09). Discussion: Adjuvant PBT is a promising treatment for patients with uterine cancer and may reduce patient-reported gastrointestinal toxicity compared to IMRT.

Introduction: From a collaborative prospective registry, we report outcomes for high-risk prostate cancer (HRPC). Methods: After exclusion, 605 HRPC patients treated from 8/2009-3/2019 at nine institutions were analyzed for freedom from progression (FFP), metastasis-free survival (MFS), overall survival (OS), and toxicity. Multivariable Cox/binomial regression models were used to assess for predictors of FFP and toxicity. Results: Median age was 71 years. Gleason grade groups 4 (49.4%) and 5 (31.7%) were most common, as were stages T1c (46.1%) and T2 (41.3%). The median pre-treatment prostate-specific antigen was 9.18. Median dose was 79.2 GyE in 44 fractions. Pelvic lymph nodes were treated in 58.2% of cases, and 63.6% of patients received androgen deprivation therapy. Pencil beam scanning was used in 54.5%, uniform scanning in 38.8%, and a rectal spacer in 14.2%. At a median follow-up of 2 years, the 3- and 5-year FFP were 90.7% and 81.4%, respectively. The 5-year MFS and OS were 92.8% and 95.9%, respectively. Independent correlates of FFP included Gleason ≥8, PSA > 10, and cT2 (all P < 0.05). There were no grade 4 or 5 adverse events. Late grade 2 and 3 genitourinary toxicity was 5.8% and 1.7%, respectively, while late grade 2 and 3 gastrointestinal toxicity was 5% and 0%. Grade 2 and 3 erectile dysfunction at 2 years was 48.4% and 8.4%, respectively. Conclusion: In the largest series published to date, our results suggest that early safety and efficacy outcomes using proton therapy for HRPC are encouraging.

Purpose: To assess acute GI and GU toxicities of IMPT targeting the prostate/seminal vesicles and pelvic lymph nodes for prostate cancer. Methods: A prospective study (ClinicalTrials.gov: NCT02874014) evaluating moderately hypofractionated IMPT for high-risk (HR) or unfavorable intermediate-risk (UIR) prostate cancer accrued 56 patients. The prostate/seminal vesicles and pelvic lymph nodes were treated simultaneously with 6750 and 4500 cGy RBE, respectively, in 25 daily fractions. All received androgen deprivation therapy. Acute GI and GU toxicities were prospectively assessed, using 7 GI and 9 GU categories of CTCAE v4, at baseline, weekly during radiotherapy, and 3 months post-radiotherapy. Fisher exact tests were used for comparisons of categorical data. Results: Median age: 75 years. Median follow-up: 25 months. 55 patients (52 HR; 3 UIR) were available for acute toxicity assessment. 62% and 2% experienced acute grade 1 and 2 GI toxicity, respectively. 65% and 35% had acute grade 1 and 2 GU toxicity, respectively. None had acute grade ≥3 GI or GU toxicity.
The presence of baseline GI and GU symptoms was associated with a greater likelihood of experiencing acute GI and GU toxicity, respectively (Tables 1 and 2). Of 45 patients with baseline GU symptoms, 44% experienced acute grade 2 GU toxicity, compared to only 10% among the 10 with no baseline GU symptoms (p = 0.07). Although acute grade 1 and 2 GI and GU toxicities were common during radiotherapy, most resolved by 3 months post-radiotherapy. Conclusions: A moderately hypofractionated regimen of IMPT targeting the prostate/seminal vesicles and pelvic lymph nodes yielded very acceptable acute GI and GU toxicity.

Purpose: We proposed an experimental approach to build a precise machine-specific model for standard, volumetric, and layer repainting delivery based on a cyclotron accelerator system. We then assessed the interplay effect using a 4D mobile lung target phantom, compared to a generic delivery sequence model from West German Proton Therapy Essen (WPE). Methods: The machine delivery log files, from an IBA ProteusPLUS™ system, were retrospectively analyzed to quantitatively model energy layer switching time, spot switching time, and spot drill time for standard and volumetric repainting delivery. To quantitatively evaluate the interplay effect, a series of digital thoracic 4DCT image sets was used. The interplay effect was assessed based on the 4D dynamic dose accumulation method. Different delivery techniques, such as standard delivery (n = 1), volumetric repainting delivery (n = 2, 3, 4), and layer repainting delivery (n = 2, 3, 5, 25), were simulated based on the machine-specific delivery sequence model and the WPE model. Results: The results showed that the WPE model's spot delivery sequence deviated significantly from the log file compared to the machine-specific model. Based on the treatment delivery calculation of a lung treatment plan with target size (65 mm³) and layer repainting 25 times (n = 25), the difference was about 21.01%. Such a difference also resulted in different interplay effect estimates between the two models, even though both institutions used the same proton system from IBA and calculated using the same 4DCT imaging set. Conclusion: A precise machine-specific delivery sequence is highly recommended to ensure an accurate estimation of the interplay effect for mobile target treatments.
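To make the structure of such a delivery sequence model concrete, the sketch below totals delivery time from the three timing quantities named in the abstract (energy layer switching, spot switching, spot drill time). It is a minimal illustration with placeholder parameter values, not the authors' log-file-fitted model, and per-pass overheads are deliberately simplified.

```python
# Minimal sketch of a machine-specific delivery sequence model built from
# energy layer switching time, spot switching time, and spot drill time.
# All parameter values are illustrative placeholders, not fitted data.
from dataclasses import dataclass

@dataclass
class TimingModel:
    layer_switch_s: float = 1.0       # energy layer switching time (s), placeholder
    spot_switch_s: float = 0.003      # spot-to-spot switching time (s), placeholder
    mu_rate_mu_per_s: float = 8000.0  # spot drill rate (MU/s), placeholder

def delivery_time_s(layers_mu, model, layer_repaint=1):
    """Estimate total delivery time for a plan given as a list of energy
    layers, each a list of spot MU values. Layer repainting re-scans each
    layer `layer_repaint` times with proportionally reduced MU per pass."""
    total = 0.0
    for spots in layers_mu:
        for _ in range(layer_repaint):
            total += model.layer_switch_s                  # enter/re-enter the layer
            total += (len(spots) - 1) * model.spot_switch_s
            total += sum(spots) / layer_repaint / model.mu_rate_mu_per_s
    return total

plan = [[2.0, 1.5, 3.0], [1.0, 1.0]]  # two layers, toy spot MU values
print(delivery_time_s(plan, TimingModel(), layer_repaint=3))
```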
Purpose: To evaluate the effectiveness of existing shielding in a dedicated research room (second floor) for proton FLASH beam delivery. Materials and Methods: The radiation survey was performed with a Ludlum 42-38 WENDY-2 neutron detector and a Ludlum 9DP ion chamber survey meter in a fixed horizontal beam room using an ultra-high dose rate proton beam (FLASH). A 250 MeV spot was delivered (5 minutes total) with a cyclotron current of 600 nA (~210 nA at the nozzle), which provided a spot peak dose rate of 805 Gy/s. The survey meters were moved around to identify the highest reading at each location, and the readings were compared to survey results for clinical standard dose rate beams. Results: The highest readings for the FLASH beam were along the beam path and read 550 μR/hour on the WENDY-2 and 55 μR/hour on the 9DP ion chamber meter. The neutron and photon readings were 97- to 170-fold higher than for clinical beams at the location with direct transmission. The readings were ~28-fold higher in the control room due to the length of the maze. High activation of 650 mR/hour, 434 mR/hour, and 186 mR/hour was observed in the solid water beam stopper at isocenter 5, 30, and 60 minutes after FLASH delivery. Conclusion: No extra shielding is needed to deliver FLASH beams in our research room. A beam-angle-dependent survey is recommended for the gantry room due to the flexible beam angles. Special attention should be paid to the activation of equipment in the treatment room.

We developed an experimental approach to build a precise delivery sequence model (DSM) for a prototype arc system under routine proton clinical operation. The SPArc DSM considers two kinds of parameters: (1) mechanical parameters of the gantry and (2) irradiation parameters. SPArc plans for different disease sites were used to validate the model and assess treatment efficiency; the DSM was used to simulate the SPArc treatment delivery sequence and compared to clinical IMPT log files from two full clinical days.

Purpose: To address the challenges of generating a deliverable and efficient spot-scanning proton arc (SPArc) plan for a proton therapy system, we developed a novel SPArc optimization algorithm (SPArcDMSP) that directly incorporates machine-specific parameters such as mechanical constraints and delivery sequence. Method and Materials: A SPArc delivery sequence model (DSMarc) was built based on the machine-specific parameters of the prototype arc delivery system IBA ProteusONE™. SPArcDMSP resamples and adjusts each control point's delivery speed based on the DSMarc calculation through an iterative approach (Fig. 1). Users can set the expected delivery time and maximum gantry acceleration as mechanical constraints during optimization. Four cases (brain, liver, head and neck, and lung cancer) were selected to test SPArcDMSP. Two kinds of SPArc plans were generated using the same planning objective functions: (1) a SPArcDMSP plan meeting the maximum allowable gantry acceleration (0.6 deg/s²); (2) a SPArcDMSP-user-speed plan with a user-predefined delivery time and acceleration <0.1 deg/s². The arc delivery sequence, such as gantry speed and delivery time, was simulated based on the DSMarc and compared. Results: With similar objective values, numbers of energy layers, and spots, both the SPArcDMSP and SPArcDMSP-user-speed plans could be delivered continuously within the ±1 degree tolerance window. The SPArcDMSP-user-speed plan could minimize the gantry momentum change based on the user's preference (Fig. 2). Conclusions: For the first time, clinical users can generate a SPArc plan by directly optimizing the arc treatment speed and momentum changes of the gantry. This work paves the road for the clinical implementation of proton arc therapy in the treatment planning system.
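The control-point speed adjustment at the heart of such an approach can be illustrated with a simple feasibility pass: assign each control point the fastest gantry speed its beam-on time allows, then smooth neighboring speeds so the acceleration limit is respected. The sketch below uses our own names, structure, and toy numbers; it only approximates the iterative approach described above, not the authors' implementation.

```python
# Minimal sketch of delivery-sequence-aware gantry speed assignment:
# cap each control point's speed by its beam-on time, then iteratively
# smooth so neighboring speeds respect a maximum gantry acceleration.
def assign_gantry_speeds(beam_on_times_s, span_deg=2.0, v_max=6.0, a_max=0.6):
    # Fastest feasible speed (deg/s) so the span is not crossed before
    # the control point's spots finish delivering.
    speeds = [min(v_max, span_deg / t) for t in beam_on_times_s]
    for _ in range(100):  # iterate the two smoothing passes until stable
        changed = False
        for i in range(len(speeds) - 1):          # limit acceleration
            dt = span_deg / speeds[i]             # time to cross span i
            cap = speeds[i] + a_max * dt
            if speeds[i + 1] > cap:
                speeds[i + 1], changed = cap, True
        for i in range(len(speeds) - 1, 0, -1):   # limit deceleration
            dt = span_deg / speeds[i]
            cap = speeds[i] + a_max * dt
            if speeds[i - 1] > cap:
                speeds[i - 1], changed = cap, True
        if not changed:
            break
    return speeds

# Toy example: a slow (spot-heavy) control point forces its neighbors
# to ramp down and back up within the acceleration limit.
print(assign_gantry_speeds([0.5, 2.0, 0.4, 0.4]))
```

Because the smoothing only ever lowers speeds, every control point remains deliverable; the cost is a longer total arc time, which is the trade-off the user-speed variant exposes as a planning parameter.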
Purpose: To quantitatively investigate the difference in dose perturbations between gold and platinum VISICOIL™ fiducial markers in proton beam therapy. Methods: Gold and platinum VISICOIL™ fiducial markers of two different dimensions were tested, 0.35 and 0.5 mm in diameter and 5 mm in length, for four kinds of markers in total. Gafchromic EBT2 film was used to measure dose perturbations along the beam path in a "sandwich" setup (Figure 1). Dose perturbation was reported at each depth (0.3 mm, 1.65 mm, 3.00 mm, 5.40 mm, 7.80 mm, 10.20 mm, 12.60 mm, 18.15 mm). Proton stopping powers relative to water were calculated using the National Institute of Standards and Technology database and SRIM (version 2013) over the therapeutic energy range (70-220 MeV). Results: There was no statistical difference between gold and platinum VISICOIL™ fiducial markers at any depth for the 0.35 mm (p = 0.125) and 0.5 mm (p = 0.130) diameters. The maximum point dose perturbations for Au and Pt markers of the same dimension were similar (0.35 mm diameter at 7.8 mm WET: 2.85% ± 2.31% Au vs. 2.70% ± 2.60% Pt; 0.5 mm diameter at 5.4 mm WET: 8.81% ± 2.60% Au vs. 8.81% ± 2.57% Pt) (Figure 2). A bilateral treatment field arrangement could further reduce the dose perturbation by half. The stopping power ratios relative to water calculated for the gold and platinum materials showed about a 3.5% difference between the two. Conclusion: The study indicated that the Au and Pt VISICOIL™ fiducial markers have very similar dose perturbation characteristics.

Purpose: To evaluate the dosimetric impact of different spine implants on proton therapy for paraspinal targets using an in-house fast Monte Carlo dose calculation platform. Methods: The commercial Eclipse TPS was used to generate proton plans for a representative spinal chordoma target in a spinal phantom with four different spine configurations: normal tissue without an implant, titanium, carbon-fiber-reinforced polyetheretherketone (CFR-PEEK), and hybrid (CFR-PEEK screw with titanium head) implants. The in-house fast Monte Carlo dose calculation algorithm, MCsquare, was used to evaluate the impact of the different implants on plan quality. Results: Monte Carlo dose calculation revealed up to a 16% local dose shadow within the target behind titanium screws and rods, depending on the dimensions of the metal implant and the beam arrangement. The D95 of the CTV50 decreased by 8.2% and 4.5% for the titanium and hybrid implants, respectively, but no meaningful difference was found for the CFR-PEEK implant and normal spine when comparing the TPS with MCsquare. Monte Carlo results showed no impact on OAR dosimetric metrics. Conclusion: Dose calculation accuracy of the TPS is limited in scenarios with metal heterogeneity. Titanium implants, in certain circumstances, cause dose shadowing and could theoretically compromise target coverage. On the contrary, increasing the CFR-PEEK proportion, especially with complete CFR-PEEK implants, improves overall dosimetric accuracy.

Purpose: While proton centers may see prostate patients with titanium hip replacements at low frequency, their plans require a specialized beam arrangement.

Purpose: To quantify voxel-level dose-response relationships for non-small cell lung cancer (NSCLC) patients treated with passive scattering proton therapy (PSPT) and intensity modulated radiation therapy (IMRT). Methods: 203 locally advanced NSCLC patients treated on a prospective clinical trial with PSPT or IMRT were selected. For each patient, the planned dose was recomputed on the exhale planning 4DCT phase, and a 5-month post-treatment PET/CT was obtained. Each planning/post-treatment CT pair was registered via a biomechanical model-based deformable registration algorithm. Subsequently, the voxel-level image density change (ΔHU) was calculated by subtracting the planning CT from the deformed PET/CT to represent response. For each cohort, the normalized mean ΔHU of voxels within the non-cancerous ipsilateral lung was fit to a standard Lyman NTCP model, and the mean ΔHU of air voxels (< -800 HU) and tissue voxels (> -750 HU) in the planning CT was plotted against dose. Results: Fifty-six patients with a 74 Gy-RBE prescription dose have been analyzed to date. Figure 1 demonstrates a steeper NTCP curve for IMRT than PSPT. Conclusion: The voxel-level dose-response relationship demonstrated a variable dependence on dose, supporting the hypothesis of higher linear energy transfer and relative biological effectiveness of low-dose protons compared to photons.
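The Lyman fit referenced above has the standard probit form; for reference (in our notation, since the abstract does not give the authors' exact parameterization or volume dependence):

$$\mathrm{NTCP}(D)=\Phi\!\left(\frac{D-TD_{50}}{m\,TD_{50}}\right),\qquad \Phi(t)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{t}e^{-x^{2}/2}\,dx,$$

where $D$ is the dose metric (here regressed against the normalized mean ΔHU of the ipsilateral lung rather than a binary complication endpoint), $TD_{50}$ is the dose producing a 50% response, and $m$ is the normalized slope.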
Purpose: Beam time in a proton therapy center (PTC) is a scarce resource relative to demand. Understanding the capacity of a PTC is critical to patient, provider, and staff satisfaction, and to financial sustainability. Queuing theory is the mathematical framework for analyzing service-demand systems, such as patient flow through a clinic. We describe a model for simulating a single-room PTC. Methods: The model comprises probability distributions for patient arrival, patient characteristics, and machine reliability. The distributions and parameters were selected to approximate the observed patient characteristics and machine maintenance records of a single-room PTC. Ten years of center operation at 16 hours per day were simulated for average patient arrival rates of 4-7.5 per week. The number of treatments delivered, machine availability, and patient wait times were recorded. Results: Machine availability was 90% and the average time per treatment was 22.2 min. Box plots of the number of treatments delivered per day and patient wait times versus patient arrival rate are shown below. The maximum capacity of the center was attained at 6.5 patients per week, resulting in an average of 35.3 treatments per day. Above 6.5 patients per week, wait times grow because patients arrive faster than treatments are completed. Conclusion: We have demonstrated a throughput simulation of a single-room PTC. The model will be used to set realistic expectations for patient volume and to explore the effects of innovative operation strategies, such as selectively treating patients on weekends in anticipation of downtime events.
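A minimal version of such a throughput simulation is sketched below. The arrival rate, fractions per course, and availability inputs are illustrative placeholders chosen to match the reported operating point, not the authors' fitted distributions.

```python
# Minimal sketch of a single-room PTC throughput simulation in the spirit
# of the queuing model above. Each on-treatment patient receives at most
# one fraction per treatment day; capacity is limited by machine uptime.
import math
import random

def poisson(lam):
    """Sample a Poisson count via Knuth's method (fine for small rates)."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

def simulate(arrivals_per_week=6.5, days=2500, hours_per_day=16,
             mean_tx_min=22.2, mean_fractions=25, availability=0.90):
    random.seed(1)
    backlog = []          # fractions remaining for each on-treatment patient
    delivered_total = 0
    for _ in range(days):
        # New patient starts, assuming 5 treatment days per week.
        for _ in range(poisson(arrivals_per_week / 5)):
            backlog.append(max(1, round(random.expovariate(1 / mean_fractions))))
        minutes = hours_per_day * 60 * availability   # machine uptime today
        i = 0
        while i < len(backlog) and minutes >= mean_tx_min:
            minutes -= mean_tx_min                    # deliver one fraction
            delivered_total += 1
            backlog[i] -= 1
            if backlog[i] == 0:
                backlog.pop(i)                        # course complete
            else:
                i += 1                                # next patient today
    return delivered_total / days

print(f"average treatments per day: {simulate():.1f}")
```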
Introduction: One of the major effects of radiation is based on endothelial damage leading to increased capillary leakage. MRI-based treatment response assessment maps (TRAMs) are established to qualitatively assess radiation-induced capillary leakage based on contrast washout/accumulation over long delays. This is the first clinical study evaluating radiation-induced small vessel damage during proton therapy (PT) both qualitatively and quantitatively. Materials and Methods: Twenty-two patients (5 glioma, 17 meningioma) were treated with PT. T1-weighted MR images were acquired 5 and 60 minutes post-contrast at five time points: before treatment (T1), mid-treatment (T2), end of treatment (T3), and at 6 months (T4) and 12 months (T5) of follow-up. TRAMs were generated by Sheba Medical Centre in Israel. Changes within tumours during radiation (T2, T3) and follow-up (T4, T5) compared to baseline (T1) were studied. The quantitative analysis was performed using the MICE toolkit™ (Medical Interactive Creative Environment) software (Fig. 1) and included the % of GTV of each respective TRAM colour (red, blue). Results: At baseline (T1), glioma GTVs presented on average 20 ± 8% contrast clearance/AT, compared to 78 ± 4% in meningioma. Glioma GTVs showed a non-significant decrease in clearance/AT during therapy, which increased between the end of therapy and follow-up; accumulation/TE decreased non-significantly over all time points. In meningioma GTVs, contrast clearance/AT decreased significantly during therapy (T1-T3) and stabilized at follow-up (T4, T5). Conclusion: Here we show for the first time radiation-induced changes in the tumour during and early after proton therapy based on TRAMs. Further evaluations and follow-up are needed to fully understand the clinical impact in terms of response assessment.

Irradiation dose (high > 54 Gy RBE, 13 pts. vs. low ≤ 54 Gy RBE, 8 pts.) showed a trend toward significance for 2-year LC (high-dose 74% vs. low-dose 100%, p = .08). Conclusion: Proton re-irradiation is a safe and effective modality to successfully treat recurrent meningioma. The toxicity profile in our series was very favorable. Early recurrences after conventional RT have a poor prognosis.

Introduction: This is an analysis of toxicities of reirradiation with proton therapy (PT) for central nervous system (CNS) tumors using the Proton Collaborative Group (PCG) registry. Methods: The multi-institutional, prospectively collected PCG registry was queried for CNS tumors treated with PT reirradiation between 2010-2020. Acute grade 2 (G2) and grade 3 (G3) toxicities were reported, with binomial regression analysis to identify correlates thereof. Results: Overall, 97 male and 79 female patients 19-85 years old (median 49) were identified, with 37 benign tumors, 117 gliomas, and 22 medulloblastomas/ependymomas/neuroendocrine tumors, located in the cerebral hemispheres (n = 130), infratentorium (n = 24), base of skull (n = 14), and spinal cord (n = 8). The median time to PT reirradiation was 63 months. Median PT dose and cumulative dose (EQD2) were 50 Gy10 (13-66 Gy) and 104 Gy10 (51-210 Gy10), respectively. Chemotherapy was given with PT in 86 patients. Baseline ECOG was 0 (n = 55), 1 (n = 56), or 2+ (n = 42). Median follow-up was 10 months. Acute G2 and G3 toxicities occurred in 51.1% and 7.9% of cases, respectively. Eighteen patients had G3 symptoms at baseline, and all but one resolved after PT. There were no grade 4 or 5 toxicities. Independent correlates of G3 toxicity per multivariable binomial regression analysis included ECOG 2 or higher (HR = 18.7, P = 0.003) and cumulative EQD2 dose over 115 Gy (HR = 4.7, P = 0.03). Conclusion: Poor performance status and higher cumulative radiation doses in the salvage setting correlated with more G3 toxicity, but in appropriately selected patients, reirradiation with PT for CNS tumors is well tolerated.
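For reference, the Gy10 values above denote equivalent dose in 2-Gy fractions computed with the standard linear-quadratic conversion, where $D$ is the total dose of the course being converted, $d$ its dose per fraction, and the subscript gives the assumed $\alpha/\beta$ in Gy:

$$\mathrm{EQD2}=D\cdot\frac{d+\alpha/\beta}{2\,\mathrm{Gy}+\alpha/\beta}.$$

For example, 50 Gy delivered at 2 Gy per fraction is simply 50 Gy10, while the same physical dose at 3 Gy per fraction would be $50\times(3+10)/(2+10)\approx 54.2$ Gy10.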
Purpose: To evaluate a novel spine implant, carbon-fiber-reinforced polyetheretherketone (CFR-PEEK), for proton treatment planning. Methods: We compared target coverage and sparing of organs at risk (OARs) for a spinal phantom with four different spine configurations: normal (no implant), titanium, CFR-PEEK, and hybrid (CFR-PEEK with titanium head). The spinal phantom was imaged via CT scan, and the iMAR CT set was used for planning. A representative spinal chordoma target and OARs were contoured. 50 Gy was prescribed to the initial target volume, followed by a 24 Gy boost, for which MFO proton plans were developed with 3 mm and 3.5% uncertainties. OAR dose constraints were set according to our institutional guidelines, including limiting the spinal cord Dmax < 63 Gy. We avoided any direct proton path through titanium parts, per institutional practice. Results: For the four spine configurations, the proton plans achieved similar nominal target coverage, mean heart dose, and spinal cord max dose. However, when evaluating coverage and OAR dose under uncertainty scenario analysis for initial CTV 50 Gy 95% and 90% coverage, higher means and a narrower range of doses were achieved for the normal and CFR-PEEK plans than the titanium and hybrid plans. Similarly, uncertainty analysis of spinal cord Dmax showed a tighter distribution for the normal and CFR-PEEK plans. Conclusion: The CFR-PEEK implant has similar clinical properties to a normal spine for proton planning, allowing us to pass protons through the material and achieve superior target coverage and OAR sparing under nominal and uncertainty conditions as compared to treating in the presence of titanium hardware.

Purpose: To report outcomes and toxicities following proton therapy (PT) for patients with primary central nervous system (CNS) germinoma and non-germinomatous germ cell tumors (NGGCT). Methods: Data on patients with primary CNS germinoma and NGGCT treated with PT were queried from a prospective multi-institutional registry (PCG). We performed a similar query of our institutional database with IRB approval. Acute and late toxicities were scored using CTCAE v4.0. Results: Forty-three patients (32 germinoma; 11 NGGCT) met the eligibility criteria, including 22 from the PCG and 21 from our institution. Median age was 19 years (range: 8-47). Twenty-three patients underwent surgery for tissue diagnosis and twenty were diagnosed based on imaging/laboratory values. Median PT dose was 36 Gy for germinoma patients (range: 30-45 Gy); all NGGCT patients received 54 Gy. Twenty-three patients (22 germinoma, 1 NGGCT) received whole-ventricular irradiation and nineteen (9 germinoma, 10 NGGCT) received CSI. Median follow-up was 29 months (range: 2-101 months). At last follow-up, all had stable or controlled disease, with 2-year disease-free and overall survival rates of 100%. Grade 2 alopecia was recorded in 33 patients (77%). Excluding alopecia, 15 patients (34%) developed an acute grade 2 non-hematologic toxicity, with only one grade 3 toxicity (fatigue). No late grade 2+ toxicities were reported. Conclusions: In this multi-institutional study, patients treated with PT for CNS germinoma and NGGCT had high tumor control rates during early follow-up and few clinically significant acute or late treatment-related toxicities. Long-term outcomes for disease control and neurocognition are needed to measure the benefits of PT.

Purpose: To evaluate the feasibility of proton therapy for ocular melanomas using a non-dedicated treatment planning system (TPS) and a proton pencil beam scanning gantry beam line. Methods: The commercial Eclipse TPS was used to generate robust multifield optimized (rMFO) intensity-modulated proton plans for representative ocular tumor patients. Doses were compared among the initial plan and 40 additional scenarios of combined setup errors and range uncertainties. An in-house fast Monte Carlo dose calculation platform was used to assess the dosimetric impact of 3 tantalum fiducial markers for image-guided treatment. Results: The retina, optic nerve, cornea, lens, lacrimal gland, conjunctiva, sino-nasal mucosa, and GTV were contoured on the treatment planning CT. 3-dimensional rMFO planning accounting for 2 mm setup uncertainty and 3.5% range uncertainty was performed, utilizing 3 fields at different optimal gantry angles. All plans achieved satisfactory target coverage, with at least 95% of the CTV receiving the full prescription of 50 Gy RBE in 5 fractions while achieving clinical dose limits for all organs at risk. The average target coverage remained D95 = 97.7% over the 40 scenarios. Monte Carlo dose calculation revealed up to an 11% local dose shadow within the target, and D95 decreased by 3.2% if a tantalum marker was in the beam path.
Conclusion: A non-dedicated TPS and gantry beam line can be used to effectively treat ocular tumors. This procedure is feasible with relatively low doses to anterior structures and achieves acceptable plan robustness. Fiducial markers can cause dose shadows and theoretically compromise local tumor control. Optimized beam angles and fiducial positioning should be considered.

Conclusions: Unilateral proton beam RT for oropharynx cancer has similar disease control to photon therapy. The dosimetric advantage of proton beam therapy did not result in excess contralateral failures when compared to historical unilateral photon beam radiotherapy series.

Purpose: High linear energy transfer (LET) neutrons have been used to treat over 3,300 patients at the UW because of their ability to overcome multiple mechanisms of resistance to low-LET radiations. Technical and clinical challenges of implementing IMNT are presented along with an analysis of the potential therapeutic benefits. Methods: A commercial treatment planning system (TPS) has been modified to incorporate neutron scattering kernels and accommodate the unique characteristics of the Clinical Neutron Therapy System (CNTS). A Monte Carlo model of the CNTS has been developed to independently confirm TPS doses. A portal imaging system based on ¹¹C positron emission tomography has also been developed. Results: Comparisons of measured, TPS, and Monte Carlo doses are in excellent agreement (3%/3 mm γ analysis) for a wide range of field sizes, both open and wedged. An analysis of IMNT plans for seven head and neck patients shows an average 56% decrease in organ-at-risk dose compared to 3D conformal neutron therapy (3DCNT). The maximum dose decreased by 20% and 21% for the spinal cord and temporal lobe, respectively. The mean larynx D50% decreased by 80%. The overall number of monitor units for wedged and IMNT treatments is similar. Conclusions: With IMNT, comparative planning studies demonstrate that significant reductions in OAR dose are possible with similar target coverage. Clinical trials to compare 3DCNT to IMNT are in development. Such trials will inform ongoing work to evaluate the use of other types of high-LET radiations for patient care, including carbon ions.

Purpose: DNA fragmentation leads to micronuclei (MN) and the release of self-DNA triggering the cGAS-STING pathway. Compared to low-LET radiation, we hypothesize that high-LET radiations 1) increase the number of MN per unit dose, and 2) rupture more frequently. The impact on MN for cells exposed to a DNA damage repair inhibitor (DDRi) was also assessed. Methods: MN formation and rupture were assessed in vitro using MCC-13 cells after 8 Gy of x-rays and 3 Gy of fast neutrons. Cells were irradiated and then fixed after the first mitosis. Immunofluorescence markers were used for evaluation of DNA, MN rupture, and plasma membrane integrity. Confocal microscopy imaging with automated image analysis provided: MN per cell, the proportion of ruptured MN, and the number of intact and ruptured MN. The proportion of ruptured MN was compared for x-rays and neutrons. Cells with ≥1 ruptured MN were scored at 38 and 72 hours post-irradiation. Additionally, cells were exposed to ATRi, a DDRi, for two hours pre-irradiation and MN were analyzed at 72 hours. Results: Per unit dose, high-LET neutrons produced more MN than MV x-rays. The proportion of cells with at least one MN rupture at multiple time points was also greater for neutrons than x-rays.
Exposure of cells to ATRi increased the MN number and the number of MN ruptures for both radiation types. Conclusions: The RBE for double strand break (DSB) induction and the RBE for MN induction are approximately the same. Fast neutrons may promote increased immunogenic cell death more efficiently than x-rays.

MedAustron began patient treatments with proton therapy in December 2016 and with carbon ion radiotherapy (CIRT) in July 2019. Currently, CIRT comprises 30% of particle therapy treatments and is either applied exclusively or in combination as a boost with proton therapy. All eligible patients participate in a prospective registry study. Figure 1 illustrates the distribution of all CIRT patients per indication and histology, and Figure 2 details the subgroup receiving combined proton/CIRT. In the initial phase, treatment selection was based on established CIRT indications, but it rapidly expanded to take full advantage of and explore the opportunities offered by carbon ion properties. Since CIRT was integrated into the pre-existing proton therapy program, this presentation focuses on the clinical decision algorithm between protons versus carbon ions or a combination of both. Principal factors involve radiobiologic considerations of possible improvement in local control for selected histologies and stages, physical advantages of carbon ions (sharp penumbra and small spot size), and optimal re-irradiation dose profiles and fractionation schemas. However, individualized risk assessment of particle therapy also takes into account the comparably large body of evidence on dose tolerance for proton therapy versus the presently limited clinical data or extrapolated normal organ tolerance data in the case of CIRT. Examples will be presented. Optimization of multi-ion therapy has led to other innovative concepts, for example delivering a high-dose intra-tumoral CIRT boost without significantly increasing dose to normal tissues. CIRT was well tolerated, and details of acute side effects in the initial 100 patients will be presented.

Extremely hypofractionated SBRT-based PArtial Tumor irradiation targeting HYpoxic clonogenic cells (PATHY) while sparing the peritumoral immune microenvironment (PIM) has previously been developed and clinically assessed for treatment of unresectable bulky, oligometastatic disease, showing encouraging results in terms of bystander and abscopal effect induction. The present study will be conducted to determine the immunogenic potential of carbon ions applied to this novel concept. The hypothesis implies that, for an effective immune modulation leading to an improved therapeutic ratio, the entire tumor volume may not need to be irradiated; only a partial tumor volume may suffice to initiate the immune cycle in the radiation-spared PIM, resulting in tumoricidal bystander and abscopal effects. This is a mono-centric, prospective phase I study which will enroll 23 patients with locally advanced or metastatic cancers with at least one bulky (≥6 cm) lesion. The study uses a carbon-based PATHY approach, consisting of 3 consecutive 12 Gy RBE fractions delivered exclusively to the hypoxic tumor segment while sparing the PIM. The hypoxic segment will be defined using 64Cu-ATSM PET-CT and dynamic contrast-enhanced MRI. Carbon-PATHY will be administered at a precise timing, synchronized with the most reactive anti-tumor immune response phase based on serially mapped homeostatic immune fluctuations obtained by monitoring blood levels of inflammatory markers.
The primary endpoint will be the bystander effect response rate, defined as at least 30% regression of the unirradiated tumor tissue. Secondary endpoints will include overall survival, progression-free survival, abscopal response, symptom relief, toxicity, feasibility of carbon-PATHY timing, and the bystander/abscopal response rate in relation to the dose-size of the PIM.

Purpose/Objectives: Sacral chordomas are rare, locally aggressive neoplasms for which both surgery and radiotherapy are utilized as local treatments. We compare outcomes for patients undergoing carbon ion radiotherapy (CIRT) versus surgical resection (SR) (± radiotherapy) versus definitive radiotherapy (DR) (proton/photon). Materials/Methods: Propensity score matching was used to compare CIRT and SR from two institutional databases. Baseline characteristics, oncologic outcomes, functional mobility scale (FMS), and toxicities were compared. Five subgroups from the National Cancer Database (NCDB), including patients treated with SR (with positive and negative margins) and DR (photon/proton), were matched to the CIRT cohort for outcomes analysis. Cost of care within 2 years of treatment was analyzed. Results: Forty-seven CIRT patients were matched to 47 SR patients, with a median follow-up of 68.1 and 58.6 months, respectively. Baseline characteristics after matching were similar apart from poorer performance status in the CIRT cohort. After treatment, there was no difference in urinary retention, need for colostomy, overall survival (OS), progression-free survival, local recurrence, or distant metastasis between groups. Patients in the CIRT cohort had improved FMS and lower peripheral nerve toxicity (Table 1). In the comparison of CIRT (n = 188) to the NCDB subgroups (n = 669), OS favored CIRT when compared to patients who had SR with a positive margin without adjuvant radiotherapy (p = 0.03, median follow-up 60.6 months). OS was similar between CIRT and DR proton patients and improved when compared to DR photon patients. Costs for CIRT and DR proton were lower than for the SR cohort. Conclusion: These data suggest CIRT is a safe, effective, and cost-effective treatment option.

This study aims at developing a pencil beam model for Magnetic Resonance Imaging guided Carbon Ion RadioTherapy (MRIgCIRT). The main issue was how to model the fragmentation of primary ¹²C ions and their magnetic deflection. Particles were classified into 3 groups according to the similarity of specific energy: group 1 (¹²C), group 2 (C isotopes other than ¹²C, B, Be, and Li), and group 3 (other particles). In groups 1 and 2, the lateral distribution of physical dose was approximated by a Gaussian function, while the superposition of Gaussian and Lorentzian functions was used for group 3 to describe the halo which arises from light particles (Figure 1). The specific energy was considered to be constant at each depth. All parameters were obtained from Monte Carlo simulations using Geant4. To evaluate our model, the biological dose distribution was calculated based on the microdosimetric kinetic model, and a lateral irradiation field was generated for comparison with the one simulated by Geant4. A ¹²C beam was irradiated into a water phantom with a 3-T magnetic field in the Geant4 simulation. Although the maximum absolute difference increased for lower or higher energies, it did not exceed 2.7% (Figure 2). This increase was probably due to overestimation by the Lorentzian function or the higher asymmetry caused by beam deflection. The results of this work indicate that the model is valid for MRIgCIRT. Further research is needed to apply the model to heterogeneous tissue.
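One parameterization consistent with the lateral model described above (in our notation; the mixing weight and widths are depth-dependent fit parameters obtained from the Geant4 simulations) is

$$f_{1,2}(x)\propto \exp\!\left(-\frac{x^{2}}{2\sigma^{2}}\right),\qquad f_{3}(x)\propto (1-w)\exp\!\left(-\frac{x^{2}}{2\sigma^{2}}\right)+w\,\frac{\gamma^{2}}{x^{2}+\gamma^{2}},$$

where the Lorentzian term supplies the heavy-tailed halo from light fragments that a single Gaussian cannot capture.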
Methods: Incidence data were analyzed for selected diagnoses (hepatocellular carcinoma, cholangiocarcinoma, locally advanced pancreatic cancer, non-small cell lung cancer, localized prostate cancer, soft tissue sarcomas, and head and neck cancers) diagnosed in 2015. The percentage and number of patients likely benefiting from CIRT were estimated using inclusion criteria from clinical trials and retrospective studies, and this ratio was applied to 2019 statistics. An adoption correction rate was applied to estimate the potential number of patients treated with CIRT. Given the high dependency on prostate and lung cancers, the data were then re-analyzed excluding these diagnoses. Results: Of the 1,127,455 new cases of cancer diagnosed in the United States in 2015, there were 213,073 patients eligible for treatment with CIRT based on inclusion criteria. When applying this rate and the adoption correction rate to the 2019 incidence data, an estimated 89,946 patients are eligible for CIRT. Excluding prostate and lung cancers, there were an estimated 8,922 patients eligible for CIRT. The need for CIRT is estimated to increase by 25-27.7% by 2025. Conclusions: Our analysis suggests a need for CIRT in the United States in 2019, with the number of patients possibly eligible to receive CIRT expected to increase over the coming 5-10 years.

Purpose: Despite contrary evidence in the literature, a constant proton relative biological effectiveness (RBE) of 1.1, regardless of linear energy transfer (LET), is still used for treatment plan evaluation and optimization. End-of-track effects (LET in excess of 5 keV/μm) are an ongoing concern in proton therapy. We retrospectively analyzed the delivered treatment dosimetry and used published variable RBE models to evaluate the impact of high-LET track-ends in a patient with necrosis at the tongue base. Methods and Materials: A 68-year-old male presented with grade 4 necrosis of the soft tissue and hyoid bone approximately 3 months after definitive proton therapy (with concurrent chemotherapy) for a T1N1 squamous cancer. The region of gross disease received 69.96 Gy (RBE 1.1) in 33 fractions with pencil beam scanning. The RBE-weighted dose (RWD) was evaluated using several LET-dependent variable RBE models that span the range of possible RBE values at the track-end. Results: Low-dose tissue regions adjacent to the tumor target are likely to have an elevated RBE (track-end effect). Variable RBE modeling suggests a 5-10% increase in the RWD in the tissue regions with observed toxicity. We found no evidence that the delivered treatment differed from the planned treatment (RBE = 1.1). Conclusions: The unexpected treatment toxicity observed in this patient cannot be easily explained from a dosimetric perspective (RBE = 1.1) or in terms of the RWD computed using several published variable RBE models. Other patient-specific factors likely contributed to the observed clinical outcome.

Purpose: To compare IMPT vs. VMAT treatment plans for a spinal chordoma tumor using four unique spine configurations.

Objective: Patients undergoing radiotherapy for meningiomas in close proximity to optic structures require particular attention toward maintaining visual status. This study reports on the prospective, patient-reported assessment of visual performance following proton therapy (PT).
Methods: All patients treated with PT for WHO grade I meningioma whose planning target volumes included parts of the optic system were included. The assessment tool was the Visual Disorder Scale (VDS) of the EORTC-BN20 questionnaire. Test times were at the start of PT, at completion, and at 3, 6, 12, and 24 months (mo) of follow-up (FU, t1-t6). A minimum FU of 6 mo was required. Results: With a mean FU period of 23.6 mo, 56 patients aged 24-82 years (mean = 53.9) received the institutional prescription dose of 54.0 Gy RBE at 2.0 Gy per fraction. The mean/D2% doses for the optic chiasm and ipsilateral optic nerve were 43.4 Gy RBE/49.9 Gy RBE and 35.6 Gy RBE/51.7 Gy RBE, respectively. Mean/D2% doses for the contralateral optic nerve were 18.8 Gy RBE/42.4 Gy RBE. 302 data sets were analyzed (t1/t2/t3/t4/t5/t6: n = 56/56/48/56/52/34). The mean symptom burden largely decreased over time (graph 1). At 12-mo FU, the subjective visual performance had improved significantly (p = 0.041); 13/15 asymptomatic patients reported no new onset of symptoms, 34/37 symptomatic

Daily anesthesia for pediatric patients undergoing proton therapy (PT) has the potential to increase neurocognitive adverse treatment effects. It is emotionally and logistically difficult for patients and families, with NPO requirements exacerbating nutritional challenges and necessitating longer time in the center. Daily anesthesia also demands more health care resources, including anesthesiologists and nursing support, increasing CT simulation and treatment time and limiting scheduling flexibility for other patients. We aimed to develop a new tool for identifying and addressing barriers to children ≥3 years old completing PT awake. Checklists are commonly employed in radiation oncology and anesthesia but have not been described in this context. We are not aware of prior research examining how strategies are implemented to avoid anesthesia, nor assessing residual barriers to treatment awake in patients continuing to require anesthesia. We developed checklists to be completed by the radiation oncologist and anesthesiologist at simulation and weekly throughout treatment. These prompt the clinician to use several anesthesia-avoiding strategies, outlined in Table 1, and to document the remaining barriers to the patient being treated awake (Figure). As part of an IRB-approved quality assurance study, we will analyze data collected from these checklists. Through this rigorous method of implementing anesthesia-avoiding strategies, we expect to reduce anesthesia use and its impacts for children undergoing proton therapy. Equally important, we expect to describe which interventions are effective at which stage in the treatment course and to identify persistent barriers in patients who continue to require anesthesia.

Purpose: Adolescents and young adults (AYA) with cancer face unique challenges. Methodology: Retrospective review of AYA patients (age 15-25) treated with breast RT. Results: Eleven AYA patients with breast cancer were treated from 1998-2020; eight received RT. With 10-year median follow-up, 88% are alive without disease. One died of metastatic disease and one had an in-breast recurrence at 21 years. Median age at diagnosis was 24. All presented with a palpable mass. Seven were invasive ductal carcinoma; one was adenoid cystic carcinoma. One was pregnancy-associated. All underwent genetic testing, and 2 had BRCA1 mutations. All saw a fertility specialist; 4 elected oocyte retrieval (2) or leuprolide (2). Stage ranged from I-IIIC (stage I (1), II (3), III (4)).
Five were ER+/PR+/HER2-, 1 was triple positive, and 2 were triple negative. Two underwent lumpectomy, 6 had mastectomy, and 3 had contralateral prophylactic mastectomy. Four underwent reconstruction. Six had chemotherapy. Six were treated with comprehensive post-mastectomy RT and 2 had breast-only RT. Proton therapy was used in 3. All experienced acute grade 1-2 dermatitis, with no grade 3 or higher toxicities. Three developed grade 2 arm lymphedema at a median of 9 mo post-RT; each had had axillary dissection. Three developed shoulder dysfunction at a median of 10.9 mo after RT. Conclusions: AYA patients with breast cancer have unique challenges. Among those undergoing RT, oncologic outcomes appear excellent. Extremity lymphedema and shoulder dysfunction were common.

Purpose: To report our initial experience using ablative-dose intensity modulated proton therapy (IMPT) for lung lesions. Methodology: Local, regional, and distant progression and overall survival (OS) were assessed in 34 patients who received ablative (BED10 > 70) IMPT from 2017-2020. Results: Patient and treatment characteristics can be seen in Table 1. With a median follow-up of 14 months, 29 patients had partial or complete response as their best treatment response, and 4 had stable disease on post-treatment imaging. OS at 1 and 2 years was 64.9% and 48.2%, respectively, with a median OS of 16.1 months. Six patients developed local recurrence (LR). The cumulative incidence (CI) of LR was 10.9% at 1 year and 26.1% at 2 years. Ten patients had a regional recurrence, with a CI of 21.9% at 1 year and 31.3% at 2 years. Seventeen developed distant progression, with a CI of 46.5% and 57.7% at 1 and 2 years. Univariate analysis did not identify any factors associated with increased risk of LR. Acute grade 2+ toxicity was seen in 2 patients, who developed dyspnea. Subacute grade 2+ toxicities occurred in 4 patients: 2 with radiation pneumonitis (grade 2), 1 with bronchial stenosis (grade 2), and 1 with bronchial obstruction (grade 3). The patient with bronchial obstruction also had a trapped lung (grade 4) requiring surgical management. Both pneumonitis patients had prior ipsilateral lung radiation. Conclusions: Ablative IMPT provided favorable oncologic outcomes with a low rate of toxicity and should be considered especially for patients with underlying ILD or prior lung radiation.
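The BED10 > 70 eligibility criterion above follows the standard linear-quadratic definition of biologically effective dose for $n$ fractions of size $d$ with $\alpha/\beta = 10$ Gy:

$$\mathrm{BED}=nd\left(1+\frac{d}{\alpha/\beta}\right).$$

For example, 50 Gy in 5 fractions gives $\mathrm{BED}_{10}=50\times(1+10/10)=100$ Gy, well above the ablative threshold used here.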
Introduction: There are limited studies comparing the acute toxicities of protons vs. intensity modulated radiotherapy (IMRT) in the local treatment of pancreatic cancer. The objective of this study was to compare patient demographics and the incidence of acute clinical events in pancreatic cancer patients treated with IMRT vs. protons. Methods: We collected data on 98 pancreatic cancer patients who were treated with IMRT or protons between 2007-2017 at the three Mayo Clinic locations. Results: Patient characteristics are shown in Table 1. We found that mean age, gender, stage, and chemotherapy use were well balanced between the two treatment modalities. Interestingly, surgical management of the two groups differed significantly. Acute clinical incidents occurring within 90 days of radiation (blood transfusions, weight loss of >10%, emergency department visits, inpatient admissions, narcotic use, and death) did not significantly differ between IMRT vs. protons (chi-square; p > 0.05). Both treatments led to a significant reduction in the lymphocyte count from the start to the end of treatment (t-test; p < 0.05). The absolute value of this lymphocyte count drop during radiation therapy was similar between IMRT and protons. The total healthcare cost (comprising radiation, chemotherapy, hospitalizations, ED visits, procedures, etc.) was similar between the two modalities (t-test; p > 0.05). Conclusions: Our data indicate that both protons and IMRT are appropriate treatment modalities, with similar rates of acute events and total healthcare costs. Surgical management was more frequent among those treated with protons; however, this may reflect a selection bias for healthier patients undergoing proton treatments.

Background: Retrospective analyses and a phase II trial have demonstrated that chemoradiation with proton therapy in locally advanced esophageal cancer may reduce treatment-related toxicity, such as lymphopenia and cardiopulmonary complications. An unanswered question is the optimal proton beam arrangement to achieve the maximal reduction in radiation dose to both the heart and lung. Methods: A retrospective review was performed of patients with locally advanced, non-metastatic esophageal cancer treated with chemoradiation utilizing a single posterior-anterior (PA) proton beam technique with pencil-beam scanning (PBS) at a single institution between January 2015 and August 2020. Inclusion criteria were: 1) planned with PBS to 50.4 Gy(RBE) over 28 fractions, 2) concurrent carboplatin and paclitaxel, and 3) age 18 or older. Results: Fifty patients met the inclusion criteria, of whom 42 received trimodality therapy. Median follow-up was 3.2 years. 3-year overall survival (OS) and disease-free survival (DFS) were 77% and 56%, respectively. Nine patients (18%) experienced late grade 3+ toxicity, all of which were non-malignant esophageal strictures. No vertebral body fractures were observed. On univariate analysis, both ypN stage (HR 2.54 [1.04-6.25], p = 0.04) and PTV volume (HR 1.68 [1.04-2.73], p = 0.03) were significantly associated with DFS. No dosimetry variable was significantly associated with OS or toxicity. Of those undergoing esophagectomy (n = 42), pCR was achieved in 16.7% (n = 7). Conclusions: Chemoradiation utilizing a single PA proton beam with PBS for locally advanced esophageal cancer is feasible and safe, with clinical outcomes comparable to historical data.

Abstract: Local failure represents a source of morbidity and mortality for patients with locally advanced unresectable or medically inoperable pancreatic cancer (LAPC). We hypothesize that proton therapy (PBT) can achieve durable local control with a reduced risk of side effects as compared to photon therapy. Methods: We analyzed the multicenter prospective registry of the Proton Collaborative Group for patients with LAPC who received definitive PBT. 90% of patients had adenocarcinoma histology, while two patients had either a neuroendocrine tumor or a cystadenoma. Overall survival (OS), freedom from local-regional recurrence (FFLR), and freedom from distant metastases (FFDM) were calculated for the adenocarcinoma cohort. Toxicity was calculated for the entire cohort. Results: Nineteen patients were identified. Median age was 70 years. Patients had adenocarcinoma (n = 17), neuroendocrine tumor (n = 1), or cystadenoma (n = 1). The majority had T3-4 (68.4%) disease. Median PBT dose was 54 Gy (IQR: 50.5-59.4). Of patients with adenocarcinoma histology, 76.4% received induction chemotherapy and 82% received concurrent chemotherapy. Median follow-up time was 10.0 months.
Conclusions: This study shows excellent local control following PBT in LAPC, with a lower side effect profile than in modern IMRT photon series. Additional studies are needed to determine whether PBT can further improve outcomes without adding toxicity using dose-escalated strategies for LAPC.

Purpose/Objectives: We present a retrospective single-institution study on the clinical outcomes of patients with hepatocellular carcinomas (HCCs) treated with proton beam therapy (PBT) using a simultaneous integrated boost/protection (SIB/P) technique to dose-escalate tumors while protecting organs at risk (OARs). Materials/Methods: Thirty-one consecutive HCC patients were treated with SIB/P PBT between 2014-2020 with a 15-fraction regimen of 45.0-67.5 Gy(RBE). Non-classic radiation-induced liver disease (RILD) was defined by a Child-Pugh (CP) score increase of 2+ and/or RTOG grade 3 enzyme elevation. Overall survival (OS), progression-free survival (PFS), and local control (LC) were calculated using the Kaplan-Meier method, and univariate predictors of OS by Cox regression analysis. Results: Patients represented a high-risk cohort: 39% with BCLC stage C, 16% with CP-B/C cirrhosis, and a median gross tumor volume (GTV) diameter of 10.2 cm. Pencil beam scanning was used in all patients. An average GTV mean dose of 62.0 Gy(RBE) was achieved, with an average D99 of 49.5 Gy(RBE) and D95 of 52.8 Gy(RBE). Median follow-up for all patients was 8 months. 1-year OS and PFS were 53% (95% CI 34-72%) and 75% (95% CI 39-93%), respectively. 1-year LC was 86.2% (95% CI 63-96%), with two isolated local failures. One patient experienced RILD. Conclusions: In this series of HCC patients with high-risk tumors, moderately dose-escalated PBT with a SIB/P technique delivering a heterogeneous tumor dose results in excellent local control rates and minimal toxicities.

Purpose: Consistent daily bladder volumes (BVs) during a course of proton therapy for prostate cancer improve treatment accuracy and efficiency, especially for fixed-beam and SBRT patients. Our initial clinical experience using an ultrasound bladder scanner to optimize bladder filling is demonstrated. Methods: The CUBEscan™ software used for the BioCon-750 bladder scanner calculates the ultrasound BV from 12 planes instead of an ellipsoid estimation with coronal and sagittal diameters. The daily ultrasound BV was measured prior to X-ray setup imaging. The patient would wait longer if the ultrasound BV was more than 25% below the volume calculated on the planning CT (pCT), but would not void if the ultrasound BV was larger. The patient-specific drinking instruction (16-24 oz., 30-60 min) could also be adjusted to improve the consistency of bladder filling for the remaining fractions. The daily ultrasound BVs for 6 patients (5 fixed-beam room, 1 SBRT) were compared with the volumes calculated in the pCT, verification CTs (vCTs), and cone-beam CTs (CBCTs), respectively. Results: Figure 1 displays the daily ultrasound BVs for a fixed-beam prostate patient with a BV of 264.6 ml in the pCT and 259.8/157.6/270.3 ml in the vCTs. Table 1 lists the daily ultrasound and CBCT BVs for a prostate SBRT patient. Preliminary results showed that the average daily ultrasound BV differences versus the pCT baseline were 20.4%, 18.8%, 28.5%, 22.9%, 15.5%, and 16.9%, excluding BVs larger than the pCT baseline by more than 50%, mostly due to treatment delays. Conclusions: A daily ultrasound BV within 25% of the pCT baseline was achievable, which minimizes the number of kV/CBCTs and setup/range uncertainties and improves treatment efficiency.
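The daily decision rule described above reduces to a simple volume-ratio check. A minimal sketch follows, with thresholds taken from the abstract and function and variable names our own, not the clinical software's:

```python
# Minimal sketch of the daily bladder-filling decision rule: wait if the
# ultrasound volume is >25% below the planning-CT baseline, flag volumes
# >50% above baseline (these were excluded from the reported analysis),
# otherwise proceed to X-ray setup imaging.
def bladder_action(us_volume_ml, pct_volume_ml):
    ratio = us_volume_ml / pct_volume_ml
    if ratio < 0.75:
        return "wait"      # under-filled: have the patient wait longer
    if ratio > 1.50:
        return "flag"      # over-filled (e.g., after a treatment delay)
    return "proceed"       # within tolerance

for v in (180, 250, 410):  # toy daily ultrasound volumes (ml)
    print(v, bladder_action(v, pct_volume_ml=264.6))
```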
Purpose: Proton beam therapy (PBT) may provide a dosimetric advantage in sparing soft tissue and bone while achieving target coverage for extremity soft tissue sarcoma (STS). We compare PBT with intensity modulated radiation therapy (IMRT) and 3D conformal radiation therapy (3D CRT) in the treatment of extremity STS. Materials/Methods: Seventeen patients previously treated with PBT were identified for this study. Of these, 14 patients were treated with preoperative 50 Gy in 25 fractions and are the subject of this study. IMRT and 3D CRT plans were created for comparison against the PBT plans. Cumulative DVH data were generated to compare techniques. For the clinical target volume, D2, D95, D98, V50, Dmin, and Dmax were assessed. Dmin, D1, Dmax, Dmean, V1Gy, V5Gy, and V50Gy were evaluated for the adjacent soft tissue. D1cc, Dmax, Dmean, and V35-50 were evaluated for bone. Results: All of the plans achieved satisfactory coverage of the clinical target volume. The PBT plans delivered less dose to uninvolved soft tissue and adjacent bone. The mean dose to the soft tissue was 2 Gy, 11 Gy, and 13 Gy for PBT, IMRT, and 3D CRT, respectively. The mean dose to adjacent bone was 15 Gy, 24 Gy, and 27 Gy for PBT, IMRT, and 3D CRT, respectively. Conclusion: PBT for extremity STS demonstrated superior sparing of uninvolved soft tissue and adjacent bone in comparison to IMRT and 3D CRT. Further analysis will identify for which patients PBT provides the maximum benefit. Assessment of clinical outcomes will determine whether the dosimetric advantages of PBT correlate with less toxicity.

Background and Aim: Acute tongue mucositis and dysgeusia are common toxicities of particle therapy applied in proximity to the oral cavity. In the present report we describe the application of individual tongue spacers that may help reduce these side effects by mechanically moving the tongue away from the irradiation field. Materials and Methods: Fourteen patients were treated with individual spacers; with the spacers in place, the maximum dose to the tongue was reduced by up to 22.3 Gy, with a corresponding reduction in tongue mucositis.

Purpose: Adjuvant pelvic radiotherapy improves locoregional control in high-risk and advanced-stage endometrial cancer. Pelvic radiotherapy is associated with acute urinary, gastrointestinal, and hematologic toxicity, which may be reduced with proton therapy. This study is a dosimetric comparison of intensity modulated proton beam therapy (IMPT) versus volumetric modulated arc therapy (VMAT). Methods: The first 10 patients enrolled on an institutional prospective non-randomized trial of proton or photon post-hysterectomy radiotherapy were included. Patients underwent CT simulation with pelvic immobilization and a rectal balloon. Full-bladder, empty-bladder, and IV contrast scans were obtained. Comparison plans were generated. Clinical target volumes (CTV) included the vaginal cuff, proximal 3 cm of vagina, and pelvic lymph nodes (internal iliac, external iliac, obturator, presacral, and distal common iliac) to the level of L4/L5. The photon planning target volume (PTV) was a 5 mm expansion on the CTV. The proton optimization target volume was 7 mm on the vaginal cuff CTV and 5 mm on the nodal CTV. The prescription to the CTV (IMPT) or PTV (VMAT) was 45 Gy (relative biological effectiveness 1.1) in 25 fractions. Results: Stages were IB (n = 2), (n = 1), IIIA (n = 2), and IIIC1. Three patients were treated with VMAT and seven with IMPT. Conclusion: IMPT is associated with reduced low dose to bowel, bladder, and bone marrow.
Additional dosimetric comparisons will be conducted as patients enroll.

Purpose: A vaginal dilator was used in female pelvis proton therapy to reduce the toxicity of vaginal stenosis. Our initial clinical experience is described, including simulation, dilator contouring (HU override), plan optimization, bladder filling, daily IGRT, and continuous plan evaluation with verification scans. Methods: Two consecutive patients were treated to the pelvis with proton therapy using a vaginal dilator. The dilator was inserted with a marked stopping point at the entrance. The physical and water-equivalent thickness (relative stopping power of 1.26) were measured and applied. Three fields (LPO/RPO/AP) with multiple-field optimization were used to deliver a simultaneous integrated boost prescription (50.4/42.0 Gy(RBE)). An ultrasound bladder scanner was used to maintain consistent bladder filling prior to X-ray imaging. Daily kV/CBCT was used to align the patient with a <5 mm setup tolerance to bony structures. Verification CTs were performed to evaluate plan robustness. Results: The vaginal dilator was located at the distal dose fall-off between the anal target and the bladder. There was slight inter-fraction variation of the dilator angle and up to a 7 mm difference in the inserted length. The final verification plans showed an increase of <5 cm³ in vagina V47.88Gy(RBE) with the dilator tilted <4°. No significant changes were found in CTV dose coverage. Conclusions: The entrance marker reproduced the length of the vaginal dilator insertion but did not account for rotational positioning. The pre-treatment ultrasound bladder scan improved the consistency of bladder filling and minimized dose variations. Clinical outcomes and treatment toxicities of the two patients will be followed.

Background and Aims: An integrated platform has been created at the University of Washington Medical Cyclotron Facility to conduct proton FLASH research on a mouse model. Methods: A cyclotron beamline has been modified to produce a 6 cm diameter scattered beam at dose rates between 0.1 and 100.

Purpose: The high cost of proton treatment centers can result in unsustainable economic pressure if the facility is not planned appropriately. Single-room centers provide a lower-cost facility with a tradeoff of planning to treat at maximum capacity. It is critical, then, to understand realistic treatment times for the patient population served. We analyzed treatment times from two single-room proton centers to obtain site-specific expectations. Materials and Methods: Database queries were performed at two independent facilities: A) a private center treating predominantly prostate, and B) a large academic medical center treating a complex case distribution. Treatment sites were grouped as 2-field prostate (2FP), 3-field prostate (3FP), brain (BRN), 3-field head/neck (3FHN), 4+-field head/neck (4F+HN), spine (SPN), and breast/chest wall (BCW). Sites were identified retrospectively using plan names. Session time was defined as the time from the first image to the last beam-off time. Results: 541 patient datasets encompassing 5737 sessions were analyzed. The average session times were 8.0 ± 2.2 min, 11.6 ± 5.6 min, 13.7 ± 6.8 min, 12.3 ± 4.8 min, 12.3 ± 5.4 min, 16.7 ± 10.9 min, and 16.8 ± 5.2 min for 2FP, 3FP, BRN, 3FHN, BCW, SPN, and 4F+HN, respectively. The overall session times were 11.2 ± 8.5 min and 19.7 ± 9.3 min for Facility A and B, respectively.
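As a sketch of the session-time summary just described (session time as the interval from first setup image to last beam-off, summarized per treatment-site group), the following Python/pandas snippet is illustrative; the column names and records are hypothetical, not the study database schema.

# Sketch of the session-time summary described above. Column names
# and example timestamps are made up for illustration.
import pandas as pd

records = pd.DataFrame({
    "site": ["2FP", "2FP", "3FHN", "BCW"],
    "first_image": pd.to_datetime(
        ["2021-03-01 08:00", "2021-03-01 08:30",
         "2021-03-01 09:00", "2021-03-01 09:30"]),
    "last_beam_off": pd.to_datetime(
        ["2021-03-01 08:07", "2021-03-01 08:39",
         "2021-03-01 09:13", "2021-03-01 09:42"]),
})

# Session time: first image to last beam-off, in minutes.
records["session_min"] = (
    records["last_beam_off"] - records["first_image"]
).dt.total_seconds() / 60.0

# Mean and standard deviation per site group, as reported above.
print(records.groupby("site")["session_min"].agg(["mean", "std", "count"]))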
Conclusion: The data analyzed in this study provide a reasonable collection of treatment data for planning future centers for various patient populations.

Purpose: The purpose of this study was to determine the feasibility of utilizing the XRV-124 scintillation detector in measuring the collinearity of the X-ray system and the uniform scanning proton beam. Methods: A brass aperture for Snout 10 was manufactured, with a central opening 1 cm in diameter. 2D kV X-ray images of the XRV-124 were acquired such that the marker inside the detector was aligned at the imaging isocenter. After obtaining the optimal camera settings, a uniform scanning proton beam was delivered for various ranges (12 g/cm² to 28 g/cm² in steps of 2 g/cm²). For each range, 10 monitor units (MU) of the first layer were delivered to the XRV-124 detector. Collinearity tests were then repeated using EDR2 and EBT3 films following our current QA protocol in practice. The results from the XRV-124 measurements were compared against the collinearity results from the EDR2 and EBT3 films. Results: Collinearity was evaluated in the horizontal (X) and vertical (Y) directions. The average collinearity in the X-direction was -0.24 ± 0.30 mm, 0.57 ± 0.39 mm, and -0.27 ± 0.14 mm for EDR2, EBT3, and XRV-124, respectively. The average collinearity in the Y-direction was 0.39 ± 0.07 mm, 0.29 ± 0.14 mm, and 0.39 ± 0.03 mm for EDR2, EBT3, and XRV-124, respectively. Conclusion: On average, the results from the XRV-124 agreed better with those of EDR2. The use of the XRV-124 for collinearity tests in uniform scanning proton beams can improve the efficiency of the QA workflow compared to films.

Purpose: The objective of the current study was to present comprehensive patient-specific quality assurance (QA) results comparing RayStation treatment planning system (TPS) predicted dose versus measured dose in uniform scanning proton therapy. Methods: Proton plans for various disease sites were generated in the RayStation TPS. The disease sites studied include abdomen, bladder, bowel, brain, breast, chest wall, esophagus, larynx, liver, mediastinum, head and neck, pelvis, prostate, sacrum, and spine. The field size ranged from 3 cm to 28 cm, whereas the proton beam range and modulation ranged from 4 to 31 g/cm² and 2 to 19 cm, respectively. Measurements were acquired using a parallel-plate ionization chamber in a water tank following the institution's QA protocol. The TPS-predicted results, calculated based on an in-house developed output factor model, were then compared against the measurements. Results: A total of 705 proton fields were irradiated. The differences between predicted and measured doses were as follows: abdomen: 0.10 ± 0.36%; bladder: -0.47 ± 1.15%; bowel: 0.81 ± 1.04%; brain: 0.28 ± 1.03%; breast: 0.88 ± 1.24%; chest wall: 2.09 ± 1.31%; esophagus: 0.39 ± 1.26%; larynx: 0.47 ± 0.68%; liver: 0.01 ± 1.02%; mediastinum: 0.28 ± 0.56%; head and neck: 0.23 ± 0.74%; pelvis: -0.40 ± 0.73%; prostate: -0.80 ± 0.60%; sacrum: 0.17 ± 0.55%; spine: 0.31 ± 0.48%. Conclusion: Overall, 93.9% of proton fields were within ±2% and did not require monitor unit (MU) adjustment. For measurements outside of ±2%, 6.1% of proton fields were recalibrated with the measured MU. The major discrepancies between predicted and measured dose were seen for the breast and chest wall patients.
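The ±2% action level described above amounts to a simple per-field tolerance check; a minimal Python sketch follows, with illustrative (not measured) values.

# Sketch of the patient-specific QA tolerance check described above:
# fields whose predicted-vs-measured dose difference exceeds +/-2%
# are flagged for MU recalibration. Data values are illustrative.
predicted = [1.000, 0.985, 1.021, 0.996]   # TPS-predicted doses (a.u.)
measured  = [1.002, 1.010, 0.998, 0.995]   # water-tank measurements (a.u.)

for i, (p, m) in enumerate(zip(predicted, measured)):
    diff_pct = 100.0 * (p - m) / m
    status = "OK" if abs(diff_pct) <= 2.0 else "recalibrate MU"
    print(f"field {i}: {diff_pct:+.2f}% -> {status}")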
Using Pencil Beam Scanning (PBS), treatment plans are fully described by a set of beam spot parameters such as energy, time duration, position, and angle. Even as PBS has become the new standard in proton radiotherapy, most quality assurance instrumentation has not been designed to complement PBS. A detector capable of measuring beam parameters spot by spot in real time would enable richer diagnostics and further restriction of proximal margins. A new planar detector array capable of recording proton beam position and fluence at a rate of 25 kHz has been built and is now being characterized. With sub-millimeter resolution in beam positioning, the new device overcomes the most common limiting performance factors of planar detector arrays with a conventional pixelated arrangement of sensors. A large-area (1,260 cm²) array proof-of-concept is also nearing completion. Gas ionization is collected by a planar arrangement of strips projected along three directions in a beam-transverse plane, from which the beam shape (covariance) and size, as well as position, are reconstructed for each recorded data frame. The proposed multi-directional readout provides a large, isotropic, and continuous active area while using fewer data channels than pixel-based arrays. Combined with a novel approach to tomographic reconstruction, updated preliminary experimental results demonstrate spatial resolution better than 200 μm and timing resolution down to 100 μs. These findings open additional avenues to enhanced machine- and patient-level quality assurance of PBS, as well as continuous line-scanning proton beam modalities, through superior timing capabilities, coordinate resolution, and precise dose registration.

Purpose: Treatment planning for proton therapy is evolving rapidly with the additions of multi-criterion optimization, GPU-based calculation, LET-based optimization, etc. to commercial treatment planning systems (TPS). As the needs of a clinical practice also evolve, it becomes necessary to periodically re-evaluate the available options. In this study, we evaluated RaySearch's RayStation against our existing Varian Eclipse TPS. Methods: A core group of physicists, dosimetrists, and physicians compiled a list of functionalities to evaluate using an on-site test system. Testers were provided training and support from RaySearch so that unfamiliarity with the new software would not hinder the evaluation. Functionalities were ranked by importance for patient care (IPC) and scored based on a performance index (PI) according to evaluation metrics previously developed in our clinic for the evaluation of oncology information systems. Results: In Table 1, Advantage indicates whether the PI favored Eclipse (blue) or RayStation (red), and the Test Score represents performance weighted by IPC, where a perfect score would be 100%. Of the 24 features tested, 7 favored Eclipse, 11 favored RayStation, and 6 were neutral. With Test Scores below 60%, neither TPS is ideal. Based on PI, 16 features were identified as acceptable and 8 as unacceptable for both systems (Fig. 1). Conclusion: The Eclipse and RayStation systems were found to be comparable to one another, each with advantages and disadvantages and neither being an ideal solution. This study, which spanned more than a year from initiation to completion, also highlighted the complexities of software evaluation amid the increasing complexity of IT environments.
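The IPC-weighted scoring just described can be read as a weighted mean of per-feature performance; the Python sketch below shows one plausible reading, with entirely made-up feature names, PI values, and IPC weights (the abstract does not publish its scoring tables).

# Sketch of an IPC-weighted score of the kind described above: each
# feature's performance index (PI) is weighted by its importance for
# patient care (IPC), and the test score is the weighted mean as a
# percentage of a perfect score. All values here are illustrative.
features = [
    # (name, PI on a 0-1 scale, IPC weight)
    ("dose calculation speed", 0.8, 3),
    ("robust optimization",    0.6, 5),
    ("plan evaluation tools",  0.5, 4),
]

weighted = sum(pi * ipc for _, pi, ipc in features)
total_weight = sum(ipc for _, _, ipc in features)
test_score = 100.0 * weighted / total_weight
print(f"Test Score: {test_score:.1f}%")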
Purpose: Treatment of locally advanced salivary gland tumors with skull base invasion is a major clinical challenge due to the dose limits of adjacent critical structures. One effective treatment approach is to combine a 3D conformal fast neutron therapy (3DCNT) treatment with a proton boost to the skull base. Recent technical advances have enabled intensity-modulated neutron therapy (IMNT) at the University of Washington. We evaluated the dosimetry of IMNT as an alternative to combined 3DCNT with a proton boost for locally advanced salivary gland tumors with skull base invasion. Methods: Two patients treated with 3DCNT and a proton boost for adenoid cystic carcinoma of the palate with skull base invasion were retrospectively replanned using IMNT. Patients received 18.4 Gy in 16 fractions of 3DCNT (equivalent to 74 Gy in 37 fractions of x-rays) with a subsequent 30 Gy proton boost delivered in 15 fractions to the skull base. Dose volume histograms (DVHs) were used for plan comparison. Results: Table 1 below compares relevant changes in dosimetry for the IMNT and 3DCNT + proton plans. Planning target volume (PTV) coverage was equivalent for both treatment approaches. The IMNT plan allowed significant sparing of adjacent optic structures and the temporal lobe. Conclusions: IMNT achieved target volume coverage comparable to high-LET neutrons while significantly reducing dose to adjacent tissues compared to the 3DCNT + proton boost. IMNT has the potential to reduce treatment toxicity and improve other quality-of-life metrics (e.g., treatment duration) compared to 3DCNT with a proton boost.

Hence, this study supports pursuing research on the combination of Ganetespib with proton radiotherapy for prospective clinical exploitation.

Experimental setup: The HollandPTC R&D room is equipped with a fixed horizontal beam line providing beam energies from 70 up to 240 MeV and intensities from 1 to 800 nA. The room can provide single pencil beams and large fields with 98% beam uniformity, and a Spread-Out Bragg Peak (SOBP) produced with 2D passive modulators. Recently, the maximum energy of 250 MeV has been released in the R&D room for FLASH applications. The full beam characterisation has been performed together with absolute dose measurements. Results: A 43% transmission efficiency of the ProBeam cyclotron is achieved at 250 MeV. This resulted in a current of around 300 nA at the target position. The beam spot size has a standard deviation of 3.6 mm. The fluence rate was found to be 8×10⁶ protons/cm²·s, more than a factor of 100 higher than conventional beams. To further characterise the 250 MeV proton beam at maximum beam current, a dedicated integral monitor chamber is currently under commissioning in collaboration with the company DE.TEC.TOR. Different cutting-edge solutions are adopted for the ionisation chambers to cope with FLASH intensities and minimise recombination effects. The device is also equipped with X-Y strip ionisation chambers to measure beam size and position.

PTCNA-0044 Out-of-field dose in photon and hadron radiotherapy measured in a 3D-printed patient-specific anthropomorphic whole-body phantom

PTCNA-0091 Impact of range uncertainty, setup errors, and breathing phase on dose-averaged linear energy transfer (LETd) distributions in PBS proton lung plans

Patients were identified due to off-margin intrafraction motion magnitude (Figure 2) and were replanned with larger prostate PTV margins.
Conclusion: The IGRT surveillance program was implemented and identified 4 patients for re-simulation and re-planning due to large systematic interfraction prostate motion error, and identified 4 different patients for re-planning due to large intrafraction error.

PTCNA-0023 Combined proton-photon treatments with a fixed proton beamline integrated into a conventional bunker for photon therapy

recalculating the dose using Monte-Carlo simulation (MCS). The robustness of target coverage under 5 to 10% range uncertainties was evaluated to ensure acceptable target coverage and sparing of organs at risk under worst-case scenarios. Results: The use of three fields was found to reduce the effect of range uncertainties and produce a more homogeneous dose distribution compared to a single PA field. The 3D maximum dose difference between the calculated dose distributions from Eclipse and MCS for the three-field plans was 4.1%. Conclusions: The use of three fields produces a proton therapy plan with acceptable dose distributions under dose calculation and range uncertainties. Even with the larger uncertainties due to the presence of high-Z material, the proton therapy plan was preferred over the photon plan due to normal tissue sparing.

PTCNA-0076 Evaluation of knowledge-based spot-scanning proton treatment planning model for prostate cancer patients
Casey Johnson 1, Sheng Huang 1, Chin-Cheng Chen 1, Anna Zhai 1, Haibo Lin 1, Pingfang Tsai 1
1 New York Proton Center, Physics, New York, USA
Purpose: To evaluate a knowledge-based model in prostate proton treatment planning. Methods: The knowledge-based RapidPlan module in the Varian Eclipse TPS (v16.1) was used for prostate proton treatment planning. A model was created and trained using 40 patients for a prescription of 70.2 Gy to the prostate and seminal vesicles in 26 fractions. The model was evaluated by analyzing goodness-of-fit summary statistics and identifying possible outliers. The established model was then tested on five additional prostate patients. The model-generated plans were compared to clinically used plans to determine the accuracy and performance of the model in target coverage and organ-at-risk (OAR) sparing. Conclusion: The RapidPlan model saved 66% of the planning time and produced comparable CTV coverage and superior or equivalent OAR sparing for prostate patients.

PTCNA-0037 Accuracy of proton range calculations with dual-layer CT: A Monte Carlo study with animal tissues

with a fully commissioned TOPAS-based Monte Carlo model of a synchrotron proton beam delivery system, PROBEAT-V. The proton range, R90, determined by DLCT and SECT was comparatively evaluated against the MC model. Results: For 221.3 MeV, the deviation of the proton range in the TPS from the Monte Carlo simulation was reduced from 1.9-4.6 mm for SECT to 0.2-1.3 mm with DLCT (Table 1). The largest deviations occurred in fresh and frozen lung tissues. For 150 MeV, differences were less dramatic; however, DLCT continued to agree better with the Monte Carlo simulations than SECT for lung tissues. Conclusions: The DLCT-based range calculation in the TPS agreed with the Monte Carlo simulation to within 1.3 mm over 30.96 cm of water-equivalent length for all tested animal tissues, and within 0.5 mm for lung tissues. This finding supports the adoption of DLCT for dose calculation in the TPS.
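For readers unfamiliar with the R90 metric compared above, it is the depth of the distal 90% dose level of the Bragg curve; the Python sketch below extracts it from a sampled depth-dose curve by interpolating on the distal falloff. The toy Gaussian "Bragg peak" is purely illustrative.

# Sketch of extracting the proton range R90 (depth of the distal 90%
# dose level) from a sampled depth-dose curve. The curve below is a
# toy example, not commissioning data.
import numpy as np

depth = np.linspace(0.0, 32.0, 641)                  # cm
dose = np.exp(-0.5 * ((depth - 30.0) / 0.4) ** 2)    # toy Bragg peak

peak_idx = int(np.argmax(dose))
distal_depth = depth[peak_idx:]
distal_dose = dose[peak_idx:]

# Interpolate on the distal falloff for the 90% dose level.
# np.interp needs increasing x, so flip the (decreasing) distal dose.
r90 = float(np.interp(0.9, distal_dose[::-1], distal_depth[::-1]))
print(f"R90 = {r90:.2f} cm")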
PTCNA-0102 Study of the usefulness of PTV in improving the robustness of IMPT plans for target coverage under range and setup uncertainties
Narayan Sahoo 1, Xiaodong Zhang 1, Yupeng Li 1, Falk Poenisch 1, Archana Gautam 1, Thomas Whitaker 1, Ming Yang 1, Richard Wu 1, Xiaorong Zhu 1
1 UT MD Anderson Cancer Center, Department of Radiation Physics-Pt. Care, Houston, USA
Objective: This study aims to examine the usefulness of a PTV created by uniform expansion of the CTV in improving CTV coverage in robustly optimized intensity modulated proton therapy treatment plans. Method & Materials: IMPT treatment plans were optimized in the Eclipse TPS from Varian Medical Systems using the Nonlinear Universal Proton Optimizer (NUPO) for targets in the brain and other regions of the head and neck. The CTV was selected for robust optimization under eight uncertainty scenarios corresponding to 3.5% range uncertainties and ±3 mm setup uncertainties. Two plans were created for each patient, one with and one without the non-robust minimum target coverage objective for the PTV. The PTV was customized to avoid overlap with OARs. Robustness of the CTV coverage under the same eight uncertainty scenarios was analyzed for both plans. The D95 for the CTV in the uncertainty-band DVH for the worst-case scenario was used to judge the usefulness of the PTV in improving the robustness of the IMPT plans. Results: The IMPT plans with the minimum PTV target coverage objective were found to have a better D95 for the CTV compared to the IMPT plans without this objective, with comparable DVHs for the OARs. Conclusion: The PTV was found to be useful in improving the robustness of IMPT plans for target coverage under range and setup uncertainties.

PTCNA-0059 Assessment of fragment contributions to dose and estimates of relative biological effectiveness by common models in carbon radiotherapy
Shannon Hartzell 1, Fada Guan 1, Paige Taylor 1, Christine Peterson 2, Stephen Kry 1
1 The University of Texas MD Anderson Cancer Center, Radiation Physics, Houston, USA
2 The University of Texas MD Anderson Cancer Center, Biostatistics, Houston, USA
Several models of variable RBE may be implemented in clinical and research-based treatment planning systems for carbon radiotherapy, including the microdosimetric kinetic model (MKM), stochastic MKM (SMKM), and Local Effect Model I (LEM), which have not been thoroughly compared. This work compares how the models handle carbon beam fragmentation, providing insight into where model differences arise. Geant4 Monte Carlo was used to simulate clinically realistic monoenergetic and SOBP carbon beams incident on a phantom. From these, input parameters for each RBE model (microdosimetric spectra, double-strand-break yield, kinetic energy spectra, dose) were calculated for the relevant fragment species (H, He, Li, Be, B, and C). The spectra for each fragment were used to calculate the linear and quadratic portions of each RBE model, which were combined with reference values and physical dose to calculate RBE. Calculations found that secondary fragment contributions could exceed 20% of the total physical dose (Figure). When calculated using identical beam parameters, RBE magnitude varied greatly across models and was typically lowest for MKM. When compared across fragments, RBE decreased with atomic number for Z<3 and increased for Z≥3 for RBE(MKM) and RBE(SMKM) (Table). RBE(LEM) increased with Z until dropping sharply at Z=6. Trends of RBE by fragment varied by LET region for the microdosimetric models only.
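For context, a common way to fold fragment-specific linear and quadratic terms into a single RBE under the linear-quadratic framework is dose-weighted mixing of the LQ parameters; this is the generic mixed-field formulation, and the abstract does not state that each model uses exactly this recipe:

\[
\alpha_{\mathrm{mix}} = \frac{\sum_i d_i\,\alpha_i}{\sum_i d_i},
\qquad
\sqrt{\beta_{\mathrm{mix}}} = \frac{\sum_i d_i\,\sqrt{\beta_i}}{\sum_i d_i},
\]
\[
\mathrm{RBE}(D) = \frac{\sqrt{\alpha_x^2 + 4\,\beta_x D\left(\alpha_{\mathrm{mix}} + \beta_{\mathrm{mix}} D\right)} - \alpha_x}{2\,\beta_x D},
\]

where \(d_i\), \(\alpha_i\), and \(\beta_i\) are the physical dose and LQ parameters of fragment species \(i\), \(D = \sum_i d_i\) is the total physical dose, and \(\alpha_x\), \(\beta_x\) are the reference photon parameters. The RBE expression follows from equating the mixed-field effect \(\alpha_{\mathrm{mix}} D + \beta_{\mathrm{mix}} D^2\) with the photon iso-effect.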
This study demonstrated that secondary fragments can contribute notably to the physical and estimated biological dose, indicating that fragmentation is an important factor in treatment delivery. Similar trends were seen in RBE fluctuations by atomic number for the microdosimetric models, which differed from those of RBE(LEM).

PTCNA-0085 Comparison of proton versus photon radiation patient cohorts in a phase 2 trial of response-adaptive radiation in locally advanced NSCLC
Introduction: For patients with locally advanced non-small cell lung cancer (LA-NSCLC), proton therapy can provide dosimetric advantages over photon radiation. However, it is unknown whether this translates into superior outcomes. We conducted a phase II clinical trial of chemoradiation for LA-NSCLC examining response-adaptive radiation dose escalation and compared the proton and photon patient cohorts on this trial. Methods: Forty-five patients with AJCC v7 stage IIB-IIIB NSCLC were prospectively enrolled (NCT02773238) in 2016-2019. All patients underwent chemoradiation (23 protons, 22 photon IMRT); 18 patients also received consolidation durvalumab. PET/CT was performed at week 3 during chemoradiation, and response status was prospectively defined by multivariate changes in tumor volume and metabolic uptake. PET non-responders received 74 Gy in 30 fractions, whereas PET responders received 60 Gy in 30 fractions. Differences between cohorts were evaluated by Mann-Whitney U-tests and log-rank tests. Conclusion: Our cohort of patients treated with proton therapy had lower radiation dose to the lungs and heart, although there was no significant difference in clinical outcomes. Our results may be limited by our sample size, and we await results from larger randomized trials.

PTCNA-0024 Evaluation of Pencil Beam Scanning and Double Scatter Proton Radiotherapy following Chemotherapy in the Treatment of Aggressive Mediastinal non-Hodgkin Lymphoma

organs at risk is paramount. We report outcomes of patients with AMNHL receiving chemotherapy followed by either pencil beam scanning (PBS) or double scattered (DS) PRT. Conclusion: Consolidative PRT following chemotherapy for AMNHL resulted in favorable outcomes without high-grade toxicities.

PTCNA-0057 Systematic review of deep inspiration breath hold in proton therapy and IMRT for mediastinal lymphoma

PTCNA-0065 Dosimetric impact of tissue expander rotation in breast cancer patients undergoing post-mastectomy intensity modulated proton therapy
Purpose: To report the dosimetric impact of rotation of temporary tissue expanders (TE) in patients receiving post-mastectomy intensity modulated proton radiotherapy (IMPT). Methods: Between 2017-2020, we identified consecutive patients in an internal registry as having a rotated TE during treatment. TE rotations were identified on daily setup kV imaging or CT scans. Clinical target volumes (CTV) and organs at risk (OAR) were contoured on post-rotation CT scans. Analysis of pre- and post-rotation dosimetry was completed in ProKnow (Elekta, Stockholm, Sweden). Results: Thirty-five patients with TE reconstruction undergoing IMPT were identified as having 47 instances of TE rotation with post-rotation CT scans in treatment position available for analysis. 46/47 pre-rotation plans met CTV (range, 4005-5000 cGy) coverage of D95% > 95%, while 16/47 met this constraint post-rotation. All pre- and post-rotation plans met coverage of D90% > 90%. 12/14 pre- and 7/14 post-rotation plans met boost CTV (range, 5625-6000 cGy) coverage of D90% > 90%.
D0.01cc [Gy] to the left anterior descending (LAD) artery increased 1.5-fold, from an average of 13.6 Gy to 19.6 Gy. D0.01cc [Gy] to the right coronary artery (RCA) increased 1.4-fold, from an average of 12.3 Gy to 16.2 Gy. LAD and RCA mean doses increased 2.4-fold and 1.5-fold, respectively. Mean heart dose increased 1.4-fold for right- and left-sided plans, from an average of 0.88 Gy to 1.19 Gy for left-sided and 0.50 Gy to 0.68 Gy for right-sided plans. Conclusions: Tissue expanders can rotate during breast IMPT, potentially impacting both CTV coverage and dose to OARs. Awareness of the potential for TE rotation during daily imaging is warranted. Replanning is usually indicated.

PTCNA-0061 Post-mastectomy intensity modulated proton therapy reduces skin dose compared with photon therapy
Purpose/Objective(s): Studies have suggested greater skin toxicity with post-mastectomy proton pencil-beam scanning radiotherapy (PRT) than photon radiotherapy (XRT). We aim to compare the target coverage and skin sparing of a large cohort of patients treated with post-mastectomy PRT and XRT at a tertiary cancer center. Materials/Methods: Consecutive women with unilateral, non-inflammatory breast cancer treated with 50 Gy(RBE) to the chest wall and regional lymph nodes between 2015-2019 were included. PRT was administered with a median of two multi-field optimized fields (intensity modulated proton therapy). The chest wall skin was defined as the first 3 mm from the external body surface. PRT and XRT planning objectives with respect to skin were to achieve microscopic disease target coverage while limiting surface hot spots. For XRT, a 3-5 mm daily bolus was generally employed. PRT planning objectives for skin were V90% > 90% and D1cc < 105%. Results: One hundred seventy-nine women were included, 96 receiving PRT and 83 receiving XRT (95% 3D conformal radiotherapy with a wide tangent technique, 5% intensity modulated radiotherapy). Bolus was utilized in 93% of XRT patients. There was no significant difference in clinical characteristics between the groups (Table 1). Clinical target volume coverage with 47.5 Gy was excellent with both PRT and XRT. The median skin doses to 0.01 cc, 1 cc, and 10 cc were all significantly lower with PRT (Table 2). Conclusions: Post-mastectomy PRT administered with our skin-sparing technique is associated with lower skin dose than XRT while maintaining excellent target coverage.

PTCNA-0100 Proton beam therapy for large hepatocellular carcinomas in western patients
Matthew Greer 1, Stephanie Schaub 1, Avril O'Ryan-Blair 2, Tony Wong 2, Smith Apisarnthanarax 1
1 University of Washington, Department of Radiation Oncology, Seattle, USA
2 University of Washington, Seattle Proton Center, Seattle, USA
Purpose/Objectives: We present a single-institution retrospective study on the clinical outcomes of Western patients with large hepatocellular carcinomas (HCCs) treated with proton beam therapy (PBT). Materials/Methods: Fifty-one HCC patients with tumors ≥5 cm and ineligible for other liver-directed therapies were treated with PBT between 2014-2019 with a 15-fraction regimen of 45.0-67.5 Gy(RBE). Non-classic radiation-induced liver disease (ncRILD) was defined by a Child-Pugh (CP) score increase of 2+ and/or RTOG grade 3 enzyme elevation. Overall survival (OS), progression-free survival (PFS), and local control (LC) were calculated using the Kaplan-Meier method, and univariate predictors of OS by Cox regression analysis.
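As a sketch of the survival analyses named in the methods just above (Kaplan-Meier estimates and univariate Cox regression), the following Python snippet uses the lifelines package; the toy data frame, column names, and covariate are illustrative, not study data.

# Sketch of Kaplan-Meier and univariate Cox analyses as described
# above, using lifelines. All data values here are made up.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.DataFrame({
    "months": [3, 8, 10, 14, 22, 30],   # follow-up time
    "death":  [1, 0, 1, 0, 1, 0],       # event indicator
    "cp_b_c": [1, 0, 0, 1, 1, 0],       # candidate predictor (toy)
})

kmf = KaplanMeierFitter()
kmf.fit(df["months"], event_observed=df["death"])
print(kmf.survival_function_)            # Kaplan-Meier OS estimate

cph = CoxPHFitter()
cph.fit(df[["months", "death", "cp_b_c"]],
        duration_col="months", event_col="death")
cph.print_summary()                      # univariate hazard ratio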
Results: Patients represented a high-risk cohort: 45% with BCLC stage C, 18% with CP-B/C cirrhosis, and a median gross tumor volume (GTV) diameter of 11.1 cm. Pencil beam scanning was used in 67% of patients. A simultaneous integrated boost technique was employed in 78% to achieve an average GTV mean BED of 87.0 Gy(RBE). Median follow-up for all patients was 10 months. 1-year OS and PFS were 57% (95% CI 42-70%) and 32% (95% CI 19-48%), respectively. 1-year LC was 92% (95% CI 78-98%), with three isolated local failures. Out-of-field liver recurrences were the dominant pattern of failure. Six patients (14%) experienced ncRILD; all but one had baseline CP-B liver function. Conclusions: In this largest series to date of Western HCC patients with high-risk large tumors, moderately dose-escalated PBT results in excellent local control rates and acceptable toxicities. Out-of-field and distant failures remain problematic.

Materials and Methods: Patients with uterine cancer treated with curative intent who received either adjuvant PBT or IMRT between 2014-2020 were identified. Patients were enrolled on a prospective registry using a gynecologic-specific subset of PRO-CTCAE designed to assess symptom impact on daily living. Gastrointestinal questions included symptoms of diarrhea, flatulence, bowel incontinence, and constipation. Symptom-based questions were on a 0-4-point scale, with grade 3+ symptoms occurring frequently or almost always. Patient-reported toxicity was analyzed at baseline, end of treatment (EOT), and at 3, 6, 9, and 12 months after treatment. Unequal-variance t-tests were used to determine if treatment type was a significant factor in baseline-adjusted PRO-CTCAE. Results: Sixty-seven patients met the inclusion criteria; 22 received PBT and 45 received IMRT. A brachytherapy boost was delivered in 73% of patients. The median external beam dose was 45 Gy for both groups. When comparing PRO-CTCAE, PBT was associated with less diarrhea at EOT (p = 0.01) and at 12 months (p = 0.24) compared to IMRT. Loss of bowel control at 12 months was more common in IMRT patients (p = 0.15). Any patient-reported grade 3+ GI toxicity was noted more frequently with IMRT (31% vs 9%, p = 0.09). Discussion: Adjuvant PBT is a promising treatment for patients with uterine cancer and may reduce patient-reported gastrointestinal toxicity compared to IMRT.

PTCNA-0030 Proton therapy for high-risk prostate cancer: Results from the Proton Collaborative Group PCG 001-09 prospective registry trial
Introduction: Using the Proton Collaborative Group (PCG) prospective registry, we report outcomes for high-risk prostate cancer (HRPC) treated with proton therapy. Methods: After exclusions, 605 HRPC patients treated from 8/2009-3/2019 at nine institutions were analyzed for freedom from progression (FFP), metastasis-free survival (MFS), overall survival (OS), and toxicity. Multivariable Cox/binomial regression models were used to assess predictors of FFP and toxicity. Conclusion: In the largest series published to date, our results suggest that early safety and efficacy outcomes using proton therapy for HRPC are encouraging.
PTCNA-0032 Acute gastrointestinal (GI) and genitourinary (GU) toxicities of intensity-modulated proton therapy (IMPT) targeting the prostate and pelvic nodes for prostate cancer
Purpose: To assess the acute GI and GU toxicities of IMPT targeting the prostate/seminal vesicles and pelvic lymph nodes for prostate cancer. Methods: A prospective study (ClinicalTrials.gov: NCT02874014) evaluating moderately hypofractionated IMPT for high-risk (HR) or unfavorable intermediate-risk (UIR) prostate cancer accrued 56 patients. The prostate/seminal vesicles and pelvic lymph nodes were treated simultaneously with 6750 and 4500 cGy(RBE), respectively, in 25 daily fractions. All received androgen deprivation therapy. Acute GI and GU toxicities were prospectively assessed, using 7 GI and 9 GU categories of CTCAE v4, at baseline, weekly during radiotherapy, and at 3 months post-radiotherapy. Fisher exact tests were used for comparisons of categorical data. Results: Median age: 75 years. Median follow-up: 25 months. 55 patients (52 HR; 3 UIR) were available for acute toxicity assessment. 62% and 2% experienced acute grade 1 and 2 GI toxicity, respectively. 65% and 35% had acute grade 1 and 2 GU toxicity, respectively. None had acute grade 3 GI or GU toxicity. The presence of baseline GI and GU symptoms was associated with a greater likelihood of experiencing acute GI and GU toxicity, respectively (Tables 1 and 2). Of 45 patients with baseline GU symptoms, 44% experienced acute grade 2 GU toxicity, compared to only 10% among the 10 with no baseline GU symptoms (p = 0.07). Although acute grade 1 and 2 GI and GU toxicities were common during radiotherapy, most resolved by 3 months post-radiotherapy. Conclusions: A moderately hypofractionated regimen of IMPT targeting the prostate/seminal vesicles and pelvic lymph nodes yielded very acceptable acute GI and GU toxicity.

PTCNA-0038 Assessing the interplay effect based on a precise machine-specific delivery sequence and time for a cyclotron accelerator proton therapy system
Lewei Zhao 1, Gang Liu 2, Jiajian Shen 3, Andrew Lee 4, Di Yan 1, Rohan Deraniyagala 1, Craig Stevens 1, Xiaoqiang Li 1, Shikui Tang 4, Xuanfeng Ding 1
1 Beaumont Health System, Radiation Oncology, Royal Oak, USA
2 Huazhong University of Science and Technology, Cancer Center, Wuhan, China
3 Mayo Clinic Arizona, Radiation Oncology, Phoenix, USA
4 Texas Center for Proton Therapy, Radiation Oncology, Irving, USA
Purpose: We propose an experimental approach to build a precise machine-specific model for standard, volumetric, and layer repainting delivery on a cyclotron accelerator system. We then assessed the interplay effect using a 4D mobile lung target phantom, compared to a generic delivery sequence model from West German Proton Therapy Essen (WPE). Methods: Machine delivery log files from an IBA ProteusPLUS system were retrospectively analyzed to quantitatively model the energy-layer switching time, spot switching time, and spot drill time for standard and volumetric repainting delivery. To quantitatively evaluate the interplay effect, a series of digital thoracic 4DCT image sets was used. The interplay effect was assessed based on a 4D dynamic dose accumulation method. Different delivery techniques, such as standard delivery (n = 1), volumetric repainting delivery (n = 2, 3, 4), and layer repainting delivery (n = 2, 3, 5, 25), were simulated based on the machine-specific delivery sequence model and the WPE model.
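To make concrete what such a delivery sequence model computes, the Python sketch below accumulates total delivery time from the three timing components named in the methods (energy-layer switching, spot switching, and spot drill time); the parameter values and plan structure are placeholders, not the fitted machine-specific values.

# Sketch of a delivery-time model of the kind described above. The
# timing constants below are placeholders, not fitted log-file values.
ENERGY_SWITCH_S = 1.0     # per energy-layer change (placeholder)
SPOT_SWITCH_S = 0.003     # per spot-to-spot move (placeholder)
DRILL_S_PER_MU = 0.005    # irradiation (drill) time per MU (placeholder)

def delivery_time(layers):
    """layers: list of lists of spot MUs, one inner list per layer."""
    t = 0.0
    for i, spots in enumerate(layers):
        if i > 0:
            t += ENERGY_SWITCH_S                      # layer switch
        t += max(len(spots) - 1, 0) * SPOT_SWITCH_S   # spot moves
        t += sum(spots) * DRILL_S_PER_MU              # beam-on time
    return t

plan = [[1.2, 0.8, 1.0], [0.9, 1.1], [1.5]]   # toy 3-layer plan
print(f"estimated delivery time: {delivery_time(plan):.2f} s")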
Results: The results showed that the WPE model's spot delivery sequence deviated significantly from the log file compared to the machine-specific model. Based on the treatment delivery calculation of a lung treatment plan with a target size of 65 mm³ and layer repainting 25 times (n = 25), the difference is about 21.01%. Such a difference also resulted in different interplay effect estimations between the two models, even though both institutions used the same proton system from IBA and calculated using the same 4DCT image set. Conclusion: A precise machine-specific delivery sequence is highly recommended to ensure an accurate estimation of the interplay effect for mobile target treatments.

PTCNA-0056 Assessment of shielding requirements for proton beam FLASH delivery
Francis Yu 1, Minglei Kang 1, Sheng Huang 1, Chengyu Shi 1, Chin-cheng Chen 1, Shouyi Wei 1, Qing Chen 1, Jehee I. Choi 2, Charles B. Simone II 2, Haibo Lin 1
1 New York Proton Center, Physics, New York, USA
2 New York Proton Center, Radiation Oncology, New York, USA
Purpose: To evaluate the effectiveness of existing shielding in a dedicated research room (second floor) for proton FLASH beam delivery. Materials and Method: A radiation survey was performed with a Ludlum 42-38 WENDY-2 neutron detector and a Ludlum 9DP ion chamber survey meter in a fixed horizontal beam room using an ultra-high dose rate proton beam (FLASH). A 250 MeV spot was delivered (5 minutes total) with a cyclotron current of 600 nA (~210 nA at the nozzle), which provided a spot peak dose rate of 805 Gy/s. The survey meters were moved around to identify the highest reading at each location, and the readings were compared to survey results for clinical standard-dose-rate beams. Results: The highest readings for the FLASH beam were along the beam path: 550 μR/hour on the WENDY-2 and 55 μR/hour on the 9DP ion chamber meter. The neutron and photon readings are 97- to 170-fold higher than for clinical beams at the location with direct transmission. The readings are ~28-fold higher in the control room due to the length of the maze. High activation of 650 mR/hour, 434 mR/hour, and 186 mR/hour was observed in the solid water beam stopper at isocenter at 5, 30, and 60 minutes after FLASH delivery. Conclusion: No extra shielding is needed to deliver the FLASH beam in our research room. A beam-angle-dependent survey is recommended for the gantry room due to the flexible beam angles. Special attention should be paid to the activation of equipment in the treatment room.

PTCNA-0066 The First Modeling of the Spot-Scanning Proton Arc (SPArc) Delivery Sequence and Investigating Its Efficiency Improvement
Xuanfeng Ding 1, Gang Liu 1, Lewei Zhao 1, Rohan Deraniyagala 1, Craig Stevens 2, Di Yan 2, Xiaoqiang Li 1
1 Beaumont Health, Radiation Oncology Proton Therapy Center, Royal Oak, USA
2 Beaumont Health, Radiation Oncology, Royal Oak, USA
Purpose: To introduce an experimental approach to model a precise prototype arc system and quantitatively assess its efficiency improvement in routine proton clinical operation. Methods: The SPArc delivery sequence model (DSM-SPArc) includes two kinds of parameters: (1) mechanical parameters and (2) irradiation parameters. Log files and an independent gantry inclinometer were used to derive the irradiation parameters through a series of test plans. The in-house DSM-SPArc was established by fitting both the mechanical and irradiation parameters. Eight SPArc plans from different disease sites were used to validate the model's accuracy.
To assess the treatment efficiency improvement, the DSM-SPArc was used to simulate the SPArc treatment delivery sequence, which was compared to clinical IMPT treatment log files from two full clinical days. Results: The relative difference in treatment time between the log files and the DSM-SPArc prediction was 6.1% ± 3.9% on average, and the gantry angle versus delivery time showed good agreement between the DSM-SPArc and the log file (Figure 1). Additionally, the SPArc plan could effectively save two hours out of ten hours of clinical operation by simplifying the treatment workflow for a single-room proton therapy center. The average treatment delivery time (including gantry rotation and irradiation) per patient was reduced to 226 ± 149 s using SPArc, compared to 665 ± 407 s using IMPT (p < 0.01). Conclusion: SPArc can offer superior delivery efficiency to improve daily patient treatment throughput compared to IMPT. Most importantly, this model helps the community further develop and investigate this emerging technique, especially by incorporating the arc delivery speed and time into the SPArc optimization algorithm.

PTCNA-0048 A direct machine-specific parameters incorporated Spot-scanning Proton Arc (SPArc) algorithm
Gang Liu 1, Lewei Zhao 2, Di Yan 2, Xiaoqiang Li 2, Xuanfeng Ding 2
1 Beaumont Health System / Wuhan Union Hospital, Radiation Oncology, Royal Oak, USA
2 Beaumont Health System, Radiation Oncology, Royal Oak, USA
Purpose: To address the challenges of generating a deliverable and efficient spot-scanning proton arc (SPArc) plan for a proton therapy system, we developed a novel SPArc optimization algorithm (SPArcDMSP) that directly incorporates machine-specific parameters such as mechanical constraints and the delivery sequence. Method and Material: A SPArc delivery sequence model (DSMarc) was built based on the machine-specific parameters of the prototype arc delivery system IBA ProteusONE. SPArcDMSP resamples and adjusts each control point's delivery speed based on the DSMarc calculation through an iterative approach (Fig. 1). Users can set the expected delivery time and maximum gantry acceleration as mechanical constraints during the optimization. Four cases (brain, liver, head and neck, and lung cancer) were selected to test SPArcDMSP. Two kinds of SPArc plans were generated using the same planning objective functions: (1) a SPArcDMSP plan meeting the maximum allowable gantry acceleration (0.6 deg/s²); (2) a SPArcDMSP-user-speed plan with a user-predefined delivery time and acceleration <0.1 deg/s². The arc delivery sequence, including gantry speed and delivery time, was simulated based on the DSMarc and compared. Results: With similar objective values, numbers of energy layers, and spots, both the SPArcDMSP and SPArcDMSP-user-speed plans could be delivered continuously within the ±1 degree tolerance window. The SPArcDMSP-user-speed plan could minimize the gantry momentum change based on the user's preference (Fig. 2). Conclusions: For the first time, clinical users can generate a SPArc plan by directly optimizing the arc treatment speed and the momentum changes of the gantry. This work paves the road toward clinical implementation of proton arc therapy in the treatment planning system.
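To illustrate the kind of mechanical-constraint check implied by the gantry acceleration limit above, the Python sketch below verifies, by finite differences, that a sequence of control-point angles and cumulative delivery times never demands more than the maximum acceleration; the control-point data are invented, and this is not the authors' optimizer.

# Sketch of a gantry-acceleration feasibility check of the kind
# implied above (0.6 deg/s^2 limit). Control-point data are made up.
MAX_ACCEL = 0.6  # deg/s^2

def feasible(angles_deg, times_s, max_accel=MAX_ACCEL):
    """Check finite-difference accelerations between control points."""
    for i in range(1, len(angles_deg) - 1):
        dt1 = times_s[i] - times_s[i - 1]
        dt2 = times_s[i + 1] - times_s[i]
        v1 = (angles_deg[i] - angles_deg[i - 1]) / dt1
        v2 = (angles_deg[i + 1] - angles_deg[i]) / dt2
        accel = (v2 - v1) / (0.5 * (dt1 + dt2))
        if abs(accel) > max_accel:
            return False
    return True

angles = [180.0, 178.0, 175.5, 172.5]   # gantry angle per control point
times = [0.0, 4.0, 8.5, 13.5]           # cumulative delivery times (s)
print(feasible(angles, times))          # True if deliverable continuously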
PTCNA-0080 A quantitative dose perturbation comparison study between gold and platinum VISICOIL™ fiducial markers in proton beam therapy
Xuanfeng Ding 1, Gang Liu 1, Weili Zheng 1, Jessica Valdes 1, Xiaoqiang Li 1, Lewei Zhao 1, An Qin 1, Daniel Krauss 1, Di Yan 1
1 Beaumont Health, Radiation Oncology Proton Therapy Center, Royal Oak, USA
Purpose: To quantitatively investigate the dose perturbation differences between gold and platinum VISICOIL™ fiducial markers in proton beam therapy. Method: Gold and platinum VISICOIL™ fiducial markers of two different dimensions were tested (0.35 and 0.5 mm in diameter, 5 mm in length), four marker types in total. Gafchromic EBT2 film was used to measure dose perturbations along the beam path in a "sandwich" setup (Figure 1). Dose perturbation was reported at each depth (0.3 mm, 1.65 mm, 3.00 mm, 5.40 mm, 7.80 mm, 10.20 mm, 12.60 mm, 18.15 mm). Proton stopping powers relative to water were calculated using the National Institute of Standards and Technology database and SRIM (version 2013) over the therapeutic energy range (70-220 MeV). Result: There was no statistical difference between the gold and platinum VISICOIL™ fiducial markers at any depth for the 0.35 mm diameter (p = 0.125) or the 0.5 mm diameter (p = 0.130). The maximum point dose perturbations for Au and Pt markers of the same dimensions were similar (0.35 mm diameter at 7.8 mm WET: 2.85% ± 2.31% Au vs. 2.70% ± 2.60% Pt; 0.5 mm diameter at 5.4 mm WET: 8.81% ± 2.60% Au vs. 8.81% ± 2.57% Pt) (Figure 2). A bilateral treatment field arrangement could further reduce the dose perturbation by half. The stopping power ratios relative to water were calculated for the gold and platinum materials; the result showed about a 3.5% difference between the two materials. Conclusion: The study indicated that the Au and Pt VISICOIL™ fiducial markers have very similar physical properties and could be used interchangeably in proton beam therapy, as long as clinical users correct the RSP during the planning process.

PTCNA-0045 Dosimetric impact of spinal implants on proton therapy plans for paraspinal targets
Sheng Huang 1, Pingfang Tsai 1, Anna Zhai 1, Arpit Chhabra 2, Haibo Lin 1, Chengyu Shi 1
1 New York Proton Center, Medical Physics, New York, USA
2 New York Proton Center, Radiation Oncology, New York, USA
Purpose: To evaluate the dosimetric impact of different spine implants on proton therapy for paraspinal targets using an in-house fast Monte Carlo dose calculation platform. Methods: The commercial Eclipse TPS was used to generate proton plans for a representative spinal chordoma target in a spinal phantom with four different spine implant configurations: normal tissue without an implant, titanium, carbon-fiber-reinforced polyetheretherketone (CFR-PEEK), and hybrid (CFR-PEEK screw with titanium head) implants. The in-house fast Monte Carlo dose calculation algorithm, MCsquare, was used to evaluate the impact of the different implants on plan quality. Results: Monte Carlo dose calculation revealed up to a maximal 16% local dose shadow within the target behind the titanium screw and rod, depending on the dimensions of the metal implant and the beam arrangement. The D95 of the CTV50 decreased by 8.2% and 4.5% for the titanium and hybrid implants, respectively, but no meaningful difference was found for the CFR-PEEK implant and the normal spine when comparing the TPS with MCsquare. The Monte Carlo results showed no impact on the OARs' dosimetric merits. Conclusion: The dose calculation accuracy of the TPS is limited in scenarios with metal heterogeneity.
Titanium implants, in certain circumstances, cause dose shadowing and could theoretically compromise target coverage. On the contrary, increasing the CFR-PEEK proportion, especially with complete CFR-PEEK implants, improves the overall dosimetric accuracy.

PTCNA-0041 Hip implant planning procedure for proton plans (HIPPPP)
Sean Boyer 1, Linnae Campbell 1, Steven Laub 1, Mark Pankuch 1, Maggie Stauffer 1
1 Northwestern Medicine Proton Center, Physics & Dosimetry, Warrenville, USA
Purpose: While proton centers may observe a low frequency of prostate patients with titanium hip replacements, their plans require a specialized beam arrangement. Due to the increased complexity, a subcommittee reviewed different treatment techniques to determine the optimal planning technique. Methods: Plans were restricted to the Inclined Beam Line (IBL), where gantry angles must be 90 or 30 degrees from vertical. Plans were also created on non-hip-replacement CT datasets to increase the patient cohort. Comparisons were made for:
- Three-beam plans versus two-beam plans
- Contralateral oblique angle versus ipsilateral oblique angle
- Equally weighted beams versus increased lateral beam weight
Robustness testing using ±3.5% density shifts and ±3 mm translational shifts (26 scenarios total) was performed for each plan. LET calculations were also done for each plan. Results: There were minimal differences between the planning techniques. The greatest differences: three-beam plans had a higher DVH "low dose" region, while two-beam plans had a higher "intermediate dose" region (Fig. 1). The femoral head dose was slightly higher for plans with increased lateral weighting. Conclusion: Since the different methods produced similar results, the priority became which method was the most efficient for patient setup. A more efficient setup reduces treatment time and intrafraction motion. Therefore, the ideal beam arrangement for single hip-replacement patients would be two beams on the side opposite the hip implant, with higher weighting of the lateral beam.

Introduction: Unscheduled machine downtime can cause patient treatment interruptions and may adversely impact patient treatment outcomes. Conventional proton pencil beam scanning (PBS) quality assurance (QA) performs checks on proton beam parameters but does not reveal underlying issues that the device may have prior to a machine failure. In this study, we propose a predictive maintenance approach that may provide early detection of machine issues. Methods: Log file data from daily morning QA performed at the Burr Proton Center of Massachusetts General Hospital were collected. An unsupervised deep learning model using a Long Short-Term Memory Autoencoder (LSTM-AE) architecture was constructed. The model was trained on QA data from five "normal" sessions so that it learns the characteristics of normal machine properties. The model error (anomaly) is computed between the model-predicted data and the measured data of the day and is converted to a Mahalanobis distance (M-distance) by comparison with a reference error distribution. Results: Figure 1 shows an overlay of the model-predicted M-distance (blue) and downtime occurrences (red). Model prediction on the 2018 validation QA data shows that machine downtime events are associated with elevated peaks of M-distance. Using an M-distance threshold of 22.09, our preliminary model prediction performance for three relevant machine anomaly event types is presented in terms of recall and precision rates in Table 1.
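The anomaly-scoring step just described (converting model error to a Mahalanobis distance against a reference error distribution and thresholding it) can be sketched as follows in Python; the LSTM-AE itself is omitted, and the residual values are synthetic, with only the 22.09 threshold taken from the abstract.

# Sketch of the M-distance anomaly scoring described above. The
# reference residuals and "today's" residuals are synthetic.
import numpy as np

rng = np.random.default_rng(0)
ref_errors = rng.normal(0.0, 0.1, size=(200, 4))   # reference residuals
mu = ref_errors.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(ref_errors, rowvar=False))

def m_distance(err_vec):
    """Mahalanobis distance of an error vector from the reference set."""
    d = err_vec - mu
    return float(np.sqrt(d @ cov_inv @ d))

THRESHOLD = 22.09   # value reported in the abstract

todays_error = np.array([0.9, -1.1, 1.3, 0.7])     # today's residuals
score = m_distance(todays_error)
print(f"M-distance = {score:.2f}",
      "-> anomaly" if score > THRESHOLD else "-> normal")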
Conclusion: Our novel predictive modeling approach allows for the evaluation of abnormal machine status and demonstrates great promise for enabling predictive maintenance of proton PBS machines.

Purpose: Beam time in a proton therapy center (PTC) is a scarce resource relative to demand. Understanding the capacity of a PTC is critical to patient, provider, and staff satisfaction, and to financial sustainability. Queuing theory is the mathematical framework for analyzing service-demand systems, such as patient flow through a clinic. We describe a model for simulating a single-room PTC. Methods: The model comprises probability distributions for patient arrival, patient characteristics, and machine reliability. The distributions and parameters were selected to approximate the observed patient characteristics and machine maintenance records of a single-room PTC. Ten years of center operation at 16 hours per day were simulated for average patient arrival rates of 4-7.5 per week. The number of treatments delivered, machine availability, and patient wait times were recorded. Results: Machine availability was 90% and the average time per treatment was 22.2 min. Box plots of the number of treatments delivered per day and patient wait times versus patient arrival rate are shown below. The maximum capacity of the center was attained at 6.5 patients per week, resulting in an average of 35.3 treatments per day. Above 6.5 patients per week, wait times grow because patients arrive faster than treatments are completed. Conclusion: We have demonstrated a throughput simulation of a single-room PTC. The model will be used to set realistic expectations for patient volume and to explore the effects of innovative operation strategies, such as selectively treating patients on weekends in anticipation of downtime events.

Conclusions: We propose that an adjustment factor accounting for the increased cost and utilization of proton therapy be implemented for future national base rate calculations.

PTCNA-0094 Quantitative evaluation of proton therapy related capillary leakage in glioma and meningioma using treatment response assessment maps (TRAMs)
Introduction: One of the major effects of radiation is based on endothelial damage leading to increased capillary leakage. MRI-based treatment response assessment maps (TRAMs) are established to qualitatively assess radiation-induced capillary leakage based on contrast washout/accumulation over long delays. This is the first clinical study evaluating radiation-induced small vessel damage during proton therapy (PT) both qualitatively and quantitatively. Materials and Methods: Twenty-two patients (5 gliomas, 17 meningiomas) were treated with PT. T1-weighted MR images were acquired 5 and 60 minutes post-contrast at five time points: before treatment (T1), mid-treatment (T2), end of treatment (T3), and at 6 months (T4) and 12 months (T5) of follow-up. TRAMs were generated by the Sheba Medical Centre in Israel. Changes within tumours during radiation (T2, T3) and follow-up (T4, T5) compared to baseline (T1) were studied. The quantitative analysis was performed using the MICE Toolkit™ (Medical Interactive Creative Environment) software (Fig. 1) and included the percentage of the GTV in each respective TRAM colour (red, blue). Results: At baseline (T1), glioma GTVs presented on average 20 ± 8% contrast clearance/AT, compared to 78 ± 4% in meningioma.
Glioma GTVs showed a non-significant decrease in clearance/AT during therapy, which increased between the end of therapy and follow-up; accumulation/TE decreased non-significantly over all time points. In meningioma GTVs, contrast clearance/AT decreased significantly during therapy (T1-T3) and stabilized at follow-up (T4, T5). Conclusion: Here we show for the first time radiation-induced changes in the tumour during and early after proton therapy based on TRAMs. Further evaluation and follow-up are needed to fully understand the clinical impact in terms of response assessment.

PTCNA-0054 Clinical experiences in re-irradiation of recurrent meningioma
Materials and Methods: Between 04/2017 and 01/2020, 21 patients with recurrent meningioma were re-irradiated using proton therapy. Initial treatment varied from a single course of RT to multiple surgeries and irradiations. Patient and treatment characteristics are summarized in Table 1. Results: With a median follow-up of 22 months (range 7-42), local control (LC) at 2 years was 84%, and 2-year overall survival (OS) was 87%. There was no acute G3 toxicity, but there were 2 late G3 toxicities: one patient with nasal synechiae (resolved after surgery) and one patient with CNS necrosis. The 2-year actuarial risk of persisting G3 toxicity was 4.8%. Sex and age had no correlation with outcome. Tumor grade correlated neither with OS nor with LC. At the time of analysis, 100% of grade I meningiomas were locally controlled, versus 75% of grade II and 50% of grade III. Time since last RT (> vs. < 5 years) correlated strongly with 2-year OS (>5 y, 6 pts: 100% vs. <5 y, 15 pts: 56%; p = 0.019) and with 2-year LC (>5 y, 6 pts: 100% vs. <5 y, 15 pts: 40%; p = 0.007). Re-irradiation dose (high, >54 Gy(RBE), 13 pts vs. low, ≤54 Gy(RBE), 8 pts) showed a trend toward significance for 2-year LC (high-dose 74% vs. low-dose 100%; p = 0.08). Conclusion: Proton re-irradiation is a safe and effective modality to successfully treat recurrent meningioma. The toxicity profile in our series was very favorable. Early recurrences after conventional RT have a poor prognosis.

PTCNA-0029 Toxicity analysis of reirradiation with proton therapy for central nervous system tumors: a prospective Proton Collaborative Group study

developed with 3 mm and 3.5% uncertainties. OAR dose constraints were set according to our institutional guidelines, including limiting the spinal cord Dmax <63 Gy. We avoided any direct proton path through titanium parts, per institutional practice. Results: For the four spine configurations, the proton plans achieved similar nominal target coverage, mean heart dose, and maximum spinal cord dose. However, when evaluating coverage and OAR dose under uncertainty scenario analysis for initial CTV 50 Gy 95% and 90% coverage, higher means and narrower dose ranges were achieved for the normal and CFR-PEEK plans than for the titanium and hybrid plans. Similarly, uncertainty analysis of the spinal cord Dmax showed a tighter distribution for the normal and CFR-PEEK plans. Conclusion: The CFR-PEEK implant has clinical properties similar to a normal spine for proton planning, allowing us to pass protons through the material and achieve superior target coverage and OAR sparing under nominal and uncertainty conditions, as compared to treating in the presence of titanium hardware.

Multi-Institutional Experience of Proton Therapy for Primary Central Nervous System Germinoma and Non-Germinomatous Germ Cell Tumors

Conclusions: Unilateral proton beam RT for oropharynx cancer has similar disease control to photon therapy.
The dosimetric advantage of proton beam therapy did not result in excess contralateral failures when compared with historical unilateral photon beam radiotherapy series.

PTCNA-0092 Development of Intensity Modulated Neutron Therapy (IMNT) at the University of Washington (UW)

ruptured MN, and the number of intact and ruptured MN. The proportion of ruptured MN was compared for x-rays and neutrons. Cells with ≥1 ruptured MN were scored at 38 and 72 hours post-irradiation. Additionally, cells were exposed to ATRi, a DDR inhibitor, for two hours pre-irradiation, and MN were analyzed at 72 hours. Results: Per unit dose, high-LET neutrons produced more MN than MV x-rays. The proportion of cells with at least one MN rupture at multiple time points was also greater for neutrons than for x-rays. Exposure of cells to ATRi increased the MN number and the number of MN ruptures for both radiation types. Conclusions: The RBE for double-strand break (DSB) induction and the RBE for MN induction are approximately the same. Fast neutrons may promote immunogenic cell death more efficiently than x-rays.

PTCNA-0086 Strategies and Challenges to Integrate Carbon Ions in Proton Therapy: MedAustron, a Multi-Ion Therapy Center
MedAustron began patient treatments with proton therapy in December 2016 and with carbon ion radiotherapy (CIRT) in July 2019. Currently, CIRT comprises 30% of particle therapy treatments and is either applied exclusively or in combination as a boost with proton therapy. All eligible patients participate in a prospective registry study. Figure 1 illustrates the distribution of all CIRT patients by indication and histology, and Figure 2 details the subgroup receiving combined proton/CIRT. In the initial phase, treatment selection was based on established CIRT indications, but it rapidly expanded to take full advantage of and explore the opportunities of carbon ion properties. Since CIRT was integrated into the pre-existing proton therapy program, this presentation will focus on the clinical decision algorithm between protons versus carbon ions or a combination of both. Principal factors involve radiobiologic considerations of a possible improvement in local control for selected histologies and stages, the physical advantages of carbon ions (sharp penumbra and small spot size), and optimal re-irradiation dose profiles and fractionation schemas. However, individualized risk assessment for particle therapy also takes into account the comparably large body of evidence on dose tolerance for proton therapy versus the presently limited clinical data or extrapolated normal organ tolerance data in the case of CIRT. Examples will be presented. Optimization of multi-ion therapy led to other innovative concepts, for example delivering a high-dose intra-tumoral CIRT boost without significantly increasing dose to normal tissues. CIRT was well tolerated, and details of acute side effects in the initial 100 patients will be presented.

PTCNA-0083 Carbon-ion partial tumor irradiation targeting the hypoxic segment and sparing the peritumoral immune microenvironment for unresectable bulky tumors: phase I trial
Extremely hypofractionated SBRT-based PArtial Tumor irradiation targeting HYpoxic clonogenic cells (PATHY), sparing the peritumoral immune microenvironment (PIM), has previously been developed and clinically assessed for the treatment of unresectable bulky, oligometastatic disease, showing encouraging results in terms of bystander and abscopal effect induction.
The present study will be conducted to determine the immunogenic potential of carbon ions applied to this novel concept. The hypothesis implies that, for an effective immune modulation leading to an improved therapeutic ratio, the entire tumor volume may not need to be irradiated but only a partial tumor volume, to initiate the immune cycle in the radiation-spared PIM, resulting in tumoricidal bystander and abscopal effects. This is a mono-centric, prospective phase I study which will enroll 23 patients with locally advanced or metastatic cancers with at least one bulky (≥6 cm) lesion. This study uses a carbon-based PATHY approach, consisting of 3 consecutive 12 Gy RBE fractions delivered exclusively to the hypoxic tumor segment while sparing the PIM. The hypoxic segment will be defined using 64Cu-ATSM PET-CT and dynamic contrast-enhanced MRI. CARBON-PATHY will be administered at a precise timing, synchronized with the most reactive anti-tumor immune response phase based on the serially mapped homeostatic immune fluctuations obtained by monitoring blood levels of inflammatory markers. The primary endpoint will be the bystander effect response rate, defined as at least 30% regression of the unirradiated tumor tissue. Secondary endpoints will include overall survival, progression-free survival, abscopal response, symptom relief, toxicity, feasibility of carbon-PATHY timing, and the bystander/abscopal response rate in relation to the dose-size of the PIM. PTCNA-0064 Carbon Ion Radiotherapy for Treatment of Sacral Chordomas: An Institutional and National Comparison of Outcomes with Surgery and Primary Radiotherapy. This study aims at developing a pencil beam model for Magnetic Resonance Imaging guided Carbon Ion RadioTherapy (MRIgCIRT). The main issue was how to model the fragmentation of primary 12C ions and their magnetic deflection. Particles were classified into 3 groups according to the similarity of specific energy: group 1 (12C), group 2 (C isotopes other than 12C, B, Be and Li) and group 3 (other particles). In groups 1 and 2, the lateral distribution of physical dose was approximated by a Gaussian function, while the superposition of Gaussian and Lorentzian functions was used for group 3 to describe the halo which arises from light particles (Figure 1). The specific energy was considered to be constant at each depth. All parameters were obtained from Monte Carlo simulations using Geant4. To evaluate our model, the biological dose distribution was calculated based on the microdosimetric kinetic model and a lateral irradiation field was generated for comparison with the one simulated by Geant4. A 12C beam was irradiated into a water phantom with a 3-T magnetic field in the Geant4 simulation. Although the maximum absolute difference increased for lower or higher energies, it did not exceed 2.7% (Figure 2). This increase was probably due to overestimation by the Lorentzian function or the higher asymmetry caused by beam deflection. The results of this work indicate that our model is valid for MRIgCIRT. Further research is needed to apply our model to heterogeneous tissue. PTCNA-0015 Estimating the need for carbon ion radiotherapy in the United States Purpose: Carbon ion radiotherapy (CIRT) is an emerging radiotherapy modality, although there are no centers in the US. We aim to estimate the need for a CIRT center in the United States.
Materials and Methods: Using the National Cancer Database, we analyzed the incidence of cancers treated with CIRT internationally (glioblastoma, hepatocellular carcinoma, cholangiocarcinoma, locally advanced pancreatic cancer, non-small cell lung cancer, localized prostate cancer, soft tissue sarcomas, and head and neck cancers) diagnosed in 2015. The percentage and number of patients likely benefiting from CIRT were estimated using inclusion criteria from clinical trials and retrospective studies, and this ratio was applied to 2019 statistics. An adoption correction rate was applied to estimate the potential number of patients treated with CIRT. Given the high dependency on prostate and lung cancers, the data were then re-analyzed excluding these diagnoses. Results: Of the 1,127,455 new cases of cancer diagnosed in the United States in 2015, there were 213,073 patients eligible for treatment with CIRT based on inclusion criteria. When applying this rate and the adoption correction rate to the 2019 incidence data, an estimated 89,946 patients are eligible for CIRT. Excluding prostate and lung cancers, there were an estimated 8,922 patients eligible for CIRT. The need for CIRT is estimated to increase by 25-27.7% by 2025. Conclusions: Our analysis suggests a need for CIRT in the United States in 2019, with the number of patients possibly eligible to receive CIRT expected to increase over the coming 5-10 years. Purpose: To compare IMPT vs. VMAT treatment plans for a spinal chordoma tumor using four unique spine configurations. Methods: A representative 14 cm mid-thoracic chordoma was simulated in a spine phantom using four unique spine configurations: 1) normal spine (no implant), 2) titanium, 3) a novel carbon-fiber-reinforced polyetheretherketone (CFR-PEEK) implant, and 4) hybrid implant (CFR-PEEK screw with titanium head). A sequential plan delivering 50Gy to the initial target volume followed by a 24Gy boost was prescribed in the four configurations (8 plans for proton and 8 plans for photon). The MFO-IMPT technique was used for proton planning, whereas VMAT was used for photon planning. Organs at risk (OAR) dose constraints were set according to our institutional guidelines, including spinal cord Dmax <63 Gy. Dose parameters of D90%, D95%, Dmax, and Dmean were collected for the targets and OARs. Results: No significant differences in target coverage were present between proton and photon plans (p=0.344, 0.093, 0.680, 0.311 for spine configurations 1)-4), considering 95% target coverage). Proton plans achieved lower mean heart, mean left lung, and mean right lung doses, as well as reduced maximum spinal cord and esophageal doses. The proton plans, however, had a higher maximum skin dose. Conclusion: Proton and photon planning can achieve similar target coverage for both the native spine and in the presence of spinal hardware. However, proton plans in all tested spine configurations achieve superior normal tissue sparing, with the exception of skin dose. PTCNA-0081 Patients with meningioma WHO I and involvement of the optical structures: does proton therapy lead to patient-reported changes in vision? Methods: All patients treated with PT for meningioma WHO I, whose planning target volumes included parts of the optic system, were included. The assessment tool was the Visual Disorder Scale (VDS) of the EORTC-BN20 questionnaire. Test times were at the start of PT, at completion, and at 3, 6, 12 and 24 months (mo) of follow-up (FU, t1-t6). A minimum FU of 6 mo was required.
Conclusion: Proton therapy of patients with meningioma WHO I in close proximity to optical structures provides an excellent prospect of maintaining visual status. At the 12-mo FU there was a statistically significant improvement in the perceived visual performance. PTCNA-0058 Seattle proton anesthesia reduction initiative (SPARI): employing checklists to maximize the number of pediatric patients safely treated awake Molly Blau 1 , Stephanie Schaub 1 , Sally Rampersad 2 , Ralph Ermoian 1 1 University of Washington, Radiation Oncology, Seattle, USA 2 Seattle Children's Hospital, Anesthesiology-Pain Medicine, Seattle, USA Daily anesthesia for pediatric patients undergoing proton therapy (PT) has the potential to increase neurocognitive adverse treatment effects. It is emotionally and logistically difficult for patients and families, with NPO requirements exacerbating nutritional challenges and necessitating longer time in the center. Daily anesthesia also demands more health care resources, including anesthesiologists and nursing support, increasing CT simulation and treatment time, and limiting scheduling flexibility for other patients. We aimed to develop a new tool for identifying and addressing barriers to children ≥3 years old completing PT awake. Checklists are commonly employed in radiation oncology and anesthesia, but have not been described in this context. We are not aware of prior research examining how strategies are implemented to avoid anesthesia, nor assessing residual barriers to treatment awake in patients continuing to require anesthesia. We developed checklists to be completed by the Radiation Oncologist and Anesthesiologist at simulation and weekly throughout treatment. These prompt the clinician to use several anesthesia-avoiding strategies, outlined in Table 1, and to document the remaining barriers to the patient being treated awake, shown in Figure 1. As part of an IRB-approved quality assurance study, we will analyze data collected from these checklists. Through this rigorous method of implementing anesthesia-avoiding strategies, we expect to reduce anesthesia use and its impacts for children undergoing proton therapy. Equally important, we expect to describe which interventions are effective at what stage in the treatment course and to identify persistent barriers in patients who continue to require anesthesia. PTCNA-0103 Outcomes of Adolescents and Young Adults Following Radiotherapy for Breast Cancer Danielle Cunningham 1 1 Mayo Clinic, Radiation Oncology, Rochester, USA Purpose: Adolescents and young adults (AYA) with cancer face unique challenges. Methodology: Retrospective review of AYA patients (age 15-25) treated with breast RT. Results: Eleven AYA patients with breast cancer were treated from 1998-2020; eight received RT. With 10-year median follow-up, 88% are alive without disease. One died of metastatic disease and one had an in-breast recurrence at 21 years. Median age at diagnosis was 24. All presented with a palpable mass. Seven were invasive ductal carcinoma, one was adenoid cystic carcinoma. One was pregnancy-associated. All underwent genetic testing, and 2 had BRCA1 mutations. All saw a fertility specialist; 4 elected oocyte retrieval (2) or leuprolide (2). Stage ranged from I-IIIC (Stage I (1), II (3), III (4)). Five were ER+/PR+/HER2-, 1 was triple positive, and 2 were triple negative. Two underwent lumpectomy, 6 had mastectomy, and 3 had contralateral prophylactic mastectomy. Four underwent reconstruction. Six had chemotherapy.
Six were treated with comprehensive post-mastectomy RT and 2 had breast-only RT. Proton therapy was used in 3. All experienced acute grade 1-2 dermatitis, with no grade 3 or higher toxicities. Three developed grade 2 arm lymphedema at a median of 9 mo post-RT; each had had axillary dissection. Three developed shoulder dysfunction at a median of 10.9 mo after RT. Conclusions: AYA patients with breast cancer have unique challenges. Among those undergoing RT, oncologic outcomes appear excellent. Extremity lymphedema and shoulder dysfunction were common. PTCNA-0087 Safety and efficacy of ablative proton therapy for thoracic tumors Purpose: To report our initial experience using ablative dose intensity modulated proton therapy (IMPT) for lung lesions. Methodology: Local, regional, and distant progression and overall survival (OS) were assessed in 34 patients who received ablative (BED10 > 70) IMPT from 2017-2020. Results: Patient and treatment characteristics can be seen in Table 1. With a median follow-up of 14 months, 29 patients had partial or complete response as their best treatment response, and 4 had stable disease on post-treatment imaging. OS at 1 and 2 years was 64.9% and 48.2%, respectively, with a median OS of 16.1 months. Six patients developed local recurrence (LR). The cumulative incidence (CI) of LR was 10.9% at 1 year and 26.1% at 2 years. Ten patients had a regional recurrence, with a CI of 21.9% at 1 year and 31.3% at 2 years. Seventeen developed distant progression, with a CI of 46.5% and 57.7% at 1 and 2 years. Univariate analysis did not identify any factors associated with increased risk of LR. Acute grade 2+ toxicity was seen in 2 patients who developed dyspnea. Subacute grade 2+ toxicities occurred in 4 patients: 2 with radiation pneumonitis (grade 2), 1 with bronchial stenosis (grade 2), and 1 with bronchial obstruction (grade 3). The patient with bronchial obstruction also had a trapped lung (grade 4) requiring surgical management. Both pneumonitis patients had prior ipsilateral lung radiation. Conclusions: Ablative IMPT provided favorable oncologic outcomes with a low rate of toxicity and should be considered especially for patients with underlying ILD or prior lung radiation. Results: Patient characteristics are shown in Table 1. We found that mean age, gender, stage, and chemotherapy use were well balanced between the two treatment modalities. Interestingly, surgical management of the two groups differed significantly. Acute clinical incidents occurring within 90 days of radiation (blood transfusions, weight loss of >10%, emergency department visits, inpatient admissions, narcotic use, and death) did not significantly differ between IMRT vs. protons (Chi-square; p > 0.05). Both treatments led to a significant reduction in the lymphocyte count from the start to the end of treatment (t-test; p < 0.05). The absolute value of this lymphocyte count drop during radiation therapy was similar between IMRT and protons. The total healthcare cost (comprising radiation, chemotherapy, hospitalizations, ED visits, procedures, etc.) was similar between the two modalities (t-test; p > 0.05). Conclusions: Our data indicate that both protons and IMRT are appropriate treatment modalities, with similar rates of acute events and total healthcare costs. Surgical management was higher among those treated with protons; however, this may reflect a selection bias for healthier patients undergoing proton treatments.
PTCNA-0069 Esophageal chemoradiation utilizing a single posterior proton beam technique with pencil beam scanning: Feasibility, dosimetry, and clinical outcomes Conclusions: This study shows excellent local control following PBT in LAPC, with a lower side effect profile than in modern IMRT photon series. Additional studies are needed to determine if PBT can further improve outcomes without adding toxicity using dose-escalated strategies for LAPC. PTCNA-0099 Simultaneous integrated boost/protection with proton beam therapy for hepatocellular carcinomas Matthew Greer 1 , Stephanie Schaub 1 , Avril O'Ryan-Blair 2 , Tony Wong 2 , Smith Apisarnthanarax 1 1 University of Washington, Department of Radiation Oncology, Seattle, USA 2 Seattle Protons Center, Department of Radiation Oncology, Seattle, USA Purpose/Objectives: We present a retrospective single-institution study on the clinical outcomes of patients with hepatocellular carcinomas (HCCs) treated with proton beam therapy (PBT) using a simultaneous integrated boost/protection (SIB/P) technique to dose escalate to tumors while protecting organs-at-risk (OARs). Materials/Methods: Thirty-one consecutive HCC patients were treated with SIB/P PBT between 2014-2020 with a 15-fraction regimen of 45.0-67.5 Gy(RBE). Non-classic radiation-induced liver disease (RILD) was defined by a Child-Pugh (CP) score increase of ≥2 and/or RTOG grade ≥3 enzyme elevation. Overall survival (OS), progression-free survival (PFS), and local control (LC) were calculated using the Kaplan-Meier method, and univariate predictors of OS by Cox regression analysis. Conclusions: In this series of HCC patients with high-risk tumors, moderately dose-escalated PBT with the SIB/P technique, which delivers a heterogeneous tumor dose, results in excellent local control rates and minimal toxicities. PTCNA-0046 Initial clinical experience of bladder-filling control using an ultrasound bladder scanner for proton prostate patients Chin-Cheng Chen 1 , Jason Pineiro 1 , Danielle Boos 1 , Shiomo-Kalman Rosenfeld 1 , Andrew Okhuereigbe 1 , Daniel Gorovets 2 , Shaakir Hasan 1 , Haibo Lin 1 1 New York Proton Center, Radiation Oncology, New York, USA 2 Memorial Sloan Kettering Cancer Center, Radiation Oncology, New York, USA Purpose: Consistent daily bladder volumes (BVs) during a course of proton therapy for prostate cancer improve treatment accuracy and efficiency, especially for fixed-beam and SBRT patients. The initial clinical experience of using an ultrasound bladder scanner to optimize bladder filling is demonstrated. Methods: The CUBEscan™ software used for the BioCon-750 bladder scanner calculates the ultrasound-BV from 12 planes instead of an ellipsoid estimation with coronal and sagittal diameters. The daily ultrasound-BV was measured prior to X-ray setup imaging. The patient would wait longer if the ultrasound-BV was more than 25% below the volume calculated in the plan CT (pCT), but would not void if the ultrasound-BV was larger. The patient-specific drink instruction (16-24 oz., 30-60 mins) could also be adjusted to improve the consistency of bladder filling for the remaining fractions. The daily ultrasound-BVs for 6 patients (5 fixed-beam room, and 1 SBRT) were compared with the volumes calculated in the pCT, verification CTs (vCTs), and cone-beam CTs (CBCTs), respectively. The spacers were fabricated from cold polymer resin on plaster teeth molds. Shape was determined by the planned irradiation field and desired tongue position based on one of four pre-defined reference models (example shown in Figure 1).
Subsequently, the patients had their treatment planned and applied with the spacers in place. Treatment toxicity was prospectively recorded. Results: For 12/14 patients (85.7%) the treatment was delivered with protons and for 2 (14.3%) with carbon ions. The mean prescription dose was 66 Gy RBE (range: 66-76.8 Gy RBE). By the time of results evaluation, 11 patients had completed the therapy. In patients for whom a planning CT was available with and without the spacers, a reduction of the maximum dose to the tongue of up to 22.3 Gy was observed (Figure 2). The spacers were well tolerated. Radiation mucositis of the tongue was not observed in 9/11 patients (81.8%), and 10/11 (90.9%) remained free from dysgeusia. Conclusion: The individual spacers are a promising strategy to reduce tongue-related toxicities from particle irradiation and should be further explored. PTCNA-0043 Dosimetric comparison of proton versus photon post-hysterectomy pelvic radiotherapy for patients with endometrial cancer treated on an institutional prospective trial Purpose: A vaginal dilator was used in female pelvis proton therapy to reduce the toxicity of vaginal stenosis. Our initial clinical experience is described, including simulation, dilator contouring (HU override), plan optimization, bladder filling, daily IGRT, and continuous plan evaluation with verification scans. Methods: Two consecutive patients were treated to the pelvis with proton therapy using a vaginal dilator. The dilator was inserted with a marked stopping point at the entrance. The physical and water-equivalent thickness (relative stopping power of 1.26) were measured and applied. Three fields (LPO/RPO/AP) with multiple-field optimization were used to deliver a simultaneous integrated boost prescription (50.4/42.0 Gy(RBE)). An ultrasound bladder scanner was used to maintain consistent bladder filling prior to X-ray imaging. Daily kV/CBCT was used to align the patient with <5 mm setup tolerance to bony structures. Verification CTs were performed to evaluate plan robustness. Results: The vaginal dilator was located at the distal dose fall-off between the anal target and bladder. There was slight inter-fraction variation of the dilator angle and up to 7 mm difference in the inserted length. The final verification plans showed an increase of <5 cm3 in vagina V47.88Gy(RBE) with the dilator tilted <4°. No significant changes were found in CTV dose coverage. Conclusions: The entrance marker reproduced the length of the vaginal dilator insertion but did not account for rotational positioning. The pre-treatment ultrasound bladder scan improved the consistency of bladder filling and minimized dose variations. Clinical outcomes and treatment toxicities of the two patients will be followed. PTCNA-0088 Platform for delivery of proton FLASH radiation research in a mouse model Robert Emery 1 , David Argento 1 , Marissa Kranz 1 , Jon Jacky 1 , Bob Smith 1 , Ning Cao 1 1 University of Washington, Radiation Oncology, Seattle, USA Background and Aims: An integrated platform has been created at the University of Washington Medical Cyclotron Facility to conduct proton FLASH research on a mouse model. Methods: A cyclotron beamline has been modified to produce a 6 cm diameter scattered beam at dose rates between 0.1 and 100 Gy/s. Dose is monitored using a microDiamond detector connected to a Keithley 6517B electrometer. The diamond detector is calibrated against an Advanced Markus chamber. The electrometer is integrated with the cyclotron control system to deliver the desired dose.
A GUI allows researchers to set the dose, deliver the beam, and record dose, dose rate, and delivery time without accelerator operator assistance. A wirelessly controlled, six-axis robotic arm acts as the mouse support and positioning assembly, with a 3D-printed mouse bed attached as the end effector. The beam is collimated with variable graphite jaw collimators and the field shape is verified with a light field. Results: Six irradiation sessions have been conducted, irradiating 30 mice per session with both FLASH (60 Gy/s) and conventional dose rate (0.5 Gy/s) protons. The time to position a mouse at isocenter and adjust and verify the radiation field shape is on the order of 30 seconds. Conventional-rate dose is reproducible to within 0.01 Gy. FLASH dose can vary by as much as 2 Gy between runs. Conclusions: Mouse positioning and field adjustment are fast and user-friendly. Work is currently underway to improve light field and collimator accuracy. A new electrometer from Pyramid Technical Consultants is being investigated to improve FLASH dose reproducibility. Survey of Session Times by Site for Two Single Room Proton Centers Rex Cardan 1 , Grant Evans 2 , Charles Shang 2 , James W Snider 1 , Richard Popple 1 1 University of Alabama at Birmingham, Radiation Oncology, Birmingham, USA 2 SFPTI Proton Center, Radiation Oncology, Delray, USA Purpose: The high cost of proton treatment centers can result in unsustainable economic pressure if the facility is not planned appropriately. Single-room centers provide a lower-cost facility, with the tradeoff of needing to plan for treatment at maximum capacity. It is critical, then, to understand realistic treatment times for the patient population served. We analyzed treatment times from two single-room proton centers to obtain site-specific expectations. Materials and Methods: Database queries were performed for two independent facilities: A) a private center treating predominantly prostate, and B) a large academic medical center treating a complex case distribution. Treatment sites were grouped by 2-field prostate (2FP), 3-field prostate (3FP), brain (BRN), 3-field head/neck (3FHN), 4+ field head/neck (4F+HN), spine (SPN) and breast/chest wall (BCW). Sites were identified retrospectively using plan names. Session time was defined as the time from the first image to the last beam-off time. Conclusion: The analyzed data in this study provide a reasonable collection of treatment data to plan future centers for various patient populations. PTCNA-0095 Evidence-Based Practice Improvement in Radiation Oncology K. Halda 1 , L. Fong de los Santos 1 1 Mayo Clinic, Radiation Oncology, Rochester, USA Background: Implementing a web-based system to capture events, incidents, and areas that need improvement has led to a significant number of project improvements and benefited the culture of safety. Method and Materials: Mayo Clinic has implemented a Safety, Improvement, and Learning System (SAILS). This has become a platform that is reviewed routinely to learn about what we can do to improve safety and provide high-quality service. Construction of the form, input from employees, and steps for reviewing cases have been ongoing and successful in the project improvement initiatives. Results: Since implementing SAILS we have seen an increase in documentation of, and action on, miss and near-miss situations in radiation treatments and plans, in protons and photons. This has also improved non-modality-related occurrences in treatment scheduling, consults, etc.
Conclusions: Having an electronic, multiple-step program and process has led to timely identification of process gaps, recommendations, and improvements in all areas of our Radiation Oncology practice. PTCNA-0084 Feasibility study of utilizing the XRV-124 scintillation detector for collinearity measurement in uniform scanning proton therapy Colton Eckert 1 , Biniam Tesfamicael 1 , Michael Chacko 1 , Hardev Grewal 1 , Suresh Rana 1 1 Oklahoma Proton Center, Medical Physics, Oklahoma City, USA Purpose: The purpose of this study was to determine the feasibility of utilizing the XRV-124 scintillation detector in measuring the collinearity of the X-ray system and the uniform scanning proton beam. Methods: A brass aperture for Snout 10 was manufactured. The center of the aperture had an opening of 1 cm in diameter. The 2D kV X-ray images of the XRV-124 were acquired such that the marker inside the detector was aligned at the imaging isocenter. After obtaining the optimal camera settings, a uniform scanning proton beam was delivered for various ranges (12 g/cm2 to 28 g/cm2 in steps of 2 g/cm2). For each range, 10 monitor units (MU) of the first layer were delivered to the XRV-124 detector. Collinearity tests were then repeated utilizing EDR2 and EBT3 films following our current QA protocol in practice. The results from the XRV-124 measurements were compared against the collinearity results from the EDR2 and EBT3 films. Results: The collinearity results were evaluated in the horizontal (X) and vertical (Y) directions. The average collinearity in the X-direction was -0.24±0.30 mm, 0.57±0.39 mm, and -0.27±0.14 mm for EDR2, EBT3, and XRV-124, respectively. The average collinearity in the Y-direction was 0.39±0.07 mm, 0.29±0.14 mm, and 0.39±0.03 mm for EDR2, EBT3, and XRV-124, respectively. Conclusion: On average, the results from the XRV-124 had better agreement with those of EDR2. The use of the XRV-124 for collinearity tests in uniform scanning protons can improve the efficiency of the QA workflow compared to films. PTCNA-0089 Analysis of patient QA results comparing RayStation TPS predictions vs. measurements for uniform scanning proton therapy Biniam Tesfamicael 1 , Colton Eckert 1 , Hardev Grewal 1 , Michael Chacko 1 , Suresh Rana 1 1 Oklahoma Proton Center, Medical Physics, Oklahoma City, USA Purpose: The objective of the current study was to present comprehensive patient-specific quality assurance (QA) results comparing RayStation treatment planning system (TPS) predicted dose vs. measured dose in uniform scanning proton therapy. Methods: Proton plans for various disease sites were generated in the RayStation TPS. The disease sites studied include abdomen, bladder, bowel, brain, breast, chest wall, esophagus, larynx, liver, mediastinum, head and neck, pelvis, prostate, sacrum, and spine. The field size ranged from 3 cm to 28 cm, whereas the proton beam range and modulation ranged from 4 to 31 g/cm2 and 2 to 19 cm, respectively. Measurements were acquired using a parallel plate ionization chamber in a water tank following the institution's QA protocol. The TPS-predicted results, calculated based on an in-house developed output factor model, were then compared against the measurements. Conclusion: Overall, 93.9% of proton fields were within ±2% and did not require monitor unit (MU) adjustment. For measurements outside of ±2%, 6.1% of proton fields were recalibrated with the measured MU. The major discrepancies between predicted and measured dose were seen for the breast and chest wall patients.
PTCNA-0104 Fast proton beam fluence and position detector array with multi-coordinate readout Evgeny Galyaev 1 , Pablo Camacho 2 , Aleksei Krasilnikov 2 , Martin Bues 3 , Jiajian Jason Shen 3 , Rafael Acuna Briceno 4 1 Radiation Detection and Imaging RDI Technologies, Physics, Tempe, USA 2 Radiation Detection and Imaging RDI Technologies, Electronics, Tempe, USA 3 Mayo Clinic College of Medicine and Science, Radiation Oncology, Phoenix, USA 4 Arizona State University, Ira A. Fulton Schools of Engineering, Tempe, USA Using Pencil Beam Scanning (PBS), treatment plans are fully described by a set of beam spot parameters such as energy, time duration, position, angle, etc. Even as PBS has become the new standard in proton radiotherapy, most quality assurance instrumentation has not been designed to complement PBS. A detector capable of measuring beam parameters spot by spot in real time would enable richer diagnostics and further restriction of proximal margins. A new planar detector array capable of recording proton beam position and fluence at a rate of 25 kHz has been built and is now being characterized. With submillimeter resolution in beam positioning, the new device overcomes the most common limiting performance factors of planar detector arrays with a conventional pixelated arrangement of sensors. A large-area (1,260 cm2) array proof-of-concept is also nearing completion. Gas ionization is collected by a planar arrangement of strips projected along three directions in a beam-transverse plane, from which beam shape (covariance) and size, as well as position, are reconstructed for each recorded data frame. The proposed multi-directional readout provides a large, isotropic and continuous active area, while using fewer data channels (vs. pixel-based arrays). Combined with a novel approach to tomographic reconstruction, the updated preliminary experimental results demonstrate spatial resolution of better than 200 µm and timing resolution down to 100 µs. These findings open additional avenues to enhanced machine- and patient-level quality assurance of PBS as well as continuous line-scanning proton beam modalities through superior timing capabilities, coordinate resolution and dose precision registration. PTCNA-0052 A clinical comparison of two commercial treatment planning systems for spot-scanning proton therapy Daniel Mundy 1 , Thomas Bradley 1 , Ashley Hunzeker 1 , Luis Fong de los Santos 1 , Jon Kruse 1 , Janelle Miller 1 , Alan Kraling 1 , John Antolak 1 , Eric Brost 1 , Anita Mahajan 1 1 Mayo Clinic, Radiation Oncology, Rochester, USA Purpose: Treatment planning for proton therapy is evolving rapidly with the additions of multi-criterion optimization, GPU-based calculation, LET-based optimization, etc. to commercial treatment planning systems (TPS). As the needs of a clinical practice also evolve, it becomes necessary to periodically re-evaluate available options. In this study, we evaluated RaySearch's RayStation against our existing Varian Eclipse TPS. Methods: A core group of physicists, dosimetrists, and physicians compiled a list of functionalities to evaluate using an onsite test system. Testers were provided training and support from RaySearch such that unfamiliarity with the new software would not hinder the evaluation. Functionalities were ranked by importance for patient care (IPC) and scored based on a performance index (PI) according to evaluation metrics previously developed in our clinic for the evaluation of oncology information systems.
Results: In Table 1, Advantage indicates whether the PI favored Eclipse (blue) or RayStation (red) and the Test Score represents performance weighted by IPC, where a perfect score would be 100%. Of the 24 features tested, 7 favored Eclipse, 11 favored RayStation, and 6 were neutral. With Test Scores below 60%, neither TPS is ideal. Based on PI, 16 features were identified as being acceptable and 8 unacceptable for both systems (Fig. 1).
Boost Distribution System Restoration with Emergency Communication Vehicles Considering Cyber-Physical Interdependence Enhancing restoration capabilities of distribution systems is one of the main strategies for resilient power systems to cope with extreme events. However, most of the existing studies assume the communication infrastructures for distribution automation are intact, which is unrealistic. Motivated by the applications of emergency communication vehicles (ECVs) in quickly setting up wireless communication networks after disasters, in this paper we propose an integrated distribution system restoration (DSR) framework and optimization models, which coordinate the repair crews, the distribution system (physical sectors), and the emergency communication network (cyber sectors) to pick up unserved power loads as quickly as possible. Case studies validated the effectiveness of the proposed models and proved the benefit of considering ECVs and cyber-physical interdependencies in DSR. Compared with the abundant research basis of power system resilience, there are few studies on the resilience of cyber-physical power systems (CPPS). From the perspective of information systems, [13] proposed a CPPS robust routing model with a priority mechanism that considers cyber-physical disturbances based on robust optimization. [14] proposed a self-healing phasor measurement unit (PMU) network that exploits the programmable configuration in a software-defined networked infrastructure to achieve resilience. [15] developed a model to find the optimal routing in the communication network to minimize the impact of cascading effects triggered by initial failures. From the perspective of the physical power grid, [16] proposed a cooperative evolutionary algorithm that simultaneously evolves a population of unmanned aerial vehicle (UAV) scheduling solutions and a population of human team scheduling solutions. The power-communication coordination recovery strategies based on the gridding method after disasters are proposed in [7]. [17] established an integer linear programming model for DC optimal power flow considering the information network constraints and a multistage bi-level model for cyber-physical collaborative recovery. [18] proposed a cyber-constrained optimal power flow model for the emergency response of smart grids. Different from these studies, this paper co-optimizes the routing of ECVs and the operation of automatic switches with the interdependency constraints of communication agents (CAs), repair agents (RAs), and electric agents (EAs). It should be noted that, in our model, the usage of ECVs to set up emergency communications is to enable remote switching operations, rather than situational awareness. Also, the damage status is assumed to be known, which can be achieved by the damage assessment process, or by the outage information obtained via smart meters, customers' trouble calls, or even social sensors [20]. The rest of this paper is organized as follows. Section II introduces the integrated DSR framework, followed by the detailed formulations described in Section III. In Section IV, we give the solution methodology. Then, we test the proposed models by case studies and discuss the numerical results in Section V. In the end, we conclude this paper in Section VI. II. INTEGRATED DSR FRAMEWORK WITH CYBER-PHYSICAL INTERDEPENDENCE In this section, we propose an integrated distribution system restoration framework, as depicted in Fig. 1, which considers the emergency communication set up by ECVs where the existing communication infrastructure is ineffective.
The control center sends commands for dispatching repair crews and ECVs and for operating automatic switches remotely. The repair crews can repair damaged components and operate both automatic and manual switches. The bidirectional communication links between automatic feeder switches and the control center are built through the emergency wireless communication network set up by ECVs and FTUs. The seamless coordination of three sectors, i.e., the repair crews, the distribution system (physical sectors), and the emergency communication network (cyber sectors), is considered to help restore the unserved customers as quickly as possible. In the next section, we will introduce the individual and interdependent optimization models of these three sectors. In this paper, we assume that in the short period of interest (i.e., the scheduled time horizon), the existing communication infrastructure is not available and that the emergency wireless communication network can be set up by dispatching ECVs to certain working sites. In this section, we formulate and explain the proposed optimization models step by step. A. Emergency Communication Vehicle Model An emergency communication vehicle (ECV) is essentially a mobile base station. When it is dispatched to and set up at a certain working site (WS), the ECV and the available communication nodes (CNs) within its coverage can form a temporary wireless network. The working sites are certain locations given by experienced operators before dispatching the ECVs, in which many factors should be considered, such as environmental suitability for the erection of a mobile base station, traffic accessibility for the vehicles, etc. For a better explanation of the proposed ECV model, we use Fig. 2 to display the working process of ECVs. An ECV departs from the depot where it is prepositioned. Then, it travels to a WS and sets up a temporary base station. The ECV itself and the available CNs inside its cover range can quickly form a wireless network. As a result, the ECVs can transfer bidirectional signals between the control center and the communication nodes, which are coupled with physical devices. In DSR, the CNs are the FTUs that are associated with automatic feeder switches. With the emergency wireless network, communications between the control center and the FTUs can be built, restoring the functionality of DA for DSR. An ECV can cover several CNs, and how many CNs can be covered is decided by its cover range. The cover range of an ECV is modeled as a circle denoted by its radius. We use a binary coverage parameter to represent whether a communication node can be covered by the wireless network set up by an ECV at a given working site; it can be calculated from the given location data as the step function of the cover radius minus the WS-to-CN distance, where the step function equals 1 for nonnegative arguments and 0 otherwise. For example, if a CN is within the cover range of the base station set up by an ECV at a WS, the corresponding coverage parameter equals 1; otherwise, it equals 0. It should be noted that a CN can be covered by different ECVs at different WSs, which is determined by their relative locations. Since the locations of CNs and WSs and the cover radius of the ECVs are known, the coverage parameters are actually given parameters. An ECV can travel from one WS to another, setting up a wireless network over another circular area to make other intelligent physical devices observable and controllable, as illustrated in Fig. 2 by an ECV traveling between two WSs.
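Because the coverage indicators depend only on fixed locations and the cover radius, they can be precomputed before any optimization model is solved. The following is a minimal Python sketch of that precomputation under the circular-coverage assumption above; the function name and the example coordinates are illustrative, not taken from the paper.

```python
import numpy as np

def coverage_matrix(ws_xy, cn_xy, radius):
    """Binary coverage indicators: entry [k, n] is 1 if communication
    node n lies within `radius` of working site k (the step function of
    cover radius minus WS-to-CN distance), else 0."""
    ws_xy = np.asarray(ws_xy, dtype=float)   # (K, 2) working-site coordinates
    cn_xy = np.asarray(cn_xy, dtype=float)   # (N, 2) communication-node coordinates
    # Pairwise Euclidean distances between every WS and every CN.
    d = np.linalg.norm(ws_xy[:, None, :] - cn_xy[None, :, :], axis=-1)
    return (d <= radius).astype(int)

# Hypothetical example: two working sites, three FTUs, cover radius 10 units.
C = coverage_matrix([(0, 0), (20, 5)], [(3, 4), (15, 5), (40, 0)], radius=10.0)
print(C)  # [[1 0 0]
          #  [0 1 0]]
```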
The mobility of an ECV is essentially a "vehicle routing" problem, which has been widely studied in operations research. Differently, for ECVs the duration of stay at a WS is a variable, not a parameter, as it depends on how long the emergency wireless network will be utilized, which is unknown in advance. To make it compatible with the models of repair crews and the physical operational constraints of distribution systems, we adopt the "variable time step" (VTS) modeling method [11] to formulate the mobility of ECVs, and we add an extra variable (i.e., the departure time from WS i) so that the variable duration of stay can be modeled. Similar to [11], we use the concept of "communication agents" (CAs) to represent ECVs, and the specific constraints are formulated as below. Constraints (1)-(6) define the route table of communication agents (CAs). Specifically, (1)-(3) mean that a CA should start traveling only from depots and should not go back to depots. Constraint (4) represents that each possible route can be visited no more than once, because the routing of ECVs is coordinated with the service restoration process: at each visited working site there should always be a "task" of setting up emergency communications for remote switching, and after the task is completed there is no further need for communication, i.e., there is no need for an ECV to revisit this working site. Constraint (5) limits the total number of agents dispatched out of a depot to not exceed the capacity of that depot. Besides, (6) depicts that each WS can be visited by at most one CA, and a CA must either leave or stay at the visited WS. Besides the route-table-related constraints, (7)-(11) list the time-related constraints. Specifically, (7) defines the initial time of a CA in the time horizon. Constraint (8) ensures that, at any site (depot or WS), the departure time is later than the arrival time, and that both times do not exceed the scheduled time horizon. Constraint (9) is tight only if a CA travels from i to j (i.e., the corresponding route-table element equals 1), in which case it limits the time difference between arriving at site j and departing from site i to be exactly the travel time from i to j. Constraint (10) ensures the duration of stay at a WS j is longer than the required minimum duration if a CA visits this site. Constraint (11) sets the arrival time at a WS to be the end time of the scheduled horizon if no CA travels to that WS. B. Crew Dispatch Model To reduce the complexity of the crew dispatch model, we first pre-assign the working sites (WSs) of repair crews (including switches and faulted lines to be repaired) to depots. Similar to [21], the clustering model minimizes the total depot-to-site travel time in (p1), subject to each working site being assigned to exactly one depot in (p2); a sketch of this assignment model is given after this paragraph. By solving the above integer programming problem, the optimal clustering strategy with minimum travel time can be found. Then, we use the "repair agent" (RA) to represent a repair crew, which can be modeled by the constraints of the route table and arrival time list [11], listed below as (12)-(25). In this paper, we assume all the crews have the skills both to operate switches and to repair faulted lines; thus the operation agent (OA) and repair agent (RA) in [11] are unified into a single RA. In other words, the working sites of RAs include both switches and faulted lines. First, the RAs' route-table-related constraints are listed in (12)-(17).
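The depot-clustering subproblem (p1)-(p2) is a small assignment MILP. Since the paper solves its models with Gurobi, a minimal gurobipy sketch is given below; the depot names, site names, and travel times are hypothetical placeholders, not data from the paper.

```python
import gurobipy as gp
from gurobipy import GRB

depots = ["D1", "D2"]
sites = ["L13-34", "L47-48", "L76-77", "L101-102"]   # faulted-line work sites
travel = {  # hypothetical depot-to-site travel times (minutes)
    ("D1", "L13-34"): 10, ("D1", "L47-48"): 12, ("D1", "L76-77"): 40, ("D1", "L101-102"): 55,
    ("D2", "L13-34"): 50, ("D2", "L47-48"): 45, ("D2", "L76-77"): 8,  ("D2", "L101-102"): 15,
}

m = gp.Model("cluster")
x = m.addVars(depots, sites, vtype=GRB.BINARY, name="x")
# (p2): each work site is assigned to exactly one depot.
m.addConstrs((x.sum("*", s) == 1 for s in sites), name="assign")
# (p1): minimize the total depot-to-site travel time.
m.setObjective(gp.quicksum(travel[d, s] * x[d, s] for d in depots for s in sites),
               GRB.MINIMIZE)
m.optimize()
for d in depots:
    print(d, "->", [s for s in sites if x[d, s].X > 0.5])
```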
Specifically, (12)-(13) limit a RA to start traveling only from the depot. Constraint (14) means that an agent should not go back to the depot, because how a RA travels back to the depot from the last visited site is irrelevant to the proposed problem. Constraint (15) means each possible route can be visited no more than once, because each working site cannot be visited twice. Constraint (16) means the total number of RAs dispatched out of each depot cannot exceed the capacity of that depot. Constraint (17) describes that each working site can be visited by at most one RA, and a RA must either leave or stay at the visited working site. Second, the time-related constraints of RAs are listed in (18)-(25). Specifically, (18) defines the initial time of RAs in the scheduled time horizon. Constraints (19)-(22) describe the consistency of arrival times between two sites if a RA travels from one site to another. These constraints include: from a depot to a WS (i.e., (19)), from a faulted line (not a switch) to another WS (i.e., (20)), from a healthy switch to another WS (i.e., (21)), and from a faulted switch to another WS (i.e., (22)). Constraint (23) sets the arrival time at a WS to be the end time of the scheduled horizon if no RA travels to that WS. Constraints (24)-(25) limit the repair completion time of different types of node cells, where an index transfer from RA to EA maps each fault f to the node cell that contains it. For a node cell with faulted lines inside it, the repair completion time of the cell is later than that of all the faulted lines inside it, as shown in (24). By contrast, for a node cell without faulted lines inside it, the repair completion time is set to be the initial time of RAs in the scheduled horizon, as shown in (25). C. Physical System Model We use the variable time step (VTS) modeling method, as introduced in [4], to formulate the physical distribution systems. Virtual electric agents (EAs) are used to represent the energy flow in the physical network; an EA departs from a substation or black-start DG and goes through node cells to restore the unserved load in these cells. Specifically, the physical system models include 1) the constraints of EAs' route table; 2) the constraints of EAs' arrival time; 3) the constraints of energization status; and 4) the constraints of system and component operation. All these constraints can be found in [4], and we integrate them under a single label, as summarized below. D. Interdependency Constraints In the abovementioned models in Sections III.A-C, we use CAs, RAs, and EAs to represent the emergency communication vehicles, the repair crews, and the physical distribution systems, respectively. In this section, we list all the interdependency constraints among these agents. 1) CA-EA (Cyber-Physical) Interdependency Constraints Healthy automatic switches can be closed either remotely by DA or manually by repair crews, as expressed in constraint (27). Basically, two conditions should be satisfied to enable the feeder automation to remotely close an opened switch: 1) the communication between the control center and the FTU is unblocked to enable information flows; and 2) the FTU has a power source for the automatic control.
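The time-linking logic shared by constraint (9) for CAs and constraints (19)-(22) for RAs, where the arrival time at j equals the departure time from i plus the travel time whenever the corresponding route-table element is 1, is typically written with big-M inequalities that go slack when the route is not used. A minimal gurobipy sketch of this pattern follows; the variable names are hypothetical, and the big-M choice is an assumption consistent with the horizon-bounded time variables.

```python
import gurobipy as gp
from gurobipy import GRB

T = 720           # scheduled time horizon in minutes (12 h in the case study)
travel_ij = 35    # hypothetical travel time from site i to site j

m = gp.Model("vts_time_link")
x_ij = m.addVar(vtype=GRB.BINARY, name="x_ij")    # agent travels i -> j
t_dep_i = m.addVar(lb=0, ub=T, name="t_dep_i")    # departure time from i
t_arr_j = m.addVar(lb=0, ub=T, name="t_arr_j")    # arrival time at j

# Tight only when x_ij = 1, enforcing t_arr_j == t_dep_i + travel_ij;
# bigM must be large enough to deactivate both inequalities when x_ij = 0.
bigM = T + travel_ij
m.addConstr(t_arr_j - t_dep_i - travel_ij <= bigM * (1 - x_ij))
m.addConstr(t_dep_i + travel_ij - t_arr_j <= bigM * (1 - x_ij))
```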
According to the introduction in Section III.A, when an ECV is dispatched to and set up at a WS, all the operable FTUs inside the cover range of the wireless network set up by the ECV can restore communications to the control center, making the associated automatic switches remotely operable again. Note that the FTUs are equipped with backup batteries of limited capacity, so the "residual time" of an FTU, i.e., the duration before its backup battery is depleted, should also be considered. We formulate the cyber-physical interdependency between CA and EA as below. Constraints (28)-(40) give the conditions under which a healthy automatic switch (i, j) (i.e., one not in the faulted set) can be operated remotely, where the FTU at the switch (i, j) is embedded with communication and control devices. Constraints (28)-(29) mean that the switch (i, j) can be operated automatically (either from i to j or from j to i) if and only if there is a mobile base station at a working site that can cover the corresponding CN, and that (i, j) can be governed by at most one working site. Constraints (30)-(31) limit the communication-availability variable of switch (i, j) by considering the cover range and the CAs' routing behaviors. Specifically, (30) indicates that the base station at WS k cannot supply communication for the automatic switch (i, j) beyond the cover range of the ECV at WS k. Constraint (34) indicates that if the switch (i, j) is automatically operated, then it must be operated either energized or de-energized, followed by constraints (35)-(37), which give the conditions of energized operation, and constraints (38)-(40), which give the conditions of de-energized operation, as introduced in the following two paragraphs, respectively. In (35), the "ready time", defined as the maximum of two times, represents the earliest time when the automatic switch (i, j) is ready to be switched on from node cell i to j. It encodes two conditions that must both be satisfied: 1) node cell i has been energized, and 2) all the faults in node cell j have been repaired. For energized operation from i to j, the remote operation time from i to j for the automatic switch (i, j) should be later than this ready time, as indicated in (35). In this situation, node cell j will be energized immediately after the automatic switch (i, j) is remotely closed, as depicted in (36). After outages caused by extreme events, the power supply of an FTU from the power grid is lost, and the backup battery of the FTU continues to supply it. It should be noted that the power supply of the FTU from the power grid is only at one side of the switch (i, j). If the FTU is at the "from" node cell i and the automatic switch (i, j) is remotely operated from i to j, then the power source of the FTU is not a concern because the grid can supply the power. However, if the FTU is at the j side, then the remote operation time of the automatic switch (i, j) must be before the residual time of the FTU, as described in (37). In (38), the remote operation time plus the switching duration represents the remote operation completion time of the automatic switch (i, j). For de-energized operation, the energization times of the node cells at the two ends of the automatic switch (i, j) should be the same and later than the operation completion time, as depicted in (38)-(39).
Meanwhile, since both sides are de-energized during the operation, the backup battery is the only power source for the FTU and the switch's remote operation. Thus, in this situation, the remote operation time of the automatic switch (i, j) must be before the residual time of the FTU no matter which side the FTU is at, as depicted in (40). 2) RA-EA Interdependent Constraints Any switch can be closed manually, and the following constraints list the interdependent constraints between RA and EA when a switch is manually operated. Note that, in this paper, we assume all the repair crews can operate energized or de-energized switches. In these constraints, an index transfer from EA to RA maps the switch (i, j) to the working site where repair crews repair and operate it. Constraints (41)-(42) imply that if the switch (i, j) is visited by a RA, then it must be manually closed either from i to j or from j to i, and be either energized or de-energized during the operation. The constraints of energized and de-energized operation of the switch (i, j) are expressed in (43)-(45) and (46)-(48), respectively. For the energized operation of the switch (i, j) from i to j to energize node cell j, node cell i must have been restored before a RA arrives at it, as shown in (43). In this case, the other node cell j will be restored immediately after the RA has switched on (i, j) if (i, j) is healthy, or after the RA has repaired (i, j) and switched it on if it needs to be repaired, as shown in (44)-(45). For the de-energized operation of the switch (i, j) from i to j, node cell i can only be energized after the switch (i, j) has been repaired and closed, as shown in (46)-(47). In this case, both node cells i and j will be restored immediately at the same time, as depicted in (48). Constraint (49) ensures that both end cells of a faulted switch can only be energized after it is repaired. Constraint (50) limits the restoration time of any node cell to be after the repair completion time. E. Objective Functions We define the objective functions in (51)-(53). The proposed multi-objective functions include the EA-, RA-, and CA-related normalized objectives, shown in (51)-(53), respectively. The EA-related objective in (51) represents the ratio (in percentage) of the loads' weighted unserved time to the scheduled time horizon, where each load cell c is assigned a weight. The RA-related objective in (52) includes two parts: the first part is the total travel time of all the RAs, and the second part is the total working time (including the repair time for damaged components and the manual operation time for switches) of all the RAs; both are normalized by the number of RAs multiplied by the scheduled time horizon, and they are multiplied by different weights. The CA-related objective in (53) also includes two parts: the first part is the total travel time of all the CAs, and the second part is the total working time at sites (i.e., the duration of stay at working sites); both are normalized by the number of CAs multiplied by the scheduled time horizon, and they are multiplied by different weights. It can be found that: 1) the EA-related objective value represents the average unserved time of each load cell, which should be minimized; and 2) the RA- and CA-related objective values represent the average time (including travel time and working time) that a RA or CA takes to coordinate with the restoration, which should also be minimized.
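As a concrete reading of objective (51), the sketch below computes the percentage ratio of the weighted unserved time to the scheduled horizon; because the original symbols were not recoverable from the text, the function name and the normalization by total weight are stated assumptions rather than the paper's exact formula.

```python
def ea_objective(unserved_time, weights, horizon):
    """Weighted unserved time of all load cells as a percentage of the
    scheduled time horizon, per the description of objective (51).
    Normalizing by the total weight is an assumption."""
    weighted_unserved = sum(w * t for w, t in zip(weights, unserved_time))
    return 100.0 * weighted_unserved / (sum(weights) * horizon)

# Hypothetical example: three load cells restored at different times
# over a 12-hour (720-minute) horizon.
print(ea_objective([263, 302, 169], [1.0, 2.0, 1.0], 720))  # ~35.97
```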
In (51)-(53), all the coefficients (the weights with superscripts) are given by the decision-makers. F. The Whole Optimization Models The integrated distribution system restoration optimization models are categorized into two types, according to whether the service restoration is with CAs (i.e., ECVs) or without CAs; these are respectively denoted as "OPT-WCA" and "OPT-WOCA". OPT-WCA minimizes the weighted sum of the EA-, RA-, and CA-related objectives, as in (54), while OPT-WOCA excludes the CA-related objective, as in (55). To make the three objectives comparable, they are normalized and have the same units (in p.u.). In (54) and (55), the coefficients are the weights of these three objectives, which are chosen by the decision-makers according to their importance. In this paper, we set the ratio of the EA, RA, and CA weights to 10:1:1, because we think restoring unserved loads after outages is much more important and urgent than the cost savings of emergency resources (including repair crews and ECVs). By solving these two optimization problems and comparing the optimization results, we can see the benefits of ECVs for distribution system restoration. G. Additional Discussion After extreme events, the communication network may be fully unavailable, or parts of it may remain intact. In the former scenario, the feeder automation and remote operation of automatic switches can only rely on emergency communications set up by temporary base stations such as the ECVs proposed in this paper; the proposed model above can handle this situation. In the latter scenario, the communication network is only partially damaged. The proposed model can also handle this situation by fixing some conditions, as introduced below. First, if the base station at a working site is intact, we exclude that site from the set of ECVs' working sites. Second, for an automatic switch (i, j) and the corresponding FTU within the cover range of that intact base station: 1) If both the automatic switch (i, j) and the FTU are intact, then (i, j) can be operated remotely without ECVs at any time in the scheduled horizon, and there is no need to dispatch a repair crew to operate it. We can handle this situation by the following steps: step 1, we exclude the healthy switch (i, j) from the working sites of repair crews; step 2, we fix constraints (28) and (41) to equalities so that the switch must be remotely operated (in one direction) and must not be manually operated; step 3, we exclude constraints (29)-(31) and (33). 2) If either the automatic switch (i, j) or the FTU is damaged, then (i, j) cannot be controlled remotely and can only be manually operated by a repair crew. We can handle this situation by treating (i, j) as a faulted working site of the repair crews, whose repair time is the real repair time if the switch (i, j) is damaged and zero if only the FTU is damaged, because repairing FTUs is not as urgent as operating switches during DSR after large-scale outages. IV. SOLUTION METHODOLOGY The proposed optimization models (i.e., "OPT-WCA" and "OPT-WOCA") are mixed-integer programming (MIP) problems, in which all the objective functions and constraints are linear except for the nonlinear max(·) terms in (35). By linearizing these nonlinear terms, the whole models are transformed into mixed-integer linear programming (MILP) problems, which can be effectively solved by off-the-shelf solvers such as Cplex and Gurobi. The maximum of two variables can be linearized by introducing two binary variables [22]: the equivalent MILP formulation bounds the auxiliary variable from below by each of the two variables and from above by each variable plus a big-M term that is switched off by the corresponding binary variable, where the big-M constant is the maximum value of all the upper bounds.
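A minimal gurobipy sketch of this linearization is given below; since the constraints themselves were lost in extraction, the labels (A.1)-(A.5) reflect an assumed mapping to the paper's auxiliary constraints, with all time variables bounded by the horizon so that the horizon itself can serve as the big-M constant.

```python
import gurobipy as gp
from gurobipy import GRB

T = 720  # horizon upper bound on all time variables; doubles as big-M
m = gp.Model("max_linearization")
x1 = m.addVar(lb=0, ub=T, name="x1")   # e.g., energization time of cell i
x2 = m.addVar(lb=0, ub=T, name="x2")   # e.g., repair completion time of cell j
z = m.addVar(lb=0, ub=T, name="z")     # auxiliary variable for max{x1, x2}
d1 = m.addVar(vtype=GRB.BINARY, name="d1")
d2 = m.addVar(vtype=GRB.BINARY, name="d2")

m.addConstr(z >= x1)                   # (A.1) z is at least each term
m.addConstr(z >= x2)                   # (A.2)
m.addConstr(z <= x1 + T * (1 - d1))    # (A.3) z equals the selected term
m.addConstr(z <= x2 + T * (1 - d2))    # (A.4)
m.addConstr(d1 + d2 >= 1)              # (A.5) at least one term is selected

# Gurobi also ships a built-in general constraint with the same effect:
# m.addGenConstrMax(z, [x1, x2])
```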
In our problems, by definition the relevant time variables must lie within the scheduled time horizon [0, T]. Thus, we use an auxiliary variable to replace the nonlinear term max{·,·} in (35) and introduce two binary variables to formulate the equivalent MILP constraints (A.1)-(A.5) for each healthy automatic switch. By replacing the nonlinear term with the auxiliary variable and adding the auxiliary constraints (A.1)-(A.5) to the proposed "OPT-WCA" model, the problems become MILP models. V. CASE STUDY In this section, we test the proposed optimization models on the IEEE 123-node test feeder, solved by Gurobi 9.5.2 on a PC with an Intel Core i7-7500U 2.90-GHz CPU, 16-GB RAM, and a 64-bit operating system. A. Case Design and Parameters We use the 123-node test feeder, which is a medium-size unbalanced distribution system operating at 4.16 kV nominal voltage with 3385 kW three-phase unbalanced loads in total [23]. The one-line diagram of the test system, located in a rectangular coordinate system, is shown in Fig. 3, in which all the nodes and lines are marked in grey and all the switches are open, which means they are all de-energized at the beginning of the scheduled horizon. Also, we assume the substation is initially unavailable and can start to supply power at the 30th minute. As shown in Fig. 3, we have allocated 4 repair crews at 2 depots (2 crews at each one) to be prepared to visit 20 candidate working sites, including 4 faulted lines and 16 switches. By solving the integer programming problem (p1)-(p2), the 4 faulted lines are clustered to the 2 depots: (13, 34) and (47, 48) belong to depot D1, while (76, 77) and (101, 102) belong to depot D2. For the switches, we assume all of them are automatic switches installed with FTUs (labeled with red solid dots on top of the switches), which can be communicated with and controlled remotely. Besides, we assume all the switches can be either closed remotely through feeder automation or closed manually by repair crews. As for the cyber part, we assume the existing communication network is unavailable, which means none of the automatic switches can be operated remotely without emergency communication. In this section, we design four cases to validate the proposed models. In Cases 1 and 2, we assume there are no ECVs, and the repair crews can both repair faulted lines and operate switches. For Case 1, we use a simple heuristic rule to dispatch repair crews, labeled as Algorithm 1 and depicted in Table II. It can be found that Algorithm 1 favors finding the optimal RA-related objective in (54) for a given electric route. For Case 2, we use the proposed "OPT-WOCA" model to optimize the route and sequence of repair crews. For the convenience of comparison, we use the electric path in Case 2's results as the given electric path, which would normally be supplied by the DSO. In Cases 3 and 4, we have 2 ECVs at the 2 depots (1 ECV at each depot). The ECVs are prepared to visit 6 candidate working sites to set up an emergency wireless communication network, and the cover ranges of these 2 ECVs are the same, with a radius of 10 units, as labeled with dashed circles in Fig. 3. For Case 3, we dispatch the ECVs to the WSs that can cover the largest number of FTUs, and they do not go to other WSs. This heuristic rule can be named the "Maximum Coverage Algorithm", which is commonly used in real-world communication recovery practices. In our proposed model, this algorithm can be realized simply by fixing the CA's route table.
It should be noted that, in this case, the interdependencies among CAs, EAs, and RAs are still considered by solving the proposed "OPT-WCA" model, which differs from considering communication recovery alone. For Case 4, we use the proposed "OPT-WCA" model to co-optimize the sequences of CAs, RAs, and EAs and the interdependencies among them.

Algorithm 1: Crew Dispatch by a Heuristic Rule
Step 1: Form the set of working sites to be visited by repair crews: according to the electric path given by the DSO, decide the faulted lines and switches that are to be visited by repair crews.
Step 2: Cluster the working sites to the depots, as was done for the faulted lines.
Step 3: Cluster the working sites of each depot's cluster to the crews at that depot by solving an integer optimization model similar to (p1-p2).
Step 4: For each crew: first, visit and repair the faulted lines within its cluster; then, operate the switches step by step. Within each crew's cluster, the visiting sequence always chooses the site closest to the current site, until all sites are visited.
Step 5: Calculate the repair completion times of faulted lines, the operation times of switches, and the energization times of node cells.

As for the weights of the multiple objectives in (54)-(55), since the most important and urgent task is restoring unserved loads as quickly as possible, we set the EA weight to 10 and the RA and CA weights to 1. Besides, we set all the weights in (51)-(53) to be 1. For the other parameters, we set: 1) the time horizon: 12 hours; 2) the repair time of faulted lines: 2 hours each; 3) the travel time for repair crews and ECVs between two sites: proportional to the Euclidean distances in Fig. 3, with a maximum of 65 minutes between depot D1 and depot D2; 4) the switching time for remote operation: 1 minute; 5) the switching time for manual operation: 15 minutes; 6) the residual time for FTUs: 4 hours each; and 7) the minimum duration of stay of ECVs at each WS: 15 minutes. All the optimization models are solved by Gurobi 9.5.2 because Gurobi has well-known advantages in solving MIP problems compared to Cplex. The solver is set with a relative MIP gap tolerance of 0.001 and a "TimeLimit" of 900 seconds (i.e. 15 minutes).

B. Results and Discussions

The optimization results of the four cases are listed in Table III. It can be found that: 1) the computation time of Case 1 is much less than that of Case 2 because Algorithm 1 is a heuristic rule-based method and no complex optimization models are used; 2) the computation time of Case 3 is much less than that of Case 4 because the route table of the ECVs is fixed, which greatly reduces the computational complexity of the proposed "OPT-WCA" model. It can also be found that the MIP gaps of Cases 2 and 4 are larger than the given tolerance within the given computation time limit. However, by observing the Gurobi MIP solution logs of Cases 2 and 4, we find that the incumbent objective values remain unchanged from an early stage of the computation (the 73rd second for Case 2 and the 177th second for Case 4) and that the best lower bounds (our optimization models are minimization problems) approach the incumbent objective values ever more slowly over time. Taking Case 4 as an example, this trend is readily seen in Fig. 4(a), the MIP objective bounds, and Fig. 4(b), the MIP gap over the course of Gurobi's solution process.
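The solver settings above, and the bound/gap trajectories of the kind plotted in Fig. 4, can be reproduced with a short gurobipy callback. The sketch below assumes an already-built model object `m`; the parameter and callback names are standard gurobipy API:

```python
import time
from gurobipy import GRB

# assume `m` is the already-built "OPT-WCA" (or "OPT-WOCA") MILP model
m.Params.MIPGap = 0.001     # relative MIP gap tolerance
m.Params.TimeLimit = 900    # seconds (15 minutes)

log = []                    # (elapsed seconds, incumbent, best bound)
t0 = time.time()

def bound_logger(model, where):
    # invoked repeatedly during branch-and-bound; record the objective bounds
    if where == GRB.Callback.MIP:
        incumbent = model.cbGet(GRB.Callback.MIP_OBJBST)
        best_bound = model.cbGet(GRB.Callback.MIP_OBJBND)
        log.append((time.time() - t0, incumbent, best_bound))

m.optimize(bound_logger)
# Plotting `log` shows when the incumbent stops improving and how slowly the
# lower bound closes the remaining gap, as in Fig. 4(a)-(b).
```

This is also how the 180-second limit recommended below can be justified empirically: once the incumbent plateaus, further solve time only tightens the proof of optimality.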
Just as Ahmadi [24] analyzed, the program reaches the true optimal solution quickly, but it takes a long time to prove optimality, so imposing a solution-time limit is an effective way to substantially reduce computation time. Based on the above analysis, for Cases 2 and 4 it is recommended to set Gurobi's time limit to 180 seconds (i.e. 3 minutes) to save computation time while preserving solution quality. As shown in Table III and Fig. 5, for all cases the physical system is fully energized and all the loads (3385 kW) are restored after 263, 302, 242, and 169 minutes for Cases 1-4, respectively. To better compare the restoration processes of the four cases, we present the EA's routes in Fig. 6, the RA and CA's routes in Fig. 7, as well as the switching sequences and energized node cells in Table IV. As formulated in (52)-(53), the RA-related objective value represents the average time an RA (or repair crew) takes, including travel time and work time (the latter includes repairing damaged components and manually operating switches), and the CA-related objective value represents the average time a CA (or ECV) takes, including travel time and work time (i.e. duration of stay at working sites). Both objective values reflect the time cost of dispatching emergency resources to help restore power loads. The total objective values in the four designed cases are mainly decided by the EA-related term, because the weights of the EA-, RA-, and CA-related terms are 10, 1, and 1, which means restoring the unserved customers as quickly as possible has the highest priority compared with using fewer emergency resources. Thus, as shown in Table III, the EA-related part accounts for the largest proportion of the total objective value. By comparing the results in Table III, we can also find that: 1) Cases 3-4 perform better than Cases 1-2 in terms of service restoration (a lower EA-related objective value, i.e. lower total unserved energy), which proves that ECVs can enhance the restoration capabilities of distribution systems; 2) by comparing Cases 1 and 2, we can conclude that crew dispatch aimed at minimizing repair travel and work time, as in the Algorithm 1 of Case 1, is not as good as co-optimizing with the proposed "OPT-WOCA" model in terms of service restoration; in other words, the proposed "OPT-WOCA" model can find ways to enhance restoration capabilities by spending more time on RAs; 3) by comparing Cases 3 and 4, we can likewise conclude that dispatching ECVs by the "Maximum Coverage Algorithm" (Case 3) is not as good as co-optimizing with the proposed "OPT-WCA" model in terms of service restoration; 4) the results of Cases 3 and 4 also highlight the necessity of considering the mobility of ECVs and the interdependencies between CAs, RAs, and EAs. For all cases, we can compare the dynamic load restoration processes in Fig. 5, which depict how much load is picked up at each time step. To observe and compare the detailed dynamic service restoration process in each case, we can check the final EA's route in Fig. 6, the RA and CA's routes in Fig. 7, and the switching sequences in Table IV. In Fig. 7, the crews repair faulted lines and operate switches sequentially by traveling among the working sites, represented by the RAs' routes, labeled with blue arrows; the ECVs set up wireless communication networks sequentially by traveling among the candidate working sites, represented by the CAs' routes, labeled with red arrows.
As depicted in Fig. 7, all the switches in Cases 1 and 2 are operated manually by repair crews because there are no ECVs available to set up communication links between the FTUs and the control center. Case 2 has the same EA route as Case 1 but restores unserved loads more quickly by solving the proposed "OPT-WOCA" model. In Cases 3 and 4, as shown in Fig. 6 and Table IV, some of the automatic switches can be remotely controlled via the communication set up by the ECVs at the working sites. Since operating switches remotely is generally much quicker than operating them manually (e.g. 1 minute vs. 15 minutes in our cases), the overall effect is that the restoration completion times of Cases 3 and 4 are earlier than those of Cases 1 and 2. In Cases 3 and 4, there are 7 and 11 automatic switches that can be operated remotely, respectively. In Case 4, more automatic switches can be operated remotely because more FTUs can be covered by ECVs at different WSs thanks to the movement of the ECVs, and the optimal route of the ECVs among the WSs is found by solving the proposed "OPT-WCA" model. The detailed operation actions of all the switches are listed in Table IV, including the operation modes (MD, ME, AD, and AE) and the operation completion times, and the sequentially energized node cells are also exhibited in detail in Table IV (abbreviations for Table IV: Op.(Switch): operation mode for switches; MD: manual de-energized operation; ME: manual energized operation; AD: automatic de-energized operation; AE: automatic energized operation; ENCs: energized node cells; S: substation cell; L: load cell). Through the above-mentioned analysis, we can prove that EAs can be fully leveraged to better enhance the restoration capabilities of distribution systems by solving the proposed "OPT-WOCA" and "OPT-WCA" models. It can also be concluded that setting up wireless communication networks by dispatching ECVs enables the automatic switches to be controlled remotely, which reduces the travel time and the switch operation time of the repair crews and speeds up the restoration of the power system. The spatial movement of the ECVs in the cyber sector yields temporal savings in the restoration of power in the physical sector. The 123-bus test system is of medium size, with a limited geographic range and limited feeder line lengths. Two ECVs and 6 working sites are sufficient in the designed scenario, considering the cover range of the wireless communications set up by the ECVs. The dispatch of ECVs and repair crews is essentially a vehicle routing (VR) problem [25], which is an NP-hard combinatorial optimization problem. If the numbers of working sites (i.e. the visiting targets in the VR problem), ECVs, and repair crews increase, the computational complexity grows exponentially, and solving the proposed routing model becomes very complex. In this situation, we can consider some practical rules to reduce the computational complexity at the expense of optimality. For example, we can fix the CA's route table by using the proposed Maximum Coverage Algorithm or by visiting the WSs with the highest priority given by operators. In sum, finding an efficient, exact, and customized solution methodology to reduce the computational complexity is a challenging task, and we will study it in our future work.
VI. CONCLUSION

In this paper, we first propose an integrated distribution system restoration framework that considers the cooperation and coordination of the repair crews, the distribution system (physical sector), and emergency communication networks (cyber sector). We then give the specific optimization models and the solution methodology for the proposed models. Finally, we conduct case studies that validate the effectiveness of the proposed models and exhibit the benefits of considering ECVs and cyber-physical interdependencies in DSR. Future work includes modeling the interdependence of the cyber and physical parts of distribution systems with respect to situational awareness for more effective and efficient service restoration, and developing exact, customized solution methodologies to reduce the computational complexity of the co-optimization models.
Prions on the run: How extracellular vesicles serve as delivery vehicles for self-templating protein aggregates

ABSTRACT
Extracellular vesicles (EVs) are actively secreted, membrane-bound communication vehicles that exchange biomolecules between cells. EVs also serve as dissemination vehicles for pathogens, including prions, proteinaceous infectious agents that cause transmissible spongiform encephalopathies (TSEs) in mammals. Increasing evidence indicates that diverse protein aggregates associated with common neurodegenerative diseases are packaged into EVs as well. Vesicle-mediated intercellular transmission of protein aggregates can induce aggregation of homotypic proteins in acceptor cells and might thereby contribute to disease progression. Our knowledge of how protein aggregates are sorted into EVs and how these vesicles adhere to and fuse with target cells is limited. Here we review how TSE prions exploit EVs for intercellular transmission and compare this to the transmission behavior of self-templating cytosolic protein aggregates derived from the yeast prion domain Sup35 NM. Artificial NM prions are non-toxic to mammalian cell cultures and do not cause loss-of-function phenotypes. Importantly, NM particles are also secreted in association with exosomes that horizontally transmit the prion phenotype to naive bystander cells, a process that can be monitored with high accuracy by automated high throughput confocal microscopy. The high abundance of mammalian proteins with amino acid stretches compositionally similar to yeast prion domains makes the NM cell model an attractive model to study self-templating and dissemination properties of proteins with prion-like domains in the mammalian context.

Many, if not all, cells release a repertoire of vesicles into the extracellular milieu. Secreted vesicles shed from the plasma membrane or produced by the endosomal system are collectively termed extracellular vesicles (EVs). 1 EVs are important mediators of intercellular communication and transfer proteins, RNAs and other cellular components between cells, thereby modulating diverse cellular processes in acceptor cells. As biomolecules incorporated into exosomes reflect the physiological state of their donor cells, they are also intensely surveyed as biomarker sources. Interestingly, pathogens such as viruses exploit exosomes for intercellular dissemination. 2 EVs have received further attention for their proposed role as transfer vehicles for pathologic proteins in neurodegenerative diseases, including prions, SOD1, TDP-43, Aβ peptides, α-synuclein or Tau. 3,4

Prions - Proteinaceous Infectious Particles
The first pathogenic protein aggregates identified in exosomes were prions, self-templating protein particles that cause devastating neurodegenerative diseases in mammals. TSEs in mammals occur mostly sporadically, but can also be of genetic or iatrogenic origin and can be infectious. Scrapie in sheep and goats and chronic wasting disease in deer, elk and moose constitute prion diseases that naturally transmit horizontally. The extreme resistance to inactivation procedures that destroy nucleic acid and the discovery that the host-encoded prion protein PrP was the main component of the infectious particle led to the proposal that TSE agents are solely protein-based and devoid of coding nucleic acid. 5 The cellular PrP (PrPC) is a highly glycosylated, glycosylphosphatidylinositol (GPI)-anchored protein enriched in lipid raft microdomains on neuronal and non-neuronal cell membranes.
In a seeded polymerization reaction, PrPSc serves as a template that induces the structural rearrangement of PrPC monomers into β-sheet-rich prion polymers. 6 Accumulation of PrPSc in the central nervous system is associated with astrogliosis and spongiform degeneration. Remarkably, PrPC can adopt not only one but a variety of self-templating conformations that are associated with different pathologies in their host. Substantial biophysical evidence supports the hypothesis that these prion strain properties are enciphered within the 3-dimensional fold of the prion polymer. 7 While initially coined for TSE agents, 5 the term "prion" was later adopted to describe proteinaceous particles that confer non-Mendelian traits in yeast. 8 Prions in lower eukaryotes are insoluble, self-perpetuating amyloid-like polymers that act as epigenetic elements of inheritance. 9 Unlike mammalian prions attached at the plasma membrane by a GPI-anchor, yeast prions are predominantly cytoplasmic. Depending on the genetic makeup of the host and environmental factors, yeast prions can be either detrimental, benign or advantageous to their host. 10,11 De novo yeast prion induction and replication involve rare spontaneous nucleation events followed by growth and fragmentation of highly ordered protein fibrils, a process similar to the proposed propagation mechanism of mammalian prions. 12 The nucleation phase can be bypassed by exposure of yeast to in vitro formed prion aggregates 13 or cytosolic "propagons" extracted from prion-containing strains. 14 Yeast prion proteins share little sequence homology with PrP. Instead, prion activity is governed by so-called prion domains, disordered regions often enriched in uncharged residues such as glutamine, asparagine and glycine. 15 In 1982, Prusiner defined prions as "small proteinaceous infectious particles which are resistant to inactivation by most procedures that modify nucleic acid." 5 This original definition also holds true for protein aggregates in lower eukaryotes. We use the term "prion" to describe a biological process by which biologic information is enciphered, amplified and disseminated through protein conformation. To avoid any confusion in terminology, we will refer to prions causing TSEs as TSE prions, while we will term self-templating protein aggregates identified in yeast "yeast prions." Here, we specifically focus on the intercellular dissemination strategies of TSE prions and compare these to the surprising self-propagating and dissemination properties of a yeast prion domain in mammalian cells. Remarkably, prion-like domains (PrLDs) compositionally similar to annotated yeast prion domains are present in 1% of mammalian proteins, including proteins forming pathogenic aggregates in Amyotrophic Lateral Sclerosis (ALS) or Frontotemporal Dementia (FTD). 16 Prions derived from the yeast prion domain of Sup35 are not homologous to mammalian proteins and thus allow us to study protein aggregation and dissemination in the absence of a loss-of-function phenotype. As such, the yeast prion domain of Sup35 constitutes an excellent tool to model general aggregation and dissemination propensities of proteins with related domains.

Extracellular Vesicles Are Involved in Intercellular Communication in Mammals
EVs are heterogeneous and differ in their biogenesis. Most vesicles that bud off the cell membrane (referred to as microvesicles) fall in the range of 200-500 nm, but smaller and larger membrane-bound particles have been described.
Although EVs are discriminated by marker proteins, size and density, substantial overlap in all 3 parameters has been observed. 17,18 Exosomes are EVs in the range of 40-100 nm, which arise through inward budding into specialized late endosomal structures, referred to as multivesicular bodies (MVBs). Fusion of MVBs with the plasma membrane liberates the intraluminal vesicles (ILVs) as exosomes into the extracellular space. MVBs are not only intermediates of exosome release but also subject to autophagosomal degradation. Although the selection mechanisms that define the fate of cargo proteins remain elusive, accumulating evidence suggests that cells secrete subpopulations of exosomes that differ in cargo composition, size, subcellular distribution and biogenesis. 19 Recent research has highlighted some mechanisms that sort membrane-associated proteins and cytosolic proteins into ILVs. These processes can act independently or collaboratively. Protein sorting into exosomes involves "endosomal sorting complex required for transport" (ESCRT)-dependent and -independent processes. The ESCRT complex and additional regulatory proteins support sorting of ubiquitinated cargo into MVBs. 20 Several other posttranslational cargo modifications have been reported, such as sumoylation, phosphorylation or specific carbohydrate signatures. 21 There is direct evidence showing that the number of N-linked glycans is a determinant for exosomal cargo sorting. 22 Membrane microdomains enriched in ceramides were also shown to be involved in cargo sorting. 23,24 Lipid components of raft-like domains, including cholesterol, ceramide, sphingomyelin, glycosphingolipids and phosphatidylcholine, are highly enriched in exosomes. The raft-like domain not only provides the platform for ILV budding, but is directly involved in cargo sorting. Specific lipids and integral membrane proteins such as tetraspanins interact with cargo. 19,[25][26][27] Furthermore, aggregation of proteins or lipids might serve as a general sorting signal for exosomes, as antibody-mediated aggregation of cell surface receptors induces their sorting into exosomes. 28 Along these lines, higher-order oligomerization of plasma membrane-associated retroviral Gag protein is sufficient to target it to exosomes, thereby hijacking exosome biogenesis for virus production. 29 Key to the function of EVs is attachment and membrane fusion to deliver biologically active cargo to the target cell. Importantly, exosomes selectively adhere to specific cells, a tropism defined by ligand-receptor interactions. While some receptor and ligand pairs mediating this interaction have been identified, most have not been explored so far. Specific integrins and cell adhesion molecules abundant on EV surfaces can facilitate attachment onto target cells and mediate host cell tropism. 30 Heparan sulfate proteoglycans, 31 phosphatidylserine receptors 32 and lectins 33 can serve as EV receptors. EVs can fuse directly with the plasma membrane and release the vesicle content into the cytoplasm. 34 Alternatively, EVs can be taken up by endocytosis or macropinocytosis. 35 Clathrin-, caveolin/lipid raft-dependent endocytosis or independent entry routes have been described for EV entry. 36,37 The size limit of cargo that can be internalized by certain pathways might influence the preferred uptake route for EVs. 38 It is possible that EVs use more than one entry route or use alternative pathways.
One alternative route requires fusogenic proteins that mediate docking and direct fusion with host membranes, which has been shown for enveloped viruses and exosomes secreted by the placenta. 2 Moreover, certain exosomal tetraspanin compositions can also mediate EV-host cell adhesion and membrane fusion. 39 How this fusion process is regulated for other EVs is so far unclear. Endocytosed EVs are either delivered to the lysosome or fuse with the limiting membrane of the late endosome to release their cargo into the cytosol.

Exosomes as Vehicles for Intercellular Dissemination of Transmissible Spongiform Encephalopathy Agents
TSE infection usually occurs through the intestinal route. 6 The spreading of prions from the gut through the lymphoreticular system and peripheral nerves to the brain involves intercellular dissemination of infectious entities. 40 How exactly TSE prions spread from cell to cell in vivo is only poorly understood. Routes for prion transmission have been mainly studied in cell culture. The formation of the infectious PrP isoform occurs after PrPC has reached the plasma membrane, either directly on the cell surface or within recycling endosomes, endolysosomal vesicles and/or MVBs. 6,41,42 Different dissemination strategies can be used by TSE prions, including direct cell contact, 43,44 for example via tunneling nanotubes, 45 or secretion of prions in EVs, such as microvesicles and exosomes. [46][47][48][49][50] Cell culture-derived exosomes containing PrPSc are infectious to permissive cell lines and produce clinical disease in mice. 47,49 The observed differences in dissemination strategies might be related to prion strain differences or infected cell types. [43][44][45]48,49,51,52 As EVs extracted from body fluids also contain prion activity, they are likely to contribute to prion dissemination in vivo. 53 Exosomal sorting is not restricted to PrPSc, as PrPC is a normal constituent of intraluminal vesicles in MVBs, 47,54 and is found in exosomal preparations of immortalized cell lines and primary cells of diverse origins. 42,48,[55][56][57][58][59][60] Exosomes and microvesicles isolated from body fluids are also decorated with PrPC, suggesting that PrPC is a normal constituent of EVs. 61,62 PrPC expression has been shown to stimulate exosome secretion in primary astrocytes and fibroblasts. 63 As both PrPC and PrPSc are partitioned into intraluminal vesicles destined for secretion, protein polymerization is not a required trigger for secretion. Interestingly, the PrPSc glycosylation pattern often differs between cell extract and exosomes, arguing that specific subpopulations of PrPSc are selectively sorted into exosomes. 64 The contribution of different sorting pathways is less clear and might be cell type- or strain-dependent. The presence of PrPC and PrPSc in lipid raft microdomains suggests that PrP isoforms are sorted to exosomes in association with lipid rafts. 26,65 Both ceramide-dependent and Tsg101-ESCRT-mediated pathways contribute to exosomal prion secretion in 2 cellular TSE models. 42,52 While silencing of the ESCRT Tsg101 subunit directly affected exosome and PrPSc secretion, a compound inhibiting the ceramide-dependent exosome pathway only marginally affected exosome secretion but led to selective exclusion of PrPSc and infectivity from exosomes derived from a neuroglial cell line.
42 This is in contrast to a study using a murine hypothalamic cell line, in which chemical impairment of the ceramide-dependent pathway reduced exosomes and exosome-associated PrPC and PrPSc. 52 Little is known about whether TSE prion-containing exosomes derived from different cell types are equally infectious to different recipient cells. Generally, very few cell lines are permissive to TSE prions, and prion strains exhibit selective infectivity for specific cell lines and even subclones thereof. 6,66 However, when tested in permissive cell cultures, exosomes isolated from different persistently infected cell lines proved infectious to recipient cells of different origin. 42,49 TSE prion-containing exosomes might thus be taken up by recipient cells unspecifically or via ligand-receptor pairs functional in the tested donor-recipient cell combinations. A problem in defining cellular pathways that mediate prion internalization and infection is that TSE prion infection takes days to weeks to be detectable in cell culture. The currently used assays rely on detection of newly formed PrPSc weeks post infection by cell colony blot or western blot. 42,[47][48][49]51 These assays do not measure single-cell events and cannot discriminate between early events following internalization and subsequent secondary amplification and spreading events (Fig. 1A). Cellular uptake of PrPSc can also be visualized by confocal microscopy, but non-permissive cells also internalize PrPSc. 67 Thus, these studies do not allow drawing conclusions on the internalization pathways that lead to productive infections.

A Yeast Prion Domain as a Model Protein to Study Dissemination Pathways of Cytosolic, Self-Templating Protein Aggregates in Mammalian Cells
Yeast prions have been studied extensively in the past to unravel basic principles of conformational templating. The translation termination factor Sup35 of S. cerevisiae is the best-studied yeast prion. Under rare circumstances, Sup35 adopts an inactive amyloid fold that induces heritable nonsense suppression in progeny and mating partners. Its prion propensity is governed by the prion domain N. The N domain, together with a highly charged M domain, is modular but otherwise dispensable for the termination function of the carboxy-terminal C domain. Like most yeast prion domains, the N domain is enriched in uncharged amino acids, such as glutamine, asparagine, tyrosine, serine and glycine. 15 Interestingly, the prionogenic properties of the Sup35 prion domain are conserved when it is expressed in bacteria 68 and mammalian cell models. 69 Investigating prion-like propagation and dissemination mechanisms by using S. cerevisiae Sup35 can thus help to understand basic principles of cytosolic prion-like behavior in heterologous systems. Consistent with the finding that the Sup35 prion state can be induced in prion-free yeast cells by in vitro formed prion aggregates, 13,70 we recently demonstrated that cytosolically expressed NM stays soluble in neuroblastoma cells but can be induced to aggregate upon addition of recombinant NM amyloid fibrils. 69,71 Once induced, NM aggregates are faithfully propagated to daughter cells over multiple cell divisions. Furthermore, Sup35 NM protein aggregates in mammalian cells not only transmit vertically to progeny but also horizontally to naive cells in coculture.
In analogy to the transmission pathways of TSE prions in mammalian cells, we found evidence for NM aggregate transmission to adjacent cells, potentially via actin-containing cytonemes, 9,71 and via EVs. 72 Although S. cerevisiae also secretes infectious prions in extracellular vesicles, so far it is unclear if these vesicles naturally transmit the prion state to bystander cells. 14,73 Different N2a clones all produced NM-containing EVs that were taken up by recipient cells and induced aggregation of GFP-tagged NM in the cytosol. Induction efficiency was, however, low compared with aggregate induction efficiency when cells were in close proximity, suggesting that direct cell contact is the most efficient way of NM aggregate dissemination in our model. 71 As limiting dilution cloning had been successfully used in the past to isolate cell clones with increased susceptibility to TSE prions, 74 we used the same strategy to isolate cell clones that secrete EVs capable of efficiently shuttling prion infectivity to recipient cells (Fig. 1B). Through sequential centrifugation and Optiprep gradients, prion activity could be traced to vesicle fractions that fall in the size and density range of exosomes. NM released via exosomes was protected from proteolysis, arguing that at least a fraction of NM was present in the exosomal lumen. 72

FIGURE 1. Cell culture assays to study prion infection by exosomes. (A) Cell culture assays to study exosome-mediated TSE prion infection. Published exosome-mediated TSE infection assays are time consuming and rely on the detection of newly formed, proteinase K (PK)-resistant PrPSc. Naive cells permissive to infection with the respective TSE prion strain are exposed to exosomal preparations isolated from prion-infected cells for 4-5 days, followed by several weeks of culture. Read-out is PK-resistant PrPSc detected by cell blot or western blot. 42,[47][48][49]51 (B) Quantitative imaging of exosome-mediated NM aggregate induction. Recipient NM-GFP sol cells are seeded on a 384-well plate for 1 hour. Exosomes isolated from conditioned medium of donor cells are added to the wells. Live or fixed cells are subjected to automated high throughput confocal microscopy. Read-out is induction of NM-GFP aggregates in recipient cells. Live-imaging analysis demonstrates the appearance of NM-GFP aggregates as soon as 3 hours post exosome addition. The arrowhead marks cells with exosome-induced NM-GFP aggregates. The assay can also reveal bidirectional inheritance of NM aggregates by daughter cells, a characteristic of TSE prions replicating in cellular models. 6

How is NM prion activity packaged into exosomes? We found that the neutral sphingomyelinase inhibitor spiroepoxide significantly reduced exosome and NM release, suggesting that ceramide-mediated exosome biogenesis is involved in NM secretion. Both soluble and insoluble protein were packaged into exosomes, and no correlation existed between NM aggregation state and exosome numbers. 72 Donor cells expressing soluble NM secreted even more vesicle-associated NM than donor clones containing NM prions, arguing that aggregation per se was not a required trigger for incorporation into exosomes. NM shares no sequence homology with mammalian proteins, so it is unlikely that specific recognition signals mediated selective recruitment. The finding that different cell clones secreting exosomes with distinct infectivity can be isolated correlates with findings for TSE-infected cells. 51
NM prion-producing cell clones had been originally derived from a bulk population of N2a cells transduced with lentivirus coding for NM that were subsequently exposed to recombinant NM fibrils. 69 Cell clones differ in NM expression levels and show phenotypic variation of NM aggregates. The morphological phenotype of NM prions is remarkably persistent and does not change even over prolonged culture. 69 We compared 2 cell clones for their secretion of NM aggregates via EVs. Interestingly, cell clone 1C expressed more total NM 69 and also secreted more total NM in association with exosomes than cell clone s2E. 72 Cell clone s2E, selected for its production of highly infectious conditioned medium, secreted approximately 6× more exosomes than clone 1C, but exhibited a seeding activity approximately 280× higher than that of exosomes derived from clone 1C (i.e., on a per-exosome basis, a roughly 47-fold higher seeding activity). Secreted EVs from clones 1C and s2E did not differ in size. Filter trap assay and SDD-AGE demonstrated that a considerable amount of aggregated NM was present in exosomal preparations of both cell clones. However, comparison of the aggregation states of exosome-packaged NM revealed that lower-order NM oligomers were preferentially sorted into exosomes by clone s2E. 72 The finding that the aggregation state of NM within exosomes was distinct from that seen in whole cell extracts suggests that NM aggregate sorting into exosomes is a selective process. Our data are in line with the hypothesis that lower-order oligomers constitute highly active templates for seeded polymerization. 75 Notably, rupture of exosomal membranes by sonication left NM oligomers relatively unaffected but drastically reduced the infectivity of the preparation, strongly arguing that only intact exosomes efficiently deliver NM aggregates to target cells. 72 Further evidence that distinct exosomes are released from different donor cell populations comes from new experiments with human HEK cells engineered to express NM. Similar to our N2a model, exposure of engineered HEK cells to recombinant NM fibrils turned soluble cytoplasmic NM into morphologically heterogeneous, self-templating protein aggregates that were stably propagated by individual cell clones (Fig. 2A, B). HEK cells also released soluble and aggregated NM in association with exosomes (Fig. 2C). Consistent with our previous results, we did not observe increased exosome release in cells with aggregated NM-HA (Fig. 2D). HEK donor cells secreted significantly fewer exosomes compared with N2a donor clones 1C and s2E (Fig. 2E). While we expected to achieve lower induction rates due to lower exosome numbers, exosomes derived from HEK NM-HA agg cells were basically non-infectious to HEK NM-GFP sol recipient cells (data not shown). Comparison of the NM aggregation states in donor cell populations revealed that the oligomerization state of NM in the cell lysates of all donor cell populations was remarkably similar (Fig. 2F). Exosome-associated NM from HEK NM-HA agg cells was also enriched for lower-order oligomers, comparable to the exosomes produced by the highly efficient donor clone s2E (Fig. 2F). The lack of aggregate induction by NM oligomer-bearing exosomes in the HEK system argues that there is considerable difference in the composition and activity of EVs isolated from different cell lines and even cell clones.
FIGURE 2 (legend, partial). [...] aggregated NM-HA. NM-HA was stained with anti-HA antibody (red) and nuclei were counterstained with Hoechst (blue). Maximum intensity projections were generated from Z-stacks. (C) Western blot analysis of exosomes from HEK NM-HA sol, NM-HA agg and N2a NM-HA agg s2E cell clones for the exosomal marker Alix and NM-HA. Exosomes were isolated according to a previously described method. 72 (D) Exosome numbers released from HEK NM-HA sol and NM-HA agg cells were determined using a ZetaView PMX 110-SZ-488 Nanoparticle Tracking Analyzer with the same measurement settings. Results shown are means ± SD (n = 3; ***p < 0.001; unpaired Student's t-test). (E) Exosome numbers released from HEK NM-HA agg, N2a NM-HA agg clone s2E (selected for high aggregate-inducing activity in recipient cells) and N2a NM-HA agg clone 1C. Results shown are means ± SD (n = 3; ***p < 0.001; one-way ANOVA). (F) Glutaraldehyde cross-linking of proteins in cell lysates or exosomes from HEK NM-HA sol, HEK NM-HA agg and N2a NM-HA agg clones s2E or 1C to determine the oligomerization state of NM-HA. Cross-linking was done as described previously. 72

Possible differences in the seeding activities of exosome populations are likely related to the relative number of secreted exosomes, the relative expression level of the amyloidogenic protein, and the relative amount and oligomerization state of the incorporated aggregated protein (Fig. 3). Another intriguing possibility is that subsequent exosome-target cell interactions could influence the biologic activity of the NM cargo. This possibility, however, needs further elucidation.

Evidence for Secretion of Human Proteins with PrLDs in Association with Exosomes
Algorithms devised to identify novel prion proteins predict that approximately 1% of mammalian proteins contain PrLDs. 15,16,76 The majority of mammalian proteins with PrLDs are nucleic acid-binding proteins. The PrLDs play critical roles in protein function by mediating protein-protein interactions or the phase transitions required for the formation of physiologically relevant membrane-less organelles, such as stress granules. 16 A prominent protein known to contain a PrLD is the RNA-binding protein TIA-1, an essential component of stress granules. The finding that replacement of the TIA-1 PrLD with the Sup35 prion domain restores its normal function argues that yeast prion domains and predicted PrLDs are indeed functionally related. 77 Importantly, aberrant aggregation of proteins with PrLDs might be the underlying cause of degeneration in several neurodegenerative diseases and myopathies. [78][79][80][81] ALS is a fatal motor neuron disease that is mostly sporadic. Ten percent of cases are genetic and have been linked to mutations in a variety of proteins, such as SOD1, VCP, OPTN, TDP-43, hnRNPA1, hnRNPA2 and FUS, many of which form insoluble pathological inclusions. FUS, TDP-43, hnRNPA1 and hnRNPA2 contain putative PrLDs similar to annotated yeast prion domains. Systematic screens in yeast recently identified TAF15 and EWSR1 as further aggregation-prone PrLD-bearing proteins linked to neurodegenerative diseases. 82,83 Several other proteins listed as PrLD-like proteins await further characterization. Deregulated PrLD-mediated protein assembly has been proposed to promote the formation of protein aggregates with self-templating and dissemination properties.
Indeed, a recent study showed that replacement of the Sup35 prion domain with the human hnRNPA2B1 PrLD generates a protein with definite prion activity in yeast, arguing that PrLDs of human proteins can drive prion assembly at least in lower eukaryotes. 81 TDP-43 is a nuclear RNA-binding protein involved in transcription and splicing and is associated with cytoplasmic inclusions in ALS and FTD. The predicted PrLD of TDP-43 mediates its aggregation in vitro and in vivo. 84 Recombinant TDP-43 fibrils and TDP-43 aggregates extracted from ALS and/or FTD patients have seeding activity and cause mislocalization and aggregation of TDP-43 in cell culture. 85,86 TDP-43 oligomers or aggregates also transmit from donor to recipient cells in culture, either through tunneling nanotubes or exosomes. [86][87][88][89] Cell culture experiments suggest that the ceramide-dependent exosomal pathway is involved in exosomal TDP-43 release. 88 As TDP-43 is also present in exosomal fractions from brains and CSF of healthy controls, its assembly into disease-associated aggregates might not cause the sorting into EVs. 87,89 Still, exosome-associated TDP-43 was reported to be increased in ALS brains compared with controls. 88

FIGURE 3. Factors influencing the seeding activity of protein aggregates incorporated into exosomes. Exosome-mediated secretion of NM-HA by donor cells and subsequent uptake and seeding of NM-GFP prions in recipient cells. Infectivity of NM-HA-bearing exosomes is likely determined by the following parameters: (1) enhanced secretion of exosomes; (2, 3) selective sorting of low-order oligomers; (4, 5) specific exosome-target cell interaction (ligand-receptor recognition). This could include cell-specific ligand-receptor interactions or differences in the intracellular fate of endocytosed exosomes. After internalization, the NM-HA aggregates contained in exosomes are released and induce new aggregate formation in N2a NM-GFP cells. The mechanism of NM-HA release into the cytosol is so far unknown.

Interestingly, mammalian proteins that harbor intrinsically disordered domains with amino acid compositions similar to yeast prion domains 15,76 appear to be frequent constituents of exosomes. Of the human RNA-binding proteins with PrLDs, 80 71% have been previously reported in exosomal fractions (http://www.exocarta.org/). PrLD-containing proteins can even be actively involved in the selective sorting of specific microRNAs into EVs for secretion. A sumoylated form of hnRNPA2B1 controls the sorting of a subpopulation of microRNAs into exosomes. 90 The presence of PrLD-containing proteins in exosomes could thus reflect the physiological function of the respective protein. Whether aberrantly folded proteins with PrLDs are generally sorted into exosomes and how this might contribute to intercellular aggregate spreading remains to be established.

CONCLUSION
Research over the last years has demonstrated that not only TSE prions but also pathogenic protein aggregates associated with more common neurodegenerative diseases are sorted into exosomes. Among them, proteins with domains compositionally similar to yeast prion domains have been found associated with EVs, suggesting that EVs might contribute to their intercellular dissemination. Our knowledge of the mechanisms that drive cargo sorting into EVs and uptake by recipient cells is limited. There is an urgent need for assays that monitor cargo delivery to target cells and that are amenable to high throughput screening.
Here we showed that the non-mammalian prion domain of Sup35 can serve as a versatile tool to study exosome-mediated induction of self-templating protein aggregates. The NM prion cell assay has been successfully adapted to automated high throughput confocal microscopy. The fast and accurate detection of aggregate induction in recipient cells will help to characterize general cellular pathways involved in aggregation and dissemination of protein aggregates.

DISCLOSURE OF POTENTIAL CONFLICTS OF INTEREST
No potential conflicts of interest were disclosed.
Unexpectedly decreased plasma cytokines in patients with chronic back pain

Introduction: Chronic back pain is one of the most important socioeconomic problems affecting the global population. Elevated levels of inflammatory mediators, such as cytokines, have been correlated with pain, but their role in chronic back pain remains unclear. The effectiveness of anti-inflammatory drugs seems to be limited for chronic back pain. The authors wanted to investigate the levels of inflammatory mediators in long-term medically treated patients with persistent chronic back pain.
Methods: Cytokine plasma levels of patients with chronic back pain (n=23), compared to pain-free healthy controls (n=30), were investigated by immunoassay. Patients with chronic back pain were exposed to long-term conservative medical therapy with physiotherapy and anti-inflammatories, also combined with antidepressants and/or muscle-relaxants.
Results: The patients with chronic back pain expressed lower levels of the chemokines MCP-1, CCL5, and CXCL6 compared to pain-free healthy controls. Significantly lower concentrations of the anti-inflammatory cytokines interleukin (IL)-4 and granulocyte-colony stimulating factor were also found. Interestingly, levels of pro-inflammatory cytokines (IL-2, IL-6, IL-1β, tumor necrosis factor alpha), IL-10, granulocyte-macrophage colony-stimulating factor, and stromal cell-derived factor 1 alpha showed no significant differences between the two groups.
Conclusion: This decrease of inflammatory mediators in medically treated patients with chronic back pain is of unclear origin and might be either a long-term side effect of medical therapy or related to chronic pain. Further longitudinal research is necessary to elucidate the underlying cause of these findings.

Introduction
Chronic back pain, defined as pain lasting at least 3 months, 1 is recognized as a major public health problem, producing significant economic and social burdens. 2 Several studies have demonstrated that the chronic back pain condition interferes with everyday activities and results in direct medical costs and lost productivity. 3,4 Increased serum levels of pro-inflammatory cytokines (interleukin [IL]-1β, IL-2, IL-6, and tumor necrosis factor alpha [TNF-α]) have been previously correlated with increased pain intensity in patients with different types of chronic pain. 5 Furthermore, low concentrations of the anti-inflammatory cytokines IL-4 and IL-10 were found in patients with chronic widespread pain, and the lack of anti-inflammatory cytokine activity was associated with a possible contribution to pain pathogenesis. 6 The most common spinal degenerative problems manifest in back pain, followed by neck and head pain. 7 Degeneration of the intervertebral disc (IVD) is a widely recognized contributor to back pain. [8][9][10] Patients with discogenic low back pain showed high levels of IL-6, IL-8, 11 and IL-1β 12 in IVD tissues; IL-1β was suggested as the key regulatory cytokine in the upregulation of factors involved in innervation and vascularization of human degenerated IVD. 13 High levels of inflammatory mediators (IL-1β, TNF-α, IL-6, and IL-8) were found in degenerated and herniated IVD, 14,15 and associated with pain development during IVD herniation and degeneration.
16 Painful 12 and degenerated 17,18 IVD tissues showed higher expression of the chemokines RANTES (CCL5) and granulocyte chemotactic protein 2 (CXCL6), and high levels of monocyte chemoattractant protein 1 (MCP-1) were found in the herniated lumbar nucleus pulposus. 19 MCP-1 was also elevated in the blood at the chronic stage of complex regional pain syndrome, 20 and elevated plasma levels of CCL5 and CXCL6 were found in patients with lumbar disc degeneration. 21 Furthermore, higher plasma levels of pro-inflammatory TNF-α and IL-6 were associated with painful herniated IVD 22,23 and low back pain. 24 People with chronic back pain, along with pain and impaired function, frequently experience anxiety and depression. Analgesics, non-steroidal anti-inflammatory drugs (NSAIDs), opioids, antidepressants and muscle-relaxants may be used for medical treatment of low back pain. Unfortunately, there is only limited evidence of the effectiveness of those drugs. 25 In line with the observation that medications perform poorly as treatments for chronic back pain, in this study we analyzed the expression levels of cytokines in conservatively medically treated patients with chronic back pain. We hypothesized that, despite medical therapy, patients with chronic back pain would show high levels of inflammatory mediators related to back pain, and low levels of anti-inflammatory mediators.

Sample collection
Blood was collected from patients with chronic back pain and pain-free healthy controls after written informed consent and approval by the ethics committee of the Canton of Lucerne (Study 730-May 16, 2013) were obtained. Plasma was isolated by Ficoll density gradient (Bioconcept, Allschwil, Switzerland) centrifugation for 20 min at 800 g in Greiner Leucosep tubes (Huberlab, Aesch, Switzerland).

Inclusion/exclusion criteria
We included patients aged over 18 years with long-term chronic back pain resistant to therapy. Any chronic back pain (of cervical, thoracic, or lumbar spine origin) was considered, with or without radiation to the extremities. Patients who had spine surgery in the past were also included. We excluded patients with acute back pain and those who responded to medical therapy.

Quantitative ELISA assays
Quantitative determination of CXCL6, IL-1β, CCL5, and stromal cell-derived factor 1 alpha (CXCL12) (Quantikine ELISA kits, R&D Systems, Abingdon, UK) in human plasma of patients with chronic back pain (n=23) and age-matched pain-free healthy controls (n=16) was done with a DTX 880 Multiplex reader (Beckman Coulter, Nyon, Switzerland). Experiments were performed according to the respective manufacturer's protocols.

Statistical analysis
For statistical analysis and comparison between the two main groups of chronic back pain and healthy control, we used the non-parametric Mann-Whitney-Wilcoxon U test for independent variables. For multiple comparisons and statistical analysis between groups and subgroups, we used the one-way ANOVA test with Tukey's post-hoc analysis. Data analysis was performed with SPSS version 24.0 for Windows (IBM Corporation, Armonk, NY, USA). Significance was indicated as *p<0.05; **p<0.01; ***p<0.001.
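For readers who want to mirror this analysis outside SPSS, the two reported tests can be sketched in Python with SciPy and statsmodels. The arrays below are hypothetical placeholder data, not the study's measurements:

```python
# Illustrative sketch of the reported statistics (the study used SPSS 24).
import numpy as np
from scipy.stats import mannwhitneyu, f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
pain = rng.normal(40, 10, 23)      # n=23 chronic back pain patients (hypothetical)
control = rng.normal(55, 10, 30)   # n=30 pain-free healthy controls (hypothetical)

# Two-group comparison: non-parametric Mann-Whitney-Wilcoxon U test
u, p = mannwhitneyu(pain, control, alternative="two-sided")
print(f"Mann-Whitney U={u:.1f}, p={p:.4f}")

# Subgroup comparison: one-way ANOVA followed by Tukey's post-hoc test
sub1, sub2 = pain[:12], pain[12:]  # hypothetical pain subgroups
print(f_oneway(sub1, sub2, control))
values = np.concatenate([sub1, sub2, control])
groups = ["sub1"] * len(sub1) + ["sub2"] * len(sub2) + ["control"] * len(control)
print(pairwise_tukeyhsd(values, groups))
```

The non-parametric U test is the conservative choice for the two-group comparison given the small, possibly non-normal samples, while Tukey's procedure controls the family-wise error rate across the many subgroup-versus-control contrasts.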
Patients with chronic back pain
For this study, we collected plasma from patients with chronic back pain (n=23) undergoing therapy at a pain clinic. All patients had back pain for more than 1 year, and 56% for more than 5 years. The mean age was 52.5 years (SD ±15.9; range 26-84). Back pain was predominantly lumbar and, in two-thirds of the cases, of discogenic origin. The reported maximum pain intensity was on average 7.7±1.7 (range 4-10; Numeric Pain Rating Scale, NPRS 0-10). All patients were long-term treated with one or more conservative medical therapies, such as physiotherapy and NSAIDs, alone or combined with antidepressants and/or muscle-relaxants. Patients' characteristics are summarized in Table 1. The quantitative single ELISA assays showed significantly lower concentrations of CCL5 (Figure 2A) and CXCL6 (Figure 2B) in plasma of patients with chronic back pain (n=23) compared to age-matched pain-free healthy controls (n=16). There were no significant differences in CXCL12 (Figure 2C) or IL-1β (Figure 2D) plasma concentrations. Furthermore, we divided the chronic pain group into 14 subgroups (according to type of pain, cause of pain, spine surgery operation, medical therapy, and pain history), and we performed a multiple comparison statistical analysis to test whether there were differences between subgroups and to compare them to the healthy control group. For all analyzed cytokines, there were no significant differences between the chronic pain subgroups. The multiple comparison analysis confirmed statistical significance between chronic back pain and the healthy control group for G-CSF, IL-4, MCP-1 (Table 2), CCL5, and CXCL6 (Table 3). These cytokines were significantly different in almost all subgroups compared to the healthy control group. Tables 2 and 3 show, for each subgroup, the mean ± standard deviation and the statistical significance (p-value) of the comparison with the healthy control group.

Notes for Tables 2 and 3: Mean ± standard deviation (p-value: *p<0.05; **p<0.01; ***p<0.001). (a) Multiple types: 2 or more types of pain (lumbar and/or cervical and/or thoracic). (b) Local pain at the spine area with additional unspecific radiation to the extremities. (c) Facetogenic, osteochondrosis, vertebrogenic, fracture. (d) Post-operation pain: chronic pain following a spine surgery operation in the pain area. (e) Other drugs: antidepressants and/or muscle-relaxants. Abbreviations: G-CSF, granulocyte-colony stimulating factor; GM-CSF, granulocyte-macrophage colony-stimulating factor; IL, interleukin; MCP-1, monocyte chemoattractant protein 1; TNF-α, tumor necrosis factor alpha; CCL5, RANTES; CXCL6, granulocyte chemotactic protein 2; CXCL12, stromal cell-derived factor 1 alpha; NSAIDs, non-steroidal anti-inflammatory drugs.

Discussion
Contrary to our expectation of positively correlating chronic back pain with increased levels of pro-inflammatory cytokines, in this study we found that plasma levels of pro-inflammatory cytokines were comparable between medically treated patients with chronic back pain and pain-free healthy controls. Furthermore, patients with chronic back pain showed significantly lower plasma levels of chemotactic and anti-inflammatory cytokines. We quantified, by immunoassay, the circulating concentrations of possible biomarkers related to pain, inflammation, and degeneration of the IVD, and we found that, in plasma of medically treated patients with chronic back pain, there were significantly lower concentrations of chemokines such as MCP-1, CCL5, and CXCL6. Furthermore, the pro-inflammatory cytokines IL-2, IL-6, IL-1β, and TNF-α, as well as IL-10, GM-CSF, and CXCL12, showed no significant differences between chronic back pain patients and pain-free healthy controls. This is a surprising result, because in several studies a high expression of pro-inflammatory and chemotactic cytokines has been found in herniated and degenerated IVD, 11,12,15,17,18,26,27 and correlated with the pathogenesis of pain. 12,16,28 Elevated plasma levels of MCP-1 have been observed in the blood at the chronic stage of complex regional pain syndrome, 20 while high levels of CCL5 and CXCL6 have been found in patients with lumbar disc degeneration. 21 However, in those studies the intervention with medical treatments was mostly not specified. In a study where one of the exclusion criteria was the use of analgesic drugs, elevated serum levels of TNF-α and IL-6 were shown in individuals with back pain due to a herniated lumbar disc. 23 Interestingly, we also showed significantly lower plasma levels of the anti-inflammatory cytokines IL-4 and G-CSF in medically treated patients with chronic back pain. The anti-inflammatory cytokines IL-4 and IL-10 have been demonstrated to have potential as treatments for persistent inflammatory pain, 29 but low concentrations of these two cytokines have been found in patients with chronic widespread pain and associated with a possible contribution to pain pathogenesis. 6 G-CSF, a hematopoietic growth factor for neutrophils, can have immune-stimulatory effects, and serum levels are often elevated in response to infection; 30 however, G-CSF has also been proven to be an anti-inflammatory immune-modulator. 31 There is a possibility that the reduced plasma levels of some cytokines and/or chemokines are related to chronic back pain; however, the findings could also be the result of exposure to long-term conservative medical therapy. Plasma cytokine levels have been shown to be altered by various environmental and personal factors: medical treatments, [34][35][36][37]39,40 depression, 41,42 physical activity, 43 and alcohol and nicotine consumption. 44 In our study, patients had a long-term history of chronic pain and long exposure to physiotherapy and pharmaceutical treatments with NSAIDs, also combined with other drugs (antidepressants and muscle-relaxants). Such multimodal conservative therapy is a standard approach to treating chronic back pain of different origins. The drugs commonly prescribed for chronic low back pain 32,33 can reduce cytokine expression, as has been demonstrated for antidepressants 34,35 and NSAIDs. 36,37 In patients with herniated IVD, elevated plasma levels of TNF-α decreased after treatment with an opioid pain medication (tramadol), 22 while natural phyto-pharmaceutical components, such as curcuma and epigallocatechin 3-gallate, have recently been shown to reduce IVD inflammation in vitro. 20,25 It is thus possible that pain persists while NSAIDs and other anti-inflammatory molecules lower the levels of pro-inflammatory cytokines.
In another study, 38 we showed that people with spinal cord injury (SCI) had, despite higher infection rates and elevated serum C-reactive protein concentrations, lower plasma levels of TNF-α and other cytokines when compared with age-matched able-bodied healthy controls. Similar to the chronic back pain patients, persons with SCI consume above-average amounts of NSAIDs, antidepressants, and muscle-relaxants. We further analyzed chronic back pain subgroups in a multiple statistical comparison analysis to test whether different conditions, such as type and origin of pain, spine surgery operation, medical therapy, and history of pain, influence plasma cytokine levels. We found no differences between chronic back pain subgroups and, in most cases, the analysis confirmed a significant decrease of cytokines in chronic back pain subgroups compared to the healthy control group. On the other hand, not all studies could correlate cytokines with chronic pain. Andrade et al 45 observed that local IL-1β and IL-6 cytokine expression in lumbar disc hernia patients suffering from chronic sciatic pain did not differ from that of the painless healthy control group. Such a lack of correlation between systemic and local cytokine levels and pain is in line with the observation that some anti-inflammatory drugs perform poorly as a treatment for chronic pain. The minor effects of most current medications 46 could be because inflammation is not the only causal variable for back pain. Indeed, inflammation is viewed as only one of the aspects, according to the recognized bio-psycho-social model of pain. 47 Several studies have revealed that chronic pain is related to a pain memory encoded within the nervous system, 48,49 and neural modification could be a way to reverse the pain memory circuits and alleviate chronic pain. 50,51 A limitation of this study is the small number of samples, due to the recruitment of long-term chronic pain patients resistant to every type of pain therapy. Generally, back pain evolves into chronic pain in approximately one third of affected individuals, 52 and the remaining two thirds of patients with different grades of chronicity are satisfied with medical therapy. 53 Further analysis of a larger patient sample and, even better, a longitudinal study would be very interesting in the future. A minor limitation of our study was the unequal gender distribution in the healthy control group for one of the immunoassays (multiplex ELISA assay). However, this is unlikely to have introduced bias, because there were no differences between males and females in the chronic back pain group or in the other immunoassay (quantitative ELISA assay).

Conclusion
We cannot conclude whether the observed reduced plasma levels of inflammatory mediators in medically treated patients with chronic back pain are related to the chronic pain itself or are due to the long-term effects of medications. Inflammatory mediators can be altered by both physiological and environmental factors. Our results support the idea that inflammation is not the only cause of chronic back pain and that other factors are involved in the process of pain. In view of that, more studies are needed to discover the underlying reason for the decrease in the studied biomarkers, which may ultimately lead to better pain management.
In addition, since chronic back pain is considered to be a disease of the central nervous system, 54 we suggest that it could be worthwhile to analyze biomarkers implicated in the regulation of the central nervous system and the risk of developing chronic back pain. For example, a polymorphism of the potassium channel alpha subunit KCNS1 is one of the first prognostic indicators of chronic pain risk; 55 the calcium channel gamma subunit gene CACNG2 significantly affects susceptibility to chronic pain following nerve injury, 56 and the brain-derived neurotrophic factor BDNF regulates neuronal function and induces expression of pain-associated cation channels. 15 Author contributions SC: substantial contributions to conception and design, data acquisition, data analysis and interpretation, drafting the article; DP: data acquisition, data analysis, critically revising the article; AB: data analysis and interpretation, critically revising the article; GL: patient recruitment and consent, sample management, critically revising the article; JVS: substantial contributions to conception and design, data interpretation, critically revising the article, final approval of the version to be published, agreement to be accountable for all aspects of the work. All authors read and approved the final manuscript. Disclosure The authors report no conflicts of interest in this work.
Use of ultrasound Doppler to determine tooth vitality in a discolored tooth after traumatic injury: its prospects and limitations

When a tooth shows discoloration and does not respond to the cold test or electric pulp test (EPT) after a traumatic injury, its diagnosis can be even more difficult due to the lack of proper diagnostic methods to evaluate its vitality. In these case reports, we hope to demonstrate that ultrasound Doppler might be successfully used to evaluate the vitality of the tooth after trauma and help reduce unnecessary endodontic treatments. In all three of the present cases, the teeth were discolored after traumatic injuries and showed negative responses to the cold test and EPT. However, they showed distinctive vital reactions in the ultrasound Doppler test during the whole observation period. In the first case, the tooth color returned to normal, and the tooth showed a positive response to the cold test and EPT at 10 weeks after the injury. In the second case, the tooth color had returned to its normal shade at 10 weeks after the traumatic injury but remained insensitive to the cold test and EPT. In the third case, the discoloration was successfully treated with vital tooth bleaching.

Introduction
Tooth vitality is determined using the cold test, the electric pulp test (EPT), radiographic examination, or clinical signs such as tooth discoloration. However, tooth vitality would be more properly evaluated by the blood supply in the pulp than by tests such as the cold test and EPT, which actually evaluate the sensitivity of the nerves. 1 When a tooth experiences a traumatic injury, the evaluation of its vitality is difficult because it occasionally does not respond to the cold test or EPT due to the reduced conduction ability of the sensory nerves or nerve endings. 2 This lack of response seems to be caused by damage, inflammation, compression, or tension of the apical nerve fibers, which require approximately eight weeks or more to return to normal functioning. 3 Tooth discoloration may follow a traumatic injury. 4,5 When a tooth shows discoloration and also does not respond to the cold test or EPT after a traumatic injury, its diagnosis can be even more difficult due to the lack of proper diagnostic methods to evaluate its vitality. The discolored tooth may return to its original shade and translucency completely or incompletely when tooth vitality is preserved. 4,5 Malgren and Hübel reported that the discoloration disappeared within 4 weeks to 6 months in eight out of nine permanent teeth that had been root fractured and showed tooth discoloration after the trauma. 6 They reported that all of the teeth had regained their normal sensibility when the discoloration disappeared. Transient color changes have also been described in connection with transient apical breakdown (TAB) after luxation injuries in permanent teeth. 7,8 The discoloration and loss of electrometric sensibility returned to normal when there was radiographic evidence of the resolution of the TAB. However, this resolution usually takes a long time to be confirmed. Ultrasound Doppler imaging has been used in many medical fields as a non-invasive and radiation-free technique to assess blood flow in micro-vascular systems. Ultrasound has also recently been applied to dentistry. Some studies have shown that ultrasound Doppler imaging provides sufficient information on microvascularity for dental treatment. 9-11 Recently, Yoon et al.
reported that ultrasound Doppler could be effectively used to evaluate the blood flow in the pulp spaces. 1,12 They reported that it can measure the reduced blood stream speed after a local anesthetic injection containing 1:80,000 epinephrine. They also indicated the possibility that this Doppler system could be used effectively in the diagnosis of traumatic injury. 12 In this paper, three cases are presented that were seen in the Department of Conservative Dentistry, Yonsei University Dental Hospital, Seoul, Korea, during the past two years. In the beginning, all three teeth were discolored after a traumatic injury and showed negative responses to the thermal test and EPT but also showed a distinctive vital reaction in the ultrasound Doppler test unit (MM-D-K, Minimax, Moscow, Russia). In the first and second cases, the tooth discolorations returned to normal at 10 weeks after the injuries. In the third case, the tooth discoloration was successfully treated by vital bleaching. In this case series, we hope to demonstrate that ultrasound Doppler might be successfully used to evaluate the vitality of teeth after trauma and help reduce unnecessary endodontic treatments.

Case 1
A 47-year-old female patient visited our department due to a traumatic injury to her upper right lateral incisor (tooth #12). She had sustained the injury 3 days before she visited our clinic, from a fist blow to her face. Tooth #12 was subluxated and showed a positive response to a percussion test. It did not show any response to a cold test or EPT. The tooth was diagnosed with subluxation, and we decided to wait and observe its course. There was no discomfort during the 2 weeks after the injury, but there was no response to the thermal test or EPT, and a reddish discoloration was observed (Figure 1a). At 6 weeks after the injury, the patient did not report any discomfort, but the discoloration persisted, and the tooth did not respond to cold or EPT. We decided to use the ultrasound Doppler unit to evaluate the vitality of the pulp; the result is shown in Figure 1b. Tooth #12 produced a typical pulsated image, which represents normal vital pulp (Figure 1b). We explained the results and implications of the test to the patient. We decided to continue to wait and observe the tooth because the patient had no discomfort, did not mind the discoloration at that time, and was willing to wait to determine whether the tooth could recover to normal without any treatment. At 10 weeks after the injury, the tooth had returned to a normal shade and regained its normal responses to the cold test and EPT (Figure 1c).

Figure 1. (a) In case 1, discoloration of tooth #12 was observed at 2 weeks after the injury; (b) the result of an ultrasound Doppler test at 6 weeks after the injury, showing a typical pulsated image, which represents normal vital pulp; (c) at 10 weeks after the injury, the tooth had returned to a normal shade.

Case 2
A 30-year-old female patient visited our clinic for further treatment of traumatized anterior teeth. She had sustained an injury from a fall 2 weeks earlier and had visited a local clinic immediately after the trauma. The subluxated tooth #21 was splinted with composite resin and wire from tooth #13 to tooth #23, and the local dentist then referred her to our clinic. In the periapical radiographic view, the root and periapical area were normal (Figure 2a).
Tooth #21 showed negative responses to the thermal test and EPT, a positive response to the percussion test, and pinkish discoloration (Figure 2b). The other teeth showed normal responses to all of the tests. In the ultrasound Doppler test, tooth #21 produced a normal pulsated response like those of the other teeth, and we were also able to hear the beat of the pulsation from the speaker (Figure 2c). At 4 weeks after the injury, tooth #21 again showed a normal response to percussion. The results of the other tests were the same as at the previous visit. At 6 weeks after the injury, tooth #21 still showed pinkish discoloration and negative responses to the thermal test and EPT. At 10 weeks after the injury, the shade of tooth #21 had returned to normal (Figure 2d). At 12, 16, 20, and 24 weeks after the injury, the patient did not feel any discomfort at all. In the ultrasound Doppler test, tooth #21 showed a vital response, but it did not respond to the cold test or EPT. In the periapical view, the root and periapical area were within the normal range. The negative response continued throughout the 9-month follow-up period. At that time, she was pregnant and wanted to delay her next visit until after her delivery.

Case 3
A 22-year-old female patient visited our department to have her teeth bleached. She thought her teeth were generally yellowish, and she was especially unsatisfied with the shade of tooth #11, which showed yellowish-brown discoloration (Figure 3a). She reported experiencing trauma to her anterior teeth when she was in primary school, and she had finished orthodontic treatment approximately 7 years before presentation. However, she did not know exactly when tooth #11 had started to become discolored. In the radiograph, the coronal pulp space was obliterated, whereas the pulp space was present in the root area. There was no radiolucency in the periapical region, but the root apex was slightly shortened (Figure 3b). In the cold test, tooth #11 did not show any response, although she occasionally displayed a delayed response. The tooth did not respond to the EPT. In the ultrasound Doppler test, tooth #11 showed an image and sound typical of a vital tooth (Figure 3c). We decided to perform vital tooth bleaching first and then re-evaluate the color to decide whether restorative treatment was needed. Home bleaching was started using 15% carbamide peroxide gel (Opalescence, Ultradent, South Jordan, UT, USA). Additional home bleaching was continued only for tooth #11 after she was satisfied with the shade of her other teeth. After approximately 2 months of bleaching, she was satisfied with the shade of tooth #11 and did not want any further treatment (Figure 3d).

Figure 2. (a) In case 2, tooth #21 was splinted at a local clinic after a subluxation injury that had occurred 2 weeks before the patient visited our clinic. It showed a negative response to the thermal test and EPT, and a positive response to the percussion test; (b) tooth #21 showed pinkish discoloration; (c) in the ultrasound Doppler test, tooth #21 showed a normal pulsated response like that of the other teeth; (d) at 10 weeks after the injury, the shade of tooth #21 had returned to normal.

Discussion
Pink discoloration, which may occur within 2–3 days after a traumatic injury, is caused by the rupture of capillaries and the release of red blood cells into the pulp chamber. Hemolysis leads to the diffusion of hemoglobin into the dentinal tubules, which shifts the tooth color from pinkish to grayish-blue.
Some fading of the grey-blue tint can occur when the blood supply to the pulp is maintained and the pulp survives. 6 In the first case, the ultrasound Doppler showed a typical pulsated image when the tooth did not respond to the cold test and EPT in the early phase after a traumatic injury. In ongoing follow-up, the Doppler test continued to show a vital image, but the tooth still did not respond to the other two tests. The tooth regained its shade by 10 weeks after the traumatic injury, and its response to the cold test and EPT returned to normal. This finding is consistent with previous reports indicating that the discoloration returned to normal when the teeth regained their vitality, and it demonstrates that ultrasound Doppler can be successfully used to determine the vitality of teeth during the period when they do not respond to the cold test and EPT after a traumatic injury. 4-8 Ultrasound Doppler may help decrease unnecessary endodontic treatments, which might otherwise be performed due to a lack of proper diagnostic methods after a traumatic injury. In the first and second cases, 10 weeks were needed for the tooth to regain its color and its responses to the cold test and EPT. This result is consistent with a previous study in which the discoloration disappeared within 4 weeks to 6 months after root fracture resulting in tooth discoloration after trauma. 6 The second case was interesting in that the discoloration returned to normal by 10 weeks after the injury, but the tooth did not respond to the cold test and EPT even at 9 months after the traumatic injury, although it showed a consistent vital image in the Doppler test from the beginning. False positive responses in the ultrasound Doppler test have not yet been studied. In the present study, a 20-MHz ultrasound Doppler probe was used. The frequency of the ultrasound is very important because it determines the penetration depth of the ultrasound wave. Although a 20-MHz frequency has been reported to efficiently penetrate the enamel and dentin and detect the blood flow in the pulp spaces, it might be possible to detect blood flow outside the pulp spaces if the hard tissue is very thin. 1,12 The potential for false positive responses with the ultrasound Doppler probe requires further investigation. In the second case, long-term follow-up is necessary to verify whether vitality was actually maintained, which could be confirmed by a positive response to the cold test and EPT. However, in this case, the tooth returned to its normal shade by 10 weeks after the traumatic injury, which suggests that the blood supply to the pulp was maintained and the pulp survived. 6 More time might be needed for the nerve fibers to heal. Further follow-up is required to determine whether the test results are true or false positives. In the third case, the patient's tooth did not respond to the cold test and EPT, although she occasionally showed an obscure, delayed positive response to the cold test. The cold test depends on the hydrodynamic movement of fluid within the dentinal tubules, which excites the A-fibers. 13 Teeth with calcified pulp spaces might have normal and healthy pulps, but cold stimuli might not be able to excite the nerve endings due to the insulating effect of the thicker layer of dentin, which is the result of secondary and reactionary dentin formation. 14 Ehrmann reported that EPT is particularly effective in older patients and in teeth that have limited fluid movement through the dentinal tubules as a result of dentine sclerosis and calcification of the pulp space, because thermal pulp tests are usually inadequate in these situations. 14 Klein reported that a patient was unlikely to respond to a cold test but might respond to an EPT if the pulp space had been significantly calcified. 15 In their case, more electric current was often needed to elicit a response because there was an increased dentin layer and a diminished pulp cavity or a fibrotic pulp. In the third case, tooth #11 was diagnosed as a vital tooth based on the results of the ultrasound Doppler test because it displayed a consistent positive sign throughout the observation period. In this case, the coronal pulp space was obliterated, whereas the pulp space was present in the root area. Because the ultrasound Doppler probe tip was positioned apically, there was a possibility of detecting the blood flow of the root canal. Furthermore, the patient showed a response to the cold test, although the response was delayed and inconsistent. For further research, we need more cases and studies related to ultrasound Doppler. Other methods for evaluating the vascularity of the pulp are laser Doppler and pulse oximetry. 16-20 Laser Doppler applies a laser to transmit light into the pulp blood vessels through the tooth structure, and a red and infrared LED light beam is used in pulse oximetry for the same purpose. However, the discoloration of the tooth caused by the deposition of blood pigments in the traumatized tooth may hinder the penetration of light in both laser Doppler and pulse oximetry. 18,20,21 The ultrasound wave used in the ultrasound Doppler unit can detect blood flow regardless of coronal discoloration, so it can be more useful for discolored teeth.

Figure 3. (a) In case 3, tooth #11 showed yellowish-brown discoloration; (b) the coronal pulp space was obliterated, whereas the pulp space was present in the root area. There was no radiolucency in the periapical area, but the root apex was slightly shortened; (c) in the ultrasound Doppler test, tooth #11 showed an image typical of a vital tooth; (d) the patient was satisfied with the shade of tooth #11 after vital bleaching treatment.

Conclusions
Tooth discoloration after a traumatic injury was corrected when the ultrasound Doppler produced a typical pulsated image, which represents normal vital pulp. Ultrasound Doppler might be an effective tool to evaluate tooth vitality when the cold test and EPT do not give proper information, especially after a traumatic injury. However, the use of ultrasound Doppler requires further research on the potential for false positive and negative responses to increase its clinical reliability.
Low Cost High Integrity Platform

Developing safety critical applications often requires rare human resources to complete successfully, while off-the-shelf block solutions appear difficult to adapt, especially during short-term projects. The CLEARSY Safety Platform fulfils the need for a technical solution to overcome the difficulties of developing SIL3/SIL4 systems, with its technology based on a double processor and a formal method with proof to ensure safety at the highest level. The formal method, namely the B method, has been heavily used in the railways industry for decades. Using its IDE, Atelier B, to program the CLEARSY Safety Platform ensures a higher level of confidence in the software generated. This paper presents this platform, aimed at revolutionising the development of safety critical systems, developed through the FUI project LCHIP (Low Cost High Integrity Platform).

A Revolution for Developing Safety Critical Applications
Developing safety critical applications often requires rare human resources to complete successfully, while off-the-shelf block solutions appear difficult to adapt, especially during short-term projects. Developed during the R&D project FUI LCHIP [5], the CLEARSY Safety Platform fulfils the need for a technical solution to overcome the difficulties of developing SIL3/SIL4 systems. Its technology is based on a smart combination of diverse hardware (2x PIC32 micro-controllers) and a formal method with proof, heavily used in the railways industry for decades. It avoids most testing and ensures safety at the highest level. The CLEARSY Safety Platform is both a software and a hardware platform aimed at designing and executing safety critical applications. One formal modelling language (B) is used to program the board. Programs are developed using a dedicated IDE or can be the by-product of a translation from a Domain Specific Language to B. The IDE takes care of the verification of the software (type check, proof, compilation) and then ensures its uploading to the hardware platform. The program is guaranteed to execute until a misbehaviour is detected, leading to a safe restricted mode where the board outputs are deactivated.

Added value
The CLEARSY Safety Platform eases the development of safety critical applications as:
• it covers the whole development cycle of control-command systems based on digital inputs/outputs;
• the testing phase is dramatically reduced, as mathematical proof replaces unit and integration testing (based on a formal language (B) and related proof tools).

Eased certification
The CLEARSY Safety Platform eases the certification of safety critical applications, as the safety cannot be altered by the developer. It comes with a certification kit to be used for the safety case of the system embedding the CLEARSY Safety Platform. The building blocks of the CLEARSY Safety Platform, already certified in international railway projects (platform screen door controllers in Brazil (São Paulo line 15) and Sweden (Stockholm Citybanan), remote IOs in Canada), have been used to develop a generic version of this technology that can fit a broader range of applications.

B Technology
With the CLEARSY Safety Platform, the very technical aspects related to safety are taken into account by the platform, leaving the developer to focus only on the development of the function to perform. The CLEARSY Safety Platform is made of two parts: an IDE to develop the software and an electronic board to execute this software. The full process is described in figure 1.
Software development
It starts with the specification (in natural language) of the function to develop. The developer has to provide a B model of it (specification and implementation) using the following schema: the function to program is a loop, where the following steps are performed repeatedly in sequence:
• the inputs are read;
• some computation is performed;
• the outputs are set.
The steps related to inputs and outputs are fixed and cannot be modified. Only the computation may be modified to obtain the desired behaviour. The implementation is usually handwritten but can also be generated automatically with the B Automatic Refinement Tool. The B models are proved (mostly automatically, as the level of abstraction of typical command & control applications is low) to be coherent and to contain no programming error. From the implementable model, two binaries are generated:
• binary1, obtained via a dedicated compiler, developed by CLEARSY, transforming a B model into a HEX file;
• binary2, produced with the Atelier B C code generator and then compiled with the GCC compiler into another HEX file.
Each binary represents the same function but is supposed to be made of different sequences of instructions because of the diversity of the tool chains. The two binaries binary1 and binary2 are then linked with:
• a sequencer, in charge of reading the inputs, executing binary1 then binary2, and then setting the outputs;
• a safety library, in charge of performing safety verification (more details at https://www.clearsy.com/en/download/download-documentation).
In case of failing verification, the board enters panic mode: the outputs are deactivated (no power is provided to the Normally Open (NO) outputs, so the output electric circuits are open), the board status LED starts flashing, and the board enters an infinite loop doing nothing. A hard reset (power off or reset button) is the only way to interrupt this panic mode. The final program is thus made of binary1, binary2, the sequencer, and the safety library. The memory mappings of binary1 and binary2 are separate. This program is then uploaded on the two micro-controllers µC1 and µC2.

Verification
The bootloader, on the electronic board, checks the integrity of the program (CRC, separate memory spaces). Then both micro-controllers start to execute the program. During execution, the following verifications are performed.
Internal verification:
• every cycle, the binary1 and binary2 memory spaces (variables) are compared;
• regularly, the binary1 and binary2 memory spaces (program) are compared in deferred mode;
• regularly, the identity between memory output states and physical output states is checked, to detect whether the board is unable to command the outputs.
External verification:
• regularly (every 50 ms at the latest), memory spaces (variables) are compared between µC1 and µC2.
If any of these verifications fail, the board enters the panic mode.
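The sequencing-and-comparison scheme just described (read the inputs, run both binaries, compare their variable states, then either commit the outputs or fall into panic mode) can be summarized in a few lines. The sketch below is our own illustration of the idea, not CLEARSY's code, and every name in it is invented:

```python
def run_sequencer(read_inputs, run_binary1, run_binary2, set_outputs, panic):
    """Illustrative dual-binary sequencer loop (all names invented).

    binary1 and binary2 implement the same proved B model but come from
    diverse tool chains (B->HEX vs. B->C->GCC); any divergence between
    their variable states is treated as a fault.
    """
    while True:
        inputs = read_inputs()
        state1 = run_binary1(inputs)   # first instance of the function
        state2 = run_binary2(inputs)   # second, diversely compiled instance
        if state1 != state2:           # per-cycle memory comparison
            panic()                    # deactivate outputs, flash LED, halt
            return
        set_outputs(state1)            # states agree: commit the outputs
```

The deferred program-memory comparison and the cross-check between the two micro-controllers described above would sit alongside this loop in the safety library.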
Tools
The whole process is fully supported by adequate tools. In figure 2, the tools and the text/binary files generated are made explicit, both for the application (the path used every time an application is developed) and for the safety belt (developed once and for all by the IDE development team). Note that, from the abstract formal model, one part of the software is developed in B with a concrete formal model, while the other part is developed manually. This happens when using B provides no added value (for example, for low-level IO). A component modelled in B and implemented manually is called a basic machine. The tools are issued from Atelier B, except:
• the B to HEX compiler, initially developed to control platform screen doors for metro lines in Brazil; this tool proceeds in two steps, a translation from B to MIPS assembly, then from MIPS assembly to HEX;
• the C to HEX GCC compiler;
• the linker combining the two HEX files with the safety sequencer and libraries;
• the bootloader.

Safety principles
The safety is built on top of a few principles:
• a B formal model of the function to develop, proved to be coherent, to correctly implement its specification, and to be free of programming errors;
• four instances of the same function running on two micro-controllers (two per micro-controller, with different binaries obtained from diverse tool chains) and the detection of any divergent behaviour among the four instances;
• the deferred cross-verification of the programs on the two µCs;
• outputs that require both µC1 and µC2 to be alive and running, as one provides the energy and the other the command;
• output physical states that are regularly verified to comply with the memory states, to check the ability of the board to command its outputs;
• input signals that are continuous (0 or 5 V) and are made dynamic (by the addition of a frequency signal) in order to prevent a short-circuit current from being read as high-level (permissive) logic.
From a safety point of view, the current architecture is valid for any kind of mono-core processor. The decision to use PIC32 micro-controllers (able to deliver around 50 DMIPS) was made based on our knowledge of and experience with this processor. Implementing the CLEARSY Safety Platform on other hardware would "only" require the existing electronic board and software tools to be modified, without much impact on the safety demonstration.

C Complementary technologies
The CLEARSY Safety Platform implements a well-oiled automatic process from a proved B model to a safe execution. However, several limitations prevent larger exploitation:
• the B language (even restricted to a subset) is not widely disseminated and is often considered difficult to use by engineers;
• fully automatic model proof is only achieved with "bounded complexity" algorithms (even so, we are not considering implementing metro automatic pilots with the CLEARSY Safety Platform);
• fine-tuning and debugging applications that run simultaneously on two processors are difficult to achieve.
To overcome these limitations, several features (see figure 3 for the global picture) have been added to the CLEARSY Safety Platform; they are detailed in the following sections:
• a connection with Domain Specific Languages, to enable engineers to model in their usual language and to keep the formalities behind the curtain;
• improved proof performance, so that formal verification is also hidden behind the curtain, as long as the complexity of the implemented algorithms is compatible with the automatic proof capabilities (we are not aiming at metro automatic pilots here);
• debugging facilities on the host with a dedicated VM.

C.1 Connection with DSL
A B model is the mandatory entry point of the CLEARSY Safety Platform. However, this B model can be obtained from the translation of a DSL model into B. This approach seamlessly involves domain experts without changing their modelling languages. Experiments were conducted with SNCF [4] to translate relay schemes into B models for various applications (ITCS wrong-track temporary installation, signal controller); a toy sketch of the underlying idea follows.
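To see why such a translation is natural, note that a relay scheme is essentially wired boolean logic: each relay's state is a boolean equation over contacts and other relays, re-evaluated continuously. The sketch below is only our reading of that idea (all names invented); the actual translator produces B models, not Python:

```python
def evaluate_scheme(inputs, previous):
    """One evaluation cycle of a toy relay scheme (invented names).

    inputs:   sensed contact states, e.g. {"track_free": True, ...}
    previous: relay states from the previous cycle, so that a relay
              can latch through its own contact.
    """
    state = {}
    state["signal_relay"] = (
        inputs["track_free"]
        and (inputs["route_set"] or previous["signal_relay"])  # latching contact
        and not inputs["emergency_release"]                    # break contact
    )
    return state
```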
The translation allows relay schemes to be described interactively and formal models to be generated (figure 4). In this way, a wired logic installed for decades can easily be replaced by a safe programmed logic. The poster "Porting Relay-Based Schemas to a SIL4 Programmable Control Platform" is available at https://www.clearsy.com/wp-content/uploads/2019/07/Affiche-A1-v3.pdf. Other DSLs are being considered, such as Grafcet / sequential function charts, to directly address automation outside railways.

C.2 Improved Proof
The B method enforces that the development is mathematically sound by producing proof obligations. A proof obligation has a goal and hypotheses, all expressed in a mathematical formalism (first-order logic, with set theory and integer arithmetic). Historically, proof obligations have been verified using a mix of the automatic proof procedures provided in Atelier B and user insight (e.g. case splitting, quantifier instantiation, etc.). All proof-related tools are part of the IDE and have been in place since the inception of Atelier B. In order to benefit from the technical and scientific improvements in the field of automated reasoning, a new proof obligation generator has been developed. Proof obligations are now produced in an XML-based format, which can then be translated to the native format of any formal verification tool. As a proof of concept, a connection to third-party provers was conducted as part of the B-Ware initiative, using the Why3 platform [1] as a gateway. This experiment produced excellent results in terms of proof automation, in particular for the Alt-Ergo prover [3]. The results of this work are now being integrated into Atelier B to provide new proof features in the IDE. Using this approach, we target full proof automation for selected DSLs. Cubicle [2] is a model checker for verifying safety properties of transition systems manipulating arrays. Cubicle finds inductive invariants of systems through the integration of the SMT solver Alt-Ergo with a backward reachability algorithm. In order to improve its reachability algorithm, we developed a new approach based on program unwinding, reminiscent of property-driven reachability. This algorithm has then been extended to reason about weak memory models for verifying properties of assembly x86-TSO programs. Our extension relies on an axiomatic memory model tuned for SMT and a specific backward reachability algorithm that exploits a new partial-order reduction technique for the TSO model of x86. First experiments on benchmarks ranging from synchronization barriers to mutual-exclusion implementations, such as the spinlock from the Linux 2.6 kernel, show that this approach is very promising.

C.3 A Virtual Machine Approach
One original contribution of the LCHIP project is the possibility to run the embedded program in a virtual machine (VM) of a high-level programming language. This primarily provides an alternative means of execution, which offers new ways of debugging and of improving security. Indeed, the original compilation models considered (which target the C language or the PIC assembly) follow similar patterns, but with a VM we can produce binaries with a very different execution path, ensuring redundancy checks on the behaviour of programs. To this end, we propose a specific implementation of the virtual machine of the OCaml language.
This machine takes advantage of the expressiveness of the language (which supports the functional, imperative, object-oriented and modular paradigms), as well as of its guarantees (such as the safety provided by its static type system). A first experiment in implementing this VM for the PIC18, called OCaPIC [9], showed the feasibility of running complex programs on micro-controllers with scarce resources (a few KiB of RAM, 32 KiB of flash memory). This approach was then generalized with the OMicroB [8] system, which starts from a bytecode executable program for the OCaml VM and embeds the bytecode in a C file. This generic approach provides the possibility to compile and load a given program on different families of micro-controllers (AVR, ARM, PIC32, ...). To do this, several types of optimization are necessary, both to reduce the size of the bytecode and for automatic memory management. OMicroB produces a binary that embeds the bytecode interpreter, the execution library and the bytecode. Note that since the OMicroB compilation path uses the C language, like the previously described compilation path that translates B to C, security concerns would entail the use of another C compiler in order to dissipate probable unknown bugs inherent in the chosen C compiler. The OMicroB system has successfully been ported to the LCHIP hardware, and various OCaml programs have been executed on the platform. Debugging is made easier by using OMicroB, since it comes with a simulator that can represent interactions with the external hardware to which the micro-controller(s) are connected. Furthermore, for possible modifications of the hardware (as discussed in section B), this approach is easily adaptable, since it has been designed to be portable to many architectures. In order to run the program derived from the B model, we need to provide a source-to-source translator from B0 to OCaml, as shown in figure 3. This translator will be very close to the C4B translator that produces C source code, since we can take advantage of all the imperative constructs of the OCaml language (by using mutable records and arrays as equivalents of C variables and data structures, for example). As other future work, the internal checks between the two binaries could be orchestrated by a synchronous extension "à la Lustre" called OCaLustre [7]. Since this extension is built on top of the OCaml language, it is completely compatible with our virtual machine. An additional interest of this virtual machine approach is to factor in property checks at the bytecode level, such as the estimation of the WCET (Worst Case Execution Time) for programs that do not dynamically allocate memory (like synchronous programs) [6]. This is currently possible on hardware where the memory model is simple (no caches, simple scalar pipelines), like some AVR micro-controllers, which guarantees the timing compositionality of each (bytecode) instruction. On such simple hardware, we can project the bytecode analysis onto the actual architecture's instruction set. However, the application of such a computation on more complex hardware like the PIC32 (which has a pre-fetch cache system) is still under consideration.
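On such timing-compositional hardware, a WCET bound for a branch-free bytecode sequence is simply the sum of per-opcode worst-case costs. The sketch below illustrates the accumulation only; the opcode names and cycle counts are invented, and real bounds also require path analysis for branching code:

```python
# Invented worst-case cycle counts per bytecode opcode (illustrative only).
WORST_CASE_CYCLES = {"PUSH": 3, "ACC": 2, "ADDINT": 4, "BRANCH": 6}

def straight_line_wcet_ns(bytecode, cycle_time_ns):
    """Upper bound on the execution time of a branch-free opcode sequence.

    Sound only when each instruction's worst-case timing is independent
    of its context (no caches, simple scalar pipeline).
    """
    return sum(WORST_CASE_CYCLES[op] for op in bytecode) * cycle_time_ns

# Example: straight_line_wcet_ns(["PUSH", "ACC", "ADDINT"], 20) -> 180 ns.
```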
D Education, dissemination and exploitation
The CLEARSY Safety Platform is being used in several universities in Europe and America, to teach both formal methods and embedded systems 4. It is an interesting means to involve mathematicians in close-to-hardware topics and, conversely, to bring embedded systems practitioners to more abstract reasoning. Lectures were given up to Master 2 level, aimed at demonstrating safety engineering to computer scientists and electronic engineers, with dedicated hands-on sessions. The two existing starter kits SK0 and SK1 are aimed at education and prototyping, respectively. A forthcoming industry-strength version will be provided as a daughter-board (a PLC without inputs/outputs) to integrate into in-house development. Easier certification and lower development and deployment costs will have a dramatic impact on public safety, allowing safety to be embedded in systems at a limited cost. Moreover, on-going research projects are aimed at bringing safety to robotic systems.
A Bayesian Hau-Kashyap Approach for Hepatitis Disease Detection

Introduction
Hepatitis is a medical condition defined by the inflammation of the liver and characterized by the presence of inflammatory cells in the tissue of the organ. The word "hepatitis" comes from the ancient Greek word "hepar," root word "hepat," meaning liver [1]. Hepatitis may occur with limited or no symptoms. Hepatitis is acute when it lasts less than 6 months and chronic when it persists longer. In medicine, hepatitis means injury to the liver with inflammation of the liver cells. The liver is the largest glandular organ of the body [2]. It weighs about 1.36 kg. It is reddish brown in color and is divided into four lobes of unequal size and shape. There are six main hepatitis viruses, referred to as types A, B, C, D, E and G. Hepatitis A and E are typically caused by eating contaminated food or drinking contaminated water. Hepatitis B, C and D are typically caused by parenteral contact with infected body fluids, and Hepatitis B can also be transmitted through sexual contact. Hepatitis B is primarily found in the liver. Research has been conducted on methods for the diagnosis of hepatitis [3,4,5]. Bayesian approaches have been successfully applied to a variety of problems [6,7,8]; recently, several studies have focused on medical diagnosis. These studies have applied different approaches and have achieved various classification accuracies. Neshat et al. [9] studied an adaptive neural fuzzy system for diagnosing the hepatitis B intensity rate. Neshat et al. [10] describe a combination of two methods, particle swarm optimization and case-based reasoning, used to diagnose hepatitis. Mahesh et al. [5] proposed a generalized regression neural network-based expert system for the diagnosis of the hepatitis B virus disease. The system classifies each patient as infected or non-infected and, if infected, determines how severe the disease is in terms of intensity rate. Panchal et al. [11] described an artificial intelligence-based expert system for Hepatitis B diagnosis. The main reason for using a Bayesian approach to hepatitis detection is that it accommodates the uncertainties related to models and parameter values. It gives a principled method of combining prior information with data within a solid decision-theoretic framework. We can fuse past data about a parameter to form a prior distribution for future analysis. When new observations become available, the previous posterior distribution can be used as a prior. All inferences follow logically from the Bayesian Hau-Kashyap approach. The structure of the paper is as follows. Section 2 presents the Bayesian Hau-Kashyap approach. Section 3 presents the implementation of the Bayesian approach. Bayesian approach results are presented in Section 4. Section 5 presents the Bayesian Hau-Kashyap approach for hepatitis disease detection. Results and discussion are presented in Section 6. Finally, Section 7 presents some concluding remarks.

A Bayesian approach
Let the events A_1, A_2, …, A_n form a partition of the sample space S with P(A_i) > 0, i = 1, …, n. For any event B ⊂ S with P(B) > 0, the law of total probability gives, as shown in Eq. (1):

P(B) = Σ_{i=1}^{n} P(B | A_i) P(A_i).   (1)

We may rationalize this result as follows. If the A_i's are mutually exclusive, then so are the events B ∩ A_i, i = 1, …, n, and thus, as shown in Eq. (2),

P(B) = Σ_{i=1}^{n} P(B ∩ A_i).   (2)

From the multiplication rule, P(B ∩ A_i) = P(B | A_i) P(A_i), and since P(A_i ∩ B) appears in the numerator of each of the conditional probabilities P(A_i | B), it follows that, as shown in Eqs. (3)–(5),

P(A_i | B) = P(A_i ∩ B) / P(B)   (3)
           = P(B | A_i) P(A_i) / P(B)   (4)
           = P(B | A_i) P(A_i) / Σ_{j=1}^{n} P(B | A_j) P(A_j).   (5)
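Numerically, Eq. (5) amounts to multiplying each prior by its likelihood and renormalizing. The following sketch shows the computation in a few lines; the example numbers are placeholders, not the paper's data:

```python
def bayes_posterior(priors, likelihoods):
    """Posterior P(A_i | B) over a partition A_1..A_n, per Eq. (5).

    priors[i]      = P(A_i)
    likelihoods[i] = P(B | A_i)
    """
    joint = [p * l for p, l in zip(priors, likelihoods)]
    evidence = sum(joint)             # P(B), by the law of total probability
    return [j / evidence for j in joint]

# Placeholder example with two hypotheses:
# bayes_posterior([0.3, 0.7], [0.9, 0.2]) -> [0.659, 0.341] (approximately)
```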
Dempster-Shafer theory
Belief functions offer a non-Bayesian method for quantifying subjective evaluations by using probability. In the 1970s, the approach was further developed by Shafer, whose book A Mathematical Theory of Evidence [13] remains a classic in belief functions, the so-called theory of evidence. This theory has also been called the Dempster-Shafer mathematical theory of evidence. In the 1980s, the scientific community working on Artificial Intelligence became involved in applying the theory of evidence. The Dempster-Shafer theory, or the theory of belief functions, is a mathematical theory of evidence that can be interpreted as a generalization of probability theory [13,14] in which the elements of the sample space to which nonzero probability mass is attributed are not single points but sets. The sets that receive nonzero mass are called focal elements [13]. The sum of these probability masses is 1; however, the basic difference between the Dempster-Shafer mathematical theory of evidence and traditional probability theory is that the focal elements of a Dempster-Shafer structure may overlap one another. The Dempster-Shafer mathematical theory of evidence also provides methods to represent and combine weights of evidence. The Dempster-Shafer theory assumes that there is a fixed set of mutually exclusive and exhaustive elements called hypotheses or propositions, symbolized by the Greek letter Θ, where each h_i is called a hypothesis or proposition. A hypothesis can be any subset of the frame, i.e., a singleton in the frame or a combination of elements of the frame. Θ is also called the frame of discernment. A basic probability assignment (bpa) is represented by a mass function m : 2^Θ → [0, 1], where 2^Θ is the power set of Θ.

Integrating the Bayesian and Hau-Kashyap approaches
Hau and Kashyap [15] presented an alternative to the Dempster-Shafer rule of combination, denoted by ⊙. The method to integrate Bayesian theory and the Hau-Kashyap approach is as follows:
Step 1: Assume m_1 and m_2 are two mass functions on the frame of discernment Θ. We can get m from the result of Eq. (5); m(P) is called the basic possibility assignment value, which represents the level of trust in proposition P. Let R_i, Z_j be the sets of focal elements of m_1 and m_2.
Step 2: Combine m_1 and m_2 with the operator ⊙, where (m_1 ⊙ m_2)(∅) = 0.
Step 3: The fundamental distinction between the Dempster-Shafer combination rule and the Hau-Kashyap combination rule is the way in which the conflict between the two bodies of evidence is handled.

A Bayesian approach for hepatitis disease detection
Everyday medical practice contains many examples of probability. Medical doctors often use words such as probably, unlikely, certainly, or almost certainly in conversations with patients. Doctors only rarely attach numbers to these terms, but computerized systems must use some numerical representation of likelihood in order to combine statements into conclusions. Probability is represented numerically by a number between 0 and 1. This study conducts experiments on a hepatitis dataset. The main goal of the dataset is to forecast the presence or absence of the hepatitis virus. The dataset contains the probabilities of the initial symptoms of hepatitis, which are often similar to those of other diseases. The initial symptoms of hepatitis include malaise, fever and headache. The dataset gives the probability of malaise given the presence of hepatitis, malaria, influenza and gastroenteritis; the probability of fever given the presence of hepatitis, malaria, influenza and gastroenteritis;
and the probability of headache given the presence of hepatitis, malaria, influenza and gastroenteritis. The probabilities were obtained by studying a series of patients with proven hepatitis, by looking up diagnosis codes in the medical records department and computing the percentage of these patients who presented with malaise, fever and headache.

Probability of hepatitis given the symptom of malaise
Malaise is a feeling of general discomfort, uneasiness or pain, often the first indication of an infection. Table 1 shows the probability of malaise (Ma) given the presence of hepatitis (H), malaria (M), influenza (I) and gastroenteritis (G). P(Hepatitis | Malaise) is read as the probability of hepatitis given the symptom of malaise. Pr(Malaise (Ma) | Hepatitis (H)) is the probability of malaise given the presence of hepatitis. Bayes' rule allows us to compute the probability we really want, Pr(Hepatitis | Malaise), with the help of the more readily available number Pr(Malaise | Hepatitis). Bayes' theorem is a formula with conditional probabilities. Calculating the probability of each disease given the symptom of malaise yields the following: there is about a 37.5% chance of hepatitis given the symptom of malaise, i.e., given that the patient tested positively for this symptom. There is about a 35% chance of malaria given the symptom of malaise. There is about a 9.8% chance of influenza given the symptom of malaise, and about a 17.7% chance of gastroenteritis given the symptom of malaise.

Probability of hepatitis given the symptom of fever
Fever is defined as having a temperature above the normal range due to an increase in the body's temperature set point. Table 2 shows the probability of fever (Fe) given the presence of hepatitis (H), malaria (M), influenza (I) and gastroenteritis (G). Calculating the probability of each disease given the symptom of fever yields the following: there is about a 28.8% chance of hepatitis given the symptom of fever. There is likewise about a 28.8% chance of malaria given the symptom of fever. There is about a 28% chance of influenza given the symptom of fever, and about a 14.4% chance of gastroenteritis given the symptom of fever.

Probability of hepatitis given the symptom of headache
Headache is pain in any region of the head.
Headaches may occur on one or both sides of the head, be isolated to a certain location, radiate across the head from one point, or have a viselike quality. Table 3 shows the probability of headache (He) given the presence of hepatitis (H), malaria (M), influenza (I) and gastroenteritis (G). Calculating the probability of each disease given the symptom of headache yields the following: there is about a 31.8% chance of hepatitis given the symptom of headache. There is about a 19.9% chance of malaria given the symptom of headache. There is about a 24.3% chance of influenza given the symptom of headache, and about a 24% chance of gastroenteritis given the symptom of headache.

Table 4 shows the probabilities of the diseases given the symptom of malaise: the probability of hepatitis given malaise, of malaria given malaise, of influenza given malaise, and of gastroenteritis given malaise. Table 5 shows the probabilities of the diseases given the symptom of fever: the probability of hepatitis given fever, of malaria given fever, of influenza given fever, and of gastroenteritis given fever. Table 6 shows the probabilities of the diseases given the symptom of headache: the probability of hepatitis given headache, of malaria given headache, of influenza given headache, and of gastroenteritis given headache.
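Before walking through the per-condition figures below, it is worth noting that the three per-symptom posteriors in Tables 4–6 can be fused into a single disease ranking. The sketch below does this with a naive-Bayes-style product; the independence assumption is ours, not the paper's, which instead combines evidence through the Hau-Kashyap tables reported later:

```python
def fuse_symptoms(posterior_vectors):
    """Fuse per-symptom posterior vectors over the same disease list.

    Multiplies the vectors disease-wise and renormalizes: a naive
    Bayes-style fusion that assumes the symptoms are independent.
    """
    fused = [1.0] * len(posterior_vectors[0])
    for vec in posterior_vectors:
        fused = [f * p for f, p in zip(fused, vec)]
    total = sum(fused)
    return [f / total for f in fused]

# Diseases ordered as (hepatitis, malaria, influenza, gastroenteritis),
# using the single-symptom values computed above:
malaise  = [0.375, 0.350, 0.098, 0.177]
fever    = [0.288, 0.288, 0.280, 0.144]
headache = [0.318, 0.199, 0.243, 0.240]
# fuse_symptoms([malaise, fever, headache]) ranks hepatitis first (~0.51).
```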
Figure 4 shows the overall malaria disease diagnosis. Condition 1 of the malaria diagnosis obtained a value of 35% for the probability of malaria given the symptom of malaise, 28.8% given the symptom of fever, and 19.9% given the symptom of headache. Condition 2 obtained 32.3% given malaise, 27.5% given fever, and 24.5% given headache. Condition 3 obtained 35.7% given malaise, 28.3% given fever, and 28.4% given headache. Condition 4 obtained 16.6% given malaise, 32.6% given fever, and 25.6% given headache. Condition 5 obtained 30% given malaise, 20.4% given fever, and 24.7% given headache.

Figure 5 shows the overall influenza disease diagnosis. Condition 1 of the influenza diagnosis obtained a value of 9.8% for the probability of influenza given the symptom of malaise, 28% given the symptom of fever, and 24.3% given the symptom of headache. Condition 2 obtained 11% given malaise, 30% given fever, and 24.1% given headache. Condition 3 obtained 12.8% given malaise, 23.6% given fever, and 18.9% given headache. Condition 4 obtained 14.8% given malaise, 26.1% given fever, and 24.7% given headache. Condition 5 obtained 11% given malaise, 28.8% given fever, and 29.7% given headache.

Figure 6 shows the overall gastroenteritis disease diagnosis. Condition 1 of the gastroenteritis diagnosis obtained a value of 17.7% for the probability of gastroenteritis given the symptom of malaise, 14.4% given the symptom of fever, and 24% given the symptom of headache. Condition 2 obtained 25.7% given malaise, 15.5% given fever, and 28.4% given headache. Condition 3 obtained 24.8% given malaise, 12.1% given fever, and 23.2% given headache. Condition 4 obtained 36.9% given malaise, 15.2% given fever, and 20.1% given headache. Condition 5 obtained 35.4% given malaise, 15.7% given fever, and 20.5% given headache.

Figure 7 shows the overall hepatitis diagnosis. Condition 1 of the hepatitis diagnosis obtained a value of 37.5% for the probability of hepatitis given the symptom of malaise, 28.8% given the symptom of fever, and 31.8% given the symptom of headache. Condition 2 obtained 31% given malaise, 27% given fever, and 23% given headache. Condition 3 obtained 26.7% given malaise, 36% given fever, and 29.5% given headache.
For conditions 4 and 5, the probability of hepatitis given malaise was 31.7% and 23.6%; given fever, 26.1% and 35.1%; and given headache, 29.6% and 25.1%.

From the Bayesian results for the symptom of malaise: (1) there is about a 37.5% chance of hepatitis, (2) about a 35% chance of malaria, and (3) about a 9.8% chance of influenza given the symptom of malaise. The calculation of the combined m1 and m2 is shown in Table 7; each cell of the table contains the intersection of the corresponding propositions from m1 and m2 along with the product of their individual beliefs, and the combined masses are obtained from Table 7. The calculation of the combined m3 and m4 is shown in Table 8. Table 15 shows the third combination of the probability of hepatitis given the symptom of headache. We compare the Bayesian approach and the Bayesian Hau-Kashyap approach; the comparison results are shown in Table 16. As shown in Table 16, the Bayesian Hau-Kashyap approach yields the minimum probability, so it can minimize the estimated hepatitis disease level.

Conclusion

The initial symptoms of hepatitis are often similar to those of other diseases. A Bayesian approach has been proposed and implemented in order to diagnose hepatitis. Hepatitis is a serious disease; its treatment is expensive, and severe side effects appear frequently. It is therefore important to set a correct diagnosis and to identify those patients who most probably have hepatitis, which is why such a system can support a medical doctor's decisions. The highest probabilities of hepatitis in this work were: condition 1, 37.5% given the presence of malaise; condition 2, 31% given malaise; condition 3, 36% given fever; condition 4, 31.7% given malaise; and condition 5, 35.1% given fever. Using the Bayesian Hau-Kashyap approach, the highest probability of hepatitis given malaise was 14.2% (condition 4), given fever 17.3% (condition 3), and given headache 14.7% (condition 1). A numerical example illustrated that the Bayesian Hau-Kashyap approach is efficient and feasible.
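The evidence combination described for Tables 7 and 8 follows Dempster's rule: intersect focal elements pairwise, multiply their masses, and renormalize by the non-conflicting mass. A minimal sketch follows; the mass assignments m1 and m2 are hypothetical placeholders, since the paper's table values are not reproduced here.

```python
# Dempster's rule of combination for two basic belief assignments (BBAs).
# The masses here are hypothetical; the paper's Table 7 values are not shown.
from itertools import product

def combine(m1, m2):
    """Combine two BBAs over frozenset focal elements via Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y  # mass assigned to the empty set
    # Normalize by the non-conflicting mass (1 - K).
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

H, M, I, G = "hepatitis", "malaria", "influenza", "gastroenteritis"
theta = frozenset({H, M, I, G})                # frame of discernment
m1 = {frozenset({H}): 0.375, theta: 0.625}     # e.g. evidence from malaise
m2 = {frozenset({H}): 0.288, theta: 0.712}     # e.g. evidence from fever

for focal, mass in combine(m1, m2).items():
    print(set(focal), round(mass, 4))
```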
2018-12-29T21:09:56.182Z
2018-05-02T00:00:00.000
{ "year": 2018, "sha1": "011d7e88a6c4bcb7fff92d8b1499dee8656190b2", "oa_license": "CCBY", "oa_url": "https://www.intechopen.com/citation-pdf-url/60169", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "8349bc7ebccfea9342375bf6364b7e38b9c9f4f9", "s2fieldsofstudy": [ "Medicine", "Mathematics" ], "extfieldsofstudy": [ "Computer Science" ] }
241453360
pes2o/s2orc
v3-fos-license
Aβ Oligomers Alter NMDA Receptor Composition and Function in Early Stages of Alzheimer's Disease Abstract Background Amyloid beta (Aβ)-mediated synapse dysfunction is an early event in Alzheimer's disease (AD) pathogenesis, and previous studies suggest that NMDA receptor (NMDAR) dysregulation may contribute to these pathological effects. Although Aβ peptides impair NMDAR expression and activity, the mechanisms mediating these alterations in early stages of AD are unclear. Here, we show that Aβ oligomers activate PKC, phosphorylate the NR2B subunit, and modulate its synaptic localization and function. Methods We isolated postsynaptic density (PSD) fractions of AD prefrontal cortex and of hippocampus of 6-month-old 3xTg-AD mice to quantify NR2B, PSD-95 and Aβ1-42 levels. To investigate the effects of Aβ oligomers on NR2B and PSD-95 expression, we used a range of techniques including mouse intrahippocampal injections of Aβ oligomers, isolation of membrane proteins by cell-surface biotinylation, and synaptosomal fractionation, as well as in vivo surface immunolabeling of EGFP-NR2B. Ca2+ imaging and PKC activity were monitored by fluorescent Ca2+ indicators and FRET analysis. Results We observed that NMDAR subunit NR2B and PSD-95 levels were aberrantly upregulated and correlated with Aβ42 load in human PSD fractions from early stages of AD patients as well as in the hippocampus of 3xTg-AD mice. Importantly, NR2B and PSD-95 dysregulation was revealed by an increased expression of both proteins in Aβ-injected mouse hippocampi.
In cultured neurons, Aβ oligomers increased NR2B-containing NMDAR density and NMDA-induced synaptic Ca2+ influx in neuronal membranes, in addition to colocalization of the NR2B subunit and PSD-95 in dendrites. Mechanistically, Aβ oligomers required integrin β1 to promote the synaptic location and function of NR2B-containing NMDARs and PSD-95 by phosphorylation through classic PKCs. Conclusions These results provide evidence that Aβ oligomers modify the contribution of NR2B to NMDAR composition and function in early stages of AD through an integrin β1- and PKC-dependent pathway. These data reveal a novel role of Aβ oligomers in synaptic dysfunction that may be relevant to early-stage AD pathogenesis.

Figure 1 shows NR2B and PSD-95 levels in controls (Ctrl) and AD subjects (n = 4-7 per group). Data were analyzed with one-way ANOVA followed by Bonferroni's test; *p<0.05, **p<0.01, ***p<0.001. (D) ELISA determination of Aβ peptide levels in PSD fractions of the same samples with different AD stages and controls, as indicated in the scatter plot. Data were analyzed with one-way ANOVA followed by Sidak's test; **p<0.01, ***p<0.001. (E, F) Scatter plots of NR2B and PSD-95 versus Aβ levels in samples of PSD fractions of human prefrontal cortex from controls and AD patients. Color codes of control and AD samples are indicated in F. Note the significant positive correlation between NR2B or PSD-95 and Aβ levels in control and AD I-III samples (p=0.0133 and p=0.0311, respectively). Data were analyzed with a linear regression method.

Figure 2 Aβ peptide load in 3xTg-AD mice correlates with NR2B and PSD-95. (A-D) Western blots and quantitative analysis of NR2B, PSD-95, and synaptophysin (Syn) in isolated synaptic terminals of 6-month-old control and 3xTg-AD mice (n=5 animals per group). Scatter plots represent the means ± S.E.M. of values normalized to corresponding β-Actin; *p<0.05, **p<0.01; unpaired Student's t test. (E) Scatter plot of Aβ peptide levels in synaptosomes of the same samples as determined by ELISA. Data were analyzed with unpaired t-test; ***p<0.001. (F, G) Scatter plots of NR2B and PSD-95 versus Aβ levels in synaptosomes of controls and 3xTg-AD mice. Note the significant positive correlation between NR2B or PSD-95 and Aβ levels in mouse samples (p=0.036 and p=0.011, respectively). Data were analyzed with a linear regression method. (H, J) Coronal sections of mouse brains were analyzed after 7 days of vehicle or Aβ (135 ng) injection. Photomicrographs show NR2B (H) and PSD-95 (J) immunolabeling in the dentate gyrus. (I, K) Scatter dot plots show the mean values of NR2B and PSD-95 intensities in vehicle- and Aβ-injected mice. 2-3 brain sections of 9-16 mice were used. Data were analyzed with unpaired Student's t-test; *p<0.05.

Figure 3 Aβ promotes alterations of NR2A and NR2B subunit distribution and function in neurons. (A-C) Cultured neurons were incubated with 1 μM Aβ for 30 min or 24 h, and total and biotinylated neuron-surface NR2A and NR2B were identified by western blot (A). Graph bars (means ± S.E.M.) represent the volume band intensities of total NMDAR subunits normalized to β-actin (B) and surface NMDA receptors normalized to total NMDARs (C). (D, E) Neurons were exposed to 1 μM Aβ for 30 minutes or 24 h and loaded with Fura-2AM.

Figure 4 Aβ treatment increases the localization of the NR2B subunit and PSD-95 at the neuronal surface and favors their colocalization.
(A) Neurons were exposed to 1 μM Aβ for 30 min, 3 h and 24 h. Total, synaptic and cytosolic protein samples were extracted, and NR2B and PSD-95 levels were detected by immunoblot. (B, C) Graph bars show the means ± S.E.M. of NR2B and PSD-95 band volumes normalized to corresponding β-Actin from three independent experiments, expressed as arbitrary units (a.u.); *p<0.05, **p<0.01, paired two-way ANOVA followed by Bonferroni's test. (D) Neurons expressing pEGFP-NR2B protein were labeled in vivo using an antibody against EGFP (green), fixed in methanol and stained with anti-PSD-95 in red. (E, F) High-magnification photographs and a dot plot of the Pearson correlation coefficient show that co-assembly of EGFP-NR2B with PSD-95 in dendrites is higher in Aβ-treated neurons. Data are represented as means ± S.E.M. of 52 ROIs from at least 3 independent experiments. Data were analyzed with paired Student's t test; *p<0.05.

Figure 5 Amyloid β promotes phosphorylation and activation of PKC, which controls NR2B surface localization. (A) Images show effective targeting of Myr-Palm CKAR into neuron plasma membranes. (B-D) FRET recordings of PKC activity using Myr-Palm CKAR in neurons after application of 1 mM PMA, 500 nM Gö6983, or 1 μM Aβ plus 100 nM calyculin. Recordings are represented as means ± S.E.M. of three to five experiments. (E) Neurons were pretreated or not with 100 nM Gö6983 for 1 h and stimulated or not with 1 μM Aβ for 30 minutes. Immunoblots show pNR2B at Ser1303 in total cell extracts. The histogram represents pNR2B normalized to β-actin. Data were analyzed with two-way ANOVA followed by Bonferroni test; *p<0.05. (F) Neurons were exposed to 1 μM Aβ for 15, 30 and 60 min, and pPKC was examined by western blot (n=5). The histogram represents means ± S.E.M. of band volume intensities of pPKC normalized to β-actin levels. Data were analyzed with one-way ANOVA; *p<0.05. (G-I) NMDA-mediated Ca2+ responses in isolated neurons pretreated for 30 min with 1 mM PMA (G), with 1 μM Aβ (H), or with Aβ together with 100 nM Gö6983 (I). (J) Violin plot represents the data distribution and mean ± S.E.M. of the area under the Ca2+ curve for each condition, expressed as arbitrary units (a.u.). Data were analyzed with one-way ANOVA followed by Dunnett's test; **p<0.01; n=3 cultures, 262 cells. (K) Neurons were treated with 1 μM Aβ for 30 min in the presence or absence of 100 nM Gö6983, and NR2B subunit levels were examined by western blot in total and synaptic fractions. (L) The histogram shows quantification of synaptic NR2B in immunoblots (n=3). Data are represented as means ± S.E.M. of band volume intensities normalized to β-actin. Data were analyzed with two-way ANOVA followed by Bonferroni test; *p<0.05.

Figure 6 Integrin β1 mediates Aβ-induced PKC activation and surface NR2B expression in primary cortical neurons. (A) PKC phosphorylation was measured by western blot in total cell extracts from neurons previously preincubated with 100 μM RGDS and stimulated with 1 μM Aβ. The histogram represents quantification of phosphorylated PKC after normalization to β-actin (n=4). Data were analyzed with one-way ANOVA; *p<0.05. (B) Total cell extracts were obtained after treatment with 1 μM Aβ or 0.5 μg/ml TS2-16 for 30 minutes, and PKC phosphorylation was analyzed by western blot. The histogram represents quantification of the phosphorylated protein after normalization to β-actin (n=4). Data were analyzed with one-way ANOVA; *p<0.05, **p<0.01.
(C) Neurons expressing the Myr-Palm-CKAR reporter were pretreated with 100 nM calyculin and 0.5 μg/ml isotype control IgM or CD29 antibody, an integrin β1 inhibitor, and Aβ-induced PKC activity was measured by FRET. Data are represented as means ± S.E.M. of three to five different experiments. (D, E) Neurons, loaded with Fura-2AM, were preincubated with 0.5 μg/ml IgM or CD29 antibody, exposed to Aβ (D) or incubated with TS2-16 (E), and intracellular Ca2+ levels after 30 μM NMDA application were measured by microfluorimetry. Violin plots show the data distribution and mean ± S.E.M. of the area under the Ca2+ curve for each condition, expressed as arbitrary units (a.u.), of 70-90 cells from at least 3 experiments. Data were analyzed with paired Student's t test; *p<0.05. (F, G) Immunoblot of NR2B subunit levels in functional synaptosomes of primary cortical neurons pretreated with 0.5 μg/ml IgM or CD29 antibodies and stimulated with 1 μM Aβ (n=4). Data are represented as means ± S.E.M. of band volume intensities normalized to corresponding β-Actin. Data were analyzed with two-way ANOVA followed by Bonferroni test; *p<0.05.
2021-01-07T09:01:27.091Z
2021-01-05T00:00:00.000
{ "year": 2021, "sha1": "449070694cc2aa7b7449b04d4865e60b06d889d5", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-136875/v1.pdf?c=1631884185000", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "8091cfcac8444e7e693fe5eaefebf9004b531a65", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [] }
257556764
pes2o/s2orc
v3-fos-license
Predicting Covid-19 pandemic waves with biologically and behaviorally informed universal differential equations During the COVID-19 pandemic, it became clear that pandemic waves and population responses were locked in a mutual feedback loop, a classic example of a coupled behavior-disease system. We demonstrate for the first time that universal differential equation (UDE) models are able to extract this interplay from data. We develop a UDE model for COVID-19 and test its ability to make predictions of second pandemic waves. We find that UDEs are capable of learning coupled behavior-disease dynamics and predicting second waves in a variety of populations, provided they are supplied with learning biases describing simple assumptions about disease transmission and population response. Though not yet suitable for deployment as a policy-guiding tool, our results demonstrate potential benefits, drawbacks, and useful techniques when applying universal differential equations to coupled systems.

The advance warning provided by these models allowed public health institutions to prepare by implementing policies to mitigate the second wave when it arrived [12,13]. Modeling efforts were widely applied to investigate the impact of public health measures such as testing [14], school and workplace closures [15,16], and vaccination strategies [17-20], or to stimulate action by projecting the impacts of a worst-case 'do nothing' scenario where governments and populations did not attempt to mitigate the pandemic [13,15]. Fortunately, most governments and members of the public did respond to the pandemic by taking measures to reduce case incidence. Numerous studies have shown that non-pharmaceutical interventions such as lockdowns, school closures, and social distancing protocols reduce case notifications and the health impacts of COVID-19 [21-24]. The anticipation of infection risk in the face of rising case incidence supports adherence to these measures [25]. However, these prophylactic measures are economically costly and mentally fatiguing [26,27]. So, as the risk of infection wanes, the public's willingness to abide by them wanes as well. The ensuing relaxation of COVID mitigation efforts may potentially result in another pandemic wave. This two-way interaction, where infection spread influences behavior, which in turn influences infection spread, suggests that the concept of coupled behavior-disease systems [28] may be useful for studying COVID-19 pandemic waves.

Among the most valuable insights provided by these models is the occurrence of multiple pandemic waves, which they predict under a wide range of conditions due to waning stringency causing a resurgence of the infection [41,17,42,43]. With hindsight, we can confirm that these models were correct: second waves occurred virtually everywhere during the COVID-19 pandemic (and did so before the arrival of new variants).
Alongside these mechanistic models, the plethora of epidemiological, sociological, and economic data generated by the pandemic allowed machine learning models to flourish [44-48]. These models have proven adept at integrating vast quantities of data on a multitude of factors (including behavior) affecting disease spread. Consequently, they often adapt better to regional variability compared to mechanistic models [48,47]. However, machine learning models have significant drawbacks. They can fit existing data well and accurately predict days to a couple of weeks into the future, but pay for this predictive accuracy with reduced interpretability compared to traditional models [44]. Compared to mechanistic models with relatively few easily understood parameters, it is far more difficult to extract qualitative understanding of disease dynamics (such as second waves) from the hundreds or thousands of parameters in purely machine-learning models. They are also easy to over-fit (although mechanistic models also suffer from this risk), meaning their predictive value may be limited.

Recently, advances in high-performance automatic differentiation have enabled new techniques that combine the interpretability and qualitative understanding of mechanistic models with the potentially higher predictive power and scalability of machine learning. Physics-informed machine learning (PIML) is one such methodology. The key idea is to create ML models that encode physical laws by inferring them from large amounts of data (observational bias), building them into the model's architecture (structural bias), or training the model to uphold them (learning bias) [49].

Of particular interest for qualitative epidemic modeling are the latter two biases, as they reduce the model's reliance on large amounts of data. By incorporating these biases, the model is prevented (in the case of structural biases) or at least discouraged (for learning biases) from making biologically impossible predictions such as negative population sizes or proportions that do not sum to unity. Learning biases can also discourage overfitting the data by introducing other objectives for the model.

Thus far, learning biases have primarily been limited to solving various forms of partial differential equations (PDEs) [50-52]. In these models, a neural network is trained to simultaneously fit data and satisfy a PDE. In addition to physics, learning biases have been used in biologically informed machine learning (BIML) applications. These include blood flow dynamics [53], drug responses [54], and cancer detection and classification [55-57].

In terms of structural bias, universal differential equations (UDEs) have recently emerged as a method of interest. UDEs involve training neural networks embedded in differential equation models. Known dynamics can be included explicitly while leaving unknown processes to be learned by the neural network [58]. The explicit parts of the UDE can be made to retain valuable laws such as invariant quantities. UDEs have been applied successfully to predator-prey models, metabolic networks, batteries, and photonics [58,59]. For instance, recent research uses a neural network to learn the change in COVID-19 quarantine measures in a population over time, within the framework of a modified QSEIR (quarantine, susceptible, exposed, infectious, recovered) model. The trained network was then used to quantify the effectiveness of those measures for different regions [46,60].
UDEs and learning biases both have a promising track record in these contexts, but their ability to make qualitative long-term predictions about coupled behavior-disease dynamics (of the sort provided by mechanistic models) has not yet been widely tested. In fact, to our knowledge, learning biases and UDEs have yet to be combined at all. This research gap motivated our study. Our objective was to combine structural biases (UDEs) with satisfiable learning biases in a coupled behavior-disease model for COVID-19. We trained a compartmental UDE model to fit behavioral and epidemic data while penalizing deviations from several simple socio-biological assumptions. We hypothesized that a UDE model can learn the pattern of coupled behavior-disease interactions and hence predict a second wave (either qualitatively or quantitatively), having only seen the first wave (and its learning biases). We also hypothesized that without those learning biases, the model would learn much less effectively. We note that mechanistic epidemic models [10] commonly predict a second wave of COVID-19 if the modeler imposes an increase in the contact rate parameter after the first wave, to replicate the effect of relaxing restrictions [61]. In contrast, here we are interested in the more challenging problem of endogenizing the decision to relax restrictions by using a coupled behavior-disease dynamical framework that is intended to predict decision-making regarding COVID-19 restrictions, along with the resulting changes in contact rates.

Results

A complete description of our model appears in the Materials and Methods section. A complicated mathematical model can easily be made to fit an epidemic curve, but runs the risk of over-fitting the data and thus not being useful for prediction [62,63]. Simpler mathematical models allow us to test our hypotheses by incorporating aspects we understand, without becoming overburdened by details that we cannot reliably describe mathematically [63].

Hence, we used a UDE framework that allows us to leave the coupled behavior-disease dynamics of a simple compartmental behavior-disease model unspecified, save for a few plausible assumptions ("learning biases"). In doing so, we can test the validity of those assumptions. Compartmental models divide the human population into mutually exclusive compartments based on infectious status and are generally implemented as differential equations.

Fig. 1. Infection prevalence time series predictions for all regions produced by the model with learning biases. Infection prevalence is the proportion of the total population that is infected at any given time. Green dots represent training data (first 22 weeks) and black dots show unseen data (a further 23 weeks). Predictions are generated using the median (solid line) and interquartile range (ribbon) of 100 independently trained instances of the model per region. Note that many of these populations (e.g. Texas, California) had larger first waves than is apparent in these plots, on account of high under-reporting rates in the first wave. In all regions, the model fits training data well. It frequently predicts a second wave in all regions except Ontario and British Columbia, in which it predicts greater continuation of the first wave.
These compartmental models have been a mainstay of mathematical epidemiology for decades [10]. The algorithm learned the manner in which the force of infection responds to mobility and the manner in which mobility responds to its current value, the number of active cases, recent new cases, and recovered cases. The learning biases inform the model with several plausible assumptions: namely, that the force of infection increases with mobility, that mobility decreases with more active and recent cases, that mobility tends toward 0 (the pre-COVID average) in the absence of cases, that this tendency is stronger the more people have recovered, and that mobility cannot fall below a 100% reduction or exceed a 200% increase from the pre-COVID average (see Methods). The learning biases strongly discourage infeasible values of mobility and make data-fitting relatively less important for the optimizer. As a result, the model makes out-of-sample predictions (i.e. second waves) frequently.

To ensure consistent and repeatable results, we ran the model on each region 100 times both with and without learning biases. We trained the algorithm on the first wave and tested whether it could predict the second wave. Overall, the model with learning biases was successful in every region in which we tested it, though some more so than others. It consistently learned to fit the data and constraint losses, predicted second waves, and seldom made biologically implausible predictions.

We compared UDE models with and without learning biases. The model without learning biases, while not entirely a failure, was much less successful. Though it was generally able to fit the data, it predicted second waves much less frequently and made many more unrealistic predictions. Details are provided in the following subsections.

Model predictions

To get a sense of the model's average behavior, we plotted the median prediction of the 100 simulations for each region. An example for New York can be found in Section 2, Fig. 2(a-f) (analogs for other regions can be found in the Supplementary Appendix, figures 3-13). The model with learning biases has consistent behavior within the training region. The median prediction shows a small second wave, and the interquartile range shows one of similar size to the first.

The model without learning biases fits the data comparably well, but has greatly reduced variability outside of the training region. Second wave predictions are smaller or non-existent, typically only suggested by the upper quartile rather than the median. Section 2.1 Table 1 gives a numerical summary of the biased model's second wave prediction performance across all regions (an analogous table for the unbiased model can be found in the Supplementary Appendix, Table 1). Section 2.1 Fig. 1(a-m) shows a graphical summary of the biased model's performance across all regions. The Supplementary Appendix (Figure 2) contains an analog for the unbiased model.

Biological feasibility

Both models, with and without learning biases, tended to make feasible predictions, in the sense that all model states remained within their respective bounds. The biased model was stable 88% of the time, while the unbiased model was stable 85% of the time, across all regions.

However, when evaluating the learning bias loss functions on the trained models, it becomes clear that the model with learning biases is more reliable in this regard. The biased model achieves better losses across all loss objectives, including accuracy, compared to the unbiased model. A comparison of all loss functions can be found in the supplementary material.
The unbiased model does particularly poorly on the mobility upper and lower bounds (on the order of 10^4 times worse than the biased model) and on the tendency for mobility to return to baseline in the absence of infection (roughly 10^3 times worse).

Second wave prediction

As a more robust metric for second wave prediction, we counted the number of local maxima exceeding 10^-3 in the infected time series for each model simulation. The value 10^-3 was tuned to exceed the size of any insignificant background fluctuations during the lulls between actual waves. With learning biases, the model predicted second waves regularly for most regions (for example, it predicted second waves more than 63% of the time for all European regions). It performed worst on Ontario and British Columbia. This may be because the training data for these regions did not include the peak of the first wave, so the model predicts the first wave to increase further.

The unbiased model, meanwhile, rarely predicted second waves for any region (see Supplementary Appendix Table 1 for details). Its best performance was on Quebec, where it predicted second waves 51% of the time. This was also the only region in which it outperformed the biased model, which predicted second waves 48% of the time. Otherwise, it predicted second waves less than 66% as often as the biased model, sometimes as little as 1.6% as often. It predicted no second waves at all for British Columbia.

Most of the time, both models predict the second wave too early, and the biased model's estimate is usually closer; Texas and Quebec are the exceptions in both respects. In terms of wave size, both models' median predictions undershoot the actual second wave size, but the biased model's upper quartile frequently exceeds it. The unbiased model's upper quartile only exceeds the true size for Germany and otherwise falls well short of it.

Transmissibility

One of the main uses of this model is that the trained neural network representing the force of infection can, once trained, be analyzed to examine the learned relationship between mobility and the transmission rate. Section 2.4 Fig. 3(a,b) shows the distribution of the response of the transmission rate β to mobility predicted by the model with learning biases for New York. The models all converge on the same relationship within the training region and on low out-of-sample values, but they diverge for large ones. It is also worth noting that the prediction is, as expected, monotonically increasing. Once again, all regions demonstrate similar behavior (see Supplementary Appendix figures 14-24).

As with the time series predictions, the model without learning biases fits the data similarly well within the region on which it has been trained. However, outside that region, it extrapolates a flatter curve that is about equally likely to be higher or lower than the median.
For a quantitative sense of how β responds to mobility, we evaluated each trained network at the baseline value of mobility to determine the value of β, and hence R0 (= β/γ), the basic reproduction number of the virus at the baseline value of mobility. We also applied Newton's method to the trained neural network to find the value of mobility (M*) at which R0 drops below 1, the value below which the infection will die out [64]. Results for the biased model can be found in Section 2.4 Table 2. The unbiased model results are negligibly different for R0; the results for M* are more variable. These unbiased model results can be found in the Supplementary Appendix, Table 2.

The R0 predictions, averaged over all simulations for a given region, range from 1.60 (British Columbia) to 2.60 (Germany). While estimates of R0 for COVID-19 vary significantly between countries and times, this is in line with published estimates for the original COVID-19 strain [65-68]. It is also consistent with other models, which found Germany and the Netherlands to have higher values [69].

The model typically estimates that a 40-50% reduction in mobility is necessary to reduce R0 below 1. This is consistently more extreme than other studies have found (20-40%) [70], but not entirely implausible considering the interquartile range. That said, we cannot interpret any result for Belgium, California, or the UK, where the interquartile range exceeds physically realistic bounds.

Discussion

Our results show that socially and biologically informed machine learning models can perform qualitative prediction tasks. When supplied with learning biases, the model routinely predicted second pandemic waves similar to those that occurred in most populations during the COVID-19 pandemic. The model seldom produced implausible predictions for mobility, and where it did, this tended to result from a failure to converge during training.

The most significant result is that the biased model predicts a second wave in every region except the Canadian province of British Columbia. The biased model predicted the second wave peak more consistently and closer to the actual time than the unbiased model without behavioral (mobility) feedback. The biased model also tended to predict a second wave that was much larger than the first wave, as occurred in most populations during the COVID-19 pandemic, although the predicted second wave was often larger in magnitude than what occurred in reality. This ability to predict second waves is valuable from a public health perspective, for mitigation of population health impacts. Though our model does not explicitly include government policy, policy can influence behavior, and knowing the likely trajectory of future cases under current policy can help decision-makers assess whether mandates should be tightened or loosened [43,41]. In practice, our model could be used to simulate possible outcomes by using the trained network but replacing M with a time signal representing total lifting of restrictions, gradual reopening, or continuing heavy restriction. Such a model may need to account in some way for the costs of each policy.

Mixed machine learning models need not supplant traditional models entirely, but they can be a valuable auxiliary. As our model shows, they need not be overly complex or computationally expensive. They can interpret large amounts of data, generalize well to a variety of different regions, and, given appropriate learning biases, can be relied upon to make feasible predictions.
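As a concrete illustration of the threshold computation described under Transmissibility, the sketch below applies Newton's method to find the mobility level M* at which R0 = β(M)/γ falls to 1. The exponential β(M) here stands in for the trained neural network and is purely an assumption for the example; in the real model the derivative would come from automatic differentiation rather than finite differences.

```python
# Newton's method to find the mobility m* at which R0(m) = beta(m)/gamma = 1,
# i.e. the root of g(m) = beta(m) - gamma. beta() is an illustrative stand-in
# for the trained neural network.
import math

GAMMA = 0.25  # per-capita recovery rate (day^-1), as in the paper

def beta(m):
    """Illustrative transmission rate, monotonically increasing in mobility."""
    return 0.5 * math.exp(1.2 * m)

def d_beta(m, h=1e-6):
    """Central-difference derivative of beta (autodiff in the real model)."""
    return (beta(m + h) - beta(m - h)) / (2 * h)

def mobility_threshold(m0=0.0, tol=1e-10, max_iter=100):
    m = m0
    for _ in range(max_iter):
        m_next = m - (beta(m) - GAMMA) / d_beta(m)
        if abs(m_next - m) < tol:
            return m_next
        m = m_next
    return m

m_star = mobility_threshold()
print(f"baseline R0 = {beta(0.0) / GAMMA:.2f}")
print(f"R0 drops below 1 at mobility {m_star:.3f} ({m_star:+.0%} vs. baseline)")
```

Under these assumed values the iteration converges to roughly a 58% mobility reduction from a baseline R0 of 2.0, in the same spirit as the 40-50% reductions the trained model reports.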
Epidemic models are often under-determined by data [71]. UDEs allow a new approach to this problem. Since neural networks are universal approximators [72], they can represent the full range of possible functions that could fit the available data. By training multiple iterations of a UDE model and analyzing their trajectories, we can see a range of feasible outcomes for the system with just one single model. For example, two UDE predictions can fit the data and biological constraints equally well, yet one may predict a massive second wave while the other predicts a rapid return to normalcy. A third may produce several smaller waves with corresponding mobility changes. That said, it is important to assign sufficient weight to the learning biases to avoid discouraging such a range of behaviors in favor of a single, overfitted solution.

The ability of UDEs to examine a range of data-fitting functions could be further enhanced with sparse regression methods [58,73,74]. By applying sparse regression to our trained model's output, one could derive a multitude of symbolic equations that could be used to mathematically model the system. The results also support our hypothesis that learning biases are effective at accelerating training and assuring socially and biologically plausible solutions while achieving superior training performance. While some attributes can be learned passably well by the unbiased model given sufficient training time, the biased model still achieves better losses on these attributes by at least two orders of magnitude. Good performance on training data should not be taken too seriously, since it may be a sign of overtraining. However, this does not appear to be the case in our model. The vast majority of the average training loss comes from a few highly divergent solutions, and the improved performance of the biased model indicates a reduced proclivity for such solutions.

The fact that a monotonic β is learned comparably well by both models indicates that this feature is instrumentally useful for the model in satisfying the loss objective. This gives a good sanity check that these features are present in the real system and that assuming them in the model is reasonable. The upper and lower bounds on mobility, however, are not typically inferred by the model without explicit instruction. This is not unexpected, since the observed data never nears the bounds. By fitting the data well, the model never needs to learn what happens at those bounds. However, including these boundaries as learning biases gives greater assurance that the model will not produce divergent or unstable solutions if, for example, it were used to predict what would happen in a scenario where those bounds were neared. Of course, it is preferable to ensure stability mathematically using structural biases, but this may not always be feasible. The other objective that the unbiased model tends not to learn is the tendency for M to return to the baseline value of 0. This may be because, as people become accustomed to life with the virus, the "baseline," i.e. the average societal preference in the absence of disease, shifts downward.
It is interesting that the learning biases help generate greater variability in the out-of-sample time series predictions. This is likely because, in the absence of any other objective, the model consistently converges to a single global optimum for data fitting. Since the model's extended prediction tends to remain within a small region of state space (M remaining negative, I relatively small), the greater potential variance is never realized. The model with learning biases, meanwhile, is relatively less concerned with fitting the data and hence has more freedom to explore the parameter space. The fact that the constraint losses are evaluated at randomly generated sample points also confers greater variability on the results of the biased model.

The biased model also has greater variability in the upper quartile of its response but reduced variability in the lower quartile. This makes sense: the biased model has learned that for any M greater than those it has seen, the value of β must also be greater (and vice versa for M less than what it has seen). The unbiased model, having no such information, cannot make an informed prediction, and so is equally likely to predict a continued increase or an unrealistic decrease.

These variability trade-offs favor the biased model. Greater variability in time series prediction is valuable because (assuming the predictions are biologically feasible) it shows a greater variety of possible outcomes and assigns a degree of confidence to those outcomes. The reduced variability in predicting the transmissibility is also desirable because it derives from a better understanding of the system.

Although our current model is retrospective and hence not useful as a predictive tool, it demonstrates potential for the future. Socioeconomic factors will continue to be complex, and regional and temporal variability will persist. If data-driven approaches can help overcome the challenges these factors present, we should facilitate their use by ensuring continued availability of high-quality data, both for endemic diseases and future pandemics. Universal differential equations specifically, when fine-tuned and supplied with appropriate learning biases, could be useful alongside traditional models to quickly gain perspective on the state of outbreaks across the world without having to develop specialized models for each region.
Limitations

We used a heavily simplified model of COVID-19. It is not intended to capture all details of the pandemic, nor is it meant to recommend specific health policies. We assumed that acquired immunity is permanent, which it may well not be [75]. We do not account for vaccines, which came into play around the end of 2020 [76]. Thus, the long-term predictions (i.e. beyond 300 days or so) should be taken only as evidence that the model does not produce wildly implausible behavior, rather than as a serious attempt to forecast cases too far in the future. The emergence of new variants, first reported in September 2020 [77] at the end of the second wave in many populations, means that predictions for the tail end of 2020 are beyond the model's intended scope. Similarly, spatial structure is important and can influence dynamics [78,24]. Even in the short term, the model is not intended to predict cases or to precisely estimate the virus's basic reproduction number. It is limited by our ability to consistently measure recovery rates and estimate under-reporting ratios, which almost certainly vary between regions and over time within regions. For simplicity, we also left out asymptomatic transmission, seasonal changes in infectiousness, age structure, and reinfection, all of which hamper the model's short-term predictive ability compared to more complex models [71,79].

In addition to epidemiological features, the UDE framework also comes with limitations. For one, each model instance provides a single prediction with no indication of confidence level. We have done our best to mitigate this by running many model instances, but the optimization process may still favor certain solutions over others. As such, the intervals we present should not be considered true prediction intervals of the sort a probabilistic model might provide, but rather a general measurement of the model's tendency. For another, while UDEs are a flexible framework, they are still deterministic differential equations. They do not account for intrinsic randomness in the system or the technically discrete nature of an epidemic. These limitations are mitigated by choosing regions with large, fairly concentrated populations and averaging data weekly. Further considerations on network structure are discussed in 4.1. None of these limitations changed our conclusions, since our goal was to show that UDEs and PIML can fit available data while making qualitatively correct out-of-sample predictions.

Future directions

Future work could improve our model by incorporating some of the aforementioned details of the pandemic. This could give insight into other behavior-disease interactions, like vaccine usage [80], or allow an examination of how these dynamics changed over the course of the pandemic. In sections 4.1-4.2.1 we also provide some methodological changes that could further develop the UDE/PIML themes, particularly regarding how to use learning biases effectively.
Probably the biggest opportunity for future work is to apply this type of data-driven differential equation model to other systems. Other infectious diseases, particularly those for which vaccines are available, are also coupled behavior-disease systems [80,74] and so could be amenable to this type of model. Beyond epidemic modeling, climate systems are also known to have important behavioral components [81,82]. Ultimately, one of the greatest advantages of UDEs is that, as per their name, they can theoretically be applied to any dynamical system [58]. It is only a matter of testing them to see if they provide valuable insight.

Model

The model takes the form

$$\frac{dS}{dt} = -\beta(M(t-\tau_1))\,S\,I, \qquad \frac{dI}{dt} = \beta(M(t-\tau_1))\,S\,I - \gamma I, \qquad \frac{dM}{dt} = e^{-\kappa t}\, f(M, I, \Delta I, R),$$

where S, I and R represent the susceptible, infected and recovered proportions of the population respectively (R can be recovered as 1 − S − I), and M represents the relative difference in mobility compared to the baseline (i.e. 0 is the baseline, +1 is double the baseline, and −1 is complete reduction to no mobility). β(M) represents the transmission rate as it depends on mobility, and f(M, I, ΔI, R) represents the dynamics governing the social/behavioral (mobility) response to the infectious disease [42]. Both β and f were learned by the algorithm. ΔI(t) represents the change in I between the current time and a previous time t − τ2. The e^(−κt) factor accounts for several factors that reduce the population response to the virus over time, including pandemic fatigue, the development of medical interventions that make the infection less fatal (such as 'proning'), substituting less disruptive interventions (such as masking) for mobility reductions, and (for longer-term predictions than we study in this model) the evolution to milder virulence over time. κ is a trainable parameter. Section 4 Fig. 4 shows a schematic of the model.

The non-trainable model parameters are γ, the per-capita recovery rate, τ1, the delay between a change in M and the corresponding change in prevalence, and τ2, the reverse delay: the time between a change in prevalence and the corresponding behavioral response [42,83]. The values we used are γ = 0.25 day^−1 [84], τ1 = 14 days, and τ2 = 10 days [85,70]. This does assume these parameters do not vary spatially or temporally (which they may).

We chose the SIR model as a template for our model for two main reasons. First, relevant data in the form of case notifications suffice to reconstruct the values for all model states, as shown in 4.3. Second, a simple model (like the SIR model) allows all more complex dynamics to be learned from the data by the neural network components. A model with an "exposed" category (SEIR) or even something more complex could function as well, but previous work has found that although COVID-19 may have a latency period, the SIR model performs just as well if not better than the SEIR model for estimating disease parameters [68].

Our model inherits several structural biases from the standard SIR model template. First, S = 0 and I = 0 are both invariant, preventing any infeasible negative values for these variables. Second, it retains the conservation relation S + I + R = 1. Thus, regardless of the functions fit by the neural network, S(t) and I(t) are guaranteed to be plausible. Of course, the model also inherits some biases and limitations from its SIR base. Namely, it assumes the removal rate from death and recovery is constant, and it does not account directly for asymptomatic cases, a latency period, differing severity, or the potential for loss of immunity and subsequent reinfection.
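The following sketch shows this UDE structure in code: a standard SIR core with the transmission rate β(M) and the mobility dynamics f supplied by small neural networks. The weights here are random rather than trained, the τ1/τ2 delays and the e^(−κt) fatigue factor are omitted, and forward Euler replaces the stiff DDE solver, so this is a structural illustration under those simplifying assumptions, not the paper's implementation.

```python
# Structural sketch of the UDE: SIR dynamics with neural networks standing in
# for beta(M) and f(M, I, dI, R). Weights are random (untrained), delays and
# the e^{-kappa t} factor are omitted, and forward Euler is used for brevity.
import numpy as np

rng = np.random.default_rng(0)

def make_net(n_in):
    """Two hidden layers of 3 units (tanh here, GELU in the paper), linear output."""
    sizes = [n_in, 3, 3, 1]
    return [(rng.normal(0.0, 0.5, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(net, x):
    for i, (W, b) in enumerate(net):
        x = x @ W + b
        if i < len(net) - 1:
            x = np.tanh(x)
    return float(x[0])

beta_net, f_net = make_net(1), make_net(4)   # 22 and 31 parameters, as in the paper
gamma = 0.25                                 # per-capita recovery rate (day^-1)

def step(S, I, M, I_prev, dt=1.0):
    R = 1.0 - S - I
    beta = 0.4 * np.exp(forward(beta_net, np.array([M])))  # keeps beta positive
    dI_recent = I - I_prev                                 # stand-in for Delta I
    f = forward(f_net, np.array([M, I, dI_recent, R]))
    return (S - beta * S * I * dt,
            I + (beta * S * I - gamma * I) * dt,
            M + f * dt)

S, I, M, I_prev = 0.99, 0.01, 0.0, 0.01
for day in range(100):
    S, I, M, I_prev = (*step(S, I, M, I_prev), I)
print(f"after 100 days: S={S:.3f}, I={I:.4f}, M={M:.2f}")
```

Because R is always recomputed as 1 − S − I, the conservation relation holds by construction regardless of what the networks output, which is exactly the structural bias described above.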
Neural networks

The transmission rate β(M) and the mobility response f(M, I, ΔI, R) are represented by neural networks. These networks each have a linear output layer with one neuron and 2 hidden layers with 3 neurons per hidden layer and Gaussian Error Linear Unit (GELU) activation functions. This gives the β network 22 parameters and the f network 31. Accounting for the decay parameter κ, the model has 54 trainable parameters.

UDEs present an inherent tradeoff between parsimony and bias. Part of the appeal of UDEs is the ability to represent arbitrary functions, including discontinuous, non-differentiable ones, with neural networks. However, this universal approximation property relies on arbitrarily large networks [72]. Smaller models, by contrast, train faster and can achieve a more favorable ratio of predictive power to number of parameters. We chose to lean more towards parsimony, so our model may struggle to learn highly discontinuous effects (such as the sudden implementation of country-wide lockdowns), being biased instead towards simpler continuous functions. The hyperparameter space was too large for us to optimize every aspect of the model, so different network parameters (size, activation, structure) may yield better results.

Training methodology

The baseline (unbiased) model, which received no social or biological feedback, was trained only to fit the data (details in section 4.3). The model's prediction is generated by solving the delay differential equation system to get its prediction for each state at each time step. We use the method of steps with the Rosenbrock23 differential equation solver to perform this process. The model's predictions are recorded for each day. This prediction is then compared to the training data using a scaled mean-squared error loss function:

$$\mathcal{L}(\Theta) = \frac{1}{nd}\sum_{i=1}^{d}\sum_{j=1}^{n}\left(\frac{y_{ij}-\bar{y}_{ij}}{\max_{j}|y_{ij}|}\right)^{2} + \lambda\sum_{k=1}^{p}\Theta_{k}^{2}.$$

Here, d is the dimension of the system, n is the number of data points, y_ij is the true value of the ith variable's jth data point, and ȳ_ij is the prediction for the ith variable's jth data point. p is the size of the parameter vector Θ, Θ_k is the kth entry in Θ, and λ weights the regularization over the trainable parameters. Scaling the loss function in this way helps ensure all variables are given equal importance despite having different ranges [86].

Both biased and unbiased models for all regions were trained on the first 160 days, giving n = 22 data points after weekly averaging (see 4.3). This time period fully encompasses the first wave for all populations studied, but does not include the beginning of the second wave.

Learning biases

The socially and biologically informed model was trained to minimize the same accuracy loss objective as well as 8 other objectives, each encoding a social or biological assumption. These biologically informed loss functions are deliberately constructed to give 0 loss to any functions that satisfy the relevant assumptions. This allows the model greater freedom to explore the range of biologically feasible functions.

To evaluate these additional loss functions, we generate 100 random points in the region 0 ≤ I ≤ 1, −1 ≤ ΔI ≤ I, −100 ≤ M ≤ 100 and evaluate each loss function at each point. The total loss at each iteration is then a weighted sum of these losses and the accuracy loss. We tried dynamically updating the weights for each loss function as in [31], but this did not significantly improve results. The biological assumptions and corresponding loss functions are displayed in Section 4.2.1 Table 3.
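As an illustration of how these pieces combine, the sketch below implements the scaled mean-squared error together with one representative learning bias: the requirement that β be non-decreasing in M, enforced at random sample points. The penalty is zero whenever the assumption holds, matching the construction described above. The specific functions and weights are assumptions for the example, and only one of the paper's 8 bias objectives is shown.

```python
# Sketch of the training objective: scaled MSE on the data plus one
# learning-bias penalty ("beta must be non-decreasing in mobility"),
# evaluated at random sample points. Functions and weights are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def scaled_mse(y_true, y_pred):
    """MSE with each variable scaled by its own range so all states matter equally."""
    scale = np.max(np.abs(y_true), axis=1, keepdims=True)
    return float(np.mean(((y_true - y_pred) / scale) ** 2))

def monotonicity_penalty(beta_fn, n_samples=100):
    """Zero when beta is non-decreasing in M at every sample, positive otherwise."""
    m = rng.uniform(-1.0, 2.0, n_samples)
    h = 1e-3
    slope = (beta_fn(m + h) - beta_fn(m - h)) / (2 * h)
    return float(np.mean(np.maximum(0.0, -slope) ** 2))

def total_loss(y_true, y_pred, beta_fn, bias_weight=1.0):
    return scaled_mse(y_true, y_pred) + bias_weight * monotonicity_penalty(beta_fn)

# Toy data and a monotone beta: the penalty term vanishes, as intended.
y_true = np.vstack([np.linspace(0.99, 0.6, 22), np.linspace(0.01, 0.2, 22)])
y_pred = y_true + rng.normal(0, 0.01, y_true.shape)
print(total_loss(y_true, y_pred, beta_fn=lambda m: 0.5 * np.exp(m)))
```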
We only tested a few values for the learning bias weight. The optimal value for achieving tolerable performance on training data while assuring qualitatively realistic long-term predictions may be higher, lower, or vary between loss functions or throughout the training process.

Model parameters were randomly initialized. To save training time, parameter choices that gave initial errors of more than 10^4 were re-initialized. To optimize the parameters, we use the Zygote package [87] for reverse-mode automatic differentiation to obtain gradients of each loss function with respect to the model parameters (note that we only take the accuracy gradient for the unbiased model). We pass these gradient values to the Adam optimizer to update the model parameters. One training epoch thus consists of one predict-differentiate-update cycle. The code used to implement this algorithm is available online (see section Data availability).

We found that training the model on the entire training set at once caused it to become stuck in a local optimum where I never increased. Thus, we trained the models in stages to achieve a better fit more quickly. The model trained on the first quarter of the data in the first stage (50,000 epochs at a learning rate of 0.001), the first half in the second (10,000 epochs at a learning rate of 0.001), and the entire training set in the third (20,000 epochs at a learning rate of 0.0005).

Repeating our model with more computing time and power could be informative. Although we were able to run the model with enough iterations to ensure all models converged to a good degree, some certainly converged better than others. The mobility data was a particular challenge, with fairly sharp downturns and upturns occasionally not fully captured. This could be assisted by using collocation-based training to speed up the process [88].

Fig. 2. Predicted time series of all model states for New York state with learning biases (a-c) and without (d-f). Panels (a) and (d) show the susceptible fraction, (b) and (e) show the infected fraction, and (c) and (f) show mobility. Green dots represent training data, while black dots represent unseen data. All predictions are generated using the median (solid line) and interquartile range (ribbon) of 100 independently trained instances of the model.

Fig. 3. Predicted force of infection based on mobility level for New York state with learning biases (a) and without (b). Dotted lines indicate values of mobility seen by the model during training. The solid line shows the median prediction of 100 model instances, and the ribbon shows the interquartile range.

Fig. 4. Schematic of our model showing the relevant differential equations, neural networks, and training procedure. Neural networks are depicted with the actual topology used in the model. The learning biases are present only in the biased model. Otherwise, the biased and unbiased models have the same structure.

Table 1. Summary of the true second wave and the biased model's second wave predictions.

Table 2. Predicted R0 and required mobility reduction. * Interval exceeds physically realistic values.
2023-03-17T01:25:44.524Z
2023-03-16T00:00:00.000
{ "year": 2024, "sha1": "789aabdf45b47b8fc0e7a85f707373b4a08fbbff", "oa_license": "CCBYNCND", "oa_url": "http://www.cell.com/article/S240584402401394X/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3e83d789e61c8382a6846c9b5a9bdc174cdea9be", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
235627680
pes2o/s2orc
v3-fos-license
The mechanisms of celastrol in treating papillary thyroid carcinoma based on network pharmacology and experiment verification Background Celastrol, a triterpene present in the traditional Chinese medicine (TCM) Tripterygium wilfordii, has been demonstrated to have remarkable anticancer activity. However, its specific mechanism in papillary thyroid carcinoma (PTC) remains to be elucidated. Methods Potential targets of celastrol were screened from public databases. Through the Gene Expression Omnibus (GEO) online database, we obtained the bioinformatics analysis profile of PTC, GSE33630, and analyzed the differentially expressed genes (DEGs). Then, a protein-protein interaction (PPI) network was constructed by utilizing the STRING database. Furthermore, Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses were conducted. Finally, interactions between the hub genes and celastrol were verified by molecular docking. Results Four core nodes (MMP9, JUN, ICAM1, and VCAM1) were identified by constructing a PPI network of 47 common targets. Functional enrichment analysis confirmed that the above target genes were mainly enriched in the interleukin-17 (IL-17), nuclear factor kappa-B (NF-κB), and tumor necrosis factor (TNF) signaling pathways, which are involved in the inflammatory microenvironment, thereby inhibiting the development and progression of tumors. Molecular docking results demonstrated that celastrol has a strong binding efficiency with the 4 key proteins. Conclusions In this research, it was demonstrated that celastrol can regulate a variety of proteins and signaling pathways against PTC, providing a theoretical basis for future clinical applications.

Introduction

In recent years, the incidence of thyroid cancer (TC) has increased rapidly, and it has become the most common malignant tumor of the endocrine system (1). Approximately 85% of TCs are papillary thyroid carcinoma (PTC) (2). Although most early-stage PTC patients have a favorable prognosis after treatment, the likelihood of recurrence is greatly increased if metastasis is already present at the time of diagnosis (3). Metastases are known to cause more than 90% of deaths from cancer (4,5). In addition to the primary tumor cells, multiple stromal cells and inflammatory cells in the tumor microenvironment are also involved in metastasis. These cells can affect the progression of PTC by secreting various chemokines that participate in several mechanisms (6). It has been reported that inflammation can promote the transformation of normal thyroid tissue to malignant tumors by creating an advantageous immune microenvironment (7,8). Consequently, targeted treatment of the inflammatory microenvironment is promising for the prevention and treatment of PTC.

Celastrol, also known as tripterine, is a natural bioactive ingredient isolated from the plant Tripterygium wilfordii. Previous studies have shown that celastrol possesses good therapeutic effects in many diseases, including Alzheimer's disease, bronchial asthma, systemic lupus erythematosus, rheumatoid arthritis, and obesity (9)(10)(11). Most intriguingly, celastrol has clearly been demonstrated to have anticancer effects in many tumor cells and animal models (12)(13)(14). It has been demonstrated that celastrol can inhibit the NLRP3 inflammasome and reduce the potency of macrophages to stimulate migration and invasion of tumor cells (15). Nevertheless, the underlying mechanism of celastrol in PTC is not well clarified.
Network pharmacology (NP) is a new approach that uses bioinformatics to observe the interactions of drugs with diseases, providing a new logical guide and technical route for developing and understanding drugs, and it is especially suitable for complex traditional Chinese medicine (TCM) (16). Molecular docking is one of the most widely used methods for calculating protein-ligand interactions (17). Because celastrol has multiple pharmacological effects and multiple targets, its mechanism of action in tumor therapy is difficult to reveal with traditional research methods. In recent years, many studies have combined bioinformatics and pharmacology through the application of network pharmacology to reveal the mechanisms of action of such drugs and systematically elucidate their role in the treatment of diseases (18,19). Therefore, we used network pharmacology to predict the targets of celastrol in PTC and analyze the interactions between these targets and PTC-related pathways, so as to provide a reference for further study of the material basis and mechanism of its anti-PTC activity and a new approach for research on celastrol. The workflow of this research is shown in Figure 1. We present the following article in accordance with the MDAR reporting checklist (available at http://dx.doi.org/10.21037/atm-21-1854).

Screening the differentially expressed genes (DEGs) in PTC

Based on the GPL570-55999 Print_1437 platform, the dataset of gene expression in PTC and normal thyroid tissue, GSE33630, was downloaded from the Gene Expression Omnibus (GEO) online database. There were 105 samples in total, including 60 tumor tissues from PTC surgery patients and 45 normal thyroid tissues from nonmalignant surgery patients. The pheatmap and limma packages in R were utilized to assess the results. DEGs between PTC tissue and normal thyroid tissue were screened based on the criteria |logFC| >1 and adjusted P value <0.05. The study was conducted in accordance with the Declaration of Helsinki (as revised in 2013).

Potential targets of celastrol

The chemical structure of celastrol was acquired from PubChem (20). The Swiss Target Prediction online database and PharmMapper were used to predict the target proteins corresponding to small-molecule compounds.

Protein-protein interaction (PPI) network construction

To visualize and understand the interaction mechanisms of these target proteins, we constructed a PPI network by utilizing the STRING 11.0 database (21). The candidate target proteins were used by STRING to construct and visualize the PPI network based on a minimum interaction score >0.9 (22). In the network, each node represents a target protein and each edge represents a PPI.

Enrichment analysis of Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG)

GO enrichment analysis and KEGG pathway annotation were performed using Bioconductor packages in R (23,24). Pathways with adjusted P values ≤0.05 that were related to PTC were identified based on the pathological and clinical data.

Molecular docking

Based on the crystal structures of the proteins downloaded from the RCSB Protein Data Bank (PDB), molecular docking studies were performed on the selected proteins using AutoDockTools 1.5.6. In the docking results, scores ranging from 0 to 10 indicate weak to strong binding ability of the proteins, expressed as the negative logarithm of the experimental dissociation/inhibition constant (25).
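To illustrate the DEG screening criteria described above (|logFC| > 1 and adjusted P < 0.05), the following Python sketch filters a toy differential-expression table. The column names and example values are hypothetical; the actual screening was performed with the limma package in R on the GSE33630 expression matrix.

```python
# Sketch of the DEG screening step: keep genes with |logFC| > 1 and adjusted
# P < 0.05, then split into up- and down-regulated sets. The table values
# below are invented for illustration (they are not GSE33630 results).
import pandas as pd

def screen_degs(df, lfc_cutoff=1.0, p_cutoff=0.05):
    degs = df[(df["logFC"].abs() > lfc_cutoff) & (df["adj_p"] < p_cutoff)]
    up = degs[degs["logFC"] > 0]
    down = degs[degs["logFC"] < 0]
    return up, down

# Toy differential-expression table standing in for the limma output.
table = pd.DataFrame({
    "gene":  ["MMP9", "JUN", "ICAM1", "TPO", "DIO1"],
    "logFC": [2.3,    1.4,   1.8,    -2.9,  -0.4],
    "adj_p": [1e-8,   3e-4,  2e-6,   5e-12, 0.40],
})
up, down = screen_degs(table)
print("up-regulated:", list(up["gene"]), "| down-regulated:", list(down["gene"]))
```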
Cell culture and treatment
The PTC cell line BCPAP was obtained from Guangzhou Cellcook Biotech Co. (Guangzhou, China). Cells were cultured in RPMI 1640 (Gibco) supplemented with 10% fetal bovine serum, penicillin/streptomycin (5,000 units/mL, Gibco) and L-glutamine (2 mM, Gibco). The passage number of the cells used for the experiments was approximately 20-30. Celastrol was purchased from Sigma-Aldrich (St. Louis, MO, USA) and was dissolved in DMSO to a final concentration of 50 mM (26).

Quantitative real-time PCR (qRT-PCR)
RT-PCR assays were performed as previously described (27). Briefly, total RNA was extracted from cells using an RNeasy Mini Kit (Qiagen). cDNA synthesis was conducted using a Transcriptor First Strand cDNA Synthesis Kit (Takara) according to the manufacturer's instructions. PCR was performed with FastStart Universal SYBR Green Master Mix (Takara) on an ABI ViiA7 system. The primers are listed in Table 1.

Statistical analysis
SPSS 22.0 software (SPSS Inc., Armonk, NY, USA) was used for statistical analysis. All values are presented as the mean ± standard deviation (SD) of three independent replicates. Student's t-test was performed to compare differences. Significance was defined as P<0.05 or P<0.01.

Acquisition of DEGs in PTC
The 105 samples downloaded from the GEO database were separated into two groups: the normal group containing 45 normal thyroid samples and the tumor group containing 60 PTC samples. A total of 904 DEGs were identified in the tumor group compared with the normal thyroid samples, including 423 up-regulated genes and 481 down-regulated genes. The DEGs identified between the normal thyroid samples and the tumor group are presented in a volcano plot (Figure 2A), and a heatmap shows their expression patterns (Figure 2B).

Potential targets of celastrol against PTC
The 2D molecular structure of celastrol is shown in Figure 3A. In total, 392 targets of celastrol were identified using the online databases, and 47 common targets associated with both PTC and celastrol were then summarized in a Venn diagram (Figure 3B).

The PPI of candidate targets against PTC
To illustrate the relationships between the 47 common target proteins, a PPI network was constructed (Figure 4), showing how these nodes interrelate in the development and progression of PTC. There were 47 nodes and 100 edges, the average node degree was 4.26, and the local clustering coefficient was 0.549. The color of each node reflects its degree of contribution to the network, which correlates positively with its role in the development of PTC. Nodes such as MMP9, JUN, ICAM1, VCAM1, HMOX1, CASP1, and MMP1 were significantly enriched. The top 10 genes in the network ranked by degree are shown in Table 2. Based on these results, MMP9, JUN, ICAM1, and VCAM1 were selected for further molecular docking experiments.

GO functional and KEGG pathway enrichment analysis
The 47 common genes of celastrol and PTC were further analyzed for biological processes and KEGG pathways. Biological processes with high enrichment scores, such as neutrophil activation involved in the immune response, leukocyte migration, and extracellular matrix organization, were associated with PTC (Figure 5A). The KEGG analysis showed that the genes were mainly enriched in the interleukin-17 (IL-17), nuclear factor kappa-B (NF-κB), and tumor necrosis factor (TNF) signaling pathways (Figure 5B).
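As a worked illustration of the qRT-PCR quantification and the Student's t-test described in the statistics section, the R sketch below computes relative expression with the common 2^-ΔΔCt method. The Ct values, the reference gene and the use of 2^-ΔΔCt itself are assumptions for illustration; the paper does not state its exact quantification formula.

# Hypothetical Ct values for one target gene (e.g., MMP9), three replicates each
ct_target_ctrl <- c(24.1, 24.3, 23.9)   # target gene, control group
ct_target_trt  <- c(26.0, 26.4, 26.2)   # target gene, celastrol-treated group
ct_ref_ctrl    <- c(16.0, 16.1, 15.9)   # assumed reference gene, control
ct_ref_trt     <- c(16.2, 16.0, 16.1)   # assumed reference gene, treated

d_ctrl <- ct_target_ctrl - ct_ref_ctrl      # delta-Ct per control replicate
d_trt  <- ct_target_trt  - ct_ref_trt       # delta-Ct per treated replicate
ddct   <- d_trt - mean(d_ctrl)              # delta-delta-Ct vs. control mean
fold   <- 2^(-ddct)                         # relative expression (fold change)

c(mean = mean(fold), sd = sd(fold))         # mean ± SD of three replicates, as reported
t.test(d_trt, d_ctrl, var.equal = TRUE)     # Student's t-test on delta-Ct values

A fold change below 1 here would correspond to the decreased MMP9, ICAM1 and VCAM1 expression reported after celastrol treatment.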
The key pathways are shown in Figure S1.

Hub protein designation and molecular docking analysis
The molecular docking results showed good binding between celastrol and the hub proteins, partially explaining the treatment mechanism of celastrol in PTC. Celastrol inhibited the development of PTC through the regulation of targets such as MMP9, JUN, ICAM1, and VCAM1, which was consistent with the results of the NP screening. At the same time, the molecular docking results verified the reliability of the NP analysis. The docking results between the celastrol ligand and the PTC target protein receptors are shown in Table 3 and Figure 6.

In vitro experiment
To clarify the mechanism of celastrol in PTC, the four core genes were measured using qRT-PCR and western blot methods. The results are shown in Figure 7. Compared with the control group, the expression of MMP9, ICAM1, and VCAM1 decreased in the celastrol treatment group, whereas JUN expression increased significantly, consistent with the above results.

Discussion
Research focused on natural products has recently gained more and more attention (28,29). Many novel therapeutic drugs derived from natural products have been decoded through a combination of NP and molecular biological analysis (30). This has provided new insights into the systemic connections between different diseases and therapeutic targets as a whole, and offers a promising and powerful tool to clarify disease mechanisms at a systems level and to discover potentially active compounds (31). In the current study, publicly available databases with information on PTC were integrated to predict the potential therapeutic targets of celastrol and their interactions. The results showed that 47 target genes were up- or down-regulated by celastrol in PTC. After PPI network analysis, MMP9, JUN, ICAM1, and VCAM1 were identified as the key genes with the highest degrees.

Matrix metalloproteinase 9 (MMP9), also known as 92-kDa gelatinase B/type IV collagenase, plays a significant role in various human tumors, contributing to morphogenesis, differentiation, angiogenesis, metastasis, and tissue remodeling during tumor invasion (32,33). Previous studies have reported that MMP9 is overexpressed in PTC tissues, and that targeted inhibition of MMP9 reduces migration and invasion in PTC cells (34). JUN, a leucine zipper protein that dimerizes to form AP-1, regulates many important cellular processes, including proliferation, survival, apoptosis, invasion, and metastasis (35). In PTC, the up-regulation of JUN expression has been implicated in proliferation and transformation (36). Intercellular adhesion molecule 1 (ICAM1), a member of the immunoglobulin superfamily, is considered to play an important role in the inflammatory response and immune processes (37). It has been demonstrated that the overexpression of ICAM1 is associated with extrathyroidal invasion and lymph node metastasis (38). Vascular cell adhesion molecule 1 (VCAM1), also known as CD106, is another important member of the immunoglobulin superfamily. It has been reported that VCAM1 overexpression can promote thyroid tumor cell migration and invasion in vitro (39).
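The hub-designation step above, ranking PPI nodes by degree and taking the top-ranked genes, can be sketched as follows in R with the igraph package. The six-edge list is a tiny hypothetical stand-in for the STRING export of the full 47-node, 100-edge network.

library(igraph)

# Hypothetical fragment of the STRING edge list (undirected PPIs)
edges <- data.frame(
  from = c("MMP9", "MMP9", "JUN",   "ICAM1", "JUN",   "VCAM1"),
  to   = c("JUN",  "MMP1", "ICAM1", "VCAM1", "HMOX1", "CASP1")
)
g <- graph_from_data_frame(edges, directed = FALSE)

deg  <- sort(degree(g), decreasing = TRUE)   # node degree = number of interactions
hubs <- head(deg, 4)                         # top-ranked nodes = candidate hubs
print(hubs)

On the real network, this degree ranking is what produces the Table 2 list and singles out MMP9, JUN, ICAM1 and VCAM1.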
On the basis of the GO enrichment analysis, the enriched BP terms mainly concern responses to various materials and the biochemical processing of different substances. Neutrophil activation involved in the immune response, leukocyte migration, and extracellular matrix organization are all important in PTC tumorigenesis (40). Some of the enriched MF terms were related to inflammation, for instance endopeptidase activity, serine-type endopeptidase activity, and carbon-oxygen lyase activity. KEGG enrichment analysis suggested that the pharmacological action of celastrol in PTC is closely related to well-known tumor-associated pathways, such as the IL-17, TNF, and NF-κB signaling pathways. The proinflammatory cytokine IL-17 has been shown to be significantly upregulated in PTC tissues (41). The overexpression of IL-17 induces MHC class I expression in PTC and facilitates tumor antigenicity via the PD-1/PD-L1 signaling pathway (42). TNF-α is an important cytokine associated with cell growth, differentiation, and apoptosis (43), and also plays a significant role in regulating the adhesion and migration of PTC cells (44). Serum TNF-α levels can serve as an indicator of the risk of benign thyroid nodules becoming cancerous, and may be a predictive factor for the occurrence and development of TC and for the prognosis of TC patients (45). NF-κB transcription factors are key regulators of the proliferation, invasion, migration, and EMT processes of PTC cells (46). It has been revealed that targeted inhibition of NF-κB signaling can suppress these processes. Taken together, these results indicate that celastrol can inhibit PTC through the above mechanisms, which are closely associated with the key target proteins, biological processes, and tumor signaling pathways described in this study. Further experiments and clinical validation are still required to verify our findings.

Conclusions
In our study, we first clarified the pharmacological effects of celastrol against PTC using NP analysis. A total of 47 target genes were identified, among which MMP9, JUN, ICAM1, and VCAM1 were shown to be the hub target proteins for the anticancer effects of celastrol. In addition, functional enrichment analysis including GO and KEGG demonstrated that the proteins targeted by celastrol were mainly enriched in inflammation-related pathways such as the IL-17, TNF, and NF-κB signaling pathways.

Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. The study was conducted in accordance with the Declaration of Helsinki (as revised in 2013).

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the noncommercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.
2021-06-25T06:17:14.941Z
2021-05-01T00:00:00.000
{ "year": 2021, "sha1": "fec762f902b44da58566c2321948789ff1530846", "oa_license": "CCBYNCND", "oa_url": "https://atm.amegroups.com/article/viewFile/69610/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "33e7a152d75944be6d8c87699d606dbcf9c5af5f", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
231863555
pes2o/s2orc
v3-fos-license
Green Silver and Gold Nanoparticles: Biological Synthesis Approaches and Potentials for Biomedical Applications

The nanomaterial industry generates gigantic quantities of metal-based nanomaterials for various technological and biomedical applications; however, it concomitantly places a massive burden on the environment by utilizing toxic chemicals in the production process and leaving hazardous waste materials behind. Moreover, the often unpleasant chemicals employed can affect the biocompatibility of the generated particles and severely restrict their application possibilities. On these grounds, green synthetic approaches have emerged, offering eco-friendly, sustainable, nature-derived alternative production methods, thus attenuating the ecological footprint of the nanomaterial industry. In the last decade, a plethora of biological materials has been tested to probe their suitability for nanomaterial synthesis. Although most of these approaches were successful, a large body of evidence indicates that the green material or entity used for the production substantially defines the physical and chemical properties and, as a consequence, the biological activities of the obtained nanomaterials. The present review provides a comprehensive collection of the most recent green methodologies, surveys the major nanoparticle characterization techniques and screens the effects triggered by the obtained nanomaterials in various living systems, to give an impression of the biomedical potential of green synthesized silver and gold nanoparticles.

Introduction
Owing to a number of revolutionary developments in nanobiotechnology, the synthesis of various nanomaterials now seems uncomplicated and straightforward, enabling the construction of nanoparticles of virtually any type and structure, designed and tailored to essentially every possible application, be it in industry, technology or medicine. Metal nanoparticles represent a major class of nanomaterials whose singular physicochemical characteristics make them (mainly silver and gold nanoparticles) an ideal platform for electronics, optics, household items, catalysis, and various biomedical applications as well [1]. Together with the widespread utilization, the exponentially growing need for nanomaterials and their industrial-scale production, some concerns have emerged, mainly from environment-conscious and eco-sensitive individuals, including numerous researchers [2]. These concerns originate from the fact that nanoparticle production places an enormous burden on the environment, since conventional synthetic approaches often require the administration of toxic chemical entities during the production process, which may cause harmful reactions in the environment and possibly in animal and human health; moreover, such unpleasant chemicals might critically restrict the application possibilities and the biocompatibility of the generated particles [2]. Thus, the pressing demand for metal nanoparticles must be accompanied by eco-friendly, cheap and novel synthesis approaches in order to minimize or completely avoid the administration of dangerous chemicals and, at the same time, diminish the accumulation of hazardous wastes.
Safer production alternatives applying gentle solvents, environment-friendly reducing or stabilizing materials or mild experimental conditions, or even involving the application of biological materials, such as plant extracts or biomolecules of plants, bacteria, fungi or their lysates, are called green approaches [3,4]. These strategies, although currently in an early phase and thus without substantial and reliable experimental background information or know-how, are rapidly gaining ground owing to their low environmental footprint, easy methodology and low costs. In the following chapters, we summarize the available primary experimental data on nanoparticles synthesized by means of biological entities, the characterization techniques suggested to properly describe the physicochemical properties of the obtained particles, and review the different biological activities exhibited by green synthesized nanomaterials, highlighting the major differences in nanoparticle performance in various biological host systems.

Synthesis of Silver and Gold Nanoparticles by Microorganisms
Since traditional physical and chemical methods of metal nanoparticle synthesis have obvious limitations and disadvantages, green chemical processes emerged as a new direction in the chemical industry about two decades ago [5]. Ever since, these biologically inspired green syntheses have attracted considerable attention, offering a promising alternative for maintaining the economy while protecting the environment. Over the years, a number of innovative, sustainable synthesis methods have been developed to produce metal nanoparticles using mild experimental conditions (such as ambient pressure, pH and temperature) and a great variety of non-toxic reducing-capping agents and solvents. In these processes, living organisms, cellular extracts or cell-free growth media of biological agents such as bacteria, fungi, yeasts, viruses, algae or plants are employed as the green reaction milieu, supplying both the reducing and the capping agents for nanoparticle formation. These biological entities have been considered as biological "nano-factories" (see Table 1) [6]. Biological synthesis protocols offer a clean, highly tunable and environmentally benign method for producing nanoparticles with a broad range of sizes, shapes, compositions, and physical, chemical and biological properties. The so-formed nanoparticles have a huge advantage over conventionally produced materials: they are more environmentally friendly, as the materials covering their surface are natural in origin and thus biocompatible. (Table 1 excerpt: Cordyceps militaris cell filtrate yielded Au NPs of 15-20 nm with a face-centered cubic structure, cytotoxic to HepG2 cells [11]; Agaricus bisporus filtrate yielded spherical Ag NPs of 8-20 nm, cytotoxic in vitro to MCF-7 cells and, combined with gamma radiation, in vivo to Ehrlich solid tumor cells in mice.) Recently, various microorganisms, mainly bacteria and fungi, have been employed to produce different metal nanoparticles, such as silver, gold, silver-gold alloy, iron, copper, zinc, palladium and titanium nanomaterials [31,32]. The earliest studies in this research area pointed out that microorganisms have always had a direct or indirect interaction with inorganic materials via geochemical biological processes, dating essentially from the beginning of life; therefore, microorganism-assisted particle synthesis should be regarded as a viable green option and exploited even further.
The synthesis of NPs via microbes is a bottom-up approach in which nanoparticles are formed as part of a defense mechanism-based detoxification, as a fundamental survival procedure involving the oxidation/reduction of metal ions, the generation of phosphate, carbonate and sulfide metal forms, or the volatilization of metal ions [33]. These processes are carried out by biomolecules of the microorganism, such as various proteins, enzymes, carbohydrates and sugars; however, the exact events of nanoparticle synthesis have not been fully elucidated yet [34]. The difficulty of identifying the precise mechanism and the active components responsible for the generation of nanoparticles lies in the fact that each kind of microorganism interacts in a different way with a particular metal ion, and that the morphology, size and surface properties of the nanoparticles formed are greatly influenced by numerous other factors (mainly environmental conditions such as pH, pressure and temperature), not simply by the biological ingredients of the applied organism [35]. Certain metals, such as silver, are well known for their toxic effects; however, some silver-resistant bacteria can accumulate metals on or in their cell wall. This phenomenon inspired the first pioneering silver nanoparticle synthesis using the silver-resistant bacterium Pseudomonas stutzeri [36]. Samadi and co-workers demonstrated similar results obtained via Proteus mirabilis bacteria [37]. They also showed that a change in the culturing parameters can massively influence the formation of nanoparticles. Although this conclusion seemed rather inconvenient, it also offered the possibility of modulating nanoparticle features by varying the experimental conditions and shifting the particle syntheses toward a more favorable outcome.

Nanoparticles can be generated by microbes either intra- or extracellularly [38]. The intracellular mechanisms involve three main steps, called trapping, reduction and stabilization. They rely primarily on the transport of the metal ions into the microbial cell wall, which involves electrostatic interactions between the negatively charged cell wall and the positively charged metal ions. Enzymes residing within the cell wall then reduce the toxic metal ions to harmless nanoparticles, and subsequently, these particles diffuse through the cell wall. Several reports have suggested that metal NPs, such as silver and gold, can be easily and readily biosynthesized intracellularly. For example, by using Pseudomonas and Bacillus strains, small, monodispersed gold nanoparticles were produced [39]. Nair et al. successfully extended this synthesis method to prepare silver, gold and silver-gold alloy nanoparticles [40]. It was also proposed that nanoparticle formation using certain yeast strains could carry the greatest potential for nanoparticle manipulation, especially for tuning nanoparticle shape and size, by controlling culture parameters such as growth and other cellular activities [41]. As for extracellular nanoparticle synthesis, metal ions on the surface of the cells are converted to metal nanoparticles by microbial enzymes, generally via nitrate reductase- or hydroquinone-mediated redox reactions [42]. Successful extracellular biosynthesis of silver and gold nanoparticles has been achieved using Aspergillus, Fusarium and Rhodopseudomonas strains [43][44][45]. Moreover, Lengke et al.
reported that during extracellular synthesis using the cyanobacterium Plectonema boryanum, the size and shape of the formed nanoparticles could be controlled simply by varying the external temperature [46]. It is noteworthy that the recovery of metals from the environment by their adsorption onto bacteria also results in bioreduction, yielding metal nanoparticles [47]. In addition to the above-described examples, several strains, such as Pseudomonas fluorescens, Geobacillus stearothermophilus and Staphylococcus epidermidis, have been successfully applied for the bioproduction of spherical gold nanoparticles in the size range of 5-90 nm [48]. Shape selectivity was again observed upon varying the culture conditions. Gold and silver nanoparticles of various shapes (such as spherical and triangular) have been synthesized using algal strains such as Padina gymnospora and Ecklonia cava. One such study showed that the astaxanthin-containing green alga Chlorella vulgaris can also be applied for gold nanoparticle synthesis [49]. Based on this finding, we utilized Phaffia rhodozyma (perfect state Xanthophyllomyces dendrorhous), a basidiomycetous red yeast with a high astaxanthin content, for microbe-assisted nanoparticle synthesis [50]. The cell-free extract of P. rhodozyma provided almost monodisperse, well-separated and spherical silver and gold nanoparticles with a narrow size distribution (see Table 1) [5]. Numerous yeast, fungal and actinomycete strains, and even viruses, have been utilized to assemble gold nanoparticles into microstructures [48]. Nevertheless, a high number of studies have highlighted that among microorganisms, fungi-mediated syntheses hold major advantages over bacteria-, algae- or virus-assisted approaches, on the grounds that metal ion conversion to nanoparticles by means of fungal cells offers the easiest and most straightforward procedures to control nanoparticle size and shape and to achieve monodispersity. As an example, fungal and yeast strains were used to demonstrate that by varying the pH and temperature during culturing, the size and shape of gold particles can be precisely adjusted, and that decreasing the pH results in nanoplate formation instead of nanoparticles [45]. Despite the huge potential of using microorganisms for nanomaterial production, there are some limitations which should be considered before use. In fact, using biological agents for the synthesis of silver and gold NPs is preferable over chemical methods due to the simple, eco-friendly approach and the minimized application of harmful chemical solvents and reagents. However, after carefully examining the above-presented microbe-assisted syntheses and the applied biological entities, it can be concluded that although these approaches are relatively straightforward and favorable, they require specific and rather tedious preparations and multistep processes such as culture isolation, maintenance or growth, and inoculum standardization. Moreover, multicomponent residuals from the microorganisms can accumulate on the surface of the obtained particles, which would not only define the physical, chemical and biological characteristics of the obtained nanomaterials and their fate in the presence of living systems, but could also trigger immunological reactions after entering the organism. For this reason, the synthesis of metal particles using plants or plant parts came to the fore and was increasingly prioritized.
These reactions tend to be faster than those performed by microorganisms, are more cost effective and are relatively easy to scale up for the generation of larger amounts of nanoparticles.

Synthesis of Silver and Gold Nanoparticles by Plants
As discussed above, nanoparticle synthesis using microorganisms is often rather slow, as the availability and maintenance of the various species used in the process is difficult and expensive; moreover, their application on a large scale is fairly restricted [51]. On the other hand, the plant-mediated synthesis of metal nanoparticles grants numerous benefits over chemical, physical and microbial methods due to its rapid, well-reproducible, ecological, environmentally friendly and inexpensive procedure, which can also be applied readily on an industrial scale [52][53][54]. Therefore, the utilization of biological extracts obtained from different plant parts (leaf, fruit, seed, stem, callus, peel and root) for the production of metal nanoparticles such as silver and gold has attracted extensive interest from the nanobiotechnology research community. As a consequence, a plethora of research papers dealing with synthesis approaches using plants, plant extracts or biomolecules deriving directly from plants or plant parts has been published in the last decade (see Figure 1 and Tables 2 and 3) [55,56]. Plants contain complex structures that can be used in the reduction and stabilization of nanoparticles [57]. Plant materials generate nanoparticles by taking up, accumulating and utilizing different nutrients [58]. The general protocol for a typical plant-mediated metal nanoparticle synthesis first requires the collection and purification of the plant part of interest [59]. The plant piece is then dried and powdered. For the plant extract preparation, deionized distilled water is usually added to the plant powder according to the desired concentration. This solution is boiled and finally filtered. A certain volume of the extract is mixed with the appropriate amount of metal salt solution, and the mixture is heated to the necessary temperature for the prescribed time under efficient mixing. To achieve the desired nanoparticles, optimization of every protocol is mandatory, testing different temperatures, solvents, pH conditions, extract concentrations and incubation times [60,61]. The reduction of metal ions to metal nanoparticles results in a color change of the solution, which can be monitored by assessing UV-visible spectra. The obtained nanoparticles are usually further characterized using X-ray diffraction and scanning or transmission electron microscopy (for the characterization methods, please refer to the corresponding chapter of the present review). (Table excerpt: Peltophorum pterocarpum leaf yielded ~55 nm, primarily spherical NPs, non-cytotoxic to HUVEC and ECV-304 cells [125]; Dendropanax morbifera leaf yielded 10-20 nm polygonal and hexagonal NPs, non-cytotoxic to A549 and HaCaT cells [77]; Anemarrhena asphodeloides rhizome yielded 10 nm crystalline NPs with a face-centered cubic structure.) The possible mechanisms of nanoparticle formation using plant extracts have been examined by several authors [63]. Two main theoretical directions have been suggested: 1. Some studies proposed that the bioreduction of the metal ions results from trapping these ions on the protein surface via electrostatic interactions between the metal ions and the proteins in the plant material extract.
Proteins would reduce the metal ions, which ultimately leads to a change in the secondary structure of the proteins and to the formation of metal nanoparticle seeds, or nuclei. The formed nuclei gradually increase in size through the accumulation and further reduction of metal ions on the nuclei, leading to the formation of nanoparticles [141]. 2. The second, generally more accepted view is that the key mechanism behind the plant-mediated synthesis of nanoparticles is a plant-assisted reduction of the metal ions by various phytochemicals [142]. Based on available literature data, it is probably not one biomolecule that is responsible for the reduction of metal ions; rather, several plant components and secondary metabolites are accountable together [106,143]. Such active components include various proteins, among them numerous enzymes, as well as amino acids, vitamins, polysaccharides, alkaloids, polyphenols, flavonoids and organic acids, which are known to be non-toxic and biodegradable; during nanoparticle synthesis, these can act both as reducing and capping agents, thus promoting the formation of nanoparticles and inhibiting their agglomeration [144]. For the Au+-dihydromyricetin [145] and Ag+-hydrolysable tannin [89] pairs, the mechanism of particle formation upon the reduction of metal ions with a plant extract has been analyzed in detail. Metal ions first form complexes with the phenolic hydroxyl groups of the biomolecule; then, the ions are reduced to zero-oxidation-state metals, while the biomolecule is oxidized and becomes capable of stabilizing the particles in parallel ("capping"). Therefore, there is no need to add further capping and stabilizing agents during the synthesis. These syntheses are generally very simple: the nanoparticles form spontaneously upon mixing the metal salt solution with the plant extract. The formation time of the particles varies between a few minutes and a day, depending on the metal-plant extract pair used. It is also worth noting that the conjugated π-electron system of polyphenols and flavonoids allows the donation of electrons, or hydrogen atoms, from the hydroxyl groups to various free radicals; these molecules therefore have an antioxidant capacity, which can also extend the life of the nanoparticles [146]. The properties and biological performance of silver and gold nanoparticles generated using plant extracts are summarized in Tables 2 and 3 [147]. In 2003, Shankar and colleagues were among the first to report the rapid green production of silver nanoparticles [148]. In their experiments, large amounts of silver nanoparticles were synthesized using plant extracts made from the leaves of geranium (Pelargonium graveolens) and Indian lilac (Azadirachta indica) mixed with an aqueous solution of silver nitrate [149]. Among the indisputable advantages of the process, the authors highlighted first of all the speed of the synthesis: the nanoparticles formed much faster during the reduction with the plant extract than in their previous experiments using microorganisms. In the same year, as another step toward plant-mediated nanobiotechnology, Gardea-Torresdey and colleagues reported the production of AgNPs using alfalfa (Medicago sativa) [150].
Since then, the synthesis of metal NPs has been performed by different research groups utilizing a great variety of plants, plant extracts and their molecular components, where the bioactive materials were used as reducing and stabilizing agents in the production of silver nanoparticles, e.g., extracts of camphor tree (Cinnamomum camphora), lemon balm (Melissa officinalis), peppers (Capsicum annuum), Japanese red pine (Pinus densiflora), ginkgo (Ginkgo biloba), kobus magnolia (Magnolia kobus), oriental planetree (Platanus orientalis) and common grape vine (Vitis vinifera) [4,151]. These plants contain large amounts of active compounds (e.g., polyphenols, flavonoids) that are suitable for the reduction of metal ions [64,152]. Baharara et al. suggested that phenolic groups and proteins in plant extracts are responsible for the reduction of silver ions [65]. Ajitha et al. showed that the reduction of silver ions can be attributed to the hydroxyl and carbonyl groups in the active components (e.g., flavonoids, terpenoids, phenols, proteins) of plant extracts [66]. Furthermore, they revealed that proteins and peptides form a protective coating around the particles, thereby increasing their stability and preventing their aggregation. Nadagouda and colleagues were the first to produce silver nanoparticles using coffee and tea extracts. In their work, they demonstrated that beyond the plant extracts, no other stabilizing agent was required, as the active ingredients of the extracts served as both reducing and stabilizing agents during the synthesis [153]. This simple one-step synthesis method has also been successfully extended to produce palladium, gold and platinum nanoparticles. Silver nanoparticles have also been produced with aqueous-alcoholic solutions of roasted coffee (Coffea arabica), green tea and black tea [54,58,59]. The authors found that the caffeine and theophylline in the extracts were responsible for stabilizing the produced nanoparticles. Moreover, Dhand et al. described that chlorogenic acid is the major phenolic component in coffee extract, playing an essential role in the reduction of silver ions [64]. Importantly, it has been proposed that both the quality and the quantity of the potential reducing or stabilizing components of the plant extract used for the synthesis determine the properties of the resulting particles (e.g., size, morphology), including their reactivity in subsequent reactions. Ashokkumar and co-workers observed that particle size decreased with increasing plant extract concentration, while, as described in another study, the number of particles formed correlated with the amount of plant extract used [89]. Moreover, shape selectivity was observed upon varying the dose of the bioreducing agent. Chandran et al. also achieved shape and size selectivity of the produced silver nanoparticles by modulating the concentrations of the starting metal salt solution and the aloe vera plant extract [154]. Loo et al. produced round-shaped silver nanoparticles with green tea extract [155]. They observed that when the concentration of the extract was increased, the size of the nanoparticles decreased while the number of particles increased [156]. Recent works have pointed out that besides the nature of the plant extracts and the types and concentrations of the active biomolecules within them, several factors, including reaction time, temperature, pH and the electrochemical potential, can affect the reduction process [157].
For instance, it was demonstrated that increasing the temperature can improve the nucleation rate, leading to the synthesis of smaller AgNPs and to an increased synthesis rate. Furthermore, it was also proven that proteins in the plant extract significantly affect the shape, size and yield of nanoparticles during synthesis [158,159]. Green synthesis of silver nanoparticles has been performed using an aqueous extract of Ziziphus mauritiana leaves as a bio-reducing agent; in this study, the effects of the leaf extract and silver nitrate concentrations, as well as of the temperature, on the preparation of nanoparticles were investigated in detail [62]. These green synthesized silver nanoparticles were often produced for specific application purposes, not necessarily for medical utilization, and sometimes only the feasibility of nanoparticle synthesis using a given plant extract was tested. However, through their application, these nanoparticles could come into contact with living systems; therefore, several research groups, rightly, examined the impact of the generated nanoparticles on different biological systems [160]. Despite these attempts to assess the effects of the as-prepared nanomaterials on living organisms, only a few studies have thoroughly examined or compared the complex (antibacterial, antifungal, antiviral and cytotoxic) biological activity of the produced nanoparticles [161]. Following the most approved characterization procedure, we investigated the chemical and biological characteristics of nanoparticles prepared with coffee and tea extracts [69]. Our results clearly showed that the green materials used for the stabilization and reduction of metal ions have a defining role in the biological activity of the obtained nanomaterial against bacteria, fungi or human cells. Based on our results, we recommended a circumspect selection of the green extracts used for the synthesis of nanoparticles, and suggested that a comprehensive screen of the products be carried out prior to their application to delineate their behavior in the presence of living systems. Building on today's nanotechnology results and combining the available methods of biology, chemistry and materials science, more complex investigations can be carried out [162,163]. Using such an approach, systematic examinations of AgNP aggregation behavior with simultaneous measurements of its effect on biological activity can be performed, opening new frontiers for preserving nanoparticle toxicity by enhancing colloidal stability [70]. The synthesis and utilization of gold nanoparticles is an emerging research area due to the unique and tunable surface plasmon resonance, electrical conductivity, excellent catalytic activity and biomedical potential of AuNPs, including drug delivery, molecular imaging and biosensing [164]. Therefore, there is also a growing need for environmentally benign synthesis processes for these particles, without losing sight of the major aim, i.e., to provide safe application and avoid adverse effects in medical settings. To date, several examples of gold nanoparticles produced by plant extracts have been reported in the literature (see Table 3) [165,166]. We observed that the same types of plants and their respective components are generally exploited for the synthesis of AuNPs as for AgNPs. The first report on the fabrication of gold nanoparticles using living plants was published in 2002 by Gardea-Torresdey and co-workers [150].
They described the formation and growth of AuNPs inside live alfalfa plants. In their study, alfalfa plants were grown in a tetrachloroaurate ion-rich environment, and the absorption of gold by the plants was proven by transmission electron microscopy and X-ray absorption measurements. Sesbania drummondii seedlings were also successfully applied in a similar system [167]. Beattie and Haverkamp revealed that the bioreduction of gold salts to metal occurs in chloroplasts [168]. Although the idea of utilizing living plants is revolutionary, the purification of the intracellularly formed nanoparticles proved to be a difficult task. Therefore, extracellular syntheses of nanoparticles, which utilize plant extracts, gained immediate popularity [169]. As mentioned before, one of the first studies on the biosynthesis of metallic silver and gold nanoparticles using plant extracts was performed by Shankar et al., who applied geranium leaf extract during the synthesis [148]. These reactions lasted for 2 days, and the generated nanoparticles had various morphologies, such as spherical, triangular and icosahedral. This method was later optimized using other plant extracts (neem leaf), achieving a shorter reaction time (~2.5 h) [149]. The rapid green synthesis of monodispersed, spherical gold nanoparticles with dimensions of ~20 nm was observed using Mangifera indica leaf extract [170]. The reduction of gold cations to gold nanoparticles by this extract was completed within 2 min, and the obtained colloid was found to be stable for more than 5 months. Highly stable crystalline gold NPs were produced using Momordica charantia as well [171]. Dwivedi et al. also reported the rapid biosynthesis of metal nanoparticles using Chenopodium album, where the leaf extract was successfully applied to obtain silver and gold nanoparticles in the size range of 10-30 nm. They not only observed shape selectivity but also noted that the formation of spherical nanoparticles was more favorable at higher leaf extract concentrations [172]. The synthesis of gold nanoparticles of various shapes (spherical, hexagonal and triangular) via olive leaf extract as a reducing agent has also been demonstrated. The size and shape of the gold NPs were modulated by varying the ratio of plant extract to the initial metal salt in the reaction medium [173]. The authors emphasized the role of the high phenolic content of the hot water extract of olive leaves, which assisted the reduction, and observed that the generated spherical particles were capped by phytochemicals. It is well known that plants and plant-based phytochemicals are rich in various polyphenols, flavonoids, terpenoids, aldehydes, proteins, alkaloids, acids and alcoholic compounds [71,174]. These active components are assumed to participate in the reduction of chloroauric acid to form AuNPs and to serve as stabilizing agents that prevent particle aggregation. Smitha et al. achieved shape diversity when Cinnamomum zeylanicum leaf broth was used as a reducing agent: at lower concentrations, the plant extract caused the formation of prism-shaped particles, while at higher concentrations, spherical particles dominated [175]. Tansy fruit extract was successfully employed for the development of silver and gold nanoparticles with spherical and triangular shapes and an average size of ~15 nm [176]. These nanoparticles were found to have a crystalline structure with face-centered cubic geometry, as verified by XRD.
Plant extract of Pulicaria undulata (L.) was used as both the reducing agent and the stabilizing ligand for the rapid and green synthesis of gold, silver and gold-silver bimetallic alloy nanoparticles; these nanoparticles showed composition-dependent catalytic activity [177]. Regarding the collected data on metal nanoparticle synthesis, one thing is certain: the possibilities are endless. Almost any plant, or part of it, can be put to use for producing nanoparticles; gold NPs have been prepared by environmentally friendly, one-step synthesis processes using extracts from coffee, mint, mango, tea, grapes, lemon, eucalyptus, neem, roses, aloe vera, tamarind, coriander and peppermint [178][179][180][181]. However, careful analysis of the synthesis approaches and the physico-chemical properties of the realized nanomaterials accentuates the need for attentive selection of the plant material, as these biocomponents are inherently responsible for the morphology, stability and biological properties of the formed NPs. Apart from the green material, pH can also influence particle production. Armendariz et al. investigated the synthesis of gold NPs using Avena sativa at different pH values [182]. It was revealed that at higher pH, small-sized NPs were formed, and the authors suggested that aggregation occurs at lower pH. Their findings imply that variation in the pH of the reaction medium also has a defining role in the shape of the obtained particles [183,184]. Gold nanoparticles obtained by plant-mediated methods have a very particular surface, with different structural and functional compounds attached to the metallic surface through varying interactions, which should be considered upon their potential biomedical applications, especially in antimicrobial settings and in human therapy [185]. Plant-synthesized and -capped gold nanoparticles generally exhibit biocompatibility and have great antimicrobial and cytotoxic potential. For such applications, it is essential to produce gold NPs of preferably small size but with high biostability. Compared to conventional methods, plant-mediated syntheses offer an environmentally and economically friendly alternative for the production of metal nanoparticles. Therefore, the exploration of easily accessible plants is recommended, and the development of innovative green synthesis methods that can be implemented on an industrial scale should be encouraged. However, for the safe everyday use of green synthesized metal nanoparticles with different properties, a comprehensive study of the nanoparticles is required before their application to assess their behavior in living systems. Acknowledging this concept, in the following chapters, the antimicrobial and anticancer properties of green produced silver and gold nanoparticles will be summarized and discussed.

Challenges Associated with Green Synthesis
The potential application, fate and behavior of nanoparticles in the environment and towards living systems rely on important functional properties such as size and shape, monodispersity, surface charge, plasmonic response, medical diagnostic response and biofunctional or catalytic activity. The controlled synthesis of the designed greener products by safer processes, while maintaining nanoparticle function and efficiency, is one of the most challenging and recurrent issues to be solved for the development and spread of novel green synthesis protocols [186].
As the organisms used in nanoparticle synthesis can vary from simple prokaryotic bacterial cells to complex eukaryotes, the application of green, sustainable synthesis routes for the production of metal nanoparticles still requires extensive research and innovative solutions to set a promising and sustainable trend. It is clear from the above-presented synthesis examples that these methods are still in the development stage; therefore, further optimization of these processes is required. The main issue limiting the large-scale and routine utilization of green synthesis is that these nanoparticle systems are very diverse. The observed problems and challenges are all associated with the varying type, quality and concentration of the green extracts, the diverse ratios of the reagents, the reaction conditions (time, temperature and pH) and yields, and with the deficient or inadequate characterization of the obtained particles, which renders proper comparisons between nanoparticle performances fairly difficult. Further main issues relate to the control of crystal growth, size and morphology, dispersity and nanoparticle stability [187]. To date, most reports on the green synthesis of metal particles using extracts of microorganisms or plants focus mainly on demonstrating the feasibility of these extracts for nanomaterial synthesis. However, the organic compounds/biomolecules responsible for metal ion reduction are rarely identified, the exact reduction mechanisms are almost never determined, and comparisons with conventionally produced nanoparticles are reported in only a few studies [70,188]. Thus, questions arise as to whether and how these methods can be adopted for the large-scale production of metal nanoparticles to meet industrial needs. To achieve this goal, future investigations should move toward the optimization of green reaction conditions and the development of accurate and reliable synthesis protocols that enable control of the above-mentioned critical factors affecting the properties of nanoparticles [189]. The proper selection of the green reducing agents is of vital importance, as the biological properties of gold or silver nanoparticles produced by microorganisms and plant extracts vary enormously depending on the biological materials used for the synthesis [69]. Plants are generally considered to be ideal candidates for nanoparticle synthesis [156]. However, for larger-scale plant-based nanoparticle synthesis, the green entity should originate from the same site and must be of the same quality and composition in each synthesis round to maintain reproducibility and to guarantee the identical performance of the generated nanoparticles. This requirement is the main limitation on the scale-up manufacturing and widespread application of green synthesized nanoparticles. Evidently, the major challenge is to collect the same material in high quantities and to grow it in the same way, under the same circumstances, to yield identical products. To ascertain replicability, the composition of the biomaterial should be precisely determined prior to its use in nanoparticle synthesis. The type of the green entity and its composition are crucial, as it was shown that nanoparticles of various shapes and sizes were synthesized with differing tea extracts because of the different concentrations of caffeine/polyphenols, which acted as both reducing and capping agents during synthesis [190].
The lack of standard protocols and guidelines for the characterization of the extracts is a serious hurdle in the commercial production of green nanomaterials. Optimization of the synthesis parameters is another important aspect to be considered [144,186]. In order to use microorganisms or plants for the synthesis of metal nanoparticles on an industrial scale, the yield and the production rate are important issues. Therefore, the concentrations and ratio of the extract and the metal salt, the experimental conditions such as synthesis time, pH and temperature, the buffer to be used and the stirring velocity during production all need to be properly controlled and optimized. The size and shape selectivity, as well as the concentration of the nanoparticles, can be modulated by optimizing the concentration of the green extract [154]. Furthermore, the stability and aggregation propensity of the obtained nanoparticles can also vary depending on the molecules of the green solution. To precisely design the properties of the nanoparticles, the exact qualitative and quantitative composition of the green material, be it a plant extract or a microbial lysate, should be determined. The more complex the composition of the green solution, the more problematic the estimation of the expected nanoparticle features becomes. This is generally the biggest problem with green nanoparticle synthesis, since in most cases it is impossible, or not worthwhile, to determine the components of the green extract. Regarding other experimental factors, it was demonstrated that the longer the reaction time, the larger the nanoparticle size became [191]. In addition, increasing the stirring time increased the mean particle diameter [192]. It was also proven that the mean particle size and the number of nanoparticles formed decreased at higher temperatures [156]. The binding of metal ions to the biomolecules of the green material was shown to be pH-dependent; thus, at different pH values, particles of diverse shapes (tetrahedral, hexagonal platelets, rod-shaped or irregular) can be formed. Generally, smaller nanoparticles were achieved at higher pH [182]. Moreover, the extraction and purification of the synthesized nanoparticles from living or non-living biological sources for further applications is still an important and challenging issue. As demonstrated, besides various plants, several microorganisms, including bacteria, fungi and yeasts, are able to produce metal nanoparticles intra- and extracellularly. When the intracellular approach is applied, accessing and purifying the produced nanoparticles from the cells is extremely complicated. Energy-intensive physical and/or chemical extraction methods, including heating, freezing and thawing, sonication or osmotic shock, and several centrifugation and washing steps, must be applied to separate the nanoparticles [144,193]. For example, gold nanoparticles were extracted from Lactobacillus kimchicus with ultrasonication and continuous centrifugation [194]. Importantly, these procedures are multi-step methods with large energy and solvent demands, leading to considerable wastefulness. Furthermore, these methods can seriously affect the nanoparticles by modulating characteristic properties such as shape and size and, to a larger extent, their surface properties [195].
Through their use, nanoparticle aggregation, sedimentation and precipitation can occur, leading to undesired and uncontrolled properties and behavior upon utilization. Another approach to nanoparticle extraction is enzymatic lysis; however, the particles outside bacteria are generally less stable than those inside the cells, and serious aggregation might follow the extraction. Moreover, due to the expected costs, this purification method cannot be scaled up for industrial production. The extracellular type of nanoparticle formation is a more favorable method, not only because of the simplicity of purification, but also due to the advantageous production rate [196]. There is no need for downstream purification steps for the extraction; generally, after the one-step synthesis procedure, centrifugation or filtration is used to purify the nanoparticles [197]. For example, silver nanoparticles synthesized extracellularly by Pseudomonas sp. were collected by centrifugation at 12,000× g and 25 °C for 10 min and washed three times with water to remove the unconverted metal ions and any other constituents [14]. However, it should be noted that the application of these purification steps can also change characteristics such as the stability of the nanoparticles, and as a consequence, particle aggregation is not uncommon. Due to these drawbacks, most studies omit the purification step or apply only a mild filtration or centrifugation step. Nevertheless, green entities not only provide a natural capping for the synthesized nanoparticles, but also prevent their aggregation by providing additional stability [193]. Considering the scalability potential of the whole nanoparticle formation process, the question of how purification methods affect and change the properties of nanoparticles also poses a major challenge. In addition, there are only a handful of evaluations regarding the complex toxicity and life cycle assessment of nanoparticles as a product. Such assessments should be carried out to estimate the costs and consequences of implementing these synthesis protocols. The main limitation in conducting these quantitative analyses of sustainability implications is the lack of information [198]. There are no experimental data on nanoparticles from all the life cycle stages, and information associated with their synthesis, mechanism, characterization, application, pathways and fate is also scarce. These assessments remain incomplete due to a major limitation: since the mechanism of metal ion reduction to nanoparticles has not yet been elucidated, the reducing agent, a defining factor in nanoparticle synthesis, is excluded from most life cycle analysis calculations, leading to constrained modeling and, consequently, inaccurate quantitative assessment of such processes [198]. The assessment of these green synthesis methods is further complicated by the absence of uniform regulations at either the national or international level that would govern and describe the quality and origin of the materials, restrict the conditions of nanoparticle production, standardize and control the quality of the nanoparticles formed, and monitor their use and long-term effects. Therefore, it is a very difficult task to design, estimate and compare the behavior of nanoparticles with different properties produced by different green entities.
Based on the above-described challenges, detailed in-depth research is needed to establish green synthesis methods that are uniform, safe and economic, and to determine the effects and potential toxicity of the designed and obtained nanoparticles.

Characterization
According to the information summarized in the previous chapter, a massive number of biological entities have been successfully applied for metal nanoparticle generation [199]. Although green syntheses are regarded as biocompatible and safe procedures, materials with different physical, chemical, optical, electrical, biological and catalytic properties are produced depending on the extract or microorganism used [69]. In contrast to chemical methods, where usually only one reducing agent is responsible for the reduction, in green syntheses a multitude of different molecules react with the metal ions, some of them instantly, others with a lag, and form interactions with the nanoparticles during production [69]. Nanoparticle characterization is fundamental for monitoring the factors affecting the synthesis, as well as its success and yield. A thorough investigation assessing not simply the morphology, size and shape, purity, dispersity and solubility, but also the aggregation propensity, stability and surface characteristics, gives a detailed overview of the nanoparticle features and helps to estimate how, and to what degree, the nanoparticles will affect various living systems. Unfortunately, only a few articles present a full physical, chemical and biological characterization of the particles before advertising their utilization in different settings (see Figure 2) [70]. To gain insight into nanoparticle characteristics, various analytical techniques can be utilized [200,201]. The most frequently applied techniques are UV-visible (UV-VIS) spectroscopy, scanning and transmission electron microscopy (SEM, TEM), Fourier transform infrared spectroscopy (FT-IR), powder X-ray diffraction (XRD), energy dispersive X-ray spectroscopy (EDX), atomic force microscopy (AFM), dynamic light scattering (DLS), zeta-potential measurement (ZP), thermogravimetric analysis (TGA), inductively coupled plasma mass spectrometry (ICP-MS), Raman spectroscopy and X-ray photoelectron spectroscopy (XPS) [202]. To start with, a less sophisticated but rather useful "method" during nanoparticle synthesis is visual examination, or the color change test: when the reaction mixture changes color and turns brownish (AgNP) or purple (AuNP), this is an indication of metal nanoparticle formation [199]. After the synthesis reaction, UV-Vis spectroscopy is generally used to assess the size and stability of the produced nanoparticles. The unique optical properties of metal nanoparticles allow their interaction and resonance with light, causing the appearance of a characteristic surface plasmon resonance (SPR) band. A number of studies have demonstrated that the SPR absorption peak of metal nanoparticles lies between 200 and 800 nm, typically in the range of 400-450 nm for silver and 500-550 nm for gold NPs [201]. UV-VIS spectroscopic analyses are suitable for estimating the size of NPs, as the SPR peak exhibits a red shift with increasing nanoparticle size and a blue shift with decreasing size.
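As a small illustration of the UV-Vis evaluation described above, the R sketch below locates the SPR peak of two spectra and reports the red or blue shift between them. The Gaussian "spectra" are simulated stand-ins; in practice, the wavelength/absorbance columns would come from the spectrophotometer export.

wavelength <- seq(300, 800, by = 1)                               # nm
gauss <- function(center, width) exp(-((wavelength - center)^2) / (2 * width^2))

abs_batch1 <- gauss(430, 40)     # e.g., small, well-dispersed AgNPs (SPR near 430 nm)
abs_batch2 <- gauss(455, 45)     # e.g., larger or partly aggregated particles

spr1 <- wavelength[which.max(abs_batch1)]                         # SPR peak, batch 1
spr2 <- wavelength[which.max(abs_batch2)]                         # SPR peak, batch 2
shift <- spr2 - spr1
cat(sprintf("SPR peaks: %g and %g nm (%s shift of %g nm)\n",
            spr1, spr2, if (shift > 0) "red" else "blue", abs(shift)))

A red shift of this kind would suggest particle growth or aggregation, whereas a blue shift would point toward smaller particles.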
Many researchers use UV-VIS spectroscopy to explore the effect of, and relationship between, factors influencing the synthesis, such as pH, temperature, reaction time, amount of plant extract and ion concentration. Bélteky et al. published a study about different factors affecting nanoparticle aggregation and its consequences on cytotoxicity and antimicrobial activity, based on UV-VIS and DLS measurements [70]. DLS studies are mainly used for ascertaining the particle size distribution and for obtaining the average hydrodynamic diameter of the samples [143]. Zeta potential values give information about the stability of the nanoparticles by measuring the electric charge on the particle surface. Sujitha et al. reported that AuNPs show a lower ZP value when a lower concentration of the citrus extract was used during the synthesis, whereas higher ZP values were obtained when a higher concentration of the extract was applied. XRD is a primary analytical tool for gathering information about crystal structures, phase identification and crystallite size. Most of the studies reported the formation of face-centered cubic structured silver nanoparticles according to the XRD measurements performed [203]. In some cases, particles with hexagonal and cubic structures were also observed. EDX analysis can also be used to confirm the structure and purity of the synthesized nanoparticles by determining their elemental composition [143]. Microscopy-based measurements, such as SEM, TEM (high-resolution, field-emission) and AFM, are considered essential tools for obtaining morphology data (e.g., size and shape) from images taken of the nanoparticles [204]. Moreover, AFM provides a 3D image of the particle, and thus every dimension, even the height of the nanomaterial, can be calculated. These microscopic techniques are also suitable for gathering information about the purity, polydispersity and surface properties of the resulting particles. This is particularly important when green synthesized nanoparticles are characterized, since several papers have shown that the formed particles were embedded or entrapped in a matrix derived from the biological entities used during the green synthesis. FT-IR measurements provide the means to examine the surface chemistry and to identify surface residues, such as functional groups (often hydroxyl and carbonyl moieties) that reside at the surface following particle production. This technique can be used to reveal which biomolecules (phenolics, terpenoids, glycosides, peptides, proteins and tannins) may be responsible for the reduction, capping and stabilization of the nanoparticles. As a complementary technique, Raman spectroscopy can also be useful in detecting a variety of chemical species attached to the surface of nanoparticles during synthesis [200]. Other relevant characterization methods, such as ICP-MS, XPS or TGA, can be helpful for describing the surface chemical structure of the particles, predicting their precise chemical composition and determining the thermal stability of the obtained nanomaterials [205].

Antimicrobial Activity of Green Synthesized Gold and Silver Nanoparticles

Since the beginning of the 21st century, the treatment of infectious diseases caused by multidrug resistant microorganisms has become one of the main challenges in human therapy, agriculture and the food industry. Multidrug resistant strains are unresponsive to many antibiotics; therefore, alternative methods are required for their elimination [206].
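Before continuing, a brief aside on the XRD-based crystallite-size estimate mentioned above: it is commonly obtained with the Scherrer relation D = Kλ/(β cos θ). The Scherrer route is our illustration of a typical workflow, not a method claimed by the cited studies, and the peak parameters below are hypothetical.

```python
import math

def scherrer_size(wavelength_nm: float, fwhm_deg: float, two_theta_deg: float,
                  shape_factor: float = 0.9) -> float:
    """Crystallite size D = K * lambda / (beta * cos(theta)), returned in nm."""
    beta = math.radians(fwhm_deg)              # peak FWHM converted to radians
    theta = math.radians(two_theta_deg / 2.0)  # Bragg angle
    return shape_factor * wavelength_nm / (beta * math.cos(theta))

if __name__ == "__main__":
    # Hypothetical Ag (111) reflection measured with Cu K-alpha radiation.
    D = scherrer_size(wavelength_nm=0.15406, fwhm_deg=0.45, two_theta_deg=38.1)
    print(f"Estimated crystallite size: {D:.1f} nm")
```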
Compared with bacterial infections, the management of viral infections is even more problematic because of resistance development and the toxicity of antiviral compounds [207]. As the whole of humankind has recently experienced, virus-caused pandemics not only have a global impact on human healthcare but also significantly influence the world economy [208]. Silver and gold have long been known as potent biocides [209]. Antibiotics in general have a selective target in the microbial cell; gold and silver nanoparticles, however, exhibit a fairly broad-spectrum activity. AuNPs act by destroying the cell membrane potential, or by inhibiting the binding of tRNA to the small subunit of the ribosome, thereby hindering protein synthesis. In the presence of AuNPs, the activity of ATP synthase is impeded, which ultimately leads to cellular ATP depletion [210]. The effects of silver nanoparticles on microbial cells can be categorized in four major areas: AgNPs interact with the cell wall and the plasma membrane, causing structural alterations and eventually the loss of the semipermeability of the membrane. After entering the cytoplasm, AgNPs induce ROS production and, by binding the phosphate group of relevant molecules, affect major signal transduction processes. The thiol preference of AgNPs leads to strong interactions with amino acids, further disturbing protein synthesis [211]. Based on the above-described multitude of cellular targets and mechanisms of action, the development of microbial resistance against AuNPs or AgNPs is rather unlikely; therefore, these metal nanoparticles gain a considerable advantage over other types of antimicrobial agents in the long run. Significant developments in nanotechnology have made it possible to generate silver and gold nanoparticles comparable in size with certain biomolecules of living cells. These advancements were soon followed by an ever-growing interest of microbiologists in the production and utilization of these nanoparticles against pathogenic microorganisms, especially against multidrug resistant strains. However, the expansion of the microbiological utilization of nanoparticles raised concerns about the induced environmental burden and urged a scientific impetus for ecofriendlier synthetic approaches to be established [36,66,114]. The biological effect, as well as the antimicrobial activity, of AgNPs and AuNPs depends on their size, morphology and concentration [56]. Nanoparticles smaller than 20 nm have a relatively large surface-to-volume ratio; thus, they bind more efficiently to the surface of microbial cells and penetrate easily across the cell wall and plasma membrane. Although some studies on the dependence of biological action on nanoparticle morphology revealed that triangular-shaped particles are more effective than spherical forms [212], there are no data available on the morphology-dependent activity of green synthesized nanoparticles. The reducing and capping agents applied during the green synthesis can also modulate the biological activity of the nanoparticles. In our previous study, we and our collaborators synthesized AgNPs using sodium citrate and green tea extract [65]. The size and the shape of the as-prepared particles were similar. When we tested the antibacterial and antifungal activity of these particles at the same concentrations, the green tea-synthesized AgNPs proved to be more effective than the chemically synthesized ones.
These results suggest that biologically more active nanoparticles can be produced via green synthesis, even if the physical properties of the chemically and biologically synthesized NPs are very similar. The antimicrobial effects of green synthesized nanoparticles have been studied on the three main groups of microorganisms (viruses, bacteria and fungi), although very rarely on all three groups (see Figures 2 and 3, and Tables 1-3) [62,181]. Generally, the antibacterial features are tested following nanoparticle synthesis and characterization. The most severe knowledge gap concerns the antiviral propensity of the generated AgNPs or AuNPs, and it needs to be filled urgently. The antiviral activity of AgNPs and AuNPs is probably based on their binding to viral surface proteins, preventing attachment to the cell surface receptors and impeding virus entry into the cells. The other possible antiviral mechanism involves the inhibition of virus particle assembly inside the host cells [213]. The major reason for the lack of studies on the antiviral activity of silver and gold nanoparticles, including those produced by green methods, is related to the technical difficulties of properly maintaining viruses for the examinations. The few studies published in the field utilized human or animal viruses, which were maintained in tissue culture or using chicken embryos, and the antiviral activity of the NPs was detected by MTT assay [116], by hemagglutination tests [17] or by real-time PCR [130]. Haggag et al. synthesized AgNPs with the aqueous and hexane extracts of two plants, namely Lampranthus coccineus and Malephora luthea, and tested the activity of the particles against HSV-1 (Herpes simplex virus type 1), HAV-10 (Hepatitis A virus) and CoxB4 (Coxsackievirus B4). AgNPs produced with the hexane extract of L. coccineus inhibited the action of all three viruses, while AgNPs produced with the hexane extract of M. luthea inhibited only the infection caused by HAV-10 and CoxB4. The antiviral effect was stronger when the nanoparticles were applied before the viral infection, suggesting that the AgNPs interacted with the coat proteins of the viruses, preventing their entrance into the cells [116]. Gold nanoparticles prepared using garlic extract were tested and proven to be effective against measles virus, probably for reasons similar to those described above in Haggag's work: by binding to the viral surface proteins, AuNPs are able to inhibit the attachment of the virus particles to their receptors on the cell surface [130]. Avilala and Golla demonstrated that Nocardiopsis alba-synthesized AgNPs inhibited the action of Newcastle Disease Virus (NDV) [17]. Antiviral activity of AgNPs formed by curcumin-mediated synthesis was proven against respiratory syncytial virus (RSV). Biologically synthesized AgNPs, produced using medicinal plant extracts (Andrographis paniculata, Phyllanthus niruri and Tinospora cordifolia), decreased the infection caused by chikungunya virus. In this study, however, it is not clear whether the observed antiviral activity is owing to the reducing agents or to the AgNPs themselves, since the antiviral properties of these plants were previously known [214]. Unfortunately, none of the research studies examined the antiviral action of any of the generated metal nanoparticles on SARS-CoV-2 or other coronavirus strains.
In contrast to the antiviral features, the antibacterial and/or antifungal activity of biologically synthesized NPs has been tested by many research groups actively working on green nanoparticle synthesis and its application possibilities [34,36,62]. The most commonly used methods for this purpose are the well and disc diffusion assays. In both of these approaches, the microorganism is spread onto the surface of, or inoculated into, the growth medium. In the case of a well diffusion assay, a well is prepared in the medium and the tested nanoparticle solution is loaded into the well, while in the disc diffusion assay, the nanoparticle solution is loaded onto a filter paper disc, which is then placed onto the surface of the inoculated medium [56,61]. The appearance of an inhibition zone around either the well or the filter disc indicates how effective the action of the nanoparticle is. Both of these methods are cheap, quick and easily completed; however, the results are difficult to compare and not fully quantitative. Concerning bacteria and yeasts, establishing the number of colony forming units (CFU) in the presence of the nanoparticles, compared to a control sample (without NPs), is also a suitable method for describing the efficiency of the particles. This method is more laborious than the previous ones and is therefore used less often; nevertheless, it yields a quantitative result. The growth inhibition of filamentous fungi can be studied by inoculating a mycelium disc or sclerotia onto the surface of a nanoparticle-containing and of a control medium. After the incubation time, the diameter of the colony is measured and compared to the control, or the number of developed sclerotia is counted, respectively [215]. The microdilution assay, in which the growth rate of the microbes is checked in liquid medium in the presence and absence of nanoparticles, is another method suitable for testing antibacterial or antifungal activities (see the sketch after this passage for the typical quantitative readouts). The turbidity of the samples is measured after the incubation time [216]. The drawback of this method is that the potential intrinsic turbidity of the nanoparticle colloid itself can influence the results. The antibacterial efficiency of various green synthesized silver nanoparticles has been studied intensively (see Figure 3, Tables 1 and 2). These studies revealed that they are powerful agents even against multidrug resistant species, such as Pseudomonas aeruginosa, Staphylococcus aureus and Klebsiella pneumoniae. Interestingly, the susceptibility of Gram-positive and Gram-negative bacteria to silver nanoparticles differs somewhat; Gram-negative bacteria are more susceptible than their Gram-positive counterparts. This difference is related to the distinct structure of their cell walls. Gram-negative bacteria have a thin cell wall consisting of a peptidoglycan layer; therefore, AgNPs penetrate more easily across this structure than across the thick cell wall of Gram-positive species [217]. Another rather valuable property of AgNPs is their biofilm-eradicating capability [218,219]. Biofilm formation is a very important virulence factor of several pathogenic bacteria and yeasts. In the biofilm, the cells are surrounded by an extracellular polymeric substance. This matrix prevents the penetration of conventional antibiotics; therefore, matrix-embedded cells are resistant to antibiotic treatment and can be the source of chronic or recurrent infections.
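As a quantitative companion to the assay descriptions above (all sample numbers are hypothetical), the sketch below computes two of the readouts mentioned: the percent growth inhibition of filamentous fungi from colony diameters, and the log10 reduction in CFU relative to an untreated control.

```python
import math

def percent_inhibition(d_control_mm: float, d_treated_mm: float) -> float:
    """Radial growth inhibition from colony diameters on control vs. NP-containing media."""
    return 100.0 * (d_control_mm - d_treated_mm) / d_control_mm

def log_cfu_reduction(cfu_control: float, cfu_treated: float) -> float:
    """log10 reduction in colony forming units relative to the NP-free control."""
    return math.log10(cfu_control / cfu_treated)

if __name__ == "__main__":
    # Hypothetical colony diameters (mm) and CFU counts:
    print(f"Mycelial growth inhibition: {percent_inhibition(62.0, 18.0):.1f} %")
    print(f"CFU reduction: {log_cfu_reduction(3.2e7, 4.0e4):.2f} log10 units")
```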
AgNPs, however, are capable of diffusing through the biofilm matrix and killing the cells in its inner layers [218]; such a biofilm-inhibiting effect of silver nanoparticles is a paramount feature that should be emphasized more, especially in preventive antimicrobial application settings. The absorption and penetration of silver nanoparticles across the biofilm matrix depend on their size as well as on the surface materials, which in the case of green synthesized nanoparticles can be quite complex; therefore, during biological synthesis, the appropriate approach and capping agent have to be chosen with care. The antibacterial and antifungal effects of biologically formed gold nanoparticles are ambiguous. In some relatively extensive studies, no effect could be attributed to AuNPs [220,221]; however, in other research works, AuNPs proved to exhibit potent antibacterial and/or antifungal activity (see Figure 3, Tables 1 and 3). It can be speculated that, as the interaction between the cell wall of the microorganisms and the nanoparticles is governed by electrostatic forces, it was probably the charge of the interacting surfaces that modulated the effect of the AuNPs. Caudill and colleagues demonstrated that the abundant negatively charged teichoic acids in the cell wall of Gram-positive bacteria can interact with positively charged gold nanoparticles [222]. Similar observations were made for Gram-negative bacteria, where the lipopolysaccharide content of the outer membrane provides the negative charge facing cationic AuNPs [223]. These latter examples further highlight the importance of the nature and physicochemical features of the capping materials (derived from the green material utilized for the synthesis), which can influence the surface characteristics of the nanoparticles and thereby modulate their activity, e.g., by inhibiting their attachment to the bacterial cell wall [68]. Importantly, these nanoparticle features seem to render them highly efficient agents for the elimination of clinically isolated human pathogenic microorganisms, as well as of multi-resistant strains [72,119,224].

Toxicity and Anticancer Activity of Green Synthesized AuNPs and AgNPs

One of the most important advantages of green synthesized AgNPs and AuNPs is their potentially enhanced biocompatibility compared to nanoparticles of the same chemical element synthesized by a classical chemical procedure. Similarly to their non-green counterparts, these particles could be exploited in the future in cancer therapy; however, their effects on human cells, and not only on cancerous ones, need to be evaluated. As potential therapeutic agents, they come into contact with numerous cells of the body; thus, it is essential to investigate thoroughly the cytotoxic activity of these nanoparticles. Several chemically synthesized metallic nanoparticles display anticancer activity both in vitro and in vivo. Green synthesis with plants or other organic materials provides the opportunity to prepare nanoparticle solutions carrying biologically active compounds from the applied natural extract, which might result in modulated anticancer activity, rendering the particles more or, in some cases, less potent or toxic towards human cancerous and non-cancerous cells. In fact, a large number of green synthesized AgNPs and AuNPs have been proven to exhibit anticancer activity (see Figure 3, Tables 1-3), but their efficiency and cellular effects were strongly dependent on the natural extract applied during the synthesis procedure.
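Cytotoxicity screens of the kind surveyed below are usually summarized by a viability-versus-dose curve and its half-maximal inhibitory concentration (IC50). The sketch fits a standard four-parameter logistic (Hill) model to hypothetical viability data; neither the data nor the parameter values come from the cited studies.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, top, bottom, ic50, slope):
    """Four-parameter logistic (Hill) dose-response model for % viability."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** slope)

if __name__ == "__main__":
    # Hypothetical viability data (% of untreated control) vs. NP dose in ug/mL.
    dose = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0])
    viability = np.array([98.0, 95.0, 82.0, 51.0, 22.0, 9.0])
    p0 = [100.0, 0.0, 30.0, 1.0]  # initial guess: top, bottom, IC50, slope
    params, _ = curve_fit(hill, dose, viability, p0=p0, maxfev=10_000)
    print(f"Fitted IC50 = {params[2]:.1f} ug/mL (Hill slope {params[3]:.2f})")
```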
As a first example, AgNPs prepared using Dendropanax morbifera leaf extract displayed a marked anticancer activity against A549 lung cancer cells in vitro and induced apoptosis; on the other hand, they were not toxic to HaCaT human keratinocytes. Similarly prepared AuNPs were non-toxic to both HaCaT and A549 cells, which supports the potential application of Dendropanax morbifera leaf extract-AuNPs for drug delivery or diagnostic purposes [77]. Herbal medicinal plants are often preferred as biological entities for nanoparticle green synthesis. Panax ginseng-mediated AuNPs were not cytotoxic to HaCaT and 3T3-L1 non-cancerous cells. P. ginseng-generated AgNPs did not exhibit any significant cytotoxic effects on HaCaT cells; however, they showed rather detrimental effects on 3T3-L1 pre-adipocyte cells [75]. In a similar approach, Pérez et al. revealed that AgNPs mediated by P. ginseng were toxic to B16 murine tumor cells, but comparatively less harmful to human dermal fibroblasts. On the other hand, P. ginseng-mediated AuNPs were non-toxic to either human fibroblasts or murine cancer cells [75]. Alsalhi et al. investigated the cytotoxic activity of Pimpinella anisum-mediated AgNPs and found that toxicity was higher in colon cancer cells (HT115) than in human primary neonatal skin stromal cells (hSSCs) [83]. Despite numerous studies suggesting the biocompatibility of AuNPs, the anticancer activity of some green synthesized AuNPs was confirmed. As an example, AuNPs synthesized with Trichosanthes kirilowii caused cell cycle arrest in the G0/G1 phase and induced apoptosis in colon cancer cells via the activation of several apoptotic pathways, involving Bid, Bax/Bcl-2 and caspases [121]. Similar activities were observed on A549 cells treated with green synthesized AuNPs prepared using Marsdenia tenacissima plant extract [122], and on HepG2 hepatocellular carcinoma cells treated with AuNPs synthesized with Cordyceps militaris mushroom extract [11]. AuNPs obtained with Nerium oleander extract also decreased the viability of MCF-7 cells. In this case, a significant cancer cell-selective activity was observed, since these green AuNPs did not affect the viability of primary non-cancerous lymphocytes. As the N. oleander plant extract itself caused significantly lower toxicity on both MCF-7 cells and non-cancerous primary lymphocytes compared to the AuNPs, the authors concluded that the natural extract applied during the synthesis is only partly responsible for the observed biological effects of these AuNPs [123]. AuNPs synthesized with Gelidium pusillum induced apoptosis in MDA-MB-231 cancer cells; in contrast, fewer apoptotic cells were observed in non-cancerous HEK-293 samples, despite some DNA fragmentation occurring at a nanoparticle concentration of 150 µg/mL [140]. A very similar approach was used for biologically synthesized AuNPs obtained with the dried fruit extract of Tribulus terrestris, but in this case, the effects of AuNPs of two different sizes (7 and 55 nm diameters) were examined. The study confirmed that cultures of AGS cells treated with the larger AuNPs contained a lower number of apoptotic cells than cultures treated with the smaller AuNPs [124]. The cytotoxic effect of aqueous Peltophorum pterocarpum leaf extract-mediated AuNPs was tested on normal human endothelial cells (HUVEC, ECV 304). No inhibition of cell proliferation was measured, confirming the biocompatibility of these AuNPs in an in vitro system.
This is one of the few studies in which the nanoparticle-induced toxicity was also investigated in an in vivo model. Nanoparticle exposure did not induce any adverse clinical signs or weight changes in C57BL6/J mice. Compared to the control group of mice, no significant changes were detected in the serum levels of total glucose, urea nitrogen, transaminases or uric acid, with the exception of the triglyceride and cholesterol levels in the AuNP-treated group. Moreover, no major histopathological differences were observed between the treated and untreated groups [125]. Oftentimes, when a natural extract is utilized for nanoparticle production, AgNPs and AuNPs are concomitantly synthesized. It is generally accepted that AgNPs mostly show higher anticancer activity than AuNPs; however, this feature can depend enormously on the natural extract used for the nanoparticle synthesis, as well as on the tested cell lines. A few studies have attempted to compare the biological performance of green synthesized metal particles and their commercially available, chemically generated counterparts. Both AuNPs and AgNPs produced with the rhizome of Anemarrhena asphodeloides showed anticancer activity on several tested tumor cell lines, while neither of them affected the viability of normal pre-adipocyte cells. Furthermore, when the effect of the green synthesized AgNPs was compared to that of commercially available AgNPs, the green AgNPs showed higher cytotoxicity on cancer cells than the commercial ones. The reason behind this difference was not established, since ROS production upon the various treatments was not consistent [78]. In another study, no differences were observed between the efficacy of green synthesized (Artemisia turcomanica leaf extract) and commercial AgNPs on cancerous and non-cancerous cells, as both AgNPs induced apoptosis in cancer cells to the same extent [79]. A similar comparative study revealed that AgNPs synthesized with walnut green husk extract showed anticancer activity on MCF-7 cells without decreasing the viability of normal L929 cells, whereas commercial AgNPs decreased the viability of both cancerous and non-cancerous cells to the same degree [80]. Black tea extract-mediated AgNPs generated analogous results, as they exhibited lower cytotoxicity against normal human primary fibroblasts and high toxicity towards A2780 ovarian carcinoma cells and HCT-116 colorectal tumor cells. The components of the tea extract solution had no toxic activity on any of the tested cell lines [81]. These results suggest that green metal nanoparticles have the potential to perform better in anticancer tests than their chemically synthesized counterparts, and that by their utilization, a certain degree of cancer selectivity can also be achieved. Although the majority of the toxicity studies are based on in vitro experiments, which is fairly understandable, there are occasionally publications revealing the impacts of green synthesized nanoparticles on in vivo model systems. He et al. first performed in vitro experiments to screen the anticancer effects of Dimocarpus longan-mediated AgNPs on lung, pancreatic and prostate cancer cells. The AgNPs showed great inhibitory effects on H1299 lung cancer cells, but were less effective on VCaP prostate cancer and BxPC-3 pancreatic cancer cells. Treatment of H1299 cells with these green AgNPs induced apoptosis, with dose-dependent decreases in NF-κB transcriptional activity.
This finding is relevant because activated NF-κB is a key regulator of programmed cell death and is associated with lung cancer progression through the transcriptional regulation of responsive genes [225]. The impact of the AgNPs on lung cancer cells was linked to a suppression of Bcl-2 proteins, resulting in apoptotic cell death [76,226]. Following the in vitro examinations, the effect of the green AgNPs was investigated in vivo in mice carrying H1299 lung tumors. The AgNPs could inhibit tumor growth in mice, as significant differences in tumor size between the control and AgNP-treated groups were detected. These results indicate that green AgNPs could be effective candidates for the treatment of lung cancer in vivo, as a complement to classical chemotherapy [76]. In the case of Ficus religiosa-synthesized AgNPs, cancer cell viability decreased in a time- and AgNP dose-dependent manner in vitro. AgNPs synthesized with F. religiosa brought about cell death in A549 and Hep2 cells through the induction of apoptosis, by increased generation of ROS and decreased levels of antioxidants. Furthermore, both the extrinsic and the mitochondrial apoptotic pathways were activated in the AgNP-treated tumor cells. The subsequent in vivo studies performed on rats revealed significant increases in the serum levels of AST, ALT, LDH, TNF-α and IL-6 after oral administration of the F. religiosa-mediated AgNPs, and showed accumulation of silver in the liver, brain and lungs. However, the levels of these serum parameters reverted to normal, and the complete elimination of the AgNPs was also observed by the end of the washout period [84]. In recent years, hybrid/composite or core-shell bimetallic or trimetallic nanoparticle systems of gold and silver have also been developed, which shifted somewhat the focus of investigations. Green synthesized Ag-Au composites provide the opportunity to exploit the anticancer effects of both AgNPs and AuNPs and to fine-tune the biological activities of the obtained nanomaterials. In one of these studies, the anticancer effects of starch-mediated bimetallic Ag-AuNPs were investigated. The Ag-AuNPs were not toxic to human dermal fibroblasts, while they significantly decreased the viability of melanoma cells. In comparison, monometallic AuNPs synthesized with starch were not toxic to either fibroblasts or melanoma cells [126]. The molecular mechanisms behind the anticancer effect of Ag-AuNP composites synthesized with Trapa natans peel extract were investigated on p53 wild-type and p53 knockout cells. The Ag-AuNPs induced oxidative stress in cancer cells and caused apoptotic cell death in a ROS-mediated, p53-independent way, via mitochondrial damage and through the activation of caspase-3. It was emphasized that ROS production must be an important factor in the mechanism of action of this green Ag-AuNP composite, because apoptosis was attenuated upon decreasing ROS levels [127]. Obviously, besides plant-mediated nanoparticle synthesis methods, there are other green, cost-effective and rapid techniques utilizing fungi and bacteria for this purpose [227,228]. In cases such as bacteria-generated metal nanoparticles, thorough biological screening of the potential detrimental effects on living systems is mandatory. AgNPs synthesized by Escherichia fergusonii exhibited a toxic effect on MCF-7 breast cancer cells by inducing ROS generation, leading to cellular apoptosis. This study indicated that these bacteria-mediated nanoparticles could have antiproliferative effects as well [30].
Interestingly, human cervical cancer cells were highly sensitive to Pseudomonas aeruginosa-synthesized AgNPs, and their proliferation capacity decreased with increasing doses of AgNPs [7]. Another Pseudomonas species was also proven to be an adequate candidate for nanoparticle synthesis. Gopinath et al. investigated the cytotoxic effects of P. putida-generated AgNPs on HEp-2 cells, revealing that the AgNPs did not significantly affect the viability of these cells. It is important to mention that the AgNP concentration used on the human cells was lethal for the bacteria tested in parallel. These results indicate that the biogenic, P. putida-synthesized AgNPs are capable of displaying antibacterial activity without being harmful to tumor cells at this concentration [8]. The work of Senthil et al. led to a similar conclusion, in that the produced green (in this case plant-based, from fenugreek leaf extract) AgNPs were less detrimental to human HaCaT cells than to bacteria [82]. The cellular effects of Bacillus funiculus-mediated AgNPs were also investigated by assessing cell viability, metabolic activity and oxidative stress in MDA-MB-231 breast cancer cells. Dose-dependent cytotoxicity, activation of caspase-3 and generation of ROS were demonstrated for these AgNPs against MDA-MB-231 cells. The resulting apoptosis was further confirmed by detecting nuclear fragmentation [9]. AuNPs synthesized by the Paracoccus haeundaensis BC74171T strain were biologically inert on normal cells and showed slight toxicity on cancerous cells at the highest applied concentrations [14]. Cyanobacteria, such as Oscillatoria limnetica, were also utilized to produce metal nanoparticles, and the cytotoxic potential of the obtained nanomaterials was tested against human cancer cell lines. The cyanobacterium-synthesized AgNPs decreased cell viability more strongly in HCT-116 cells than in MCF-7 cells by inducing apoptosis, as reflected by the morphological changes in the tumor cells [10]. Since it is vital to ensure the biosafety of metal nanoparticles before their actual utilization, the authors quite rightly examined the hemolytic potential of the obtained nanoparticles on human erythrocytes. It was demonstrated that increasing AgNP concentrations could induce red blood cell lysis; however, the mode of action of the induced hemolysis was not revealed [10]. The application of fungi as reducing and stabilizing agents in biologically synthesized AgNPs is appealing due to the production of large quantities of proteins, the high yields, the easy handling and the low toxicity of the residues [229]. In vitro, AgNPs synthesized by Agaricus bisporus showed dose-dependent toxicity on MCF-7 human breast cancer cells. In in vivo experiments, it was also demonstrated that the combination of these AgNPs and gamma radiation could induce apoptosis in Ehrlich solid tumor cells in mice via a mechanism involving caspase-3 activation. In Ehrlich solid tumor cells, this treatment combination significantly decreased superoxide dismutase and catalase activities, as well as reduced glutathione levels, whereas it increased malondialdehyde and nitric oxide levels [12]. In another study, treatment of MDA-MB-231 breast cancer cells with Ganoderma neo-japonicum-mediated AgNPs reduced cell viability and induced membrane leakage in a dose-dependent manner. Tumor cells exposed to these nanoparticles showed increased amounts of ROS and triggered hydroxyl radical production.
In fact, the apoptotic effects of the AgNPs were confirmed by the activation of caspase-3 and nuclear DNA fragmentation [13].

Further Biomedical Applications of Green Synthesized AuNPs and AgNPs

In the last few decades, research on efficient metal nanoparticle-based medical approaches for drug delivery, regenerative medicine, imaging and biosensing has gathered impetus. Driven by this stimulus, alternative, mainly green synthetic methods for silver and gold nanoparticles intentionally destined for such specific purposes have received scientific attention. Ever since, several articles describing the carrier function of biologically synthesized nanoparticles, especially gold nanoparticles, have been published. In one study, protein-coated AuNPs synthesized by Tricholoma crassum were found to be promising candidates for gene delivery, since green fluorescent protein (GFP) was successfully carried into mouse sarcoma cancer cells using a plasmid DNA-AuNP complex. Moreover, these AuNPs showed low hemolytic activity towards human erythrocytes, which confirms their biocompatible nature. These results indicated that green nanoparticles could be considered as potential drug delivery platforms for cancer therapeutics [230]. Mukherjee et al. also demonstrated, using in vitro systems and an in vivo mouse model, that Peltophorum pterocarpum-synthesized AuNPs are suitable and effective anticancer drug carriers. They designed a biosynthesized AuNP-based drug delivery system in which these particles were conjugated with doxorubicin (Dox). In vitro, the Dox-conjugated AuNPs showed high antiproliferative activity against A549 lung cancer and B16F10 melanoma cells. Similar results were obtained in vivo, as a significant reduction in tumor growth was observed compared to the untreated and the unconjugated-AuNP-treated groups. Significant amounts of the conjugated AuNPs accumulated in the spleen 2 h after the treatment; however, 24 h post-injection, the tumors showed a strong tendency to accumulate high levels of the Dox-conjugated AuNPs. The biodistribution of the drug-conjugated AuNPs reflected the selectivity of this drug-carrier system [125]. Similarly, Patra et al. investigated a Dox-conjugated, nanoparticle-based drug delivery system, utilizing Butea monosperma-synthesized AuNPs and AgNPs. In vitro, the drug-conjugated AuNPs and AgNPs showed significant, dose-dependent cell proliferation inhibitory activity towards B16F10 cells compared to free doxorubicin applied at the same concentration. The enhanced anticancer effects of the Dox-conjugated nanoparticles were verified by apoptosis detection [231]. Gellan gum, secreted by bacteria, was used as a reducing agent in the biosynthesis of AuNPs, followed by nanoparticle conjugation with doxorubicin. The effects of the drug-loaded AuNPs were determined on two glioma cell lines. The cytotoxicity of free doxorubicin and of the Dox-conjugated AuNPs gradually increased with increasing concentrations; however, the toxicity of the Dox-loaded AuNPs was more prominent and exceeded that of free doxorubicin, indicating the strong carrier potential of the Dox-loaded AuNPs. Microscopic experiments demonstrated significant morphological changes and apoptotic cell death triggered by the Dox-conjugated AuNPs in LN-18 and LN-229 human glioma cell lines [232]. The toxic effect of resveratrol-conjugated, biosynthesized AuNPs (generated by a Delftia sp. strain) was tested on A549 human lung cancer and MRC-5 normal fibroblast cells.
The resveratrol-conjugated AuNPs showed significantly higher cytotoxic activity towards A549 cells than resveratrol alone; however, no cytotoxicity was observed on the non-cancerous MRC-5 fibroblasts [233]. Punica granatum-synthesized AuNPs were conjugated with the chemotherapeutic agent 5-fluorouracil (5-Fu). The problem in using this drug is its toxicity to the bone marrow and the gastrointestinal tract. To tackle this problem, Ganeshkumar et al. developed a method to biosynthesize AuNPs functionalized with folic acid (FA) for targeted 5-Fu delivery. The rationale for functionalizing nanoparticles with FA is that folic acid receptors on the cell membrane can be targeted for tumor-selective drug delivery. Several liver and breast cancer cell lines are known to overexpress folate receptors; thus, the in vitro cytotoxicity of this drug delivery system was investigated on MCF-7 breast cancer cells. A higher cytotoxic effect was measured for the 5-Fu@nanogold-FA treatment compared to 5-Fu alone or 5-Fu@AuNPs at the same concentrations. The authors found that these drug-conjugated, FA-functionalized AuNPs could induce the expression of both p53 and p21 in a concentration-dependent manner in MCF-7 cells. These findings suggest that the JNK/ERK signaling pathway could be involved in the p21WAF1-mediated G1-phase cell cycle arrest and growth inhibition in 5-Fu@nanogold-FA-treated breast cancer cells [234]. A further study by the same authors dealt with a similar targeted drug delivery system, in which pullulan-stabilized AuNPs were again coupled with 5-Fu and folic acid (FA). In vitro cytotoxicity assays on HepG2 hepatocarcinoma cells revealed that 5-Fu@AuNPs-FA exhibit higher toxic activity than 5-Fu alone or 5-Fu@AuNPs, which again pointed to the conclusion that 5-Fu@AuNPs-FA could be a promising alternative carrier for targeting liver cancer [235]. Yallappa et al. utilized AuNPs synthesized with Mappia foetida to examine their applicability in targeted cancer therapy. They described that doxorubicin-loaded AuNPs conjugated with N-hydroxysuccinimide (NHS)-activated folic acid (FA) showed low toxicity against Vero normal epithelial cells and high cytotoxic activity against human cancer cells (MDA-MB-231, HeLa, SiHa, Hep-G2) [236]. For diagnostic purposes, biocompatible AuNPs have to be designed and tested. Although green synthesized nanoparticles could hold great potential in diagnostics, their biocompatibility is strongly dependent on the natural extract applied during synthesis. Green magnetite-gold nanohybrids (Fe3O4/Au) produced with grape seed proanthocyanidin can be suitable as contrast agents for MRI and CT imaging. The magnetite part provides superparamagnetism for MRI, while the gold part of the hybrid provides high X-ray contrast in CT. These nanocomposites are biocompatible and suitable for labeling and imaging stem cells, since they can be internalized and accumulated in the cytoplasm of these cells [237]. Cinnamon-generated AuNPs were also shown to be suitable for in vitro and in vivo imaging. These nanoparticles are not only biocompatible, but their colloid solution is pure enough for in vivo applications. Cinnamon-AuNPs are capable of labeling cancer cells in vitro and can be detected by photoacoustic methods; furthermore, with the help of these green synthesized AuNPs, circulating tumor cells can potentially be detected in vivo as well.
Moreover, biodistribution studies revealed that the cinnamon-AuNPs accumulated mostly in the lungs, suggesting their use as contrast agents targeting the lung [238]. Fluorescently labeled AuNPs prepared with Olax scandens leaf extract were designed for both therapeutic and diagnostic purposes. The phytochemicals of the Olax scandens leaf confer anti-cancer properties on the as-prepared AuNPs and, owing to the fluorescent molecules provided by the green extract, these fluorescent nanoparticles enable the detection of cancer cells [120]. Another potential application of green synthesized AgNPs is regenerative medicine. Nanoparticles are mostly applied in such studies because of their wound healing-inducing activity. AgNPs synthesized with a Sanghuangporus sanghuang polysaccharide and combined with chitosan into a porous, sponge-structured matrix enhanced wound healing by inducing wound contraction and internal tissue regeneration in the damaged skin of animals, and disinfected the skin surface by inhibiting the growth of Escherichia coli and Staphylococcus aureus [239]. Green AuNPs synthesized with Coleus forskohlii root extract could enhance wound closure, suppress inflammation and induce the re-epithelization of excision wounds in Wistar rats [240]. Green synthesized metal nanoparticles are also potential biosensor candidates. For this purpose, the shift of the surface plasmon resonance (SPR) peak in the spectra of the green synthesized nanoparticles is usually followed, which can also vary depending on the natural extract applied during synthesis [241,242]. For example, AgNPs synthesized with neem extract exhibit a strong SPR peak, while those obtained using guava, mint or aloe leaf extracts result in a lower SPR peak. A study reported the capacity of green synthesized silver nanoparticles to detect harmful molecules, such as different MCZ pesticides in water samples, based on SPR changes. It was found that MCZ pesticides interact with AgNPs, and after UV-visible illumination of the samples, the MCZ pesticides can even be degraded and aggregated via the photocatalytic action of the AgNPs [243]. Gold and silver nanoparticles are also sensitive materials for the detection of pollutants and heavy metals in environmental samples [244,245]. As an example, an AgNP colloid synthesized with onion extract was used in a highly sensitive and rapid colorimetric assay for mercury ion detection, based on the localized surface plasmon resonance [246]. Detection of mercury ions over wide pH ranges was also possible with green AgNPs synthesized with Citrus lemon fruit extract, supporting the applicability of green synthesized noble metal nanoparticles as biosensors [247].

Concluding Remarks

Based on the numerous experimental data accumulated in the literature and summarized in this review, it is evident that the field of green synthesized metal nanoparticles is expanding continuously, and every new prospect emerging on the horizon offers the possibility of finding other, more innovative ways and means to produce silver or gold nanoparticles with the exact properties needed for a specific purpose. Since the biological entities potentially applicable for green synthesis are practically endless, research has to continue to prepare, test and experiment with nanoparticles (synthesized in an eco-friendly approach, using green and renewable materials directly from nature) which exhibit unique properties and behave in the desired manner upon encountering living systems, such as human or fungal cells, bacteria or even viruses.
Nevertheless, careful consideration in the selection of the green material for nanomaterial production and, more importantly, a comprehensive screening protocol for these green particles are obligatory to predict the behavior and performance of the NPs on living cells. First of all, the chemical composition of the applicable green material should be considered to estimate which biomolecules have the capacity to act as reducing or capping agents and which of them can potentially be adsorbed on the nanoparticle surface, creating a bioactive coating that interacts with living cells upon action. The examples itemized above in this review clearly show that the green materials employed for the synthesis will define, or at least fine-tune, the chemical and physical properties and surface chemistry, and thereby the biological activity, of the obtained nanoparticles. After the green material has been selected, and its chemical composition and active ingredients have been considered, all other chemicals required for the nanoparticle synthesis should be attentively picked, to preferentially utilize biocompatible substances, to avoid toxic chemicals and to leave only non-irritating, innocuous waste materials behind. Once the nanomaterial is obtained, a meticulous examination of its structure and physicochemical properties has to be completed to reveal the average size, morphology, surface chemistry and other critical factors. This step is just as important as the synthesis approach, since these findings either qualify the nanomaterials for biological tests or advise further optimization of the preparation protocol, in case nanoparticles with undesired properties are formed. Finally, a comprehensive biological screening has to be carried out by inspecting the toxicity of the green nanoparticles on various human cell types, on Gram-negative and Gram-positive bacteria and on a number of fungal strains; where possible, the antiviral propensity should be assessed as well. Depending on the original purpose of the nanoparticle synthesis, each of these biological characterizations should be broadened by including further cell types and strains, or even in vivo studies, and by extending the technical repertoire with additional assays. We cannot stress enough the relevance of performing the outlined characterization route; otherwise, the chemical and biological profile of the obtained green nanomaterial cannot be confidently trusted, and adverse effects may be observed upon its application.
Shock impingement on a transitional hypersonic high-enthalpy boundary layer

(Dated: December 15, 2022)

The dynamics of a shock wave impinging on a transitional high-enthalpy boundary layer out of thermochemical equilibrium is investigated for the first time by means of a direct numerical simulation. The freestream Mach number is equal to 9 and the oblique shock impinges on a cooled flat-plate boundary layer with an angle of 10°, generating a reversal flow region. In conjunction with freestream disturbances, the shock impingement triggers a transition to a fully turbulent regime shortly downstream of the interaction region. Accordingly, wall properties emphasize the presence of a laminar region, a recirculation bubble, a transitional zone and a fully turbulent region. In the entire transitional process, the recognized mechanisms are representative of the second-mode instability combined with stationary streaky structures, their destabilization being eventually promoted by the shock impingement. The breakdown to turbulence is characterized by an anomalous increase of skin friction and wall heat flux, due to the particular shock pattern. At the considered thermodynamic conditions the flow is found to be in a state of thermal non-equilibrium throughout, with non-equilibrium effects enhanced and sustained by the shock-induced laminar/turbulent transition, while chemical activity is almost negligible due to wall cooling. In the interaction region, relaxation towards thermal equilibrium is delayed and the fluctuating values of the rototranslational and the vibrational temperatures strongly differ, despite the strong wall cooling.
The fully turbulent portion exhibits evolutions of the streamwise velocity, Reynolds stresses and turbulent Mach number in good accordance with previous results for highly-compressible, cooled-wall boundary layers in thermal nonequilibrium, with turbulent motions sustaining the thermal nonequilibrium. Nevertheless, the vibrational energy is found to contribute little to the total wall heat flux.

I. INTRODUCTION

Shock-wave boundary layer interactions (SWBLI) have been extensively investigated over the last decades due to their importance for both aeronautical and aerospace applications. The impingement of a shock wave on a fully developed boundary layer may indeed occur for several reasons, and in both internal and external flow configurations. For instance, a shock wave interacting with a boundary layer can be driven by the complex geometry of the body itself (e.g., ramps, wedges), or it can impinge on the boundary layer after being generated by an external body. The latter can be the case for supersonic intakes or multi-body launch vehicles, and it is the focus of the present work. The physics underlying such a configuration is complex and strongly multiscale. The dynamics of high-speed compressible turbulent boundary layers becomes tightly coupled with strong gradients of the thermodynamic properties, leading to an increase of the thermomechanical loads (see, e.g., [1] for an overview of the relevant physical processes). In hypersonic and high-enthalpy regimes, thermochemical non-equilibrium effects (i.e., chemical reactions and vibrational excitation) must be taken into account as well [2], further complicating the picture. Given the coexistence of several critical features, computational approaches based on averaged Navier-Stokes equations are unable to faithfully predict the flow field behavior, hence the necessity of performing high-fidelity, space- and time-resolved simulations. When an incident shock impinges on a fully developed boundary layer, the latter experiences a strong adverse pressure gradient. If the shock is strong enough, a recirculation bubble occurs and the flow separates. Several additional flow features are generated by this interaction, depending on the nature of the incoming boundary layer. Laminar boundary layers have the advantage of a lower drag but are more sensitive to separation under adverse pressure gradients, resulting in wider recirculation bubbles with respect to turbulent flows. In such configurations, investigations have been carried out concerning shock-induced instabilities [3,4] as well as shock-induced transition to turbulence [5], which can be obtained when the shock angle is sufficiently high. The interaction between an oblique shock and a fully turbulent boundary layer has been the object of intensive research efforts [6-11]. One of the most remarkable results of the interaction is the amplification of turbulence downstream of the incident shock and the emergence of oscillatory motions. For strong interactions, the ensemble of the separation bubble and the shock system is subjected to unsteady motions that spread over a wide range of characteristic frequencies. For instance, an oscillatory behavior of the reflected shock has been observed in high-frequency ranges [12], representative of the most energetic turbulent scales of the incoming boundary layer. Kelvin-Helmholtz waves are also observed, destabilizing the resulting shear layer and leading to vortex shedding.
Another oscillatory motion is the so-called bubble "breathing", a low-frequency instability corresponding to the enlargement and shrinkage of the bubble, observed both numerically and experimentally [1]. Studies on the topic are numerous [13-15], although no clear consensus on the specific source of this flow unsteadiness (e.g., upstream boundary layer fluctuations, shear-layer entrainment mechanism, intermittency) has been found yet. Several parameters can affect the overall SWBLI dynamics, among which it is noteworthy to mention the effect of non-adiabatic walls. It has indeed been found that wall cooling tends to reduce the interaction scales and the bubble size, while increasing the pressure fluctuations [16,17]. More recently, attention has been paid to shock-wave/transitional boundary layer interactions [18-22]. These studies are meant to mimic more realistic configurations in which the boundary layer is not completely unperturbed, but may be subjected to random disturbances deriving from the external flow or naturally generated in wind tunnel facilities. Of particular interest is the high-enthalpy hypersonic regime (encountered in re-entry and low-altitude hypersonic flight problems), in which the assumption of a calorically perfect gas is no longer valid and out-of-equilibrium processes can be triggered at the high temperatures induced by the intense wall friction and the strong shock waves. In such configurations, chemical dissociation and vibrational relaxation phenomena interplay with the SWBLI physics. High-enthalpy effects on turbulent flows have gained renewed attention for smooth boundary layer configurations [23-27], where their influence was found to be often important depending on the thermodynamic operating regime. On the other hand, most of the research on shock-wave boundary layer interactions is limited to calorically-perfect-gas assumptions and low-enthalpy conditions. To the authors' knowledge, the presence of high-enthalpy effects in SWBLI has only been considered by Volpiani [28] and Passiatore et al. [29], with the purpose in both cases of assessing the capabilities of numerical methods in robustly handling such severe configurations. The main objective of this study is therefore to extend the knowledge about high-enthalpy wall-bounded turbulent flows to configurations involving the impingement of shock waves. For that purpose, we perform for the first time a high-fidelity numerical simulation of a shock-wave/hypersonic boundary layer interaction in the presence of both chemical and thermal non-equilibrium effects. The boundary layer considered in this work is excited by means of superposed freestream disturbances and is found to be in a transitional state at the location of shock impingement. The paper is organized as follows. Section II describes the governing equations and the thermochemical models used for the computation. The numerical strategy adopted and the problem setup are reported in sections III and IV, respectively. Section V presents the main results, providing a general overview of the flow dynamics, particular insights on the interaction region, inspections of the turbulent statistics and a detailed analysis of the thermochemical flow field. Concluding remarks are then provided in section VI.

II. GOVERNING EQUATIONS

The fluid under investigation is air at high temperature in thermochemical non-equilibrium, modeled as a five-species mixture of N2, O2, NO, O and N.
Such flows are therefore governed by the compressible Navier-Stokes equations for multicomponent chemically-reacting and thermally-relaxing gases, which read:

$$\frac{\partial \rho}{\partial t}+\frac{\partial (\rho u_j)}{\partial x_j}=0, \tag{1}$$

$$\frac{\partial (\rho u_i)}{\partial t}+\frac{\partial (\rho u_i u_j)}{\partial x_j}=-\frac{\partial p}{\partial x_i}+\frac{\partial \tau_{ij}}{\partial x_j}, \tag{2}$$

$$\frac{\partial (\rho E)}{\partial t}+\frac{\partial \left[(\rho E+p)u_j\right]}{\partial x_j}=\frac{\partial}{\partial x_j}\left(u_i\tau_{ij}-q^{TR}_j-q^{V}_j-\sum_{n=1}^{NS}\rho_n u^{D}_{nj}h_n\right), \tag{3}$$

$$\frac{\partial \rho_n}{\partial t}+\frac{\partial (\rho_n u_j)}{\partial x_j}=-\frac{\partial (\rho_n u^{D}_{nj})}{\partial x_j}+\dot{\omega}_n, \tag{4}$$

$$\frac{\partial (\rho e_V)}{\partial t}+\frac{\partial (\rho e_V u_j)}{\partial x_j}=-\frac{\partial}{\partial x_j}\left(q^{V}_j+\sum_{m=1}^{NM}\rho_m u^{D}_{mj}e_{V_m}\right)+Q_{TV}+\sum_{m=1}^{NM}\dot{\omega}_m e_{V_m}. \tag{5}$$

In the preceding formulation, ρ is the mixture density, t the time coordinate, $x_j$ the space coordinate in the j-th direction of a Cartesian coordinate system, with $u_j$ the velocity vector component in the same direction, p is the pressure, $\delta_{ij}$ the Kronecker symbol and $\tau_{ij}$ the viscous stress tensor, modeled as

$$\tau_{ij}=\mu\left(\frac{\partial u_i}{\partial x_j}+\frac{\partial u_j}{\partial x_i}-\frac{2}{3}\delta_{ij}\frac{\partial u_k}{\partial x_k}\right),$$

with µ the mixture dynamic viscosity. In equation (3), $E = e + \frac{1}{2}u_i u_i$ is the specific total energy (with e the mixture internal energy), and $q^{TR}_j$ and $q^{V}_j$ are the roto-translational and vibrational contributions to the heat flux, respectively; $u^{D}_{nj}$ denotes the diffusion velocity and $h_n$ the specific enthalpy of the n-th species. In the species conservation equations (4), $\rho_n = \rho Y_n$ represents the n-th species partial density ($Y_n$ being the mass fraction) and $\dot{\omega}_n$ the rate of production of the n-th species. The sum of the partial densities is equal to the mixture density, $\rho = \sum_{n=1}^{NS}\rho_n$, NS being the total number of species. To ensure total mass conservation, the mixture density and NS−1 species conservation equations are solved, while the density of the NS-th species is computed as $\rho_{NS} = \rho - \sum_{n=1}^{NS-1}\rho_n$. We set this species to molecular nitrogen, it being the most abundant one throughout the computational domain. As for equation (5), $e_V = \sum_{m=1}^{NM} Y_m e_{V_m}$ represents the mixture vibrational energy, with $e_{V_m}$ the vibrational energy of the m-th molecule and NM the total number of molecules. In the same equation, $Q_{TV} = \sum_{m=1}^{NM} Q_{TV_m}$ represents the energy exchange between vibrational and translational modes (due to molecular collisions and linked to energy relaxation phenomena) and $\sum_{m=1}^{NM}\dot{\omega}_m e_{V_m}$ the vibrational energy lost or gained due to molecular depletion or production. Each species is assumed to behave as a thermally-perfect gas; Dalton's pressure mixing law then leads to the thermal equation of state:

$$p=\sum_{n=1}^{NS}\rho_n R_n T=\rho T\sum_{n=1}^{NS}Y_n\frac{R}{M_n}, \tag{6}$$

$R_n$ and $M_n$ being the gas constant and molecular weight of the n-th species, respectively, and R = 8.314 J/(mol K) the universal gas constant. The thermodynamic properties of high-T air species are computed considering the contributions of translational, rotational and vibrational (TRV) modes; specifically, the internal energy reads:

$$e=\sum_{n=1}^{NS}Y_n\left[h^0_{f,n}+\left(c^{T}_{p,n}+c^{R}_{p,n}\right)\left(T-T_{ref}\right)-R_n T\right]+\sum_{m=1}^{NM}Y_m e_{V_m}. \tag{7}$$

Here, $h^0_{f,n}$ is the n-th species enthalpy of formation at the reference temperature ($T_{ref}$ = 298.15 K), and $c^{T}_{p,n}$ and $c^{R}_{p,n}$ are the translational and rotational contributions to the isobaric heat capacity of the n-th species, computed as

$$c^{T}_{p,n}=\frac{5}{2}R_n, \tag{8}$$

$$c^{R}_{p,n}=R_n \ \text{for diatomic species}, \qquad c^{R}_{p,n}=0 \ \text{for monoatomic species}, \tag{9}$$

and $e_{V_n}$ is the vibrational energy of species n, given by

$$e_{V_n}=\frac{R_n T_{V,n}}{\exp\left(T_{V,n}/T_V\right)-1}, \tag{10}$$

with $T_{V,n}$ the characteristic vibrational temperature of each molecule (3393 K, 2273 K and 2739 K for N2, O2 and NO, respectively). After the numerical integration of the conservation equations, the roto-translational temperature T is computed directly from the specific internal energy (devoid of the vibrational contribution), whereas an iterative Newton-Raphson method is used to compute $T_V$ from $e_V = \sum_{m=1}^{NM}Y_m e_{V_m}$. Both heat fluxes are modeled by means of Fourier's law, $q^{TR}_j = -\lambda^{TR}\,\partial T/\partial x_j$ and $q^{V}_j = -\lambda^{V}\,\partial T_V/\partial x_j$, $\lambda^{TR}$ and $\lambda^{V}$ being the roto-translational and vibrational thermal conductivities, respectively. To close the system, we use the two-temperature (2T) model of Park [30] to take into account the simultaneous presence of thermal and chemical non-equilibrium in the computation of $\dot{\omega}_n$ and $Q_{TV}$.
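As a small numerical illustration of the two-temperature bookkeeping described above, the sketch below evaluates the harmonic-oscillator vibrational energy of equation (10) and inverts the mixture relation e_V = Σ_m Y_m e_V,m for T_V with a Newton-Raphson iteration, as done after each integration step. The composition and tolerances are illustrative assumptions; this is not the authors' solver code.

```python
import numpy as np

# Characteristic vibrational temperatures [K] and molar masses [kg/mol]
# for the three molecules of the 5-species air model (values from the text).
THETA_V = {"N2": 3393.0, "O2": 2273.0, "NO": 2739.0}
M = {"N2": 28.0134e-3, "O2": 31.9988e-3, "NO": 30.0061e-3}
R_UNIV = 8.314  # J/(mol K)

def e_vib(species: str, Tv: float) -> float:
    """Harmonic-oscillator vibrational energy per unit mass, equation (10)."""
    Rn = R_UNIV / M[species]   # specific gas constant [J/(kg K)]
    theta = THETA_V[species]
    return Rn * theta / np.expm1(theta / Tv)

def mixture_ev(Y: dict, Tv: float) -> float:
    """Mixture vibrational energy e_V = sum_m Y_m * e_V,m."""
    return sum(Y[s] * e_vib(s, Tv) for s in THETA_V)

def invert_Tv(Y: dict, ev_target: float, Tv0: float = 2000.0,
              tol: float = 1e-8, itmax: int = 50) -> float:
    """Newton-Raphson inversion of e_V(T_V) = ev_target for T_V."""
    Tv = Tv0
    for _ in range(itmax):
        f = mixture_ev(Y, Tv) - ev_target
        # Derivative d(e_V)/d(T_V) approximated by central finite differences.
        dT = 1e-3 * Tv
        df = (mixture_ev(Y, Tv + dT) - mixture_ev(Y, Tv - dT)) / (2.0 * dT)
        step = f / df
        Tv -= step
        if abs(step) < tol * Tv:
            break
    return Tv

if __name__ == "__main__":
    Y = {"N2": 0.767, "O2": 0.233, "NO": 0.0}  # hypothetical composition
    ev = mixture_ev(Y, 2500.0)
    print(f"e_V(2500 K) = {ev:.1f} J/kg, recovered T_V = {invert_Tv(Y, ev):.2f} K")
```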
Specifically, the five species interact with each other through a reaction mechanism consisting of five reversible chemical steps [31]:

R1 : N2 + M ⇌ 2N + M
R2 : O2 + M ⇌ 2O + M
R3 : NO + M ⇌ N + O + M
R4 : N2 + O ⇌ NO + N
R5 : NO + O ⇌ O2 + N

where M is the third body (any of the five species considered). Dissociation and recombination processes are described by reactions R1, R2 and R3, whereas the shuffle reactions R4 and R5 represent rearrangement processes. The mass rate of production of the n-th species is governed by the law of mass action:

$$\dot{\omega}_n = M_n \sum_{r=1}^{NR} \left(\nu''_{nr} - \nu'_{nr}\right)\left[k_{f,r} \prod_{s=1}^{NS} \left(\frac{\rho_s}{M_s}\right)^{\nu'_{sr}} - k_{b,r} \prod_{s=1}^{NS} \left(\frac{\rho_s}{M_s}\right)^{\nu''_{sr}}\right], \qquad (11)$$

where ν′_nr and ν″_nr are the stoichiometric coefficients for reactants and products in the r-th reaction for the n-th species, respectively, and NR is the total number of reactions. Furthermore, k_f,r and k_b,r denote the forward and backward rates of the r-th reaction, modeled by means of Arrhenius' law. The coupling between chemical and thermal non-equilibrium is taken into account by means of a modification of the temperature values used for computing the reaction rates. Indeed, a geometric-averaged temperature is considered for the dissociation reactions R1, R2 and R3 in (11), computed as T_avg = T^q T_V^(1−q) with q = 0.7 [30]. Lastly, the vibrational-translational energy exchange is computed as

$$Q_{TV,m} = \rho_m \,\frac{e_{Vm}(T) - e_{Vm}(T_V)}{t_m},$$

where t_m is the corresponding relaxation time, evaluated by means of the expression [32]

$$t_m = \frac{\sum_{n=1}^{NS} X_n}{\sum_{n=1}^{NS} X_n / t_{mn}},$$

X_n being the mole fraction of species n. Here, t_mn is the relaxation time of the m-th molecule with respect to the n-th species, computed as the sum of two contributions. The first term writes

$$t^{MW}_{mn} = \frac{p_{atm}}{p}\,\exp\left[a_{mn}\left(T^{-1/3} - b_{mn}\right) - 18.42\right],$$

where p_atm = 101 325 Pa and a_mn and b_mn are coefficients reported in [33]. Since this expression tends to underestimate the experimental data at temperatures above 5000 K, a high-temperature correction was proposed by Park [34]:

$$t^{P}_{mn} = \frac{1}{\sigma_v\,\bar{c}\,n_{num}}, \qquad \bar{c} = \sqrt{\frac{8RT}{\pi\,\phi_{mn}}},$$

where φ_mn = M_m M_n/(M_m + M_n), σ_v = σ (50 000 K/T)² is a limiting collision cross-section (with σ a reference value given in [34]) and n_num the number density. As for the computation of the transport properties, the pure species' viscosities and thermal conductivities are computed using curve fits by Blottner [35] and Eucken's relations [36], respectively. The corresponding mixture properties are evaluated by means of Wilke's mixing rules [37]. Mass diffusion is modeled by means of Fick's law:

$$\rho_n u^{D}_{nj} = -\rho D_n \frac{M_n}{M}\frac{\partial X_n}{\partial x_j} + Y_n \sum_{s=1}^{NS} \rho D_s \frac{M_s}{M}\frac{\partial X_s}{\partial x_j},$$

where M is the mixture molecular weight. Here, the first term on the r.h.s. represents the effective diffusion velocity and the second one is a mass corrector term that should be taken into account in order to satisfy the continuity equation when dealing with non-constant species diffusion coefficients [38]. Specifically, D_n is an equivalent diffusion coefficient of species n into the mixture, computed following Hirschfelder's approximation [36], starting from the binary diffusion coefficients, which are curve-fitted in [39].

III. NUMERICAL METHODOLOGY

The numerical solver described in Sciacovelli et al. [40] is used for the present computation. The Navier-Stokes equations are integrated numerically by using a high-order finite-difference scheme. The convective fluxes are discretized by means of central tenth-order differences, supplemented with a high-order adaptive nonlinear artificial dissipation. The latter consists of a blend of a ninth-order-accurate dissipation term based on tenth-order derivatives of the conservative variables, used to damp grid-to-grid oscillations, along with a low-order shock-capturing term. This term is equipped with a highly-selective pressure-based sensor. For the vibrational energy equation, a sensor based on second-order derivatives of the vibrational temperature is used. Time integration is carried out using a low-storage third-order Runge-Kutta scheme. The numerical strategy has been validated for thermochemical non-equilibrium flows, including laminar SWBLI configurations [29].
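As an aside on the thermochemical source terms above, the relaxation-time model is easy to script. The sketch below is illustrative only: the coefficients a_mn and b_mn must be taken from [33], the reference cross-section σ from [34] (the value used here is an assumption), and the example N2-N2 coefficients are standard Millikan-White values that should be verified against the cited reference.

```python
import numpy as np

R = 8.314  # J/(mol K)

def t_millikan_white(T, p, a_mn, b_mn):
    """Millikan-White relaxation time [s]; p in Pa, coefficients from [33]."""
    p_atm = 101325.0
    return (p_atm / p) * np.exp(a_mn * (T**(-1.0 / 3.0) - b_mn) - 18.42)

def t_park(T, n_num, M_m, M_n, sigma_ref=1e-21):
    """Park's high-temperature correction [s].
    n_num: number density [1/m^3]; sigma_ref [m^2] assumed here, see [34]."""
    phi = M_m * M_n / (M_m + M_n)                  # reduced molar mass [kg/mol]
    c_bar = np.sqrt(8.0 * R * T / (np.pi * phi))   # mean relative speed [m/s]
    sigma_v = sigma_ref * (50000.0 / T) ** 2       # limiting cross-section [m^2]
    return 1.0 / (sigma_v * c_bar * n_num)

def T_avg(T, Tv, q=0.7):
    """Geometric-average temperature for the dissociation rates (Park 2T)."""
    return T**q * Tv**(1.0 - q)

# Example: N2-N2 relaxation at 6000 K and 10 kPa (coefficients to be checked)
t = t_millikan_white(6000.0, 1e4, a_mn=221.0, b_mn=0.029) \
    + t_park(6000.0, n_num=1e23, M_m=28e-3, M_n=28e-3)
print(f"t_N2-N2 ~ {t:.3e} s")
```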
IV. PROBLEM SETUP

The configuration under investigation, displayed in figure 1, consists of a shock wave that impinges on a thermally and chemically out-of-equilibrium flat-plate boundary layer. We denote in the following by ϑ and β the deflection angle and the shock angle, respectively. The current setup stems from the one of Sandham et al. [19], in which inflow freestream perturbations were applied to a M∞ = 6 laminar perfect-gas boundary layer; the original case, presented in [40], is extended here to thermochemical non-equilibrium. The inflow plane of the computational domain and the ideal impingement station are located at x₀ = 0.04 m and x_sh = 0.7 m from the leading edge, respectively. The wall temperature is fixed equal to 2500 K for both the translational and vibrational temperatures, and non-catalytic conditions are applied. Characteristic outflow boundary conditions are imposed at the top and right boundaries, whereas periodicity is enforced in the spanwise direction. The extent of the computational domain is (L_x × L_y × L_z)/δ*_in = 800 × 80 × 60, δ*_in = 1.77 × 10⁻³ m being the displacement thickness at the inlet, defined in equation (20). As for the discretization, a total number of N_x × N_y × N_z = 6528 × 402 × 720 grid points is used, with constant grid size in the streamwise and spanwise directions and a constant grid stretching of 1% in the wall-normal direction, the height of the first cell away from the wall being Δy_w = 2.5 × 10⁻⁵ m. Unless otherwise stated, in the following we will make use of a dimensionless streamwise coordinate computed as x̄ = (x − x_sh)/δ*_in. Figure 2 reports the pressure and temperature-difference (ΔT = T − T_V) isocontours of the base flow used to initialize the three-dimensional computation. The adverse pressure gradient induced by the incident shock generates a recirculation bubble, marked with a white line in the top figure. Upstream of the separation bubble, a series of compression waves occur, which then coalesce into the separation shock; the latter interacts with the incident shock, which penetrates the separated flow. Downstream of the separation bubble, a reattachment shock is generated, which readjusts the previously deflected flow. Globally, the characteristic features of SWBLI are not altered by high-enthalpy effects; on the other hand, such a complex dynamics strongly influences the thermochemical activity. Coherently with the inlet temperature profiles, the amount of thermal non-equilibrium before the bubble is extremely high while chemical activity is essentially negligible. The rise of the temperatures and pressure in the separation zone enhances chemical dissociation, whereas the gap between the two temperatures is reduced, moving towards a quasi thermally-equilibrated state right after the recirculation bubble. The ΔT = 0 isoline, dividing the thermally under- and over-excited regions, shows that the flow is under-excited everywhere except in the freestream pre-shock region and in the recirculation bubble, where a slight vibrational over-excitation is observed. A comparison of the configurations with and without shock impingement reveals that, in the latter case, the flow remains in a state of stronger thermal non-equilibrium and quasi-frozen chemical activity throughout the entire boundary layer. Therefore, the pressure rise caused by the incident shock is responsible for a reduction of the amount of thermal non-equilibrium and an increase of the chemical activity.
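A concrete way to read the grid specification above: with a constant 1% stretching, the wall-normal coordinates form a geometric progression. The following short sketch (purely illustrative, not the actual grid generator) builds such a grid from the stated first-cell height and point count.

```python
import numpy as np

def wall_normal_grid(dy_wall=2.5e-5, ratio=1.01, n_points=402):
    """Geometric progression: dy_j = dy_wall * ratio**j, with y_0 = 0 at the wall."""
    dy = dy_wall * ratio ** np.arange(n_points - 1)
    return np.concatenate(([0.0], np.cumsum(dy)))

y = wall_normal_grid()
print(f"first spacing {y[1]:.2e} m, last spacing {y[-1] - y[-2]:.2e} m, "
      f"height {y[-1]:.3f} m")
```

With the stated parameters, the resulting height is of the same order as the domain height L_y = 80 δ*_in ≈ 0.142 m; any residual difference would simply indicate that the actual stretching law deviates slightly from a pure geometric progression, so the sketch should be read as a plausibility check rather than a reconstruction.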
Additional details about the present base flow can be found in Passiatore et al. [29]. Laminar-to-turbulent transition for the three-dimensional simulation is favoured by superimposing disturbances on the described base flow. Following [19], the density self-similar profile is perturbed as

$$\rho'(y, z, t) = a\, g(y) \sum_{j=1}^{N_J} \sum_{k=1}^{N_K} \cos\!\left(\frac{2\pi j z}{L_z} + \Phi_j\right) \sin\!\left(2\pi f_k t + \Psi_k\right). \qquad (18)$$

Here, a = 5 × 10⁻⁴ is the amplitude of the perturbation, g(y) is a function that damps the disturbances near the wall, and Φ_j and Ψ_k are phases corresponding to random numbers between 0 and 2π, with N_J = 16 and N_K = 20. The dimensionless frequency f_k is set equal to 0.02k. The angulation of the separation shock is such that it impacts the reattachment shock only after the interaction with the incident one, differently from the base flow. Table I reports some boundary layer properties at the selected positions. Throughout the paper, the superscript + denotes normalization with respect to the viscous length scale ℓ_v = µ_w/(ρ_w u_τ), u_τ = √(τ_w/ρ_w) being the wall friction velocity. The boundary layer displacement thickness, momentum thickness and shape factor are defined as

$$\delta^* = \int_0^{\delta}\left(1 - \frac{\rho u}{\rho_\delta u_\delta}\right)\mathrm{d}y, \qquad \theta = \int_0^{\delta}\frac{\rho u}{\rho_\delta u_\delta}\left(1 - \frac{u}{u_\delta}\right)\mathrm{d}y, \qquad H = \frac{\delta^*}{\theta}, \qquad (20)$$

where the subscript δ denotes variables computed at the edge of the boundary layer. The mesh spacings in wall units, reported in the table, show that DNS-like resolution is achieved throughout the domain. The wall-normal profiles of statistics are mostly displayed in inner semi-local units y* = ρ̄ u*_τ y/µ̄, with u*_τ = √(τ_w/ρ̄), or in outer scaling y/δ.

[Table I caption: Re_x and Re_θ are the Reynolds numbers based on the distance from the leading edge and on the local momentum thickness, respectively; Re_τ = ρ_w u_τ δ/µ_w is the friction Reynolds number; Re_δ* = ρ_∞ u_∞ δ*/µ_∞ is the Reynolds number based on the displacement thickness; H is the shape factor. Lastly, Δx⁺, Δy⁺_w and Δz⁺ denote the grid sizes in inner variables in the x-direction, in the y-direction at the wall, and in the z-direction, respectively.]

[Figure 3 caption: Instantaneous visualization of streamwise momentum in an xy-plane (top) and in an xz-plane at y/δ*_in = 0.5 (bottom). The y axis has been stretched for better visualization.]

The streamwise evolution of the skin friction coefficient is reported in figure 5(b). In the fully-turbulent region downstream of the interaction, C_f values are approximately four times larger than those registered in the laminar case, similarly to [19]. On the other hand, its evolution in the interaction region is rather different. The increase observed after reaching the global minimum, at x̄ ≈ 0, is attributed to the reattachment. As the C_f experiences a ramp-like increase, at x̄ ≈ 40, the incident shock penetrates the boundary layer and reaches the wall, causing a sudden increase of wall friction and heating. The evolution of the two contributions to the normalized wall heat flux is reported in figures 6(a)-(b). The results are also compared with the corresponding laminar evolutions of the same quantities. The rototranslational heat flux, q^TR_w, essentially follows the C_f distribution, with a minimum in the separation zone, a peak of almost 10⁻⁴ and a significant overheating in the fully turbulent region with respect to the case without perturbation. Of particular interest is the trend of the vibrational heat flux. As already observed in the flat-plate boundary layer configuration investigated by Passiatore et al. [26], the latter is one order of magnitude smaller than the translational-rotational one.
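Before continuing with the heat-flux discussion, note that the integral quantities of equation (20) and Table I are simple to evaluate from mean profiles. A minimal sketch (trapezoidal quadrature; the boundary-layer edge criterion is left to the caller and is an assumption here):

```python
import numpy as np

def bl_integral_quantities(y, rho, u, y_edge):
    """Displacement thickness, momentum thickness and shape factor, eq. (20).
    Profiles are mean values; rho_d, u_d are taken at the boundary-layer edge."""
    m = y <= y_edge
    ye, rhoe, ue = y[m], rho[m], u[m]
    rho_d, u_d = rhoe[-1], ue[-1]
    delta_star = np.trapz(1.0 - rhoe * ue / (rho_d * u_d), ye)
    theta = np.trapz(rhoe * ue / (rho_d * u_d) * (1.0 - ue / u_d), ye)
    return delta_star, theta, delta_star / theta
```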
Returning to the vibrational heat flux: thermal non-equilibrium before the interaction is so strong that the wall heats the flow from a vibrational-energy standpoint (i.e., q^V_w is negative and the profiles of T_V are monotonic, as will be shown in section V D). For the case without perturbation, the vibrational heat flux switches to positive values in the recirculation bubble, then increases in the reattachment region and relaxes to the post-shock conditions while keeping positive values. On the other hand, when perturbations are added, q^V_w keeps negative values almost everywhere, except in the small separation bubble. From the reattachment region onwards, its streamwise evolution is opposed to the one obtained in the laminar regime. The global increase of temperature due to the shock impingement causes strong aerodynamic heating, transferring a considerable amount of kinetic energy into internal energy, which is distributed across all the energetic modes. In the same figure, we also report the evolution of the total Stanton number, defined as

$$St = \frac{q_w}{\rho_\infty u_\infty \left(h_{aw} - h_w\right)},$$

where q_w = q^TR_w + q^V_w and h_aw = h_∞ + ½ r u_∞², with r = 0.9. Note that the recovery factor has the same value previously used for high-enthalpy configurations [23,26]. The evolution of the Stanton number is in accordance with the trend of the translational contribution to the wall heat flux. The orders of magnitude after the breakdown are in accordance with results for calorically-perfect gases [16,19] and also with high-enthalpy thermally-equilibrated boundary layers [23]. Therefore, the small vibrational heat flux contribution does not affect the Stanton number distribution even in the present strong thermochemical non-equilibrium conditions. In figure 6(d) we assess the validity of the Reynolds analogy relating C_f and St. The ratio C_f/(2St) is expected to vary as Pr^(2/3) (with Pr = µc_p/λ), which amounts to ≈0.85 for classical values of the Prandtl number. In the present case, the mean Prandtl number reaches ≈0.9 in the near-wall region and displays a nearly constant streamwise evolution (as shown in figure 6d), with variations in the recirculation region of less than 1% with respect to the turbulent zone. As previously observed for other SWBLI configurations in the literature, the relation performs poorly in the interaction region and seems to slowly relax back to the expected trend afterwards. It is reasonable to suppose that C_f/(2St) tends asymptotically to Pr^(2/3), albeit longer computational domains would be needed to confirm its validity [42]. The distribution of the turbulent kinetic energy K shows two peaks of energy emerging in the proximity of the interaction region. The first peak of K corresponds to the shear layer at the interface with the recirculation bubble, as already pointed out, for instance, by Volpiani et al. [17]. The second peak is shifted towards the wall and corresponds to the transitional structures observed in figure 7; it is therefore peculiar to the specific case dynamics. It is also possible to note the increase of K in the near-wall region at x̄ ≈ 40, due to the impact of the shock foot on the wall.

B. Instability mechanisms

The strong adverse pressure gradient associated with the incident shock tends to amplify the boundary layer perturbations injected by the inflow disturbances, inducing transition to turbulence. Nevertheless, the standalone perturbation has a large influence on the overall flow dynamics, even before the shock impingement station and the recirculation bubble.
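The Stanton number and the Reynolds-analogy ratio discussed above reduce to a couple of lines; a sketch, using the definitions as reconstructed here (r = 0.9):

```python
def stanton(q_w, rho_inf, u_inf, h_inf, h_w, r=0.9):
    """Total Stanton number St = q_w / (rho_inf * u_inf * (h_aw - h_w)),
    with the adiabatic-wall (recovery) enthalpy h_aw = h_inf + 0.5*r*u_inf**2."""
    h_aw = h_inf + 0.5 * r * u_inf**2
    return q_w / (rho_inf * u_inf * (h_aw - h_w))

def reynolds_analogy_ratio(c_f, st):
    """C_f / (2 St); expected to approach Pr**(2/3) (~0.85-0.9) far from
    the interaction region."""
    return c_f / (2.0 * st)
```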
Figure 9 reports the isocontours of the streamwise velocity perturbation at y/δ*_in ≈ 0.5 (top panel). The observed dynamics further confirms that breakdown to turbulence occurs after the reattachment, coherently with the streamwise evolution of the skin friction coefficient. In an attempt to quantify the breakdown, we follow the procedure of Andersson et al. [43] for incompressible flows and estimate the streaks amplitude as

$$A_s(x) = \frac{1}{2}\,\frac{\max_{y,z}\left(u - u_{BF}\right) - \min_{y,z}\left(u - u_{BF}\right)}{u_\infty}, \qquad (23)$$

where u_BF stands for the velocity of the base flow. Evaluating the streaks amplitude in the current highly-compressible non-equilibrium flow is of course made difficult by the fact that quantitative criteria in the literature exist only for incompressible flows; however, the streamwise evolution of the streaks amplitude may help in understanding their role in the transition process. The analysis is performed over 300 three-dimensional subdomains collected at runtime, spanning the extent −200 ≤ x̄ ≤ 150, 0 ≤ y/δ*_in ≤ 20 and 0 ≤ z/δ*_in ≤ 60. Figure 10 shows the streamwise evolution of A_s in the region −200 < x̄ < 150. As already observed from the instantaneous slices, the streaks amplitude grows significantly well upstream of the shock impingement, in a region subjected to zero or even slightly favourable pressure gradient; it is therefore to be uniquely ascribed to the growth of the inflow perturbations. Afterwards, the impinging shock disrupts the streaks, and the breakdown process is more difficult to detect. The coexistence of the perturbation and the incident shock makes it more difficult to distinguish the different mechanisms that non-linearly combine to induce breakdown. The instability that often dominates transition in the hypersonic flow regime is the one related to the second (or Mack) mode. This two-dimensional inviscid instability arises when a region of the mean flow becomes supersonic relative to the phase speed of the instability, and is characterized by higher frequencies with respect to the first mode. In the past, both linear and weakly-nonlinear stability studies [44,45] have pointed out that wall cooling tends to stabilize the first-mode instability while destabilizing the second mode, which may even become the most unstable one at lower Mach numbers. Recently, many authors have observed the presence of such an instability in high-enthalpy flows as well [46][47][48][49][50]. Following the trend of the skin friction coefficient in figure 5, the flow behavior starts to deviate from the base-flow self-similar solution well upstream of the impinging shock; approaching the interaction, other mechanisms emerge and the spectrum starts to fill up, both sustaining the previously emerged frequencies and highlighting new ones. The large computational cost of the simulation limits the temporal window of sample collection, and therefore no information about low-frequency unsteadiness can be provided at the moment. Whether the bubble breathing phenomenon is present even when the shock impinges on a high-enthalpy boundary layer will be the subject of future works.

C. Turbulent statistics

We present hereafter an overview of the main turbulent statistics at the last four stations of table I. Note that an in-depth analysis of the turbulent flow over a thermochemically out-of-equilibrium boundary layer was already performed by Passiatore et al. [26]; the results presented here share similar trends. First, the transformations of Van Driest [52], Trettel & Larsson [53] and Griffin et al. [54] for the averaged streamwise velocity are applied to the transitional and fully turbulent stations.
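For reference, the simplest of these transformations, the Van Driest one, integrates the density-weighted velocity increment; a minimal sketch using the trapezoidal rule, with wall-density normalization assumed:

```python
import numpy as np

def van_driest(u_plus, rho, rho_w):
    """Van Driest transform: u_VD+ = integral_0^{u+} sqrt(rho/rho_w) du+."""
    w = np.sqrt(rho / rho_w)
    incr = 0.5 * (w[1:] + w[:-1]) * np.diff(u_plus)
    return np.concatenate(([0.0], np.cumsum(incr)))
```

The Trettel & Larsson and Griffin et al. transforms additionally involve semi-local scaling and mean-viscosity gradients, and are omitted here for brevity.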
Figure 13 shows the results only for the last two scalings, both providing better predictions than the Van Driest one. The collapse with the classical logarithmic profile is very poor for x̄ = 38 and x̄ = 72, confirming the purely transitional state of the boundary layer in this region. For the two last stations, these transformations fail to collapse the mean velocity profiles onto the incompressible logarithmic law, as already observed by many authors [5,26,55]. It is commonly agreed that the nominal Kármán constant should be smaller than the classical value of ≈0.4 and the intercept should be greater than 5.2, at least for cooled boundary layers [23,27]. Reasonable self-similarity can be observed for the last two stations, the better collapse being obtained by the Griffin et al. [54] transform. The Reynolds stresses, shown in figure 14, exhibit a reasonable collapse when plotted in semi-local units, also due to the small changes in Re_τ for the two last streamwise positions (less than 10%). At the transitional stations, the flow is subjected to massive velocity and pressure fluctuations, two to three times larger than in the turbulent region. These lead to very large values of the turbulent and rms Mach numbers (figures 15a and 15b, respectively), similar to those obtained by Passiatore et al. [26] at much larger friction Reynolds numbers.

D. Thermochemical non-equilibrium

Before focusing the attention on vibrational excitation, we provide some insights on chemical activity. As already observed in previous works for cooled flat-plate boundary layers at similar freestream Mach numbers [25,26], chemical activity is relatively weak and scarcely influences the flow dynamics. A more significant effect can be appreciated for pseudo-adiabatic [24] and adiabatic walls, or when the freestream temperature is important. Concerning the two temperatures T and T_V, at the transitional stations the gap between them reaches a maximum and the wall-normal location of its peak is the farthest from the wall (y/δ ≈ 0.48, compared to y/δ ≈ 0.32 in the laminar region). From this station on, the peak is rapidly shifted towards the wall, in the range 0.02 < y/δ < 0.03, due to the sudden decrease of the boundary layer thickness before, and the increase in turbulent activity after. In the last two stations, the turbulent mixing efficiently redistributes the gas [as shown in 26], such that the relaxation towards equilibrium of the vibrational modes is strongly delayed, resulting in a profoundly different dynamics with respect to the base flow predictions [29].

VI. CONCLUSIONS

Second-mode instability waves are detected in the pre-shock region, with acoustic waves trapped between the wall and the sonic line even in the recirculation bubble. The dominant frequencies and wavelengths are also found to be in accordance with the second-mode instability. Concurrently, streaky structures are formed in the initial part of the domain, but are then weakened by the shock impingement. The latter creates a much smaller separation region with respect to the unperturbed configuration. The combination of the instability mechanisms and the incident shock is such that transition to turbulence is promoted only after the reattachment point. This is clearly shown by the evolution of the skin friction coefficient, which exhibits an anomalous peak due to the foot of the incident shock. The total wall heat flux follows approximately the same trend, albeit the vibrational contribution is one order of magnitude smaller than its rototranslational counterpart and mainly of opposite sign.
The correlation between C_f and the Stanton number still stands, except in the interaction region. In the fully turbulent portion downstream of the impinging shock, turbulent statistics reveal reasonable self-similarity and corroborate the results previously obtained for turbulent boundary layers. Thermal non-equilibrium is found to persist throughout the domain. The correlation coefficients of the two temperatures with respect to p, u and v drastically differ and highlight the important decoupling between the internal vibrational and dynamic fields. The current study represents a first step towards the understanding of the influence of high-enthalpy effects on shocked turbulent flow configurations. Future investigations on the subject will mainly focus on three different aspects, namely: i) the characterization of possible low-frequency unsteady motions, detected by considering longer integration times; ii) the exploration of different regimes, in particular taking into account higher free-stream total stagnation enthalpies; and iii) the analysis of the interaction with fully-turbulent incoming boundary layers. As a validation, the perfect-gas transitional interaction of Sandham et al. [19] (referred to as S-H by the authors) was also reproduced. The results reported in figure 24 show a quite acceptable agreement, and the transition to turbulence is well captured. In the present simulation there is a milder separation bubble with respect to the authors' results. Since the separation is extremely weak, this can be attributed to the statistical averaging of the skin friction coefficient. We also show, in figure 25, the instantaneous flow field colored by the numerical schlieren in an xy-plane and the normalized streamwise velocity in a plane parallel to the wall. The results are in accordance with the imposed perturbation and with the analyses in Sandham et al. [19].
Increased Prevalence of Bent Lobes for Double-Lobed Radio Galaxies in Dense Environments

Double-lobed radio galaxies (DLRGs) often have radio lobes which subtend an angle of less than 180 degrees, and these bent DLRGs have been shown to associate preferentially with galaxy clusters and groups. In this study, we utilize a catalog of DLRGs in SDSS quasars with radio lobes visible in VLA FIRST 20 cm radio data. We cross-match this catalog against three catalogs of galaxies over the redshift range $0<z<0.70$, obtaining 81 tentative matches. We visually examine each match and apply a number of selection criteria, eventually obtaining a sample of 44 securely detected DLRGs which are paired to a nearby massive galaxy, galaxy group, or galaxy cluster. Most of the DLRGs identified in this manner are not central galaxies in the systems to which they are matched. Using this sample, we quantify the projected density of these matches as a function of projected separation from the central galaxy, finding a very steep decrease in matches as the impact parameter increases (for $\Sigma \propto b^{-m}$ we find $m = 2.5^{+0.4}_{-0.3}$) out to $b \sim 2$ Mpc. In addition, we show that the fraction of DLRGs with bent lobes also decreases with radius, so that if we exclude DLRGs associated with the central galaxy in the system the bent fraction is 78\% within 1 Mpc and 56\% within 2 Mpc, compared to just 29\% in the field; these differences are significant at $3.6\sigma$ and $2.8\sigma$, respectively. This behavior is consistent with ram pressure being the mechanism that causes the lobes to bend.

Introduction

Double-lobed radio galaxies (DLRGs) are spectacular sights in the radio sky, and also are scientifically interesting because they connect processes on the ∼AU scale of a galaxy's supermassive black hole (SMBH) to the extragalactic scale (∼tens-hundreds of kpc). Such galaxies are historically divided into two classes (Fanaroff & Riley 1974): less-luminous FR I galaxies with brighter cores and fainter lobes, and more-luminous FR II galaxies with brighter lobes and fainter cores. FR I galaxies also tend to be found in optically luminous galaxies (Ledlow & Owen 1996), typically in brightest cluster galaxies (BCGs), the most luminous galaxies of all (Zirbel 1997). FR II galaxies are also found in denser environments, but preferentially in groups rather than clusters (Zirbel 1997). Bent-double radio galaxies are a subclass of DLRGs, with the angle between their two lobes bent so that they subtend less than 180°. They are more likely to be found in high-density environments than ordinary DLRGs, and are found with roughly equal probability in clusters and groups; in total, 6% of Abell clusters host a bent-double galaxy (Blanton et al. 2001). Wing & Blanton (2011) explore the use of bent DLRGs as a way to detect galaxy clusters, and are able to associate 78% of their sample of bent DLRGs with clusters or rich groups in SDSS. This correlation with environment may or may not be causal, but there are several plausible mechanisms which may explain it. One such mechanism is ram pressure experienced by the lobes as the galaxy moves through the intragroup/intracluster medium (Miley et al. 1972; Jaffe & Perola 1973; Jones & Owen 1979). There are also other possibilities, such as collisions between outflowing lobes and other cluster galaxies (Stocke et al. 1985) or merger-induced precession of the SMBH spin axis (Merritt & Ekers 2002). Ongoing or recent mergers (Roettiger et al.
1996), or clusters with sloshing motions in their intracluster medium (e.g., Abell 2029; Paterno-Mahler et al. 2013) could also be important (Mendygral et al. 2012). Each mechanism predicts that more bent DLRGs should arise in dense environments, but there are perhaps second-order observable differences which may be used to distinguish between them (e.g., Rector et al. 1995). Regardless of which mechanism drives the relationship between bent doubles and environment, a few researchers have begun to invert the relationship, using bent doubles to probe the diffuse gas around the galaxies. Freeland et al. (2008) examined two FR I bent DLRGs, one inside a small group and the other at a projected distance of 2 Mpc from a group, and inferred intergalactic gas densities of 4 × 10⁻³ cm⁻³ and 9 × 10⁻⁴ cm⁻³, respectively, at the locations of these galaxies (assuming the bending was caused by interaction with diffuse intergalactic gas). Freeland & Wilcots (2011) extended this technique to a larger sample of bent doubles in galaxy groups, using them to constrain the density of the intragroup medium. Here we extend this type of analysis to a much larger sample of bent DLRGs, using the catalog of DLRGs compiled by de Vries et al. (2006). We cross-match this sample of DLRGs with various catalogs of central galaxies of massive halos, which collectively span a significant fraction of Cosmic time. With this dataset, we can study the environmental behavior of DLRGs in unprecedented detail. The structure of this paper is as follows. In Section 2, we discuss the catalogs examined in this work, the various selection criteria which were used to generate them, and the methods for cross-matching the catalogs. In Section 3 we analyze the results of this cross-matching in order to measure the environmental behavior of DLRGs and the properties of the bending. In Section 4, we interpret these results and conclude.

Sample and Methods

We consider in this paper the catalog of DLRGs from De Vries, Becker, and White (2006; hereafter DBW). They cross-matched 44894 quasars from the Sloan Digital Sky Survey (SDSS) Data Release 3 with the Faint Images of the Radio Sky at Twenty centimeters (FIRST; Becker et al. 1994) survey from the Very Large Array (VLA) in order to construct a very large sample of DLRGs. For each SDSS quasar, DBW examined each radio source projected within 450″, using a pairwise ranking system in order to evaluate the probability of the radio sources being lobes of the central quasar. Their ranking system favors potential sources which are closer in the sky to the central quasar and which have larger opening angles. From the DR3 sample of 44894 SDSS quasars, DBW identified 35936 candidate DLRGs. A significant fraction of these candidate DLRGs are "false positives": quasars for which two radio sources are projected by chance in the sky such that the algorithm of DBW identifies them as potential radio lobes. DBW studied the incidence of these "false positives" and found that, for pairs of radio sources around a quasar with a projected separation of less than 90″, a large majority of the candidate DLRGs are real DLRGs (especially for opening angles close to 180°). Candidate DLRGs with projected separations of 60″-120″ are about equally likely to be real DLRGs or false positives. Based on these results as well as our own studies of these populations, we select the 780 DLRG candidates with projected lobe separations less than 90″ for further study. The remaining 780 candidate DLRGs have redshifts ranging from z = 0.041 to z = 4.889, and there is no single catalog tracing large-scale structure in SDSS over such a wide range of redshifts.
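As an aside, the opening angle that enters the DBW ranking can be computed directly from the core and lobe coordinates. A toy sketch using astropy follows; the actual DBW ranking combines this with sky separations and is more elaborate, and the coordinates here are made up for illustration:

```python
import numpy as np
from astropy.coordinates import SkyCoord
from astropy import units as u

def opening_angle(core, lobe1, lobe2):
    """Angle subtended at the core by the two lobes, in degrees
    (180 corresponds to a perfectly straight double)."""
    dpa = (core.position_angle(lobe1) - core.position_angle(lobe2))
    dpa = abs(dpa.wrap_at(360 * u.deg).deg)
    return dpa if dpa <= 180.0 else 360.0 - dpa

core = SkyCoord(180.000 * u.deg, 10.000 * u.deg)
l1 = SkyCoord(180.010 * u.deg, 10.008 * u.deg)
l2 = SkyCoord(179.991 * u.deg, 9.994 * u.deg)
print(f"opening angle: {opening_angle(core, l1, l2):.1f} deg")  # nearly straight here
```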
To cover this wide redshift range, we created a composite sample using three different catalogs spanning different redshifts.

Groups and Clusters

The first bin of galaxy groups and clusters spans z = 0 to z = 0.20. This entire volume is covered by a flux-complete (down to Galactic-extinction-corrected Petrosian r-magnitude of 17.77) group and cluster catalog (Tempel et al. 2014) containing 82458 groups and clusters. At higher redshift it is more difficult to identify groups and clusters using the relatively shallow SDSS photometry. Instead, we use the catalogs from the SDSS-Baryon Oscillation Spectroscopic Survey (BOSS) Data Release 10 (Ahn et al. 2014). BOSS is a spectroscopic survey of massive galaxies in the SDSS footprint. There are two sets of catalogs, LOWZ and CMASS, with slightly different photometric selection criteria. The selection criteria are designed such that both catalogs are approximately stellar-mass limited with typical log M*/M_⊙ = 11.3 (Parejko et al. 2013; Guo et al. 2013). At these stellar masses, the BOSS galaxies are predominantly red and highly clustered (Guo et al. 2013), so we take them to be reasonably good tracers of large scale structure at these redshifts. We therefore select galaxies from the LOWZ catalog with 0.20 < z < 0.47 for our second redshift bin, and galaxies from the CMASS catalog with 0.47 < z < 0.70 for our third redshift bin. We placed cuts at z = 0.2, z = 0.47 and z = 0.7 to ensure there was not any double-counting between different catalogs. These bins contain 209788 and 446158 central galaxies, respectively. Together, our total list of galaxies, groups, and clusters contains 738404 systems. While these systems are primarily galaxy groups and clusters, our coordinates for the BOSS sample will refer to the bright central galaxies, and we will often refer to the galaxies for simplicity, although of course the larger group/cluster is the primary object of interest.

Cross Matching

We cross-match the DBW catalog of DLRGs with the galaxies in our various redshift bins. Our initial match criterion is a DLRG falling within 10 projected Mpc and 3,000 km/s of the galaxy. We compute the projected distance (impact parameter) from the measured angular separation, which we multiply by the comoving distance of the galaxy estimated from its redshift assuming the Planck Collaboration et al. (2016) cosmological parameters. The radial velocity separation cutoff of 3,000 km/s was chosen as a somewhat arbitrary upper limit on the escape velocity of a large cluster. We find that there is a steep drop-off of DLRG-group pairs at velocity separations greater than 3,000 km/s, so our results are not very sensitive to the exact cutoff employed here. Using this cross-matching criterion, we found 81 DLRG-galaxy pairs. This method of cross-matching allows for the possibility of a DLRG matching with multiple galaxies, which happens in most cases (61/81). We therefore incorporate the galaxies' impact parameters, velocity separations and halo masses to estimate a relative probability for the DLRG to be associated with each matched galaxy. Under the simplest assumption that the galaxies in a group are isotropically distributed as R⁻³ in 3D space (which is the NFW scaling at the Mpc scales we consider here) and that they have an isotropic and Gaussian distribution in velocity space, their projected space density will scale with impact parameter b as (b/b₀)⁻² and their projected velocity density will scale with velocity separation σ as e^(−σ²/2σ₀²).
We therefore define the relative probability for a DLRG to be associated with a system i as

P_i = C M_i (b_i/b₀)⁻² e^(−σ_i²/2σ₀²) / σ₀.   (1)

In this expression, M_i is the mass of the cluster or group, and absorbs the mass dependence of b₀ and σ₀, allowing our estimator to prefer to associate DLRGs with more massive systems. C is a normalization constant defined for each DLRG such that Σ_i P_i = 1, and σ₀ appears in the denominator in order to normalize the Gaussian term to unity. We set b₀ = 300 kpc and σ₀ = 1100 km/s, which are typical values for a massive cluster, although we varied these values by ±25% and the matches were not significantly changed. For the groups with redshifts z < 0.2, we define M_i from the mass estimates included in the Tempel et al. (2014) catalog, which are based on the measured velocity dispersion of the galaxies in the group and an assumed halo mass profile (Navarro et al. 1997; Macciò et al. 2008). However, not many galaxies are detected in most of the groups (60% have two galaxies, and 91% have five or fewer galaxies), so these mass estimates are quite uncertain. For the groups with five or fewer detected galaxies, 71% have masses less than 1 × 10¹³ M_⊙ in the Tempel et al. catalog, but for such poor groups the uncertainties in measuring velocity dispersion become more important than the measurement itself, and for simplicity we institute a mass floor of 1 × 10¹³ M_⊙ for these poor groups. For the catalog containing groups with redshifts 0.2 < z < 0.7, we have no straightforward halo mass estimator, so we assign each system the same halo mass (the exact value drops out of equation 1, but we use 5 × 10¹³ M_⊙; van Uitert et al. 2015). For each DLRG, the galaxy with the largest P_i is selected as the match. Of the 61 DLRGs with multiple potential matches, 33 are matched to the galaxy with the lowest impact parameter out of the potential candidates. In angular space, the median impact parameter is 7.7′, and in physical space it is 3.3 Mpc.

Verification of DLRGs

We also visually inspected each of the 81 candidates using the VLA-FIRST images. Since the DLRG catalog was assembled automatically, many "false positives" are obvious by eye, with one or both lobes missing, and/or the outlying sources clearly appearing as point sources instead of radio lobes. We also performed additional quantitative tests to verify the reality of the DLRGs in our sample. First, we require that the sum of the specific luminosities from the central galaxy and the two lobes be brighter than 1 × 10³¹ erg s⁻¹ Hz⁻¹. This is ten times fainter than the specific luminosity separating FR II objects from FR I objects (Fanaroff & Riley 1974), and serves as a test that the luminosity of the DLRG is physically plausible. No k-correction is applied for this calculation. Only five of the 81 DLRGs fail this test, and visual inspection shows that all five of these objects appear to be projections of unrelated radio point sources. Second, we require the lobes to be diffuse objects, not point radio sources, so we discard any DLRG if one or both of its lobes appear less than 2.5″ in radius. This was done by visually assessing the diameter of each lobe in the VLA FIRST images. 56 of the 81 DLRGs pass this cut while 25 fail. Third, we require the DLRG to be the brightest radio source within a 1′-radius region to ensure that the radio lobes are not mistakenly attributed to the central QSO. In a few cases, we disagreed with the choice of central galaxy in the DBW catalog (i.e., the central galaxy was misidentified as a lobe and vice versa; this is obvious upon visual inspection but difficult to quantify algorithmically). In these cases we manually changed the lobe placement if it was clear the original placement was incorrect and there was a clear alternative associated with the DLRG core. When there was not a clear alternative, we rejected the DLRG. When a DLRG was accepted yet needed a lobe position update, we repositioned the lobes and recalculated the angle between the DLRG lobes using these new coordinates. In most cases the change is small, but for a handful of objects we identified one of the lobes with a different radio source than DBW, which caused the bending angle to change significantly. In 13/81 cases the newly calculated angle changed by at least 10°, but 7 of these 13 cases were rejected by one of the other tests outlined above. The six remaining DLRGs with changed angles have ID numbers of 17, 18, 22, 37, 38, and 41 in Table 1 below. All objects that fail one or more of these requirements are rejected, along with the objects that failed the visual classification. In the end, we accepted 44 DLRG-group matches from the original 81. Of these 44 DLRGs, ten are matched to a different cluster or group than the one with the smallest impact parameter, due to the M_i and σ terms in equation (1). In Table 1 we present basic data for these 44 DLRGs.

[Table 1 note: List of DLRG-galaxy pairs which pass all of our selection criteria. The first column shows an identification number for the pair. The next three columns show the right ascension, declination and redshift of the DLRGs. Columns 5, 6, and 7 show the angle between the lobes of the DLRG and the +/− errors on these from visual inspection. The final three columns are the right ascension, declination and redshift of the galaxy to which the DLRG is matched. Note that in most cases the DLRG is not the central galaxy of the cross-matched system, but instead is a satellite galaxy or is outside the virial radius.]

Projected Density of Matches

The cross-matching, the heterogeneous galaxy catalogs, and the various stages of DLRG verification described in section 2.3 all introduce complicated biases in the sample selection. Modeling the sample selection in detail would require an unwieldy set of assumptions, including assumptions about galaxy evolution, halo occupation, evolution of the DLRG spectral shape and luminosity function, as well as the parameters in which we are interested like the connection between DLRGs and large-scale structure. Since this work is primarily observational, and we want to be as parsimonious with assumptions as possible, we instead construct a "control" sample with the same sample selection biases, in order to compare to the DLRG sample. To do this, we generate a set of mock coordinates for each DLRG by shifting the true coordinates in right ascension and declination by various amounts ranging from 4-16 degrees, yielding a total of 28 mock DLRGs for each of the 44 true DLRGs, for a total of 1232 mock DLRGs. We perform the same cross-matching as in section 2.2 for these mock DLRGs. The result is a sample of central galaxies cross-matched with random positions on the sky, but obeying the same redshift distribution as our DLRG sample. In Figure 1 we show the projected density of matches as a function of impact parameter for both the true DLRGs and the control sample. The control sample shows a gradual decrease in projected density as a function of impact parameter, declining by a factor of ∼4 over the 5 Mpc range in impact parameters covered in the plot. This decline is probably due to the algorithm (eq. 1), which favors matches with smaller impact parameters, in combination with our choice to assign the redshifts of the observed DLRGs to the random positions. From 3 Mpc ≲ b ≲ 5 Mpc, the data and the random sample match very well, implying that chance projections on the sky are a plausible explanation for DLRG-galaxy group/cluster pairs with these large impact parameters. For b ≲ 2 Mpc, there is a clear increase of these pairs in our data relative to the control sample of chance projections. DLRGs are therefore significantly more likely than random positions on the sky to be found near galaxy groups and clusters. In the innermost bin, the projected density of DLRG-central galaxy pairs is ten times higher than the projected density of random position-central galaxy pairs, and based on Poisson statistics the probability of this occurring by chance is just 3 × 10⁻¹⁰. There is also a small "dip" in the observed DLRG projected density at b ≈ 2.5 Mpc, but it is not statistically significant. Our interpretation of Figure 1 is therefore that the majority of the DLRG-galaxy group/cluster matches reflect physical associations when the impact parameter is less than about 2 Mpc. For the pairs with impact parameter greater than about 2 Mpc, the majority (possibly all) of these DLRG-galaxy pairs reflect chance superpositions on the sky. We speculate that there are additional groups which are missing from our catalogs and lie at much smaller projected distances from these DLRGs. We have also fit a power law to the data in Figure 1 by minimizing χ² (which, as mentioned, is inappropriate for goodness-of-fit testing due to the oversampling, but still adequate for line fitting). This line has a slope of m = 1.9 ± 0.2 (with Σ ∝ b⁻ᵐ), corresponding to a real-space decrease in density of ρ ∝ r^(−2.9±0.2). However, this slope is likely too shallow, since the data include a contribution from the "background" of chance superposition as well. We estimate this background by taking the mean from b = 3 Mpc to b = 4 Mpc. Subtracting this "background", the slope steepens to m = 2.5 (+0.4/−0.3). Both of these values imply a very steep decrease with density (i.e., at least as steep as an NFW profile), and this will be discussed further below.

DLRG Bending Angles

In Figure 2 we present the measured angle between the lobes of each DLRG for the sample of 44 verified cross matches. For each of these 44 objects, we have drawn vertical error bars which encompass our uncertainty in the bending angle, as determined by the visual analysis in section 2.3. These are obtained by identifying edges for both lobes and computing all the possible angles which can subtend these edges; these error bars are therefore much more conservative than 1σ error bars. We have also drawn a dashed horizontal line at 170°, which we use to approximately distinguish between "bent" and "unbent" DLRGs; note that due to the measurement uncertainties this boundary is only approximate.

[Figure 1 caption: Projected density of DLRG-central galaxy matches as a function of impact parameter, compared with the control sample. The two agree beyond ∼2 Mpc (with the exception of a "dip" at around 2.5 Mpc which is not statistically significant), suggesting that all the DLRG-galaxy matches beyond this radius can be explained as random projections on the sky. For smaller impact parameters, the projected number of DLRGs matching with the central galaxy is much higher than the density of random projections that match, suggesting there is a physical correlation between DLRGs and the galaxies which trace galaxy groups and clusters in our sample. We fit the projected density within 2 Mpc with a power law, and find a best-fit slope of m = 1.9 ± 0.2 (with Σ ∝ b⁻ᵐ); a subsequent fit after subtracting the "background" at large radii gives an even steeper slope.]
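A hedged sketch of the background-subtracted fit described above follows; the binning and error model are assumptions here, since the paper's exact procedure is not specified beyond χ² minimization:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_projected_density(b, sigma, sigma_err, b_fit_max=2.0, bkg_range=(3.0, 4.0)):
    """Fit Sigma(b) = A * b**-m after subtracting a constant background,
    estimated as the mean projected density over bkg_range (in Mpc)."""
    bkg = sigma[(b >= bkg_range[0]) & (b <= bkg_range[1])].mean()
    sel = (b < b_fit_max) & (sigma > bkg)
    power_law = lambda bb, A, m: A * bb ** (-m)
    popt, pcov = curve_fit(power_law, b[sel], sigma[sel] - bkg,
                           sigma=sigma_err[sel], p0=(1.0, 2.0))
    (A, m), errs = popt, np.sqrt(np.diag(pcov))
    return A, m, errs, bkg
```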
Turning to Figure 2, there are four DLRG-galaxy pairs, circled in red, whose velocity separations are all below 750 km/s and whose lobes subtend angles between 170° and 180°. The projected separation between the radio source and the central galaxy is also less than 0.2 Mpc for these four galaxies. We therefore hypothesize that they are the central galaxies of their respective systems. Central galaxies are not the focus of this paper. The evolution of their lobes can also be affected by buoyancy (Gull & Northover 1973; Churazov et al. 2000, 2001) as well as large-scale sloshing motions in the intracluster medium, especially if the cluster is not relaxed (there is some evidence that they are preferentially associated with merging clusters; Sakelliou & Merrifield 2000). Due to the former issue, the focus of this paper is on the behavior of satellite radio galaxies, and we neglect these four central galaxies in this work. We also assume the intracluster medium is quiescent; sloshing motions, if they exist, may introduce noise into our measurement. The dropoff with radius in the projected density of DLRGs (discussed in the previous section) is also visible in Figure 2 (recall that the differential area increases linearly with projected radius). Based on our analysis in the previous section, the DLRG-galaxy matches with an impact parameter ≳2 Mpc are consistent with being chance projections on the sky. Thus, while the two DLRG-galaxy pairs with b = 3.5 Mpc and b = 6.8 Mpc in Figure 2 (which are identified as #25 and #34 in Table 1) have radio lobes that show clear signs of bending, it is unlikely that the bending is caused by the galaxy group/cluster we have identified. As discussed in the previous section, we think that there are likely additional galaxy groups that lie closer to the DLRG but are below the detection threshold for their respective surveys. These two matches in particular lie at z = 0.437 and z = 0.638, which are near the upper ends of their respective redshift bins. We can model the expected fraction of bent DLRGs using the chance projections on the sky, which we conservatively estimate from Figure 2 using the galaxies with impact parameter of at least 2 Mpc. There are 24 such galaxies, of which 7 are bent, corresponding to an expected bent fraction of 29%. Excluding the four central galaxies in the red circle, the observed bent fraction within 1 (2) Mpc is 7/9 (9/16), corresponding to 78% (56%). The seven bent galaxies within 1 Mpc are shown in Figure 3. Keeping the total number of galaxies within 1 (2) Mpc fixed, the expected number of bent galaxies within this impact parameter is 2.42 (4.31). Assuming binomial statistics, the probability of getting at least the observed number of bent galaxies, given the expected number, is 3.4 × 10⁻⁴ (5.6 × 10⁻³). These probabilities indicate that the null hypothesis (DLRG bending being uncorrelated with the central galaxy) should be rejected at 3.6σ (2.8σ). We therefore conclude that the bending is correlated with the proximity of these DLRGs to the center of a nearby galaxy group or cluster.

Discussion and Conclusions

One of our results is that the density of DLRGs declines more rapidly with radius than the density of galaxies in a typical cluster, which follows an NFW profile.
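As an aside, the binomial significance quoted above is easy to reproduce; a sketch (the field bent fraction is taken from the text, and small differences from the quoted values may arise depending on the exact fraction adopted):

```python
from scipy.stats import binom, norm

def bent_excess(n_total, n_bent, p_field):
    """One-sided binomial tail P(X >= n_bent) and the equivalent Gaussian sigma."""
    p_tail = binom.sf(n_bent - 1, n_total, p_field)   # P(X >= n_bent)
    return p_tail, norm.isf(p_tail)

# Within 1 Mpc: 7 bent of 9; within 2 Mpc: 9 of 16; field fraction ~7/24
for n, k in [(9, 7), (16, 9)]:
    p, sig = bent_excess(n, k, p_field=7 / 24)
    print(f"{k}/{n}: P = {p:.2e}  ({sig:.1f} sigma)")  # compare with the quoted values
```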
Quantitatively, the projected density of DLRGs has a power-law slope in radius of 2.5 (+0.4/−0.3), or a space-density decline of r⁻ᵐ with m = 3.3-4.0, which can be compared to the density of an NFW profile in the outer part of a cluster or group, where m = 2.5-3. There may be a few reasons for DLRGs to be more concentrated than the ensemble of galaxies. One aspect is that the central dominant galaxy can be quite massive, and the probability of it being a DLRG is enhanced relative to normal galaxies. Another factor is that the luminosity from the radio jet and radio lobes can be lower in the outer parts of the cluster because of the lower density. Two important characteristic sizes of the radio structure scale as n^(−1/2): the recollimation of the jet (Alexander 2006), and the larger size when the lobes are in pressure balance with the surrounding medium (Komissarov & Falle 1998). With these larger sizes, both the relativistic electron density and the magnetic field within the jets and lobes are likely lower, so the emissivity is less. This is shown by the simulations of Hardcastle & Krause (2013), where the luminosity in lower density regions (due to steeper density laws for the ambient cluster medium) can be an order of magnitude less. Lower-luminosity lobes would be detected less frequently in flux-limited samples, so DLRGs at large radii from the center may exist but be undetected in the samples that we used. Another aspect that we examined was the degree of bending as a function of distance from the cluster center. Under the assumption that ram pressure is responsible, the ram pressure force is proportional to nσ², where σ is the galaxy velocity dispersion of a cluster and n is the ambient gas density. The ambient density decreases rapidly, typically as r⁻² to r⁻³ in a cluster (e.g., Bahcall & Lubin 1994), while the velocity dispersion has a very slow decline (Zhang et al. 2011). The acceleration of the lobes is proportional to the ram pressure times the lobe area divided by the lobe mass; if we assume that the lobe mass is independent of location in the cluster and that the lobe size increases as n^(−1/2) (so that the area goes as n⁻¹), the acceleration is proportional to nσ² × n⁻¹/M_lobe ∼ constant. If the lifetime of DLRGs is independent of position in the cluster, the distance bent should be about the same, on average. However, the DLRGs at large radii are expected to be longer, so as the bending angle is the displacement by ram pressure divided by the length of the DLRG, the ones furthest from the center should have smaller bending angles. This expectation of smaller bending angles with distance is consistent with the data for b ≲ 2 Mpc, but it is not proven by our data set. Many more DLRGs would be needed to carry out this statistical test. As optical surveys become deeper, cluster and group catalogs will become much more complete. This should reduce the "background" of DLRGs whose associated cluster is not detected, enabling a much more precise measurement. This may help to constrain such parameters as the pressure in the lobes and the degree of density fluctuations in the intracluster medium, as well as definitively establishing the existence or non-existence of bent DLRGs outside of larger virialized halos. With a larger sample, it should be possible to study other interesting physics as well, such as the covering fraction of intercluster filaments beyond the virial radius.
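The ram-pressure scaling argument above can be written compactly; the following is a restatement of the reasoning in the text, with A the lobe cross-sectional area, ℓ the lobe length and t the time over which the lobe is accelerated:

```latex
a_{\rm lobe} \propto \frac{P_{\rm ram}\,A}{M_{\rm lobe}}
            \propto \frac{(n\sigma^{2})\,n^{-1}}{M_{\rm lobe}}
            \sim \mathrm{const},
\qquad
\theta_{\rm bend} \sim \frac{a_{\rm lobe}\,t^{2}}{\ell}
            \propto \ell^{-1} \propto n^{1/2},
```

so that, since n decreases outward, smaller bending angles are expected at larger radii, as stated in the text.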
Acknowledgements

We would like to thank Wim de Vries for sending us the list of DLRGs from their analysis, as well as Phillip Hughes and Eugene Churazov for helpful discussions and comments. We would like to acknowledge the Undergraduate Research Opportunities Program (UROP) at the University of Michigan as well. We thank the referee for a helpful report which improved the quality of the paper. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/. SDSS-III is managed by the Astrophysical Research Consortium.

[Figure 3 caption: Images of the seven bent DLRGs from Figure 2, i.e., the seven DLRGs whose lobes subtend an angle less than 170° and which have an impact parameter less than 1 Mpc. Clockwise from top left, these objects have identification numbers 36, 19, 38, 41, 4, 40, and 37 in Table 1. In each image, the small, thick red circle is the location of the quasar at the DLRG core. The larger, thinner red circles are the approximate locations of the DLRG lobes used to estimate the bending angle; these positions were placed by hand based on the automatic estimates from the DBW catalog. The red lines from the core through the center of the lobes were used to calculate the angle, and the cyan and green lines to calculate the error on the angle. The yellow lines are 30″, and each image uses a logarithmic stretch to better show the faint structures.]
Dampened STING-Dependent Interferon Activation in Bats

Compared with terrestrial mammals, bats have a longer lifespan and greater capacity to co-exist with a variety of viruses. In addition to cytosolic DNA generated by these viral infections, the metabolic demands of flight cause DNA damage and the release of self-DNA into the cytoplasm. However, whether bats have an altered DNA sensing/defense system to balance high cytosolic DNA levels remains an open question. We demonstrate that bats have a dampened interferon response due to the replacement of the highly conserved serine residue (S358) in STING, an essential adaptor protein in multiple DNA sensing pathways. Reversing this mutation by introducing S358 restored STING functionality, resulting in interferon activation and virus inhibition. Combined with previous reports on bat-specific changes of other DNA sensors such as TLR9, IFI16, and AIM2, our findings shed light on bat adaptation to flight, their long lifespan, and their unique capacity to serve as a virus reservoir.

In Brief

Bats co-exist with a large variety of viruses, and infection-derived cytosolic DNA could result in heightened DNA sensing and overactivation. Xie et al. show that STING-dependent IFN activation is dampened in bats due to the replacement of the highly conserved and functionally important serine residue S358. Bats are the only flying mammals and have been found to have a positively selected oxidative phosphorylation pathway as a result of an increased metabolic capacity (Shen et al., 2010). Byproducts of oxidative metabolism and stress are known to cause DNA damage, resulting in the escape of self-DNA from the nucleus, mitochondria, or lysosomes into the cytoplasm (Barzilai et al., 2002). Bats have been increasingly linked to deadly viruses such as severe acute respiratory syndrome (SARS), Ebola virus, and henipaviruses (Wynne and Wang, 2013). Although most of these zoonotic viruses are RNA viruses, bats also harbor a variety of DNA viruses (Brook and Dobson, 2015). In addition, it is known that infection by RNA viruses can also result in cytosolic DNA due to intracellular damage (Ryan et al., 2016). Infection-derived cytosolic DNA as well as self-DNA is known to trigger robust immune responses, leading to inflammasome activation and type I interferon (IFN) induction (Schlee and Hartmann, 2016). While it is accepted that overactivation of either the inflammasome or IFN can cause autoimmune diseases (Peckham et al., 2017), it is unknown how bats, while naturally maintaining a high burden of viruses and the oxidative stressors of flight, are able to regulate the response against stimulatory sensing of cytosolic DNA to avoid overactivation of innate immune pro-inflammatory pathways. In humans, the DNA sensors of the innate immune system include AIM2 and IFI16 in inflammasome assembly (Hornung et al., 2009; Kerur et al., 2011), and TLR9, IFI16, DDX41, LSM14A, and cGAS in IFN expression (Latz et al., 2004; Li et al., 2012; Sun et al., 2013; Unterholzner et al., 2010; Zhang et al., 2011). Among these cytosolic sensors, cGAS was identified as the universal and essential DNA sensor that produces cyclic GMP-AMP (cGAMP) in response to DNA stimulation (Sun et al., 2013), which in turn binds to and activates the stimulator of IFN genes (STING; also known as MITA, ERIS, and MPYS), the essential adaptor protein in multiple DNA sensing pathways (Ishikawa and Barber, 2008; Jin et al., 2008; Sun et al., 2009; Zhong et al., 2008).
Following STING activation, TBK1 is recruited to STING, leading to the subsequent phosphorylation of STING and IRF3 by TBK1. This ultimately triggers the type I IFN response. Point mutation of either phosphorylation site (S358 or S366) of STING to alanine significantly impaired its ability to activate downstream IFNs (Liu et al., 2015; Tanaka and Chen, 2012; Zhong et al., 2008). There are limited studies on bat DNA sensors despite the belief that bat cells are likely to be more at risk of cytosolic DNA exposure. A recent comparative genomics study showed that the most positively selected genes of bats seemed to be concentrated in the DNA damage checkpoint pathway and innate immunity (Zhang et al., 2013). One of these genes encodes the inflammasome sensor NLRP3. More strikingly, the entire PYHIN gene family, including AIM2 and IFI16, is lost in all bat genomes sequenced so far, implying a dampened DNA-triggered inflammasome response (Ahn et al., 2016). Bats have been shown to have a contracted type I IFN locus and different expression patterns of type III IFNs compared with those in human and mouse (Zhou et al., 2011, 2016). Also, TLR9 seems to be under greater positive selection in bats compared with other mammals (Escalera-Zamudio et al., 2015). Taken together, these findings suggest that bats may have evolved to adopt a DNA sensing and IFN response mechanism in adaptation to flight, which is sufficiently different from terrestrial mammals. In this context, we hypothesized that cytosolic DNA, whether it is flight-induced or infection-derived, imposes strong selective pressures on the bat DNA sensors, resulting in a functionally dampened sensing mechanism and downstream IFN production to avoid overactivation on a regular basis during normal flight and/or co-existence with viruses. As STING is increasingly being recognized as the central molecule in the cytosolic DNA sensing pathway, we conducted a comprehensive sequence and functional analysis of bat STING. Sequence alignment of all available bat STING (from a total of 30 different species) with ten major non-bat mammalian STING revealed a key difference: while the S358 residue is absolutely conserved among all known non-bat mammalian STING, none of the bat STING retain the S358. Instead, this residue has been replaced by a variety of different residues at this position, including N, H, F, Y, P, D, and R (Figure 1; see also Figure S1 and Table S1). As the S358 phosphorylation site is critical for downstream IFN activation in humans and other mammals (Liu et al., 2015; Tanaka and Chen, 2012; Tsuchida et al., 2010; Zhong et al., 2008), this key residue change strongly suggests a weakened bat STING in the context of IFN activation. To test this, we compared the functional difference in the induction of IFNs between bat and mouse by cGAMP. Splenocytes from three individual Rhinolophus sinicus (Rs) and three laboratory mice were stimulated by cGAMP. Rs has been reported to be the reservoir host of the lethal severe acute respiratory syndrome coronavirus (SARS-CoV) (Ge et al., 2013). In contrast to a strong induction of IFNβ and IRF7 (an IFN-stimulated gene [ISG]) in mouse, transfection of cGAMP induced a much lower level of IFNβ and IRF7 mRNA in Rs bats by qPCR analysis (Figure 2A). As controls, both poly I:C and Sendai virus treatment resulted in comparable induction levels of IFNβ and IRF7 in cells of both species (Figure 2A). The qPCR results were corroborated by RNA high-throughput sequencing (RNA-seq).
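As an aside before turning to the transcriptome data, the column-conservation claim above (S358 conserved in all non-bat mammalian STING, replaced in all 30 bat sequences) is the kind of check that is easy to script. A hypothetical sketch with Biopython follows; the alignment file name is made up, and the column index assumes no gaps precede residue 358 relative to the human reference, which would need to be verified in practice:

```python
from Bio import AlignIO  # Biopython

# Hypothetical pre-computed protein alignment of STING orthologs
alignment = AlignIO.read("sting_orthologs.aln.fasta", "fasta")

col = alignment[:, 357]  # residue 358, 0-based; gap handling omitted
for record, aa in zip(alignment, col):
    status = "S (conserved)" if aa == "S" else f"replaced by {aa}"
    print(f"{record.id:35s} position 358: {status}")
```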
As shown in Figure 2B, a number of mouse ISGs were strongly upregulated upon cGAMP treatment, whereas the upregulation of bat ISGs was much weaker, both in number and in fold change. Between the two Rs bats there were subtle differences in ISG induction, which was not unexpected considering that wild-caught outbred bats were used in this study. Phylogenetic analysis showed that bat STING clusters with known mammalian STING (Figure S1A). qPCR analysis of mRNA levels in a range of Rs, Myotis davidii (Md), and Pteropus alecto (Pa) primary organs revealed an expression pattern not dissimilar to that found in mouse: STING was expressed in a variety of tissues in bats, with the highest levels in spleen and lung (Figure S1B). It can thus be concluded that phylogenetic divergence or differences in gene expression patterns between bats and mice are unlikely to be responsible for the observed reduction in STING-mediated IFN production. We then examined whether the dampening of bat STING function by the change at residue 358 is common to other bats. In HEK293T cells, which lack endogenous cGAS and STING expression (Sun et al., 2013), STING from three representative bats (Rs, Pa, and Md) and human was overexpressed together with human cGAS and IFNβ promoter plasmids. Although polymorphism has been observed in human STING, a previous study indicated that the three variants (RGR, AQ, and HAQ) have different but comparable IFN induction activities depending on the experimental conditions (Yi et al., 2013); we confirmed that observation and used the AQ variant for further studies. Mutant human STING S358A significantly reduced STING-induced IFNβ production, as reported previously (Zhong et al., 2008). Conversely, the mutant bat STING X358S (X = N, H, or D) significantly restored the ability for IFNβ induction (Figure 2C). To exclude the possibility that the human cell system may affect bat STING function, we repeated the experiment in PakiT03, a Pa bat cell line that expresses a reasonable amount of cGAS but a very low level of STING (Zhou et al., 2016). The pattern was essentially identical to that observed in HEK293T, with wild-type bat STING showing dampened induction of IFN and ISGs compared with the X358S mutants (Figure S2A). We also tested this dampening upon cGAMP stimulation. HEK293T cells stably expressing wild-type (D358) or mutant S358 Rs STING were stimulated with cGAMP in digitonin permeabilization solution; the S358 STING induced significantly higher IFNβ promoter activity (Figure 2D). These results suggest that the S358 replacement is mainly responsible for the dampened STING-dependent IFN activation upon cGAS co-expression or cGAMP stimulation. It is proposed that bats' exceptional ability to host viruses with little or no clinical disease is likely the result of an intricate balance between the host immune system and virus infection (Schountz, 2014; Wynne and Wang, 2013). We hypothesized that the dampened STING-IFN responses could be partially responsible for that intricate balance. In assessing the effect of different STING on herpes simplex virus (HSV) infection in PakiT03 cells, we found that wild-type human STING was about 2.5-fold more effective in blocking HSV replication than the S358A mutant. This was reversed for bat STING, for which the wild type was less effective than the mutant STING X358S, with reductions of approximately 3-, 2.5-, and 2-fold for Md, Pa, and Rs STING, respectively (Figure 2E).
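The reporter results above rest on a simple data reduction: firefly luciferase signal is normalized to the renilla co-transfection control and then expressed as fold change over empty-vector wells (set to 1), as described in the Dual Luciferase Assay methods below. The sketch that follows illustrates this reduction with invented signal values; the sample names are hypothetical labels, not the paper's dataset.

```python
# Dual-luciferase normalization: (firefly/renilla) relative to empty vector.
firefly = {"empty": 1.0e4, "hSTING_WT": 2.6e6, "hSTING_S358A": 3.1e5,
           "batSTING_WT": 1.8e5, "batSTING_X358S": 1.5e6}
renilla = {"empty": 2.0e5, "hSTING_WT": 1.9e5, "hSTING_S358A": 2.1e5,
           "batSTING_WT": 2.0e5, "batSTING_X358S": 1.8e5}

ratios = {k: firefly[k] / renilla[k] for k in firefly}       # transfection-normalized
fold = {k: ratios[k] / ratios["empty"] for k in ratios}       # fold over empty vector

for name, f in sorted(fold.items(), key=lambda kv: -kv[1]):
    print(f"{name:16s} IFN-beta-luc fold change: {f:8.1f}")
```

The renilla division corrects for well-to-well transfection efficiency, which is why the fold changes, not raw firefly counts, are the quantity plotted in reporter figures of this kind.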
In human STING, residues S366 and S358 are important for IRF3 binding and activation, but not for TBK1 (Tanaka and Chen, 2012). To understand the detailed mechanism of the dampened STING-dependent IFN activation, we investigated whether this bat-specific S358 replacement universally affects IRF3 and TBK1 activation. When HEK293T cells were transfected with human or bat STING-expressing plasmids, phosphorylation of IRF3, but not TBK1, was markedly higher in cells transfected with S358 STING (Figure S2B). Similar findings were observed in bat PakiT03 cells (Figure S2C), which eventually contributed to a different downstream IFN response and, in turn, the observed difference in modulating HSV replication.

[Figure 2 legend: (A) Splenocytes were transfected with cGAMP or poly I:C (1 μg/mL) or infected with SeV (100 hemagglutinin units/mL); 6 hr later, induction of the IFNβ and IRF7 genes was determined by qPCR (primers in Table S2). (B) Transcriptome next-generation sequencing of splenocyte RNAs; differentially expressed genes (DEGs) were analyzed by RSEM at FDR (false discovery rate) < 0.05, the ISGs in the DEG sets of mouse and Rs bat are listed, and fold change is indicated in color from 0 to 110. (C) Restoration of STING function by introducing S358 in bat STING: HEK293T cells were cotransfected with STING, cGAS, IFNβ promoter firefly luciferase, and renilla luciferase plasmids, and luciferase activity was determined 24 hr post-transfection; blots showing protein levels are in Figure S2. (D) cGAMP treatment of HEK293T stably expressing various STING: cells were transfected with IFNβ promoter firefly luciferase and renilla luciferase plasmids and, 6 hr later, permeabilized in digitonin buffer with or without 1 μg/mL cGAMP; luciferase activity was determined 16 hr after treatment. (E) PaKiT03 cells were transfected with the indicated STING plasmids, infected with HSV-luciferase at MOI = 0.1 at 24 hr post-transfection, and HSV replication was determined by luciferase activity at 24 hr post-infection. Data in (A), (C), (D), and (E) are means ± SD, n = 3; **p < 0.01, ***p < 0.001 (Student's t test). For (C) and (D), data represent fold change relative to wells transfected with empty vector (set as 1). WT, wild-type; mt, mutant; Hs, Homo sapiens; Md, Myotis davidii; Pa, Pteropus alecto; Rs, Rhinolophus sinicus; pIC, poly I:C. See also Figure S2 and Table S2.]

Taken together, these results demonstrate that while bat STING maintains antiviral activity similar to that of human STING, its dampened IFN induction likely contributed in part to the long-term co-existence of bats and viruses. We hypothesized that excessive exposure to cytosolic DNA in bat cells during flight and/or viral infection would pose strong natural selection pressure to reduce the activation of bat DNA sensors; in this report, we have provided genetic and functional data supporting this hypothesis. We have demonstrated that bat STING is less active in IFN induction and have pinpointed residue 358 as the key site of difference between bat and human STING. Experimentally, we have shown that the replacement of the S358 residue in different bat STING results in dampened downstream IFN production via reduced IRF3 phosphorylation. To our knowledge, this is the most conclusive experimental demonstration of a key innate defense pathway that is functionally different between bats and non-bat mammals, with implications for bats' unusual capacity for peaceful co-existence with a large number of viruses.
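Statements like "phosphorylation of IRF3 was markedly higher" are typically backed by densitometry of immunoblot bands (the paper quantifies bands with ImageJ, see the quantification section below). A minimal sketch of the usual normalization is given here; all intensity values are invented placeholders, and the double normalization scheme (phospho/total, then loading control) is a common convention rather than a protocol stated in the paper.

```python
# Immunoblot densitometry: normalize p-IRF3 by total IRF3 and by loading control.
bands = {
    # sample:           (p-IRF3, total IRF3, beta-actin)  -- hypothetical intensities
    "hSTING_WT":        (5200.0, 9100.0, 8800.0),
    "batSTING_WT":      (1400.0, 9400.0, 9000.0),
    "batSTING_X358S":   (4800.0, 9200.0, 8700.0),
}

ACTIN_REF = 9000.0  # arbitrary reference intensity for the loading control

for sample, (p_irf3, irf3, actin) in bands.items():
    # phospho/total corrects for IRF3 expression differences;
    # dividing by the actin ratio additionally corrects for lane loading.
    ratio = (p_irf3 / irf3) / (actin / ACTIN_REF)
    print(f"{sample:16s} normalized p-IRF3/IRF3: {ratio:.2f}")
```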
There is abundant evidence that bats harbor more viruses per species than other mammals (Brook and Dobson, 2015; Luis et al., 2013; Olival et al., 2017). Infection-derived DNA can act as an activator of DNA sensors such as cGAS and subsequently STING (Schlee and Hartmann, 2016). Constitutive activation of STING can cause severe autoimmune diseases, such as vascular and pulmonary syndrome or Aicardi-Goutières syndrome (Barber, 2015), while some highly pathogenic viruses such as SARS-CoV are known to induce excessive inflammation eventually leading to death in humans (Channappanavar et al., 2016). The replacement of the serine residue at position 358 in every known bat STING is highly significant, considering that this residue is absolutely conserved in all other mammals. Our data indicate that the S358 replacement in bat STING dampened, but did not abolish, the functionality of STING. This weakened, but not entirely lost, functionality may have a profound impact on bats' ability to maintain a balanced state of "effective response" without "over-response" against viruses. A similar observation was made for bat IFNα genes, which are fewer in number but constitutively expressed without stimulation (Zhou et al., 2016). Above all, this discovery helps to further our understanding of the complex mechanisms by which bats fine-tune innate defense responses against insult by viral, bacterial, or host cytosolic DNA.

CONTACT FOR REAGENT AND RESOURCE SHARING: Further information and requests for reagents may be directed to, and will be fulfilled by, the Lead Contact, Peng Zhou (peng.zhou@wh.iov.cn).

Ethics Statement: All animal experiments were approved by the Institutional Animal Ethical Committee of Wuhan Institute of Virology, Chinese Academy of Sciences (serial number WIV05201603).

Bat and Mouse Experiments: Wild-type adult BALB/c mice between 8 and 10 weeks of age were purchased from Beijing Vital River and cared for at a specific-pathogen-free (SPF) facility. Adult Myotis davidii and Rhinolophus sinicus captured from Taiyi cave (Xianning, China) and Pteropus alecto bats trapped in Southern Queensland, Australia, were euthanized and dissected directly. Mice and bats were used without gender preference. Splenocytes of bats and mice were prepared as previously described (Zhou et al., 2011). Briefly, spleen cells were obtained by pressing spleen tissue through a cell strainer with a syringe plunger, and splenocytes were collected with Lymphoprep (Axis-Shield) following the manufacturer's instructions.

Viruses and Cell Lines: HEK293T cells were maintained in DMEM + 10% FCS (Gibco). Bat PakiT03 cells were maintained in DMEM/F-12 + 10% FCS (Gibco). Splenocytes of bats and mice were maintained in RPMI-1640 + 10% FCS (Gibco). All cells were cultured at 37°C in 5% CO2. Cell lines were tested free of mycoplasma contamination and authenticated by microscopic morphologic evaluation. Sendai virus (SeV) Cantell strain was propagated in 10-day-old embryonated SPF chicken eggs at 37°C for 48 h. The HSV expressing luciferase was generously provided by Chun-Fu Zheng at the Institutes of Biology and Medical Sciences, Soochow University, China.

Plasmid Construction and Transfection: The STING sequence of Homo sapiens was amplified from an HA-STING plasmid, generously provided by Yan-Yi Wang, Wuhan Institute of Virology, CAS, China. Myotis davidii, Pteropus alecto and Rhinolophus sinicus STING sequences were amplified from cDNA of the corresponding bat spleen tissues and cloned into pCAGGS with a C-terminal S-tag.
STING of Rhinolophus sinicus was also cloned into pQCXIH with a C-terminal GFP-tag. Various mutants were generated using the QuikChange site-directed mutagenesis kit (Stratagene) (see Table S2 for primers). Plasmids were verified by sequencing before transfection using Lipofectamine 3000 (Thermo) following the manufacturer's instructions.

Virus Infection and Quantification: PakiT03 cells were infected with HSV-luciferase at an MOI of 0.1 at 37°C for 1 hr. Cells were then washed with warm D-Hanks and cultured in complete DMEM/F-12. At 24 hr post-infection, the cells were washed with cold PBS, and HSV-luciferase was quantified with the Luciferase Assay System (Promega) according to the manufacturer's instructions.

Construction of HEK293T Cells Stably Expressing STING: To generate retroviral vectors for transduction of Rhinolophus sinicus STING into HEK293T cells, GP2-293 packaging cells were plated in 6-well plates overnight at a density of 4 × 10^5/ml and transfected with 1.5 μg pQCXIH-R.sinicus-STING and 1.5 μg pVSV-G using Lipofectamine 3000 (Thermo). At 6 hr post-transfection, the medium was replaced with fresh medium. At 48 hr post-transfection, the supernatants containing the retrovirus were collected, filtered through a 0.45 μm filter, and used to infect HEK293T cells. At 72 hr post-infection, the transduced HEK293T cells were selected with 10 μg/ml hygromycin.

Dual Luciferase Assay: Plasmids in optimized amounts (100 ng pCAGGS-STING; 200 ng pcDNA3.1-cGAS; 100 ng IFNβ-luc; and 10 ng pRL-TK, internal control from Promega) were transfected into HEK293T cells in 24-well plates. Twenty-four hours later, luciferase activity was determined with the Luciferase Assay System (Promega). The ratio of firefly to renilla luciferase signal was calculated and then normalized to the wells transfected with the empty pCAGGS vector.

Quantification of Gene Expression by qRT-PCR: Total RNA was extracted using the RNeasy Mini Kit (QIAGEN), followed by cDNA reverse transcription using PrimeScript RT Master Mix (Takara). Gene expression was determined with SYBR Premix Ex Taq II (Tli RNaseH Plus) (Takara) on a StepOnePlus system. Primers are listed in Table S2.

Western Blot: HEK293T or PakiT03 cells were washed twice with cold PBS, then lysed in 1% NP-40 buffer supplemented with complete protease inhibitor cocktail (Roche) for 30 min on ice. Cell lysates were mixed with SDS loading buffer and denatured at 95°C for 5 min. Equal amounts of denatured lysates were subjected to SDS-PAGE and transferred to PVDF membranes. The membranes were blocked with 5% BSA for 1 hr. The following primary antibodies were used at 1:1000 dilution: anti-phospho-IRF3 (CST, 4947S), anti-phospho-TBK1 (CST, 5483S), anti-TBK1 (Abcam, ab40676), and anti-IRF3 (Proteintech, 11312-1-AP). The following primary antibodies were used at 1:3000 dilution: anti-S-tag (Abcam, ab184223) and anti-β-actin (Proteintech, 60008-1-Ig). After an overnight incubation with primary antibodies, the membranes were washed three times with TBS supplemented with 0.1% Tween-20 (TBST) and then incubated with HRP-conjugated goat anti-rabbit or goat anti-mouse secondary antibody diluted in TBST. Membranes were then washed three times and exposed using SuperSignal West Femto substrate (Thermo Scientific).

Splenocyte Stimulation and RNA Sequencing: 2′,3′-cGAMP and poly I:C (InvivoGen) were transfected into splenocytes at 1 μg/ml with Lipofectamine 3000, or cells were infected with 100 hemagglutinin units (HAU)/ml of SeV. Six hours later, RNA was extracted and gene expression was determined by RT-qPCR.
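The infection step above ("MOI of 0.1") implies a routine inoculum calculation: the required volume of virus stock is cell number times MOI divided by titer. A minimal sketch follows; the titer and cell number are invented placeholders, not values from this paper.

```python
# Inoculum volume for a requested multiplicity of infection (MOI).
def inoculum_volume_ml(n_cells, moi, titer_pfu_per_ml):
    """Return ml of virus stock needed: cells * MOI / titer."""
    return n_cells * moi / titer_pfu_per_ml

cells_per_well = 4e5   # e.g., one well of a 6-well plate (hypothetical)
titer = 1e7            # hypothetical HSV-luciferase stock titer, PFU/ml
vol = inoculum_volume_ml(cells_per_well, 0.1, titer)
print(f"add {vol * 1000:.0f} ul of stock per well for MOI 0.1")
```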
RNA-seq was conducted with 150-bp paired-end reads on an Illumina HiSeq2000 sequencer.

Sequences and NGS Data Analysis: STING sequences of Eidolon helvum, Pteronotus parnellii, and Rhinolophus ferrumequinum were predicted by Genewise using the Pteropus alecto STING protein as reference. Other STING sequences were either downloaded from GenBank or assembled from transcriptome data, in which case the STING reads were picked out by SRA-BLAST using annotated GenBank bat STING (Eptesicus fuscus, Myotis davidii, etc.) as query and assembled with SeqMan (Lasergene). The GenBank or SRA accession numbers are listed in Table S1. The alignment of STING was generated with MEGA4 and edited with GeneDoc. For the analysis of RNA-seq data of splenocytes, read counts were calculated by RSEM, and differential expression analysis was conducted with DESeq at FDR (false discovery rate) < 0.05.

QUANTIFICATION AND STATISTICAL ANALYSIS — Immunoblot Band Quantitation: Quantification of band intensities was performed using ImageJ (version 1.50i). Statistical Analysis: Data analyses were performed using GraphPad Prism 6.0 software. All data are shown as mean ± SD. Statistical analysis was performed using a two-tailed Student's t test at the 95% confidence level; p values less than 0.05 were considered statistically significant. The "n" represents the number of animals, cells, or experimental replicates and is specified in the figure legends.

DATA AND SOFTWARE AVAILABILITY: The accession number for the RNA-seq data of splenocytes treated with cGAMP reported in this paper is NCBI Short Read Archive database, SRA: PRJNA393936. The accession numbers for the Eidolon helvum, Rhinolophus ferrumequinum, and Pteronotus parnellii STING nucleotide sequences are GenBank: MF174844–MF174846.
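The "FDR < 0.05" DEG calls above are a multiple-testing correction over thousands of genes. A minimal hand-rolled sketch of the Benjamini-Hochberg procedure, the standard FDR control underlying tools like DESeq, is given below; the gene names and p-values are invented placeholders, and this is an illustration of the statistical idea, not the DESeq implementation.

```python
# Benjamini-Hochberg: reject all hypotheses up to the largest rank k with
# p_(k) <= k * alpha / m (p-values sorted ascending).
def benjamini_hochberg(pvals, alpha=0.05):
    """Return indices of hypotheses rejected at FDR level alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    cutoff = 0
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= rank * alpha / m:
            cutoff = rank  # keep the largest qualifying rank
    return set(order[:cutoff])

genes = ["Irf7", "Isg15", "Mx1", "Oasl2", "Actb", "Gapdh"]
p = [1e-6, 3e-4, 2e-3, 0.012, 0.40, 0.71]
hits = benjamini_hochberg(p)
print("DEGs at FDR<0.05:", [g for i, g in enumerate(genes) if i in hits])
```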
Phenomenological aspects of 10D SYM theory with magnetized extra dimensions

We present a particle physics model based on a ten-dimensional (10D) super Yang-Mills (SYM) theory compactified on magnetized tori preserving four-dimensional N = 1 supersymmetry. The low-energy spectrum contains the minimal supersymmetric standard model with hierarchical Yukawa couplings caused by a wavefunction localization of the chiral matter fields due to the existence of magnetic fluxes, allowing a semi-realistic pattern of the quark and the lepton masses and mixings. We show supersymmetric flavor structures at low energies induced by a moduli-mediated and an anomaly-mediated supersymmetry breaking.

Introduction

The standard model (SM) of elementary particles is a quite successful theory, consistent with all the experimental data obtained so far with great accuracy. There are, however, many free parameters that cannot be determined theoretically, making the model less predictive. Among these parameters, the Yukawa coupling constants in particular appear awfully hierarchical, as required to explain the observed masses and mixing angles of the quarks and the leptons. It is argued that some flavor symmetries are helpful for understanding such a hierarchical structure (see, for a review, Ref. [1]). Another interesting possibility is a quasi-localization of matter fields in extra dimensions, where the hierarchical couplings are obtained from the overlap integrals of their localized wavefunctions [2]. It is also suggested that the former flavor symmetries are realized geometrically as a consequence of the latter wavefunction localization in extra dimensions [3,4].

The SM does not describe gravitational interactions of elementary particles, which could play an important role at the very beginning of our universe. Superstring theories in ten-dimensional (10D) spacetime are almost the only known candidates that can treat gravitational interactions at the quantum level. These theories possess few free parameters and are potentially more predictive than the SM. Supersymmetric Yang-Mills (SYM) theories in various spacetime dimensions appear as low-energy effective theories of superstring compactifications with or without D-branes. Thus, it is an interesting possibility that the SM is embedded in one of such SYM theories, that is, that the SM is realized as a low-energy effective theory of the superstrings. In such string model building, how to break the higher-dimensional supersymmetry and obtain a chiral spectrum is the key issue. String compactifications on a Calabi-Yau (CY) space provide a general procedure for this purpose. However, the metric of a generic CY space is hard to determine analytically, which makes phenomenological studies qualitative, but not quantitative. It is quite interesting that even simple toroidal compactifications, once magnetic fluxes are introduced in the extra dimensions, induce chiral spectra [6,7] in higher-dimensional SYM theories. The higher-dimensional supersymmetry, such as N = 4 counted in terms of supercharges in four-dimensional (4D) spacetime, is broken by the magnetic fluxes down to 4D N = 0, 1 or 2, depending on the configuration of fluxes. The number of chiral zero-modes is determined by the number of magnetic fluxes. A phenomenologically attractive feature is that these chiral zero modes localize toward different points in the magnetized extra dimensions.
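The zero-mode counting stated above is simple enough to put in a short sketch: on a single magnetized T^2, a field feeling M units of flux has |M| degenerate chiral zero modes, with the 4D chirality fixed by the sign of M. The code below is illustrative only; the flux value 3 is a placeholder and not the assignment of the model constructed later.

```python
# Chiral zero modes on one magnetized T^2: multiplicity |M|, chirality sign(M).
def chiral_zero_modes(M):
    """Return (number of zero modes, chirality) for flux quantum M on a T^2."""
    chirality = (M > 0) - (M < 0)   # +1, -1, or 0 (no flux: vector-like)
    return abs(M), chirality

# A bifundamental feeling flux only on one torus gets its generation number
# from that torus alone (as in the model below, where flavor sits on r = 1):
n, chi = chiral_zero_modes(3)
print(f"{n} generations, chirality {chi:+d}")
```

A flux-free torus contributes a vector-like pair instead of a chiral mode, which is why the model below needs an additional orbifold projection on the flux-free tori.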
The overlap integrals of the localized wavefunctions yield hierarchical couplings in the 4D effective theory of these zero modes, which could explain, e.g., the observed hierarchical masses and mixing angles of the quarks and the leptons [8]. Furthermore, higher-order couplings can also be computed as overlap integrals of wavefunctions [9]. A theoretically attractive point here is that many peculiar properties of the SM, such as the 4D chirality, the number of generations, the flavor symmetries [3,10,11], and the potentially hierarchical Yukawa couplings, could all be determined by the magnetic fluxes. Moreover, if a 4D N = 1 supersymmetry remains, a supersymmetric standard model could be realized below the compactification scale, with many attractive features beyond the SM, such as the lightest supersymmetric particle as a dark matter candidate, and so on.

In our previous work [12], we presented a 4D N = 1 superfield description of 10D SYM theories compactified on magnetized tori which preserve the N = 1 supersymmetry, and derived the 4D effective action for massless zero-modes written in the N = 1 superspace. We further identified the moduli dependence of the effective action by promoting the Yang-Mills (YM) gauge coupling constant g and the geometric parameters R_i and τ_i to dilaton, Kähler and complex-structure moduli superfields, which allows an explicit estimation of the soft supersymmetry breaking parameters in the supersymmetric SM caused by moduli-mediated supersymmetry breaking. The resulting effective supergravity action would be useful for building phenomenological models and for analyzing them systematically.

Motivated by the above arguments, in this paper we construct a particle physics model based on 10D SYM theory compactified on the three factorizable tori T^2 × T^2 × T^2 with magnetic fluxes in the YM sector. We search for a phenomenologically viable flux configuration that induces a 4D chiral spectrum including the minimal supersymmetric standard model (MSSM), based on the effective action written in N = 1 superspace. For a flux configuration that realizes a realistic pattern of the quark and the lepton masses and their mixing angles, we further estimate the sizes of the supersymmetric flavor violations caused by moduli-mediated supersymmetry breaking.

The sections are organized as follows. In Sec. 2, the superfield description of the 10D SYM theory is briefly reviewed based on Ref. [12], which allows the systematic introduction of magnetic fluxes in extra dimensions preserving the N = 1 supersymmetry. In Sec. 3, we construct a model that contains the spectrum of the MSSM, in which most of the massless exotic modes are projected out due to the existence of the magnetic fluxes and a certain orbifold projection. In Sec. 4, we numerically search for a location in the moduli space of the model where a realistic pattern of the quark and the lepton masses and their mixing angles is obtained. Then, assuming moduli-mediated supersymmetry breaking, we estimate the magnitude of the mass insertion parameters representing typical sizes of various flavor changing neutral currents (FCNC) in Sec. 5. Sec. 6 is devoted to conclusions and discussions. In Appendix A, the Kähler metrics and the holomorphic Yukawa couplings are exhibited for the MSSM matter fields in the 4D effective theory.

2 The 10D SYM theory in N = 1 superspace
Based on Ref. [12], in this section we review the compactification of the 10D SYM theory on 4D flat Minkowski spacetime times a product of three factorizable tori T^2 × T^2 × T^2, and a superfield description suitable for such a compactification with magnetic fluxes in each torus preserving 4D N = 1 supersymmetry. The geometric (torus) parameter dependence is shown explicitly in this procedure, which is important for determining the couplings between the YM and moduli superfields in the 4D effective action for chiral zero-modes.

The 10D SYM theory is described by the standard action

S = ∫ d^10 X √(−G) (1/g^2) Tr[ −(1/4) F^{MN} F_{MN} + (i/2) λ̄ Γ^M D_M λ ],

where g is the 10D YM gauge coupling constant and the trace is performed over the adjoint representation of the YM gauge group. The 10D spacetime coordinates are denoted by X^M, and the vector/tensor indices M, N = 0, 1, …, 9 are lowered and raised by the 10D metric G_{MN} and its inverse G^{MN}, respectively. The YM field strength F_{MN} and the covariant derivative D_M are given by F_{MN} = ∂_M A_N − ∂_N A_M + i[A_M, A_N] and D_M λ = ∂_M λ + i[A_M, λ] for the 10D vector (gauge) field A_M and the 10D Majorana-Weyl spinor field λ. The spinor field λ satisfies the 10D Majorana and Weyl conditions, λ^C = λ and Γλ = +λ, where λ^C denotes the 10D charge conjugate of λ and Γ is the 10D chirality operator.

The 10D spacetime (real) coordinates X^M = (x^μ, y^m) are decomposed into the 4D Minkowski coordinates x^μ, μ = 0, 1, 2, 3 (with μ = 0 the time component), and the six-dimensional (6D) extra-space coordinates y^m, m = 4, …, 9. The 10D vector field is similarly decomposed as A_M = (A_μ, A_m). The 10D background metric is ds^2 = η_{μν} dx^μ dx^ν + g_{mn} dy^m dy^n, where η_{μν} = diag(−1, +1, +1, +1). Because we consider a torus compactification of the internal 6D space, identifying y^m ∼ y^m + 2, and the 6D torus is a product of three factorizable tori T^2 × T^2 × T^2, the extra 6D metric g_{mn} is block-diagonal with 2 × 2 diagonal submatrices

g^{(i)} = (2π R_i)^2 ( 1 , Re τ_i ; Re τ_i , |τ_i|^2 ),  i = 1, 2, 3.

The real and complex parameters R_i and τ_i determine the size and the shape of the i-th torus T^2, respectively. The area A^{(i)} of the i-th torus is determined by these parameters as A^{(i)} = (2π R_i)^2 Im τ_i. The complex coordinates z^i = (1/2)(y^{2+2i} + τ_i y^{3+2i}), i = 1, 2, 3, are extremely useful for describing the action in 4D N = 1 superspace, with the corresponding complex vector components A_i defined accordingly. In the complex coordinates, the torus boundary conditions read z^i ∼ z^i + 1 and z^i ∼ z^i + τ_i, and the metric is h_{ij̄} = 2(2π R_i)^2 δ_{ij̄}, satisfying 2 h_{ij̄} dz^i dz̄^j̄ = g_{mn} dy^m dy^n = ds^2_{6D}, with the vielbein e_i = √2 (2π R_i) on each torus; Roman indices represent the local Lorentz frame. The Italic (Roman) indices i, j, … are lowered and raised by the metric h_{ij̄} and its inverse (by δ_{ij̄} and its inverse), respectively.

The 10D fields are decomposed into a 4D N = 1 vector multiplet V and three chiral multiplets φ_i, which are expressed as a vector superfield V and chiral superfields φ_i, respectively, where θ and θ̄ are the Grassmann coordinates of the 4D N = 1 superspace. The 10D SYM action (1) can then be written in the N = 1 superspace [13] in terms of functions K, W and W_α of these superfields, where ε_{ijk} is the totally antisymmetric tensor with ε_{123} = 1, and D_α (D̄_α̇) is the supercovariant derivative (its conjugate) with 4D spinor index α (α̇). The term K_WZW represents a Wess-Zumino-Witten term, which vanishes in the Wess-Zumino (WZ) gauge. The equations of motion for the auxiliary fields D and F_i determine them in terms of the physical fields; the condition D = F_i = 0 then selects supersymmetric vacua.
A trivial supersymmetric vacuum is given by ⟨A_i⟩ = 0, where the full N = 4 supersymmetry as well as the YM gauge symmetry is preserved. In the following, we select one of the nontrivial supersymmetric vacua where magnetic fluxes exist in the YM sector, and construct a particle physics model with a semi-realistic flavor structure of (s)quarks and (s)leptons caused by the wavefunction localization of chiral matter fields in the extra dimensions due to the magnetic fluxes.

The model building

We consider the 10D U(N) SYM theory on a supersymmetric magnetic background where the YM fields take 4D Lorentz-invariant and (at least) N = 1 supersymmetric VEVs ⟨A_i⟩ carrying Abelian magnetic fluxes and Wilson lines, denoted by N × N diagonal matrices M^(i) and ζ^(i), respectively. The magnetic fluxes satisfy Dirac's quantization condition, and the unbroken N = 1 supersymmetry requires vanishing D- and F_i-terms on this background, with D and F_i given by Eqs. (2) and (3), respectively. One of the consequences of nonvanishing magnetic fluxes is YM gauge symmetry breaking: if the fluxes take different values in diagonal blocks of sizes N_a with Σ_a N_a = N, the U(N) symmetry is broken down to ∏_a U(N_a).

On the N = 1 supersymmetric toroidal background (4) with the magnetic fluxes (5) as well as the Wilson-lines satisfying Eq. (8), the zero-modes (V^{n=0})_{ab} of the off-diagonal elements (V)_{ab} (a ≠ b) of the 10D vector superfield V obtain mass terms, while the diagonal elements (V^{n=0})_{aa} do not. We therefore keep the zero-modes (V^{n=0})_{aa}, which contain the 4D gauge fields of the unbroken gauge symmetry ∏_a U(N_a). On the other hand, depending on the signs of the fluxes M^(i)_{ab} and M^(j)_{ab} felt by an off-diagonal chiral superfield, (φ_j)_{ab} has degenerate zero-mode solutions while (φ_j)_{ba} has none, yielding a 4D supersymmetric chiral generation in the ab-sector [12]; the opposite choice, M^(j)_{ab} > 0 and M^(i)_{ab} < 0, yields a 4D chiral generation in the ba-sector. Therefore, we denote the zero-modes (φ^{n=0}_j)_{ab} with degeneracy N_ab by φ^{I_ab}_j, where I_ab labels the degeneracy, i.e., the generations. We normalize φ^{I_ab}_j by the 10D YM coupling constant g. For more details, see Ref. [12] and references therein.

Three generations induced by magnetic fluxes: We aim to realize a zero-mode spectrum in the 10D SYM theory compactified on magnetized tori that contains the MSSM, with the gauge symmetry SU(3)_C × SU(2)_L × U(1)_Y and three generations of quark and lepton chiral multiplets, by identifying those three generations with degenerate zero-modes of the chiral superfields φ^{I_ab}_j. For this purpose, we start from the 10D U(N) SYM theory with N = 8 and introduce block-diagonal magnetic fluxes (11) proportional to unit matrices 1_N of the respective block sizes, with all the nonvanishing entries taking different values from each other. These magnetic fluxes break the U(8) YM symmetry down to a product of unitary groups; we consider the case that this is further broken, by the Wilson lines, down to U(3) × U(1) × U(2) × U(1) × U(1), where again all the nonvanishing entries take different values from each other. The gauge symmetries SU(3)_C and SU(2)_L of the MSSM are embedded into the unbroken U(3) and U(2) factors, respectively. A combination of magnetic fluxes that yields three generations from the zero-mode degeneracy, and also full-rank Yukawa matrices from the 10D gauge interaction as we will see later, is given in Eq. (11), where the supersymmetry conditions (6) and (7) are satisfied as in Eq. (12). In this model, the chiral superfields Q, U, D, L, N, E, H_u and H_d, carrying the left-handed quarks, the right-handed up-type quarks, the right-handed down-type quarks, the left-handed leptons, the right-handed neutrinos, the right-handed electrons, and the up- and down-type Higgs bosons, respectively, are found among the φ^{I_ab}_i, arranged in matrices whose rows and columns correspond to a, b = 1, …, 5 = C, C′, L, R′, R″.
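With block-diagonal fluxes, the bifundamental between blocks a and b feels the difference M_ab = M_a − M_b and comes in |M_ab| copies on the magnetized torus. The sketch below illustrates how such an assignment can produce three matter generations and six Higgs pairs; the flux values are a hypothetical example chosen to reproduce the multiplicities quoted in the text, not the actual entries of Eq. (11), which are not reproduced here.

```python
# Hypothetical first-torus flux assignment per block (a = C, C', L, R', R''),
# chosen only to illustrate the multiplicity counting |M_a - M_b|.
M1 = {"C": 0, "C'": 0, "L": 3, "R'": -3, "R''": -3}

sectors = {
    "Q  (C , L )": ("C", "L"),     # left-handed quarks
    "U  (C , R')": ("C", "R'"),    # right-handed up quarks
    "L  (C', L )": ("C'", "L"),    # left-handed leptons
    "Hu (L , R')": ("L", "R'"),    # up-type Higgs
}
for name, (a, b) in sectors.items():
    m_ab = M1[a] - M1[b]
    print(f"{name}: M_ab = {m_ab:+d} -> {abs(m_ab)} zero modes")
```

Running this gives 3 copies for each matter sector and 6 for the Higgs sector, matching the 3 + 3 + 6 pattern described next; whether such an assignment also satisfies the supersymmetry conditions depends on the moduli, as in Eq. (12).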
The indices I, J = 1, 2, 3 and K = 1, …, 6 label the zero-mode degeneracy, i.e., the generations. Therefore, three generations of Q, U, D, L, N, E and six generations of H_u and H_d are generated by the magnetic fluxes (11), corresponding to the zero-mode multiplicities in Eq. (14). The zero entries of the matrices in Eq. (13) represent components eliminated by the chirality projection caused by the magnetic fluxes. Because some vanishing fluxes are inevitable in Eq. (11) in order to realize three generations of quarks and leptons with full-rank Yukawa coupling matrices, some of the M^(i)_{ab} in Eq. (14) vanish, which gives rise to certain massless exotic modes Ξ^(r)_{ab} as well as massless diagonal components Ω^(r)_a, i.e., the so-called open string moduli, all of which feel zero flux. These exotics are severely constrained by many experimental data at low energies. In the following, we show that most of the massless exotic modes can be eliminated by a certain orbifold projection acting on the r = 2, 3 tori, that is, by a sort of magnetized orbifold [15].

Exotic modes and Z_2-projection: Three generations of quarks and leptons are generated in the first torus, r = 1, by the magnetic fluxes (11). The number of degenerate zero-modes (generations) would be changed by an orbifold projection [15]. We therefore assume the T^6/Z_2 orbifold where the Z_2 acts on the second and third tori, r = 2, 3, so as to eliminate only the exotic modes without affecting the generation structure of the MSSM matter fields realized by the magnetic fluxes (11). The Z_2 transformation of the 10D superfields V and φ_i is then assigned, for all m = 4, 5 and n = 6, 7, 8, 9, such that V and φ_1 are Z_2-even while φ_2 and φ_3 are Z_2-odd, where P is a projection operator acting on the YM indices satisfying P^2 = 1_N. The φ_2 and φ_3 fields carry the minus sign under the Z_2 reflection because they are the vector fields A_i (i = 2, 3) on the Z_2 orbifold plane. Note that the orbifold projection (15) respects the N = 1 supersymmetry preserved by the magnetic fluxes (11), because the Z_2 parities are assigned to the N = 1 superfields V and φ_i. For the matter profile (13) caused by the magnetic fluxes (11), we find that a suitable Z_2-projection operator P removes most of the massless exotic modes Ξ^(r)_{ab}. The resulting matter content on the orbifold T^6/Z_2 consists of Q^I, U^I, D^I, L^I, N^I, E^I, H^K_u and H^K_d, where I, J = 1, 2, 3 and K = 1, …, 6 label the generations as before. There still remain some massless exotic modes Ξ^(r)_{ab}; that is one of the open problems in the T^6/Z_2 magnetized orbifold model. In the following phenomenological analyses, these exotic modes are assumed to become massive through some nonperturbative effects or higher-order corrections, so that they decouple from the low-energy physics. Due to the orbifold projection (15), nonvanishing Wilson-line parameters in Eq. (10) are possible only in the first torus, r = 1. We denote the differences of these nonvanishing Wilson-line parameters as in Eq. (16); their numerical values are determined later phenomenologically.

Anomalous U(1)s and the hypercharge: Finally in this section, we discuss the U(1) gauge fields and their charges in the low-energy spectrum. As shown above, most of the exotic matter fields become massive on the orbifold background (for some of them this is assumed), and the low-energy spectrum of this model is then MSSM-like matter with additional pairs of up- and down-type Higgs doublets. The gauge group is SU(3)_C × SU(2)_L × U(1)^5, and we denote the U(1)_X charges by Q_X for X = a, b, c, d, e.
The particle contents and their gauge charges are summarized in Table 1. As is well known, there are two non-anomalous local and global U(1) symmetries in the MSSM, denoted U(1)_Y and U(1)_{B−L} respectively, where Y represents the hypercharge and B (L) is the baryon (lepton) number. In our model, U(1)_{B−L} is a local symmetry, and the hypercharge Q_Y can be written as a linear combination of the U(1)_X charges containing an arbitrary coefficient α, the α-dependent piece spanning an additional U(1)_D. It is easy to check that these three U(1) symmetries, U(1)_Y, U(1)_{B−L} and U(1)_D, are anomaly-free with respect to both the mixed U(1) and the non-Abelian gauge groups. As for the U(1)_D gauge group, no chiral matter is charged under it, so the U(1)_D gauge field can decouple. In the following, we assume that U(1)_{B−L} is spontaneously broken at a high energy scale. There is another U(1) gauge symmetry with the property of a Peccei-Quinn symmetry, U(1)_PQ, whose charges for the matter and Higgs fields are −1/2 and +1, respectively. This U(1)_PQ symmetry prohibits the so-called µ-term. However, the U(1)_PQ symmetry, as well as the remaining fifth U(1) symmetry, is anomalous; we therefore assume that all the gauge fields of the anomalous U(1)s become massive via, e.g., the Green-Schwarz mechanism [17] and decouple from the low-energy physics. It would then be interesting to survey the possibility of a dynamical generation of the µ-term, although we simply assume its existence in the following phenomenological analysis.

Flavor structures of (s)quarks and (s)leptons: In this section we show that a semi-realistic pattern of quark and lepton mass matrices is realized at a certain point of the (tree-level) moduli space of our model. The hierarchical structure of the Yukawa couplings is achieved by the wavefunction localization of the matter fields in the extra dimensions, whose localization profiles are completely determined by the magnetic fluxes (14). More interestingly, if we embed the 10D SYM theory, the starting point of our model, into 10D supergravity, the flavor structure of the superparticles induced by moduli-mediated supersymmetry breaking is also fully determined by the wavefunction profiles. Therefore, the supersymmetric flavor structure of the model can be analyzed based on the effective supergravity action derived in the systematic way proposed in Ref. [12]. The 4D effective action with N = 1 local supersymmetry is generally written in terms of 4D N = 1 conformal supergravity [18], where K, W and f_a are the effective Kähler potential, the superpotential and the gauge kinetic functions, respectively, as functions of the light modes as well as the moduli, and the chiral superfield C plays the role of the superconformal compensator. Here and hereafter, we work in units where the 4D Planck scale is unity.

The MSSM sector in the 4D effective theory: The effective Kähler potential K, the superpotential W and the gauge kinetic functions f_a (a = 1, 2, 3) for the MSSM sector of our model on the T^6/Z_2 magnetized orbifold are found, at the leading order, in the 4D effective action (17) as in Eq. (18) [12], where Q^I and Φ_m symbolically represent the MSSM matter and the moduli chiral superfields, respectively, the subscript r = 1, 2, 3 labels the r-th two-dimensional torus T^2 among the factorizable three tori T^2 × T^2 × T^2, and traces over the YM indices are implicit.
The explicit expressions of the moduli Kähler potential K^(0)(Φ̄_m̄, Φ_m) and the matter Kähler metrics Z^(Q), together with the holomorphic Yukawa couplings, are collected in Appendix A. We assume a certain mechanism of moduli stabilization and supersymmetry breaking that fixes the VEVs of the moduli superfields, Eq. (20). Note that these VEVs determine the 10D parameters g, A^(i) and τ_i through relations such as Re s = g^−2 [7]; accordingly, S, T_r and U_r are called the dilaton, the Kähler moduli and the complex structure moduli, respectively, in the 4D effective theory. In the following analyses, the numerical values of the moduli VEVs as well as of the Wilson-lines are scanned phenomenologically.

Quark and lepton masses and mixings: First we analyze the flavor structure of the SM sector of our model. The canonically normalized Yukawa couplings between the three generations of quarks or leptons and the six generations of Higgs doublets are obtained from the holomorphic Yukawa couplings by rescaling with the matter Kähler metrics, where Y^(Q)_{ĪJ} represents the superspace wavefunction coefficient of Q̄^Ī Q^J in the superspace action, related to the Kähler metric in the standard way. The VEVs of the Higgs fields (24) and of the geometric moduli (20), as well as the Wilson-line parameters (16), then yield a semi-realistic pattern of the quark and charged-lepton masses as well as the CKM matrix at the electroweak (EW) scale, shown in Table 2. The normalization factors N_{Hu} = 1/√(2.7^2 + 1.3^2) and N_{Hd} = 1/√(2(0.1^2 + 5.8^2)) in Eq. (24) are factored out just for later convenience in Eq. (27). Here, we assume some nonperturbative effects [22] and/or higher-dimensional operators that effectively generate the supersymmetric mass terms (26). Because the VEVs of these Higgs fields, shown in Eq. (24), generate a semi-realistic pattern of the quark and lepton masses and mixing angles, we consider the case that the supersymmetric mass parameters µ_{KL} are aligned such that Eq. (27) is satisfied with unitary matrices U_{Hu,d}, where K̄, L̄ = 1, 2, …, 6 label the supersymmetric mass eigenstates diagonalizing µ_{KL}, and the VEVs H^K_{u,d} represent those shown in Eq. (24). In this case, the five Higgs doublets other than H^{K̄=1}_{u,d} decouple from the light modes due to the heavy supersymmetric masses µ_{K̄≠1}. In the following, the numerical value of the µ-parameter µ ≡ µ_{K̄=1} is determined such that the EW symmetry is broken successfully, yielding the observed masses of the W and Z bosons; the masses and mixing angles of the quarks and leptons shown in Tables 2 and 3 are then obtained.

[Table 2 caption fragment: …specified by the magnetic fluxes (11), the Wilson-lines and the VEVs of the moduli (25) and the Higgs fields (24). The experimental data [21] are also shown. All the mass scales are measured in units of GeV.]

We select the overall magnitudes of t_r so that the compactification scale, i.e., the mass scale of the lightest Kaluza-Klein mode, becomes as high as M_GUT, and their ratios are defined so as to preserve the supersymmetry conditions (12). Here, the running of the parameters from M_GUT to the EW scale is evaluated with the one-loop renormalization group (RG) equations of the MSSM. From Table 2, we find that the observed hierarchies among the three generations of quarks and charged leptons are realized even with the above non-hierarchical VEVs of the fields, Eqs. (24) and (25). It is quite interesting and suggestive that the complicated flavor structure of our real world could be realized at a certain point in the (tree-level) moduli space of the 10D SYM theory, whose action is simply given by Eq. (1) at the leading order in the rigid limit.
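The canonical normalization described above has a simple numerical form: physical Yukawa couplings are the holomorphic ones rescaled by the square roots of the relevant Kähler metric factors. The sketch below assumes diagonal, flavor-independent metrics (as this model indeed has, see the conclusions) and uses invented coupling values; it is an illustration of the rescaling, not the model's Eq. (21).

```python
import numpy as np

# y_IJ = lambda_IJ / sqrt(Z_L[I] * Z_R[J] * Z_H): canonical normalization
lam = np.array([[1.0e-4, 3.0e-4, 1.0e-3],
                [3.0e-4, 2.0e-2, 1.0e-2],
                [1.0e-3, 1.0e-2, 9.0e-1]])  # hypothetical holomorphic couplings

Z_L = np.array([0.8, 0.8, 0.8])  # left-handed matter metric (flavor-blind)
Z_R = np.array([1.2, 1.2, 1.2])  # right-handed matter metric
Z_H = 0.9                         # Higgs metric

y = lam / np.sqrt(np.outer(Z_L, Z_R) * Z_H)
eigen_couplings = np.linalg.svd(y, compute_uv=False)  # ~ mass eigen-couplings
print(y.round(5))
print("hierarchy of eigen-couplings:", eigen_couplings.round(5))
```

Because the Kähler metrics here carry no flavor indices, the entire mass hierarchy originates from the holomorphic couplings, i.e., from the wavefunction overlaps on the magnetized torus.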
In addition, if we assume that some nonperturbative effects [22] or higher-dimensional operators effectively generate Majorana masses for the right-handed neutrinos N^J in the superpotential, Eq. (29), a semi-realistic pattern of the neutrino masses and the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) lepton mixing matrix [20] is obtained at the EW scale, shown in Table 3.

[Table 3 caption: Numerical values of the neutrino masses (m_ν1, m_ν2, m_ν3) as well as the absolute values of the elements of the PMNS matrix V_PMNS at the EW scale, evaluated at the same sample point in the moduli space as in Table 2 but with the Majorana masses (29). The experimental data [21] are also shown. All the mass scales are measured in units of GeV.]

Soft supersymmetry breaking terms: The low-energy features of the superparticles in our model are governed by the soft supersymmetry breaking parameters, namely, the gaugino masses M_a, the scalar masses (m^2_Q)_{ĪJ} and the scalar trilinear couplings of the canonically normalized scalar fields, which are the lowest components of the chiral superfields Q^I in the θ and θ̄ expansion. Note that only the direction with K̄ = 1 remains light in the Higgs sector of H^K_u and H^K_d. The so-called B-term also appears as a soft supersymmetry breaking term; in the following, its value is determined numerically such that the EW symmetry is broken successfully. The explicit moduli dependence of the Kähler potential and the superpotential (18) in the MSSM sector allows us to determine the moduli-mediated contributions [23] to the soft supersymmetry breaking parameters (induced by nonvanishing F-components of S, T_r and U_r) as well as the anomaly-mediated one [24] (induced by a nonvanishing F-component of C). These contributions are summarized in Eq. (30) [25], where γ_{Q^J} is the anomalous dimension of Q^J, F^m represents the F-components of the moduli superfields, and C_0 and F^C are the lowest and θ^2 components of C, respectively, in the θ and θ̄ expansion. Here we fix the dilatation symmetry by C_0 = exp(K|_{θ=θ̄=0}/6), which corresponds to the Einstein frame. In the following, we study the phenomenological aspects of our model at low energies in the case that the above soft parameters are dominated by the moduli- and anomaly-mediated contributions, while other contributions (such as the gauge-mediated one, which is further model dependent) are negligible, assuming a moduli stabilization and supersymmetry breaking mechanism outside the MSSM sector that causes such a situation.

Phenomenological aspects at low energies: We have found that the three generations of quarks and leptons are obtained from the degeneracy of chiral zero-modes due to the magnetic fluxes (11), yielding, as a consequence, six generations of up- and down-type Higgs doublets. Furthermore, a semi-realistic pattern of the quark and charged-lepton masses and the CKM mixings can be realized, as shown in Table 2, at a certain point in the moduli space of the 10D SYM theory where the numerical values of the Higgs and moduli VEVs as well as the Wilson-line parameters are given by Eqs. (24) and (25). The parameters undetermined so far are the supersymmetry breaking order parameters F^m = {F^S, F^{T_r}, F^{U_r}} and F^C, mediated by the moduli and compensator chiral multiplets Φ_m = {S, T_r, U_r} and C, respectively. As a representative scale of the supersymmetry breaking, M_SB, we refer to the F-component of the dilaton superfield S, and define the ratios R_{T_r}, R_{U_r} and R_C of the remaining F-terms to it. Here we assume that the CP phases of F^S, F^{T_r}, F^{U_r} and F^C are all equal, so that R_{T_r}, R_{U_r} and R_C are real; then there is no physical CP violation due to the supersymmetry breaking terms. Otherwise, there would be a strong constraint on CP violation in the soft supersymmetry breaking terms.
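The structure of the gaugino masses discussed next — a universal modulus-mediated piece (since f_a = S for all groups) plus an anomaly-mediated piece proportional to each group's one-loop beta coefficient — can be sketched numerically. The MSSM one-loop coefficients b_a = (33/5, 1, −3) are standard; the overall normalization of the anomaly term and all input numbers below are illustrative assumptions, not the precise conventions of Eq. (30).

```python
import math

# Gaugino masses at the GUT scale: universal M0 from F^S (dilaton-only f_a)
# plus anomaly mediation ~ b_a * g^2/(16 pi^2) * F^C. Values are hypothetical.
b = {"U(1)_Y": 33.0 / 5.0, "SU(2)_L": 1.0, "SU(3)_C": -3.0}
g2 = 0.5        # hypothetical unified gauge coupling squared
M0 = 1000.0     # GeV, moduli-mediated piece
FC = 3.0e4      # GeV, compensator F-term scale (hypothetical)

for name, ba in b.items():
    Ma = M0 + ba * g2 / (16 * math.pi**2) * FC
    print(f"M_{name}: {Ma:7.1f} GeV")
```

Note that b_3 < 0 makes the anomaly piece partially cancel the gluino mass while raising the bino mass, compressing the gaugino spectrum — the characteristic "mirage" pattern discussed below.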
As shown in Eq. (18), the gauge kinetic functions depend only on the dilaton superfield S at the tree level of the (leading order) effective supergravity action. The gaugino masses shown in Eq. (30) are therefore determined by F^S and F^C at the compactification scale, independently of R_{T_r} and R_{U_r}. On the other hand, a lower bound on the gluino mass, M_3 ≳ 860 GeV, follows from the recent LHC data [27]. In the following, we analyze the phenomenological features of our model for M_SB = 1 TeV, satisfying the above condition. By varying the ratios R_{T_r}, R_{U_r} and R_C, we exhibit the phenomenological aspects of our model, especially the typical sizes of the flavor violations caused by superparticles. Hereafter, for numerical performance, we neglect all the Yukawa couplings except those involving only the third generation, such as y^(E)_33, when we evaluate the soft parameters and their RG running; we also reevaluate accordingly the RG running of the Yukawa couplings in this approximation, which was not adopted in the analysis of the quark and lepton masses and mixings.

Supersymmetric flavor violations: In models with low-energy supersymmetry breaking, flavor violations such as FCNCs caused by superparticles are severely constrained by experiments. As measures of such supersymmetric flavor violations in our model, we adopt the so-called mass insertion parameters [28], (δ^(Q)_{LL,RR,LR})_{IJ}, defined in terms of the off-diagonal elements of the sfermion mass matrices normalized by the average sfermion masses, with the Kähler metrics given in Eq. (22) and the trilinear coefficient matrices (a^(Q))_{IJ} entering the LR blocks.

[Figure caption fragment: (δ^(Q)_{LR,LL,RR})_{IJ} as a function of R_{U_1}, evaluated at the same sample point in the moduli space as in Table 2, with fixed values R_{U_{r≠1}} = 0.9, R_{T_r} = 1 and M_SB = 1 TeV.]

The experimental bound on (δ^(E)_{LR})_{12,21}, restricting FCNCs that enhance µ → eγ transitions [28], is very severe, as shown in Fig. 4. From this figure, we find that the value of F^{U_1} is severely restricted; that is, the amount of supersymmetry breaking mediated by the complex-structure (shape) modulus of the first torus, U_1, must be extremely small. This is expected from the fact that only U_1 distinguishes the flavors (i.e., the differences between the wavefunction profiles of the chiral matter fields on the first torus), as can be seen in the expressions of the Yukawa couplings (21). On the other hand, all the other moduli, S, T_r and U_{r≠1}, can mediate sizable supersymmetry breaking without conflicting with the experimental data on supersymmetric flavor violations. These flavor violations become smaller for larger values of M_SB due to the decoupling of the superparticles.

A typical superparticle spectrum: We show a typical superparticle spectrum at the EW scale by varying R_C with fixed values of M_SB, R_{U_r}, R_{T_r} and tan β in Fig. 5. The supersymmetry breaking scale is again fixed as M_SB = 1 TeV. Because the value of R_{U_1} is severely constrained as shown in Fig. 4, an allowed small value R_{U_1} = −0.05 is chosen, while the ratios R_{U_{r≠1}} and R_{T_r} do not affect the spectrum much, so R_{U_{r≠1}} = 0.9 and R_{T_r} = 1 (r = 1, 2, 3) are adopted here. As mentioned previously, the µ-parameter is fixed such that the EW symmetry is broken successfully, yielding the observed masses of the W and Z bosons. Curves describing some soft scalar masses in Fig. 5 terminate at R_C ∼ 1.6, because the EW symmetry is not broken successfully for R_C ≳ 1.6.
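To make the mass-insertion measure used above concrete: one rotates the soft mass-squared matrix into the super-CKM basis and normalizes its off-diagonal entries by the average diagonal sfermion mass squared. The sketch below uses invented matrices and a single small 1-2 rotation angle; it illustrates the definition, not the model's computed values.

```python
import numpy as np

# (delta_LL)_IJ = (V m^2 V^T)_IJ / <m^2> in the super-CKM basis.
m2 = np.diag([1.00e6, 1.02e6, 1.30e6])   # soft masses^2 in GeV^2 (hypothetical)
theta = 0.05                              # hypothetical 1-2 rotation to SCKM basis
c, s = np.cos(theta), np.sin(theta)
V = np.array([[c, s, 0.0],
              [-s, c, 0.0],
              [0.0, 0.0, 1.0]])

m2_sckm = V @ m2 @ V.T
avg = np.mean(np.diag(m2_sckm))
delta = m2_sckm / avg
print(f"(delta_LL)_12 = {delta[0, 1]:.2e}")  # to be compared with FCNC bounds
```

The example makes the key point of this section visible: off-diagonal deltas vanish when the soft masses are degenerate (flavor-blind mediation), and grow with both the mass splitting and the misalignment angle — which is why only the flavor-distinguishing modulus U_1 is dangerous here.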
A mediation mechanism of supersymmetry breaking with a sizable mixture of modulus and anomaly mediation, namely R_C ∼ O(1), is called mirage mediation [29]. In particular, the mass spectrum with R_C ∼ 1.6 in our model, where the gaugino masses and the scalar masses respectively degenerate at the TeV scale, resembles that of the TeV-scale mirage mediation model [30]; it has been pointed out that in this model the notorious fine-tuning between the supersymmetric and supersymmetry breaking parameters of the MSSM is dramatically ameliorated. As for the lightest superparticle in the above spectrum, we find that it is a neutralino; the eigenvalues of the neutralino and chargino masses (in GeV) are listed at the same sample point in the moduli space as in Table 2. So far, we have considered the scenario with low-energy supersymmetry breaking and selected a small value R_{U_1} = −0.05 to be consistent with the experimental data on supersymmetric flavor violations. If we consider larger values of M_SB, for which the flavor violations become smaller, the values of R_{U_1} can reside in a much wider region. However, there are two other factors restricting the values of R_{U_1} besides the FCNCs: one is related to the success of the EW symmetry breaking, and the other to obtaining non-tachyonic masses. We show the R_{U_1} dependence of the sfermion masses with R_C = 1.5 and M_SB = 1 TeV in Fig. 6. In the figure, some curves terminate at R_{U_1} ∼ ±0.2 because the EW symmetry is not broken successfully for |R_{U_1}| ≳ 0.2, as in the situation of Fig. 5. We find that R_{U_1} has to be in the range |R_{U_1}| < 0.2, where non-tachyonic masses are obtained. With other values of R_C and M_SB, it is possible that the non-tachyon condition is more severe than the EW-breaking one. In some typical cases, with (R_C, M_SB/TeV) = (0, 1), (0, 10), (1.5, 1) and (1.5, 10), we also find that the allowed region of the ratio R_{U_1}, where the EW symmetry is broken successfully and non-tachyonic masses are obtained, is roughly |R_{U_1}| < 0.2. This has to be kept in mind, especially when one considers larger values of M_SB.

[Figure caption fragment: (δ_{LL,RR})_{IJ} as a function of R_{U_1}, evaluated at the same sample point in the moduli space as in Table 2, with fixed values R_{U_{r≠1}} = 0.9, R_{T_r} = 1 and M_SB = 1 TeV.]

Finally, we comment on the Higgs sector. In our model, there are several possibilities for obtaining a lightest CP-even Higgs boson mass m_h ∼ 125 GeV, as indicated by the recent observations at the LHC [31]. First of all, as is well known, we can easily realize m_h ∼ 125 GeV with M_SB ∼ 10 TeV; the supersymmetric flavor violations are then much smaller than those studied above for M_SB = 1 TeV, and the bound on R_{U_1} from the FCNCs disappears, so that in this case |R_{U_1}| < 0.2 is suggested. The second possibility is to consider the next-to-MSSM in some extension of our model, where m_h ∼ 125 GeV could be realized with a low-scale supersymmetry breaking, M_SB ∼ 1 TeV (see, for a review, e.g., Ref. [32]); in this case, the supersymmetric flavor violations and the superparticle spectrum estimated above apply straightforwardly. Some analyses of such an extended Higgs sector in TeV-scale mirage mediation models are performed in Ref. [33].
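Identifying the neutralino LSP quoted above amounts to diagonalizing the standard 4 × 4 MSSM neutralino mass matrix in the (bino, wino, higgsino_d, higgsino_u) basis. The matrix form below is the textbook one; the input values (M1, M2, µ, tan β) are illustrative placeholders, not this model's sample-point output.

```python
import numpy as np

mz, sw2 = 91.19, 0.231
sw, cw = np.sqrt(sw2), np.sqrt(1 - sw2)
M1, M2, mu, tanb = 900.0, 1000.0, 800.0, 10.0   # hypothetical inputs, GeV
beta = np.arctan(tanb)
sb, cb = np.sin(beta), np.cos(beta)

# Standard MSSM neutralino mass matrix (real, symmetric for real parameters).
M = np.array([
    [M1,         0.0,       -mz*sw*cb,  mz*sw*sb],
    [0.0,        M2,         mz*cw*cb, -mz*cw*sb],
    [-mz*sw*cb,  mz*cw*cb,   0.0,      -mu      ],
    [ mz*sw*sb, -mz*cw*sb,  -mu,        0.0     ],
])
masses = np.sort(np.abs(np.linalg.eigvalsh(M)))
print("neutralino masses (GeV):", masses.round(1))
```

With M1 < M2 and |µ| comparable, the lightest eigenstate is a bino-higgsino admixture; which composition the actual sample point gives depends on the R_C-dependent running not modeled here.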
Besides these two, there is one more interesting possibility. Although we have worked with the 10D SYM theory in this paper, it would be straightforward to extend our model to SYM theories in a lower-than-ten-dimensional spacetime, or even to a mixture of SYM theories with different dimensionalities. For example, in type IIB orientifolds, our model can be adapted not only to magnetized D9-branes (a class of which is T-dual to intersecting D6-branes on the IIA side), but also to D5-D9 [34] and D3-D7 brane configurations with magnetic fluxes in the extra dimensions. An interesting possibility is that the SU(3)_C and SU(2)_L gauge groups of the MSSM originate from different branes with different dimensionalities; then the moduli-dependence of the gauge kinetic functions differs between the gauge groups, which can cause nonuniversal gaugino masses at the tree level of the effective supergravity action. This situation may allow m_h ∼ 125 GeV within the MSSM with a low-scale supersymmetry breaking and without severe fine-tuning [35]. Even in this case, the same flavor structures in the MSSM sector would be realized as those in the 10D model presented in this paper, if the two branes share a single magnetized torus T^2 with the same structure as the first torus (r = 1) of our 10D model. Furthermore, the mixed brane configurations may allow the introduction of supersymmetry-breaking branes sequestered from the visible sector, consistently with the flavor structure derived in this paper.

[Figure caption fragment: …as in Table 2, with fixed values R_{U_{r≠1}} = 0.9, R_{T_r} = 1 and M_SB = 1 TeV. The horizontal dashed lines in the lower panels represent a typical value of the experimental upper bound restricting FCNCs that enhance µ → eγ transitions [28].]

The model building based on such mixed brane configurations will be reported in separate papers [36].

Conclusions and discussions

We have constructed a three-generation model of quark and lepton chiral superfields based on a toroidal compactification of the 10D SYM theory with certain magnetic fluxes in extra dimensions preserving a 4D N = 1 supersymmetry. The low-energy effective theory contains the MSSM particle content, where the numbers of chiral generations are determined by the numbers of fluxes they feel, and most of the massless exotics can be projected out by the combined effect of the magnetic fluxes and a certain orbifold projection. We find that a semi-realistic pattern of the quark and charged-lepton masses and the CKM mixings is realized at a certain sample point in the (tree-level) moduli space of the 10D SYM theory, where the VEVs of the six Higgs doublets and of the geometric moduli, as well as the Wilson-line parameters, take reasonable numerical values without any hierarchies. In addition, it has been shown that a semi-realistic pattern of the neutrino masses and the PMNS mixings can be achieved at the same point of the moduli space, if we assume the existence of certain effective superpotential terms (28), which would be induced by nonperturbative effects and/or higher-order corrections. We have assumed the existence of such nonperturbative effects and/or higher-order corrections, making the remaining massless exotics heavy enough and also effectively generating the neutrino Majorana mass term (28) as well as the µ-term (26); further studies are required to find the concrete origin of these effects.

[Figure caption fragment: …as in Table 2, with fixed values R_{U_1} = −0.05, R_{U_{r≠1}} = 0.9, R_{T_r} = 1 and M_SB = 1 TeV.]
Thanks to the systematic dimensional reduction in 4D N = 1 superspace proposed in Ref. [12], the soft supersymmetry breaking parameters induced by moduli-mediated supersymmetry breaking are calculated explicitly. Because the flavor structures of our model are essentially determined by the localized wavefunctions of the chiral zero-modes, the 4D effective theory possesses flavor-dependent holomorphic Yukawa couplings and flavor-independent Kähler metrics for the MSSM matter fields. Under the assumption that moduli-mediated low-scale supersymmetry breaking dominates the soft supersymmetry breaking terms in the MSSM, we estimated the size of the supersymmetric flavor violations by analyzing the mass insertion parameters governing various FCNCs, scanning the supersymmetry breaking order parameters mediated by the dilaton, the geometric moduli and the compensator chiral superfields in the 4D N = 1 effective supergravity. The most stringent bound, coming from µ → eγ, constrains the size of the F-term of the chiral multiplet of the complex structure modulus of the first torus, where the SM flavor structure is generated via the wavefunction localization.

[Figure caption fragment: …as in Table 2, with fixed values R_C = 1.5, R_{U_{r≠1}} = 0.9, R_{T_r} = 1 and M_SB = 1 TeV.]

This result provides a strong insight into the required mechanism of moduli stabilization in our model. For instance, the mechanism of moduli stabilization proposed in Ref. [37] would be suitable, as it predicts vanishing F-terms of the complex structure moduli [25,38] at the leading order. It would therefore be interesting to study a mechanism of moduli stabilization and supersymmetry breaking at a Minkowski minimum [37] by minimizing the moduli and hidden-sector potential generated by some combination [39] of nonperturbative effects and a dynamical supersymmetry breaking [40]. In our model, there are several possibilities for realizing a lightest CP-even Higgs boson mass consistent with the recent observations at the LHC [31]. As mentioned at the end of Sec. 5, it is especially interesting to consider the D5-D9 [34] and D3-D7 brane configurations with magnetic fluxes in the extra dimensions. With such brane configurations, we will be able to build more realistic models in which we can concretely study the Higgs sector as well as the supersymmetry-breaking sector, the mechanism of moduli stabilization, and so on. Even in this case, the same flavor structures would be realized as in the 10D model presented in this paper, if the two branes share a single magnetized torus T^2 with the same structure as the first torus (r = 1) of our 10D model. The model building based on such mixed brane configurations will be reported in separate papers [36]. We have studied the tree-level 4D effective theory of the massless modes. Recently, massive modes were studied in Ref. [41]; they may have phenomenologically important effects on the 4D effective theory. For example, the Kähler potential, the superpotential and the gauge kinetic functions would receive threshold corrections due to the massive modes, and such corrections may affect the soft supersymmetry breaking terms. Thus, it is important to study such effects, although that is beyond the scope of this paper.

In Appendix A, the matter Kähler metrics involve the sets Q_L = {Q, L}, Q_R = {U, D, N, E} and Q_H = {H_u, H_d}, with the Wilson-line parameters ζ_{Q_L}, ζ_{Q_R} and ζ_{Q_H} defined in Eq. (16); the holomorphic Yukawa couplings of the chiral matter fields in the superpotential are likewise given as functions of the moduli.
Molecular Identification of Cervical Microbes in HIV-Negative and HIV-Positive Women in an African Setting Using a Customized Bacterial Vaginosis Microbial DNA Quantitative PCR (qPCR) Array ABSTRACT Bacterial vaginosis (BV) is a common polymicrobial vaginal disorder that is associated with sexually transmitted infections (STIs), including HIV. Several studies have utilized broad-range 16S rRNA gene PCR assays with sequence analysis to characterize cervicovaginal bacterial communities of women with healthy and diseased conditions. With the high burden of BV and STIs among African women, there is a need for targeted PCR assays that can rapidly determine the true epidemiological profile of key cervical microbes, including BV-associated bacteria, and a need to explore the utility of such assays for microbiological diagnosis of BV. Here, we used a taxon-directed 16S rRNA gene quantitative PCR (qPCR) assay to examine the prevalences and determinants of specific cervical microbes among African women with and without HIV infection. Cervical samples were collected using a cytobrush from 162 women (aged ≥30 years) attending a community-based clinic in Eastern Cape, South Africa. The samples were screened for specific microbes (i.e., STIs, emerging sexually transmitted pathogens [pathobionts], and BV-associated bacteria) using a customized bacterial vaginosis microbial DNA qPCR array. Statistical analyses were performed using GraphPad Prism v6.01. Chi-square/Fisher’s exact tests were used to evaluate the determinants associated with specific cervical microbes. Only 145 women had any detectable microbes and were included in the analysis. Lactobacillus iners (62.8%) and specific BV-associated bacteria, namely, Gardnerella vaginalis (58.6%), Atopobium vaginae (40.7%), and the pathobiont Ureaplasma parvum (37.9%), were the most prevalent microbes. Hierarchical clustering analysis revealed that 42.8% of the women (62/145) had a diverse array of heterogeneously distributed bacteria typically linked to BV. Women with detectable Lactobacillus species, specifically Lactobacillus crispatus and Lactobacillus jensenii, and to a lesser extent L. iners, had very low prevalence of BV-associated bacteria. Although the cumulative burden of STIs/pathobionts was 62.8%, Chlamydia trachomatis (3.4%), Neisseria gonorrhoeae (4.8%), and Trichomonas vaginalis (4.8%) were detected at low rates. HIV infection was associated with the presence of STIs/pathobionts (P = 0.022) and L. iners (P = 0.003). Prevalent STIs/pathobionts were associated with having multiple partners in the past 12 months (n ≥ 2, P = 0.015), high number of lifetime sexual partners (n ≥ 3, P = 0.007), vaginal sex in the past month (P = 0.010), and decreasing age of women (P = 0.005). C. trachomatis was associated with increasing age among HIV-positive women (P = 0.016). The pathobiont Ureaplasma urealyticum was inversely associated with age of women in the whole cohort (P = 0.018). The overall prevalence of STIs/pathobionts was high and was associated with HIV infection and sexual behavior. Our study helps us to understand the epidemiological trend of STIs and pathobionts and highlights the need to understand the impact of sexual networks on STI and pathobiont transmission and prevention among women in an African setting. IMPORTANCE Bacterial vaginosis (BV), whose etiology remains a matter of controversy, is a common vaginal disorder among reproductive-age women and can increase the risk for sexually transmitted infections (STIs). 
African women bear a disproportionately high burden of STIs and BV. Using a targeted quantitative PCR (qPCR) assay, a customized bacterial vaginosis microbial DNA qPCR array, we examined the prevalences and determinants of key cervical microbes, including BV-associated bacteria and emerging sexually transmitted pathogens (pathobionts), among women of African descent aged between 30 and 75 years. High-risk behaviors were associated with a higher prevalence of STIs/pathobionts, suggesting the need to better understand the influence of sexual networks on STI and pathobiont transmission and prevention among women. Our molecular assay is important in the surveillance of BV-associated bacteria, pathobionts, and STIs as well as diagnostic microbiology of BV. Furthermore, our research contributes to a better understanding of the epidemiology of STIs and pathobionts in Africa. KEYWORDS HIV, cervical microbes, bacterial vaginosis (BV), sexually transmitted infection (STI), emerging sexually transmitted pathogen (pathobiont), African women, bacterial vaginosis microbial DNA qPCR array Common wisdom is that a preponderance of Lactobacillus species (specifically Lactobacillus crispatus, Lactobacillus jensenii, Lactobacillus gasseri, and Lactobacillus iners) defines a healthy cervical and vaginal (cervicovaginal) microbiota (1,2).
Lactobacilli are thought to reduce the risk of genital tract infections and syndromes by employing antipathogenic mechanisms such as production of antimicrobial compounds (e.g., lactic acid), immunomodulation, and competitive exclusion through adherence to cervicovaginal epithelial cells (2). Among the lactobacilli, L. crispatus and L. iners are regarded as the most and least protective, respectively (2,3). L. iners can occur in healthy, transitional, and dysbiotic microbiota (2). There is evidence for ethnic variations in cervicovaginal microbiota, with Lactobacillus-dominated microbiota being less common in women of African descent than in non-African women (1,4,5). In sub-Saharan Africa, including South Africa, L. iners-dominated cervicovaginal microbiota are the most prevalent among microbiota with lactobacilli dominance (3, 6-8). Imbalances of vaginal microbiota can lead to bacterial vaginosis (BV), a polymicrobial disorder characterized by loss of lactobacilli concomitant with an overgrowth of coccobacilli that include Gardnerella vaginalis and other anaerobic bacteria (9). BV can be diagnosed using Nugent scoring or Amsel's criteria (10,11), with the former considered the gold standard (10). Nugent scoring is based on Gram staining of a vaginal smear followed by identification of bacterial morphotypes (lactobacilli and BV-associated bacteria) and scoring of microflora abnormality (0 to 3: normal microflora, 4 to 6: intermediate/mixed vaginal microflora, and 7 to 10: BV) (10). On the other hand, Amsel's criteria rely on the presence of at least three of the following four clinical findings (signs or symptoms) to define BV: high vaginal pH (>4.5), homogeneous white/gray discharge, a fishy odor following addition of 10% potassium hydroxide to vaginal fluid (positive "whiff test"), and clue cells (presence of exfoliated squamous epithelial cells with adherent coccobacilli) on wet mount (12). Amsel's criteria are moderately reproducible and may inaccurately diagnose BV due to lack of time or expertise (10). Neither Nugent scoring nor Amsel's criteria provide detailed information on BV-associated bacteria. Molecular-based methods, such as quantitative real-time PCR targeting specific BV-associated bacteria, have augmented BV diagnosis and enabled us to examine BV microflora at an unprecedented resolution (10,11,13,14). Such methods are highly accurate indicators of BV since they have improved test characteristics for BV diagnostics (13-15). Therefore, these methods may be useful when used in conjunction with microscopic and clinical methods to diagnose BV and determine women at high risk for recurrent BV (16). Thus, there is a need to optimize and use molecular methods to better understand the microbiology of BV. BV is the most frequently reported vaginal syndrome among reproductive-age women, with a global prevalence of 23 to 29% in the general population (17). Sub-Saharan African women, especially from southern Africa, have high rates of BV, with the rates differing geographically (18,19). For example, the estimates of BV in South Africa range from 34 to 58%, with high rates (58%) reported in Cape Town and rural KwaZulu-Natal (18). Risk factors for BV include sexual behavior (19,20), hormonal contraception, ethnicity, and use of intravaginal hygiene products (21), to mention a few.
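As a concrete illustration of the two diagnostic rules summarized above, the short Python sketch below encodes the Nugent score bands and the "at least three of four" Amsel rule; the function names and the example inputs are illustrative assumptions, not part of any published diagnostic software.

```python
# Minimal sketch of the two BV diagnostic rules described above.
# Score bands and criteria follow the text; names are illustrative.

def nugent_category(score: int) -> str:
    """Map a Nugent score (0-10) to a microflora category."""
    if not 0 <= score <= 10:
        raise ValueError("Nugent score must be between 0 and 10")
    if score <= 3:
        return "normal microflora"
    if score <= 6:
        return "intermediate/mixed microflora"
    return "bacterial vaginosis"

def amsel_positive(high_ph: bool, discharge: bool,
                   positive_whiff_test: bool, clue_cells: bool) -> bool:
    """BV by Amsel's criteria: at least three of the four findings."""
    return sum([high_ph, discharge, positive_whiff_test, clue_cells]) >= 3

print(nugent_category(8))                      # bacterial vaginosis
print(amsel_positive(True, True, False, True)) # True (3 of 4 findings)
```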
BV has been implicated in several adverse clinical and reproductive health outcomes, such as an increased risk of acquiring sexually transmitted infections (STIs) (22) that range from Chlamydia trachomatis, Trichomonas vaginalis, and Neisseria gonorrhoeae to HIV infection (22,23). Positive associations of BV and BV-associated bacteria with HIV infection have been documented (24-26). BV-associated bacteria are known to increase HIV-1 replication and shedding in HIV target cells (27). Furthermore, BV-associated bacteria such as G. vaginalis and cervicovaginal microbiota with a paucity of lactobacilli have been associated with disruption of epithelial barriers (28) and human papillomavirus (HPV) infection (29-31). Infection with persistent high-risk HPV (HR-HPV) types causes cervical cancer (32), the leading cancer affecting women aged 15 to 44 years in southern Africa (33). Both HIV and HPV infections are highly prevalent in South Africa (33-36). Hence, considering the high burden of BV, its recalcitrance to antibiotic therapy (8), and its association with STIs, accurate approaches for diagnosis and effective treatment are essential. Knowledge of epidemiological trends of BV and STIs may inform policy decisions concerning their prevention and control strategies. Owing to the high burden of BV among women in Africa, there is a need for more epidemiological data on the prevalences of BV-associated microbes, especially using targeted PCR assays, in order to determine the true prevalences of these microbes and assess their utility as diagnostic markers. Until now, the relationship of BV-associated bacteria and STIs with HIV remains poorly understood, with some authors finding no statistically significant associations between BV and individual STIs detected through Nugent scoring and quantitative PCR (qPCR), respectively (37). In a recent study (38), we employed a multiplex PCR-based STD direct flow chip assay to investigate the prevalence of 12 STIs and emerging sexually transmitted pathogens (pathobionts, commensal bacteria with pathogenic potential) among rural women (aged ≥30 years) with a high burden of HR-HPV (32.2%) and HIV-1 (38.5%) recruited from a rural community-based clinic in Eastern Cape (South Africa). These microbes included C. trachomatis (serovars L1 to L3 and serotypes A to K), herpes simplex virus (types I and II), T. vaginalis, N. gonorrhoeae, Treponema pallidum, Haemophilus ducreyi, Mycoplasma hominis, Mycoplasma genitalium, and ureaplasmas (Ureaplasma urealyticum or Ureaplasma parvum). Whereas the study found the overall prevalences of STIs (22.9%) and pathobionts (83.9%) to be high, it was limited to only 12 microbes and underexplored BV-associated microbes. We therefore aimed to use a customized molecular method, a bacterial vaginosis microbial DNA qPCR array, to screen for a wide range of microbes (n = 38), mostly BV-associated bacteria and pathobionts, in cervical samples from our previously highlighted study. In addition, participant characteristics associated with STIs as well as STIs and pathobionts associated with HIV infection were investigated. RESULTS Demographic characteristics of study participants. The final analysis was done on 145 participants (89.5%, 145/162). A total of 17 (10.5%) samples were excluded from the analysis for the following reasons: (i) no bacterium or protozoon included in the assay was detected and/or (ii) a control included in the assay had failed.
The description of the 145 participants finally included in the study is summarized in Table 1. While the age of the women ranged from 30 to 75 years, their median age (43 years) was that of perimenopausal age. More than a third (37.9%) of the women were HIV-positive. Of these, 99.0% were on antiretroviral drugs. A small proportion (11.7%) of the women had abnormal cervical cytology (Table 1). The distribution of the above-described baseline characteristics according to the age group of the women is tabulated in the supplementary information (see Table S1 at https://doi.org/10.6084/m9.figshare.19714483). In addition, this table statistically compares the variables of women aged 30 to 39 years with those aged 40 to 49 and over 50 years. Overall, a significantly higher proportion of women aged 30 to 39 years had high-risk behaviors, such as multiple sexual partners in the past 1 year, more lifetime sexual partners, and a higher frequency of vaginal sex in the last 1 month, compared to their counterparts. Prevalence of the cervical microbes. Of the 38 microbes that we sought to identify, 36 were detected. Clostridium sordellii and Finegoldia magna were not detected. The prevalences of the cervical microbes detected in our cohort are shown in Fig. 1. The most frequently detected microbes were L. iners (62.8%, 91/145) and the common BV-associated bacteria G. vaginalis (58.6%, 85/145), Atopobium vaginae (40.7%, 59/145), U. parvum (37.9%, 55/145), Leptotrichia amnionii (31.7%, 46/145), and Sneathia sanguinegens (29.7%, 43/145). The least frequently detected microbes, with a prevalence of ≤5%, included Corynebacterium aurimucosum, Prevotella intermedia, Shuttleworthia satelles, Streptococcus intermedius, etc. The cumulative prevalence of STIs and pathobionts was 62.8% (91/145). The prevalences of C. trachomatis, N. gonorrhoeae, and T. vaginalis were 3.4% (5/145), 4.8% (7/145), and 4.8% (7/145), respectively. All pathobionts, including U. parvum (37.9%, 55/145), the most prevalent, were detected at considerably varied rates (Mycoplasma hominis: 29.0% [42/145], U. urealyticum: 8.3% [12/145], and M. genitalium: Hierarchical clustering of the detected cervical microbes. We observed that the samples clustered according to the presence of cervical microbes (Fig. 2), although the algorithm (gap statistic) did not converge toward an optimal solution in 10 iterations. The heatmap also shows the distribution of the detected microbes across the women. Of the 145 women, 42.8% (62/145) had a diverse array of heterogeneously distributed bacteria (e.g., G. vaginalis, A. vaginae, L. amnionii, and S. sanguinegens). Women with detectable Lactobacillus spp., specifically L. crispatus and L. jensenii, and to a lesser extent L. iners, had a very low prevalence of BV-associated bacteria. Interestingly, 42.9% Association of Lactobacillus species with HIV status and age group. Owing to the documented inverse correlation between a woman's reproductive stage and the levels of Lactobacillus spp. (39), as well as the roles of these Lactobacillus spp. in HIV infection (3,40), we investigated whether there was any association of Lactobacillus spp. with the women's age group and HIV infection in our cohort. The associations of Lactobacillus spp. with the age group of women are summarized in Table 2. About three-quarters (74.5%, 108/145) of the women in our cohort had any detectable Lactobacillus spp. We noted that the prevalences of any detectable Lactobacillus spp. (i.e., genus Lactobacillus) and L.
iners significantly decreased with increasing age group (P = 0.019 and P = 0.005, respectively). None of the other individual Lactobacillus spp. varied by the age group of the women. When we stratified our cohort by HIV status, we found that there was no difference in the prevalence of any Lactobacillus spp. between women with and without HIV infection (odds ratio [OR]: 1.9 [95% confidence interval (CI): 0.8 to 4.4], P = 0.113; Table 3). However, HIV-positive women were less likely to have detectable L. crispatus than HIV-negative women (OR: 0.3 [95% CI: 0.1 to 0.8], P = 0.015). The prevalence of Associations of STIs and pathobionts with HIV status and age group. Next, we assessed the associations of STIs and pathobionts with HIV status. A great proportion of women (62.8%, 91/145) were positive for any STI and/or pathobiont. The prevalence of any STI and/or pathobiont was significantly higher in HIV-positive women than in HIV-negative women (OR: 2.3 [95% CI: 1.1 to 4.9], P = 0.022; Table 4). Neither the detection rates of the individual STIs nor those of the pathobionts were associated with HIV infection. Finally, we also attempted to evaluate the distribution of the STIs and pathobionts according to the age group of women. We noted that the cumulative prevalence of STIs and pathobionts significantly decreased as the age of women increased (P = 0.005; Fig. 3). An analogous observation was seen regarding U. urealyticum (P = 0.018; Fig. 3). There were no significant associations between the other microorganisms, including T. vaginalis, and age category. All these observations remained unchanged among women with normal cytology (any STI/pathobiont: P = 0.015, U. urealyticum: P = 0.014; see Fig. S1 at https://doi.org/10.6084/m9.figshare.19714483). A trend toward an inverse relationship between the prevalence of M. hominis and age was observed among women with normal cytology (P = 0.085; see Fig. S1 at the URL mentioned above). We did not explore the distribution of the STIs and pathobionts among women with abnormal cytology because of the small number of subjects in this subgroup (n = 17, 58.9% of which were positive for any STI/pathobiont). Further analyses showed that the cumulative prevalence of STIs and pathobionts decreased with increasing age among HIV-negative women (P = 0.034; Fig. 4a), but not among HIV-positive women (P = 0.387; Fig. 4b). The prevalence of C. trachomatis significantly increased with increasing age, but only in HIV-positive women (P = 0.016; Fig. 4b). DISCUSSION With the growing consensus recognizing the adverse sequelae associated with BV, there is a need for more data on the prevalence of cervicovaginal BV-associated microbes, especially in regions burdened with STIs. These data may have applications in BV diagnostics, routine surveillance, and public health decision-making. Our study is the first of its kind to employ a customized molecular technique to examine the prevalences and determinants of selected cervical microbes among women with and without HIV infection in rural Eastern Cape, South Africa. The pattern of the most commonly detected cervical bacteria in our study (e.g., L. iners and the BV-associated bacteria G. vaginalis, A. vaginae, L. amnionii, and S. sanguinegens) is congruent with previous reports (13,31,41), including those that have used qPCR assays (14,15,42). A small cohort study of a population of predominantly African-American women at high risk for STIs noted that the most prevalent vaginal species as detected by qPCR was L. iners (42).
The aforesaid BV-associated bacteria are known to occur at high prevalences and abundances in women with BV (8,13,14,37,41,42). A study that used a multiplex real-time PCR assay to examine BV among 151 women (mostly Dutch Caucasians) aged 18 to 62 years found that G. vaginalis and A. vaginae were remarkably more common in women with BV than in women without BV (G. vaginalis: 96% versus 27%; A. vaginae: 87% versus 6%, respectively) (14). This observation was echoed in a subsequent study of 37 women (median age: 26 years), mostly African American, clinically diagnosed as having either normal, intermediate, or BV microfloras. By utilizing targeted qPCR assays, this particular study noted that the prevalences of BV-associated bacteria, including G. vaginalis, increased with the severity of BV (42). Among the BV-associated bacteria, G. vaginalis is regarded as the most virulent (43). It has the capacity to displace the protective lactobacilli from the vaginal epithelial cells (44) and is probably partly responsible for BV treatment failure (8). It can also form an adherent, robust biofilm on the vaginal epithelium, thereby allowing other opportunistic pathogens to colonize the genital econiche (43). It is believed that such changes may lead to the establishment of BV (44). Omics approaches for studying microbiota have revealed that BV and cervicovaginal microbiota with G. vaginalis dominance have unique functional signatures (28,45), which may compromise the integrity of the cervicovaginal epithelial barrier to infections (28). It is therefore unsurprising that specific detectable BV-associated bacteria (e.g., S. sanguinegens) are associated with genital inflammation (40), increased HIV risk (25,40), and HR-HPV infection (31). We did not detect C. sordellii and F. magna, which are extremely unusual bacteria in the female genital tract, including those of African women (4). The cumulative prevalence of Lactobacillus spp. (75%) in our study was lower than in other culture-independent studies of African cohorts (85 to 100%) (3,6,31,46). Perhaps this could be due to disparities in study population and methodology, including the performance of qPCR versus 16S rRNA gene amplicon sequencing. Similar to culture-independent studies of South African cohorts (6,31,46), we further found that L. iners was the most prevalent Lactobacillus spp. This observation is in good agreement with studies that have employed qPCR assays to study cervicovaginal bacteria (14,15,37,42). Culture-based studies have reported a lower prevalence of L. iners than molecular-based studies. For example, a small-cohort culture-based study of reproductive-age South African women without HIV infection observed a more than 2-fold lower prevalence of L. iners than in our study (27% versus 63%) (47). Of course, this difference emanates mainly from the higher sensitivity of DNA technologies and their inherent ability to detect DNA from both viable and nonviable cells (48), the difficulty in culturing L. iners (47), and differences in the study population (including HIV status and sample size). The observed low prevalence of the other common cervicovaginal lactobacilli is not surprising, since it mirrors previous reports using culture (47), targeted PCR assays (14,42), and high-throughput amplicon sequencing of hypervariable regions of the 16S rRNA gene (6,31,46). Women of African descent are less frequently colonized by these Lactobacillus spp. than Caucasian and Asian women (13).
Instead, most of them have microbiota characterized by high diversity contributed by an array of non-Lactobacillus spp. (1,3,5). The composition of such microbiota could be governed by host ethnicity or genetics (1,5,46) and sexual behavior (7,20,49), among other factors. We noted that older (perimenopausal, menopausal, and postmenopausal) women in our study had a significantly lower prevalence of any detectable Lactobacillus spp. and L. iners than younger (premenopausal) women. This qualitative observation is consonant with quantitative analyses of lactobacilli in premenopausal versus postmenopausal women (26,39,50), further confirming that the composition of cervicovaginal microbiota is impacted by reproductive aging (26). There is a general opinion that the mechanism of estrogen action on glycogen synthesis, which varies by a woman's reproductive stage, is responsible for this. As a woman ages, the levels of glycogen diminish in tandem with estrogen levels. Lactobacillus spp. depend on cell-free glycogen, produced by vaginal epithelial cells, as their main carbon source (50). Thus, reduction of estrogen levels causes thinning of the glycogen-rich vaginal epithelial layer, thereby resulting in low levels of lactobacilli (39). Also, cell-free glycogen levels have been negatively correlated with vaginal pH (51), which seems to be higher in postmenopausal women than in premenopausal women. A high vaginal pH may create a favorable growth environment for opportunistic pathogens, potentially increasing the risk of acquiring STIs. Hence, estrogen replacement therapy may be used to restore Lactobacillus colonization in women with deficient lactobacilli (e.g., postmenopausal women), thereby maintaining cervicovaginal health and protecting them against genital tract infections (50). We further found that detectable L. crispatus and L. iners were negatively and positively associated with prevalent HIV infection, respectively. While this resonates with previous cross-sectional studies of HIV-infected and HIV-uninfected women, there are still mixed findings on the relationship of L. iners with prevalent HIV (3,52). Despite this, a review of the literature on lactobacilli as biomarkers for vaginal health suggests that cervicovaginal microbiota colonized with L. crispatus are more protective against HIV infection than those colonized with L. iners (2). One likely explanation for this is the fact that L. crispatus produces considerably higher levels of D-lactic acid than L. iners does. It has been demonstrated that relatively high levels of D-lactic acid, a potent antimicrobial product, reduce the permissiveness of cervicovaginal mucus to HIV infection (2). We also posit that the positive association of L. iners with prevalent HIV infection in our cohort could be due to the cooccurrence of BV-associated bacteria, some of which have been associated with HIV acquisition (25,40). It is thought that L. iners is a transitional species that can facilitate the shift of cervicovaginal microbiota to a healthy or dysbiotic (including BV) state (2). Since there is a paucity of data on STI and pathobiont burden among South African women with and without HIV infection in rural Eastern Cape, we investigated the prevalences and risk factors of these microbes.
The overall prevalence of both STIs and pathobionts in our present study is at least 2.5-fold higher than we previously reported using the STD direct flow chip assay (38), thus reflecting differences in the sensitivity of the assays (including the cycle threshold [CT] cutoff value that we chose) and maybe sample selection. Compared to older HIV-negative women, younger HIV-negative women had a higher prevalence of STIs/pathobionts. The most plausible explanation for this is either a higher prevalence of STIs/pathobionts among sexual partners or high-risk behaviors among younger women (53). The latter was particularly apparent in our study (see Table S1 at https://doi.org/10.6084/m9.figshare.19714483). Hence, there is a need to understand the importance of the sexual network in the transmission of STIs/pathobionts. We further observed a positive association between STIs/pathobionts and HIV infection. An intimate association of HIV with STIs (54,55) and specific pathobionts (25,40) has been published and is believed to be contributed to by persistent high-risk sexual behaviors (54,55), such as an increased likelihood of engaging in condomless anal and vaginal intercourse among HIV-positive individuals unaware of their (HIV) status (56). This in turn increases the risk of STI/HIV transmission in the community. These findings reiterate the need to intensify STI/HIV testing and counselling services, treatment of STI/HIV, and advocacy for behavioral change (e.g., correct and consistent condom use). We noted that the prevalences of the individual STIs were low (each <5%), yet both consistent (38) and inconsistent (53,55) with previous reports on South African cohorts. This consistency is likely because a large proportion of our present cohort was part of our previous study (38) using the STD direct flow chip assay. The low prevalence could be attributed to the fact that the women were older (perhaps with low-risk behaviors) and maybe receiving STI treatment. The increase in prevalent Chlamydia infection with increasing age among HIV-positive women in our study could be a result of either (i) immunobiological factors (i.e., a paucity of Lactobacillus spp., less acidic vaginal pH, and/or altered immune responses, including reduced levels of interferon gamma [IFN-γ] and interleukin-17 [57], as women age) or (ii) the sexual behavior of the partner (since women aged ≥40 years reported a smaller number of new sexual partners). The latter explanation suggests that biomedical and structural (sociobehavioral) interventions targeting both sexual partners might have an impact on the reduction of the burden of STIs. The distribution pattern of the pathobionts in our present study was comparable to the results of the STD direct flow chip assay (38), with ureaplasmas (largely contributed by U. parvum) and M. hominis being the most prevalent. In agreement with published data (58), we found that the detectability of U. urealyticum was associated with younger age of the women. This may be directly linked to differences in sexual behavior. While expert opinions on the relevance of cervicovaginal pathobionts to reproductive health remain unclear and divided, a few observational studies have suggested that these pathobionts could be markers or symbionts of BV microflora (1,59) and opportunistic pathogens associated with Chlamydia infection (58) and clinical AIDS stage (60).
Thus, further studies are needed to investigate the roles of these pathobionts in urogenital health in order to determine if their routine screening and treatment are warranted. The present study has limitations that include the reliance on self-reported sexual histories and practices and vaginal discharge histories, which might have resulted in bias during the collection of participant information and analysis. Second, we lacked information on BV status and other factors that could have been possible confounders in our study. Third, the customized bacterial vaginosis microbial DNA qPCR array that we used allowed us to detect only a limited number of cervical microbes. Moreover, we used the assay in a mode that does not provide microbial loads, which could have been useful in examining correlations between cervical microbes and could be a better predictor of disease than qualitative data. We also believe that the CT cutoff value for positivity used in our assay could have limited the detection of some cervical microbes in some of the women. Lastly, our participants were recruited from a community-based clinic of the rural Eastern Cape Province in South Africa. Therefore, this may somewhat restrict the generalizability of our study results. Notwithstanding these limitations, our study highlights the potential of a customized bacterial vaginosis microbial DNA qPCR array for the analysis of selected key BV-associated microbes, including pathobionts and STIs, in the genital econiche. Conclusion. Using a customized bacterial vaginosis microbial DNA qPCR array, we identified the microbial taxonomic profiles of cervical samples from women (aged ≥30 years) with and without HIV infection attending a rural community-based clinic in South Africa. While the prevalence of individual STIs (C. trachomatis, N. gonorrhoeae, and T. vaginalis) was low (each <5%), we noted that the cumulative burden of STIs and emerging sexually transmitted pathogens (the pathobionts M. genitalium, M. hominis, U. parvum, and U. urealyticum) was high (over 60%) and that this was strongly associated with HIV infection and sexual behavior. This demonstrates the need to understand the epidemiological trend of STIs and pathobionts as well as the patterns of sexual networks and their impact on STI and pathobiont transmission, prevention, and control among different demographic groups. MATERIALS AND METHODS Study design, study population, and sample collection. A total of 162 women were selected from a cross-sectional study conducted between September 2017 and June 2018. These were part of the study participants recruited from a community-based clinic of the rural Eastern Cape Province in South Africa, as previously described (35,38). A majority (87.7%) of these women had previously been included in our qualitative molecular diagnostic study (38) that utilized the STD direct flow chip assay. All 162 participants were either attending cervical cancer screening or visiting the clinic for other reasons. Women aged ≥30 years were included, and women <30 years old were excluded. Participants who had undergone hysterectomy were not eligible to participate in the parent and present studies. Women were requested to test for HIV if they were not aware of their HIV status or if their HIV status was not documented on their health card. Women received counselling prior to and after testing for HIV using a rapid test (Alere Determine HIV-1/2 Ag/Ab Combo; Alere, Waltham, MA, USA).
Since the parent study was designed for HPV and cervical cancer screening, cervical samples were collected using a cytobrush from women who voluntarily agreed to participate in the study, stored in Digene specimen transport medium (Qiagen, Germantown, MD, USA), and kept at −80°C until DNA extraction. Cytobrush samples are known to be more suitable for sampling the transformation zone of the cervix, since the cytobrush has a greater exfoliative ability than a swab. As a result, they can provide a reliable and robust sampling tool for cervicovaginal microbiota evaluation (61). DNA extraction. DNA was extracted from each cervical sample (400 µL) using the MagNA Pure compact nucleic acid isolation kit (Roche Diagnostics, Mannheim, Germany) on an automated MagNA Pure compact machine, in accordance with the manufacturer's recommendations. DNA was eluted in 100 µL of elution buffer, followed by quantification of the DNA using a NanoDrop spectrophotometer ND-1000 (Inqaba Biotec, Pretoria, South Africa). The purified DNA was stored at −20°C until microbial analysis using the bacterial vaginosis microbial DNA qPCR array. Identification of microbes using a customized bacterial vaginosis microbial DNA qPCR array. To screen the microbes in the cervical samples, we used a customized bacterial vaginosis microbial DNA qPCR array (CBAID0085RE; Qiagen, Germantown, MD, USA). The bacterial vaginosis microbial DNA qPCR array is a PCR amplification-based method designed to target and identify bacterial 16S rRNA gene and fungal rRNA gene sequences. The conventional assay can identify 42 bacteria as well as a protozoon and fungi (from the Aspergillus and Candida genera). We customized our array to contain assays for 42 microbes of interest (Table 6; 37 bacteria, 1 protozoon, and 4 fungi), which were selected based on observations from our previous 16S rRNA gene amplicon surveys (7,31) and a priori knowledge of their relationships with BV. While our customized assay could also identify fungal species, we did not examine any of them, since we used a DNA extraction method that was not appropriate for their detection. The only STIs examined were C. trachomatis, N. gonorrhoeae, and T. vaginalis, whereas the emerging sexually transmitted pathogens (pathobionts) consisted of M. genitalium, M. hominis, U. parvum, and U. urealyticum. The status "possible" (in the last column of Table 6) means that, based on our previous 16S rRNA gene amplicon surveys (7,31), we cannot conclusively report that the microbe in question (in the column labeled "Gene Symbol") was detected, since we did not achieve species-level classification. The deepest taxonomic level achieved for these microbes (with "possible" status) was genus; thus, for such microbes, their specific identities are unknown. The status "not applicable" indicates that the feature in question was not assessed in our previous studies. Each sample, including a no-template control (NTC) and an in-house mock community, was run alongside the following controls: pan bacterium 1, pan bacterium 3, Hs/Mm.GAPDH, and a positive PCR control (PPC). The host GAPDH control detects the presence of human genomic DNA. The inclusion of the "pan bacteria" controls served to determine the presence of bacteria in a sample. The PPC was used to test for the presence of inhibitors in the sample and/or the efficiency of the PCR.
The NTC was utilized to monitor any potential nucleic acid contamination (in the reagents or acquired during the experimental procedures) and primer-dimer formation that could yield false-positive results. The mock community was used to validate our assay. This mock community was DNA from a cervical sample whose expected taxonomic profile had previously been estimated using 16S rRNA gene amplicon sequencing (31). In the aforenamed 16S rRNA metagenomic sequencing, all samples were run concurrently with the Human Microbiome Project (HMP) mock communities HM-782D (even concentration) and HM-783D (staggered concentration) (BEI Resources, Manassas, VA, USA). Thus, we had a clue to the taxonomic landscape of all the samples. Real-time PCR was performed using 100 ng of genomic DNA. This was added to ready-to-use 2× master mix ROX (Qiagen, Germantown, MD, USA) and microbial DNA-free water. A total of 10 µL of the reaction mixture was aliquoted into each well of a plate containing predispensed primers and hydrolysis probes. Thermal cycling was performed on a QuantStudio 12K Flex real-time PCR detection system (Thermo Fisher, Singapore) as follows: an initial PCR activation step at 95°C for 10 min, followed by 40 cycles of 15 s at 95°C for denaturation and 2 min at 60°C for annealing and extension. The raw data were first transformed using universal custom PCR array patch 141512, a data analysis patch. The cycle threshold (CT) values were analyzed to identify the presence or absence of microbes using the custom PCR array template Excel file v2.0 available through Qiagen's GeneGlobe Data Analysis Center (https://geneglobe.qiagen.com/za/analyze). A CT value of <34 was regarded as positive, and a CT value of ≥34 as negative, for the microbe in consideration. This was based on the CT value (40) for the NTC (which was run in every plate) and the lower CT value (6) set for a positive call. Our final analyses included only samples that were positive for any of the microbes and for which all the controls had passed. Data analysis: statistical associations and hierarchical clustering. Statistical analyses were done using Prism v6.01 (GraphPad Software, Inc., San Diego, CA, USA). Participants' categorical variables were summarized as percentages and frequencies, while continuous variables were expressed as medians with interquartile ranges (IQRs) at the 25th and 75th percentiles. Comparisons of the variables between women aged 30 to 39 years and those aged 40 to 49 years as well as those over 50 years old were computed using chi-square/Fisher's exact tests, with statistical significance at a two-tailed P value of <0.05. The chi-square test was applied only if the expected frequencies in a 2 × 2 contingency table were ≥5, or if at least 80% of the cells in a 2 × 3 contingency table had an expected frequency of ≥5 and no cell had an expected frequency <1; otherwise, Fisher's exact test was applied. Chi-square/Fisher's exact tests were also used to compute the association of the cumulative prevalence of STIs and pathobionts with HIV status and demographic and sexual behavior. Along with this, we tested the association of pathobionts and Lactobacillus spp. with HIV as well as the association between Lactobacillus spp. and the age group of women. A two-tailed P value of <0.05 was considered statistically significant. Odds ratios (ORs) with corresponding 95% confidence intervals (CIs) were used to estimate the magnitude of associations.
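As a hedged illustration of the test-selection rule and the odds-ratio estimate described above: the analyses in the study were run in GraphPad Prism, so the scipy-based sketch below is an illustrative stand-in, with a Woolf (log-OR) confidence interval and hypothetical example counts not taken from our tables.

```python
# Sketch of the chi-square vs. Fisher's exact decision rule for a 2x2
# table, plus a sample odds ratio with a Woolf (log-OR) 95% CI.
import numpy as np
from scipy import stats

def association_test(table):
    """table: 2x2 array of counts [[a, b], [c, d]]."""
    table = np.asarray(table, dtype=int)
    expected = stats.contingency.expected_freq(table)
    if (expected >= 5).all():                      # rule used in the text
        _, p, _, _ = stats.chi2_contingency(table, correction=False)
        test = "chi-square"
    else:
        _, p = stats.fisher_exact(table)           # two-sided by default
        test = "Fisher's exact"
    a, b, c, d = table.astype(float).ravel()
    odds_ratio = (a * d) / (b * c)                 # sample odds ratio
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)    # SE of log(OR), Woolf
    lo, hi = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se)
    return test, p, odds_ratio, (lo, hi)

# Hypothetical counts: detection of a microbe by HIV status.
print(association_test([[8, 47], [32, 58]]))
```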
To identify whether the women could be grouped according to the binary data (presence and absence of cervical microbes), we performed an unsupervised hierarchical clustering. Pairwise scores between samples were computed based on the Jaccard dissimilarity index using the vegan R package v2.5 (62). Clustering was based on the average linkage method. The gap statistic method (63) was used to estimate the optimal number of clusters, with 10 as the k.max (the maximum number of clusters to consider) and 500 as the bootstrapping value. The largest gap statistic is usually considered the optimal number of clusters. Ethics approval. The study was approved by the Human Research Ethics Committee (HREC) of the University of Cape Town, South Africa (HREC reference 615/2017). The study was described to all the eligible participants, and written informed consent was obtained. Data availability. Data are available upon reasonable request. The deidentified data are owned by the partner institutions. Requests for data utilization should be sent to the corresponding author. ACKNOWLEDGMENTS We extend our sincere appreciation and gratitude to the community clinic staff members, Virginia Maqoga and Luviwe Lutotswana, and the women who kindly agreed to participate in the study. We are also indebted to the Centre for Proteomic and Genomics Research (CPGR; https://www.cpgr.org.za/; Cape Town, South Africa) for their molecular services. Special thanks go to Aubrey Shoko, the RT-PCR platform manager at CPGR. We gratefully acknowledge Adrian Brink, Head of the Division of Medical Microbiology, University of Cape Town (South Africa), for his technical advice on STIs and pathobionts. Lastly, we give credit to the two reviewers for the Microbiology Spectrum journal for their detailed and useful comments on the earlier version of this article, and to Stephen Kamuli, a doctoral candidate in the Cardiovascular Genetics Laboratory (the Hatter Institute for Cardiovascular Research in Africa), University of Cape Town, for timely and selflessly availing remote resources to us (despite our short notice) in order to address the reviewers' comments. The content is solely the responsibility of the authors and does not necessarily represent the official views of the institutions affiliated with the authors or funding entities. We declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
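As an addendum to the clustering procedure described in the data-analysis subsection above: the study used the vegan R package, so the following scipy sketch is purely an illustrative stand-in that reproduces the same ingredients (Jaccard dissimilarity on a binary presence/absence matrix and average-linkage hierarchical clustering); the gap-statistic choice of the number of clusters is omitted, and the input matrix is randomly generated rather than real data.

```python
# Jaccard + average-linkage clustering of binary detection profiles.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Hypothetical matrix: 145 women x 36 detected microbes (1 = detected).
presence = (rng.random((145, 36)) < 0.3).astype(bool)

dist = pdist(presence, metric="jaccard")   # pairwise dissimilarities
tree = linkage(dist, method="average")     # average linkage, as in vegan

# Cut the dendrogram into k clusters; the study chose k via the gap
# statistic (k.max = 10, 500 bootstraps), which is not reproduced here.
labels = fcluster(tree, t=4, criterion="maxclust")
print(np.bincount(labels)[1:])             # cluster sizes
```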
Non-Invasive Survey Techniques to Study Nuragic Archaeological Sites: The Nanni Arrù Case Study (Sardinia, Italy) The Italian territory of Sardinia Island has an enormous cultural and identity heritage from the Pre-Nuragic and Nuragic periods, with archaeological evidence of more than 7000 sites. However, many other undiscovered remnants of these ancient times are believed to be present. In this context, it can be helpful to analyze data from different types of sensors on a single information technology platform, to better identify and perimeter hidden archaeological structures. The main objective of the study is to define a methodology that, through the processing, analysis, and comparison of data obtained using different non-invasive survey techniques, could help to identify and document archaeological sites not yet, or only partially, investigated. The non-invasive techniques include satellite, unmanned aerial vehicle, and geophysical surveys, which have been applied at the nuraghe Nanni Arrù, one of the most important finds in recent times. The complexity of this ancient megalithic edifice and its surroundings represents an ideal use case. The surveys showed some anomalies in the areas south-east and north-east of the excavated portion of the Nanni Arrù site. The comparison between data obtained with the different survey techniques used in the study suggests that in areas where anomalies have been confirmed by multiple data types, buried structures may be present. To confirm this hypothesis, further studies are believed necessary, for example, additional geophysical surveys in the excavated part of the site. Introduction Sardinia Island has an enormous cultural and identity heritage from the Pre-Nuragic (3200-2700 BC) and Nuragic (up to the 2nd century AD) periods. The Nuragic Civilization was a complex society characterized by a highly evolved political system, economic organization, and religious beliefs. Nuraghes, the best-known monuments of this era, are stone structures typified by truncated conical circular towers. It is estimated that there are about 5000 nuraghes, more than 550 domus de janas, 60 holy wells, 500 giants' graves, 200 villages, and 110 menhirs [1,2] (Figure 1). However, many other pieces of evidence are believed to be present on the island. The objective of this research is to propose a methodology to help outline hidden archaeological structures and explore underrepresented areas of Sardinia. This objective is not easy to achieve by means of a standard methodology: the same site may contain monuments relating to different periods; the geomorphological characteristics of many Nuragic areas are, in several situations, an obstacle to the use of some survey methodologies, not always allowing on-site investigations; and the huge variety of Nuragic artifacts of different types may require different study techniques.
This paper describes the study carried out within the ArchaeoSardinia Project, financed by the Sardinian Government and part of the NuraghEO Project, financed by a voucher from the EU-funded Open Clouds for Research Environments (OCRE) project. The main objective is to define a methodology that supports the study of archaeological sites of the Nuragic period through the pervasive use of remote sensing technologies such as satellites, Unmanned Aerial Vehicles (UAVs), and geophysical surveys. The chosen method to achieve this goal at the Nanni Arrù site consists of the processing, analysis, and comparison of data obtained using various non-invasive survey techniques and the implementation of an IT platform for data management, archaeological site documentation, and results publication. The proposed methodology is also expected to be systematically applied to other territories. The approach arises out of the awareness that excavation activities can be difficult in terms of effort and costs; satellite and aerial images and geophysical surveys can be useful to assess in advance the need for excavation activity in a specific area. In addition, phenomena such as ground subsidence can prevent access to buried structures and even damage the emerging ones; satellite SAR (Synthetic Aperture Radar) images may be useful to highlight the structures that are affected by such phenomena. In addition to the capability of operating under any weather conditions, the advantageous properties offered by SAR include wide-swath to spotlight coverage, kilometer to sub-meter spatial resolution, historical and present-day temporal coverage, longer to shorter wavelengths, and monthly to daily revisiting time if collecting time series [3]. These properties have made the use of SAR technology attractive for various archaeological applications; some examples are given in [4-7]. Remote sensing techniques are widely adopted for archaeological prospecting: discovery, monitoring, preservation, and documentation [8-11].
The presence of buried archaeological remains influences vegetation and soil; it may modify ground color in the presence of both vegetation and bare soil. The so-called 'crop marks' are useful anomalies that occur because of differential growth of vegetation; the presence of buried structures may negatively influence the growth of vegetation, while the presence of moats may positively influence it [12-15]. Satellite images have been widely used by archaeologists to detect crop and soil marks through the use of various algorithms or the calculation of specific vegetation indices. However, the detection and interpretation of these marks are difficult tasks due to the crops' phenological variations [16-18]. Moreover, crop and soil marks may not be easily detected in satellite images when observed over different periods of time and at different spatial and spectral resolutions. It is therefore useful to integrate the analysis of satellite images with the analysis of UAV images and geophysical data. Sardinia represents a particularly interesting case both for the extent and the variety of the Nuragic and pre-Nuragic archaeological heritage, which has yet to be fully explored. Moreover, the archaeology of the island of Sardinia provides an opportunity to explore interactions among local Nuragic people and the colonizers who frequented the shores of Sardinia in the Iron Age. Those interactions were centered around earlier Bronze Age nuraghi, where communities experienced increasing connections with other populations, as detailed by [25]. The complexity of this context led to the adoption and combination of the tools described below. Materials and Methods The site chosen to test the methodology is the Nuraghe Nanni Arrù, one of the most important finds in recent times. The site was declared to be of special archaeological interest by decree n. 83 of 2 July 2018 and, according to Legislative decree n. 42 of 22 January 2004, it is subject to all the prescribed protection measures. The Nuraghe Nanni Arrù is located in the municipality of Quartucciu, roughly 4 km from the southern seacoast and 17 km east of Cagliari, the capital of the Sardinia region. It was constructed in the Bronze Age, roughly between the 14th and the 10th centuries BCE. Archaeological evidence from other similar nuraghes suggests likely long-term habitation outside the tower walls, at least through the early Roman period if not the late Roman period.
The site was discovered in the early 1990s; some brief and discontinuous excavations were carried out between 1994 and 2000. Excavations have identified a four-lobed fortified structure consisting of a central keep with four towers, three of which lean against the keep and a larger one in the opposite direction, in order to leave room for a large courtyard. The excavations have also partially brought to light other towers, for a total of ten rooms, and traces of the surrounding village (Figure 2). It is estimated that the lower parts of the Nuraghe are still covered by ground and extend for more than two meters below the surface. The intense anthropic activity that has been practiced up to the present day makes it nearly impossible to identify with certainty the extent of the original settlement, which is assumed to be consistent with both the size of the basalt stone tower complex and its wide chronological range of occupation. The complex history of the origins of the Nuraghe Nanni Arrù and its determinative influence on the surrounding landscape represent an ideal use case to apply a cloud-based multi-scalar investigation methodology that includes: • Multitemporal/multisensor processing and analysis of satellite images; • Multisensor processing and analysis of UAV data; • Processing and analysis of geophysical data; • Data management and publication through the ArchaeoSardinia Platform. The choice of the different remote sensing techniques has been dictated by the particularities of the Nanni Arrù site. The surveys carried out and the reasons for the choice are described below.
The first step comprises a visual analysis of aerial and Very High Resolution (VHR) satellite images from Google Earth covering the period between 1955 and 2019 (Figure 3). This preliminary analysis highlights three possible areas with interesting features [26]. The choice of the areas' delimitation was mainly dictated by the occurrence of differences in terrain color, evident in Google Earth images repeated in different years and seasons, as explained in [26].

Multitemporal/Multisensor Processing and Analysis of Satellite Images

2.1.1. Calculation of Indices Related to the Environmental Condition

Seeking indicators of the presence of possible buried structures, we calculated various indices related to the environmental conditions (vegetation state, temperature, and soil moisture) using a multi-sensor data fusion approach, mainly based on the Planet Fusion Monitoring product. The data fusion algorithm implements a methodology to enhance, harmonize, intercalibrate, and fuse cross-sensor data streams. Based on the CubeSat-Enabled Spatio-Temporal Enhancement Method (CESTEM) [27], it leverages publicly accessible multispectral satellites (i.e., Sentinel, Landsat, MODIS) to work with the higher spatial and temporal resolution data provided by Planet's Dove CubeSats. Planet Fusion Monitoring ingests data from multiple sensors with differing radiometry, quality, and resolution characteristics in order to produce an entirely new dataset that inherits the best traits of each sensor, i.e., the temporal and spatial resolution of PlanetScope and the spectral resolution of Sentinel-2/Landsat 8/9.

In this study, the data fusion algorithm is applied using CubeSat and Sentinel-2 images. The CubeSat data have a high spatial resolution of 3.7 m with daily frequency and a small number of bands (four: RGB and NIR). Sentinel-2 data have a spatial resolution of 10/20 m with weekly frequency and 12 spectral bands.

The indices related to the environmental conditions were calculated over a period of 48 months (from April 2018 to June 2022). Table 1 lists formulas, brief descriptions, and calculation tools for each chosen index. Among them, the Modified Soil Adjusted Vegetation Index 2 (MSAVI2), computed with the Planet Fusion algorithm, is used as a variant to extend the application limits of NDVI to areas with a high presence of bare soil. The spatial resolution of the Sentinel-2 images makes crop marks difficult to determine, so the indices obtained using the Planet Fusion algorithm are more appropriate; for this reason, only these indices are taken into account.

Multi-Temporal Interferometry (MTI) Processing of Synthetic Aperture Radar (SAR) Images to Detect and Monitor Changes in the Earth's Surface around the Site

Differential Interferometric Synthetic Aperture Radar (DInSAR) is a remote sensing technique able to measure and monitor displacement of the Earth's surface over time using radar images. An evolution of the DInSAR technique is the use of multiple images of the same area to derive a time series of displacement; these techniques are known as Multitemporal Interferometry. SAR Multitemporal Interferometry data processing aims to obtain Persistent Scatterers (PS) and Distributed Scatterers (DS) for a given area of interest: PS are points, and DS are areas, exhibiting high phase stability.

In this study, the Multitemporal Interferometry technique is implemented within the Rheticus Displacement platform, an automatic cloud-based geoinformation service platform for land and infrastructure monitoring, implemented by Planetek Italia.
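Although the full MTI processing is performed inside the Rheticus service, the basic quantity it delivers can be illustrated with the standard DInSAR relation between differential phase and line-of-sight displacement, d = -λΔφ/(4π). The sketch below is only illustrative: the X-band wavelength and the phase values are assumptions for the example, not outputs of the SPINUA/Rheticus chain.

import numpy as np

# Standard DInSAR relation: a 2*pi cycle of differential phase corresponds to
# half a wavelength of line-of-sight (LOS) motion: d = -lam * dphi / (4 * pi).
WAVELENGTH_M = 0.031  # approximate X-band wavelength (assumed value, for illustration)

def phase_to_los_mm(unwrapped_phase_rad):
    """Convert unwrapped differential phase (radians) to LOS displacement (mm)."""
    return -WAVELENGTH_M * np.asarray(unwrapped_phase_rad) / (4 * np.pi) * 1000.0

# Hypothetical unwrapped phases for one scatterer on four acquisition dates.
print(phase_to_los_mm([0.0, 0.4, 0.9, 1.3]))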
SAR data acquired by the Italian Space Agency's COSMO-SkyMed constellation, consisting of first-generation (CSK) and second-generation (CSG) satellites, have been processed (Table 2). The SPINUA algorithm [28] was used to identify PS and DS. The Multitemporal Interferometry technique was used to assess land movements that could prevent access to buried structures and even damage the emerged ones. The choice of COSMO-SkyMed data was dictated by their high spatial (3 m) and temporal (4 days) resolution.

Multisensor Processing and Analysis of UAV Data

The employed UAV is a quadcopter manufactured in 2022, whose sensors (multispectral, thermal, and optical) have been mounted according to the survey to be carried out, so as to appropriately investigate the site. By means of the multispectral sensor Micasense RedEdge-M, which captures images in five bands, three indices were calculated: NDRE (Normalized Difference Red Edge), used to estimate the chlorophyll content of the plant canopy, together with NDVI and MSAVI2, whose descriptions are given in Table 1. The latter two have been compared with the same indices obtained from satellite data.

Acquisition, Processing, and Analysis of Geophysical Data

The initial idea was to carry out an integrated geophysical survey with frequency domain electromagnetics (FDEM) and ground penetrating radar (GPR) methods, with the aim of assessing any changes in the electrical and electromagnetic characteristics attributable to the presence of archaeological structures. Nevertheless, after an initial site inspection, it was decided to abstain from GPR due to the roughness of the measurement surface, which degrades GPR performance. The GPR survey was replaced with Electrical Resistivity Tomography (ERT). This method, widely used in archaeology, identifies geometries that could be attributable to buried archaeological structures.

The aim of the geophysical surveys was to measure lateral changes in the electrical and electromagnetic characteristics of the shallow subsurface which could hint at the presence of archaeological structures, walls in particular, or buried archaeological objects. Figure 4 shows the two lines along which the initial electrical resistivity tomography was conducted (pink and light blue circles), the three follow-up ERT lines (black, blue, and red circles), and the surface areas in which FDEM measurements were carried out.
Geoelectric Data Acquisition

The direct current geoelectric method is a geophysical remote sensing technique that starts from measurements of electric current intensity, potential differences, and the relative distances between electrodes placed at the ground surface. A classical geophysical inverse problem arises in attempting to estimate, from the measured properties, the local electrical resistivity over a depth range of underlying subsurface points; the standard way to tackle this problem is with ERT. The tomography software, applied in Section 3.2 to create two-dimensional resistivity cross sections, so-called profiles (see Sections 3.3.1 and 3.3.3), simplifies the three-dimensional physical phenomena to two dimensions.

Five geoelectrical surveys have been conducted in two successive steps. In the first step, two profiles (ERT_48_A and ERT_48_B) were acquired with the aim of determining average resistivity values, necessary for the design of the FDEM survey described in the following section; one design parameter, for instance the ideal measurement time for a single data point, increases with increasing ground resistivity. Both geoelectric resistivity profiles were carried out with 48 electrodes at a distance of 2 m, using a quadrupole configuration of the Dipole-Dipole type.
The configuration was chosen to maximize the sensitivity to horizontal variations of the electrical resistivity along the direction of the acquisition profile. Table 3 shows the geometrical details, namely the spacing between electrodes, the UTM coordinates and elevation at the beginning and end of the profile, and the quadrupole type of each profile.

In the second step, which took place after the first of two electromagnetic survey campaigns had been carried out and its results analyzed, three profiles (ERT_72_A, ERT_72_B, and ERT_72_C) were acquired in order to verify electromagnetic anomalies. The profiles consisted of 72 electrodes, with a distance of 0.5 m, using the quadrupole configuration of the Dipole-Dipole type. Table 4 shows the geometrical details and quadrupole type of each profile.

Apparent resistivity data have been obtained with the IRIS SyscalPro® instrument shown in Figure 5, with ten physical channels programmed with Dipole-Dipole and Wenner-Schlumberger sequences. For each quadrupole, the recorded datum, i.e., the apparent resistivity, was obtained by averaging multiple measurement cycles consisting of alternating injections of positive and negative current (from a minimum of three to a maximum of six), setting a maximum threshold of the standard deviation (or quality factor Q) equal to five percent. The duration of each cycle was set to 500 ms, with a constant voltage at the current electrodes (A and B) equal to 200 V. Finally, before taking the measurements, the insertion of the electrodes into the ground was completed with particular care in order to obtain the most homogeneous electrode-ground contact resistance possible, generally less than 2 kΩ.
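For readers unfamiliar with the quantity the instrument records, a single dipole-dipole quadrupole reading is turned into an apparent resistivity via the standard geometric factor k = πn(n+1)(n+2)a. The following is a minimal sketch; the voltage and current values are illustrative assumptions, not measurements from these profiles.

import math

def dipole_dipole_geometric_factor(a_m, n):
    """Geometric factor for a dipole-dipole array with dipole length a (m) and separation factor n."""
    return math.pi * n * (n + 1) * (n + 2) * a_m

def apparent_resistivity_ohm_m(delta_v_volts, current_amps, a_m, n):
    """Apparent resistivity rho_a = k * (dV / I) for one quadrupole measurement."""
    return dipole_dipole_geometric_factor(a_m, n) * delta_v_volts / current_amps

# Hypothetical reading on a 2 m spaced layout (values are illustrative only).
print(apparent_resistivity_ohm_m(delta_v_volts=0.012, current_amps=0.1, a_m=2.0, n=3))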
To georeference and process the electrical resistivity tomography lines, the (planimetric and altimetric) position of each electrode was determined using a Trimble 5800 GPS receiver with differential correction to ensure decimetric positioning precision of the points.

Frequency Domain Electromagnetic Data Acquisition

Inductive electromagnetic methods are geophysical techniques that allow the electrical resistivity of soils to be estimated based on an electromagnetic effect, i.e., the induction of a secondary magnetic field, measurable at the surface of the soil, produced by a primary electromagnetic field generated at the surface of the soil. Electromagnetic conductivity meters, or electromagnetic meters, are designed to work under low induction number conditions, measuring the electromagnetic response of the soil directly as an apparent electrical conductivity in mS/m.

The phase component of the ratio between the secondary and primary fields, also known as the in-phase component, provides information on the type of conductive material (metallic or non-metallic) and on the working conditions (validity of the low induction number approximation). It is generally expressed in ppt (parts per thousand) and assumes non-zero values when the inductive characteristics of the subsurface are not negligible and the induction number becomes high due to a high conductivity (usually greater than a few tens of mS/m).

Electromagnetic data have been obtained with two instruments: the Mini-Explorer electromagnetometer by GF Instruments and the GEM2 electromagnetometer by Geophex (Figure 6).

The Mini-Explorer instrument has three receiving coils at distances of 0.32, 0.71, and 1.18 m from the transmitting coil; the working frequency is 30 kHz. As a result, each measurement provides conductivity and in-phase values of the medium for three different investigation depths. By rotating the instrument by 90° along the horizontal axis, the orientation of the coils can be set to vertical (high depth range) or horizontal (low depth range); the latter is only recommended if no other instrument with smaller coil spacings is available. We used the vertical coil configuration, which corresponds roughly to investigation depths of about 0.5, 1, and 1.8 m.

The GEM2 instrument is a multi-frequency instrument (usable in the range between 30 Hz and 93 kHz) that works with coils 1.66 m distant from each other. For this work, five frequencies were used (5825 Hz, 15,325 Hz, 32,025 Hz, 57,025 Hz, and 80,225 Hz), whose values were determined based on the electrical resistivity values obtained with the electrical tomographies acquired along the lines ERT_48_A and ERT_48_B.

With both instruments, data were acquired in two successive campaigns covering three areas surrounding the nuraghe, namely area A in the south-west, area B in the south-east, and area C in the north-east.
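For orientation on the conversion such instruments perform internally, McNeill's low-induction-number formula, σa = 4Q/(ωμ0s²), with Q the quadrature component of the secondary-to-primary field ratio, can be sketched as follows; the reading used below is a made-up value, not site data.

import math

MU0 = 4 * math.pi * 1e-7  # vacuum magnetic permeability (H/m)

def lin_apparent_conductivity_mS_m(quadrature_ppt, freq_hz, coil_sep_m):
    """McNeill's low-induction-number formula: sigma_a = 4*Q / (omega * mu0 * s^2)."""
    q = quadrature_ppt * 1e-3          # ppt -> dimensionless ratio
    omega = 2 * math.pi * freq_hz
    sigma_s_m = 4 * q / (omega * MU0 * coil_sep_m ** 2)
    return sigma_s_m * 1000.0          # S/m -> mS/m

# Hypothetical reading with the 1.18 m Mini-Explorer coil pair at 30 kHz.
print(lin_apparent_conductivity_mS_m(quadrature_ppt=0.5, freq_hz=30e3, coil_sep_m=1.18))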
The second campaign targeted the extension of area B towards the south-east, in order to cover potential structures that had been identified during the analysis of the data of the first campaign together with the satellite data. The acquisition was conducted along variable-length profiles spaced one meter apart, in continuous measurement mode with a frequency of four measurements per second, corresponding to an in-line spacing of approximately four measurements per meter.

Both instruments were carried at a height of 30 cm above the ground surface. With the Mini-Explorer, 42,000 apparent conductivity data and as many phase data (ratio between the in-phase component of the secondary field and the primary field) were acquired, for a total profile length of approximately 10.5 km; with the GEM2, instead, for each of the five frequencies, 45,000 data of the quadrature ratio of the electromagnetic fields and as many of the in-phase ratio were acquired, also for a total length of 10.5 km. Using instruments with a differential GPS (GPS Trimble 5800), the UTM geographical coordinates of all measurement points were also measured, which made it possible to georeference both the data and the results.

Data processing involved mainly two steps, namely the corrections for the non-linearity of the instrument response and for the working height [29], and the data calibration [30]. The data were first corrected according to the procedures described in [29] and then calibrated according to the procedure described in [30].

Data Management and Publication

Data management is carried out by an IT platform consisting of a Back-End and a Front-End. The microservices are:
• ArchaeoSardinia PostgreSQL, which handles vector-type GIS data via the PostGIS extension;
• ArchaeoSardinia Geoserver, which enables the publication of GIS data and provides user interfaces to manage the publication of vector and raster data;
• ArchaeoSardinia OpenAtlas, which enables archaeological metadata management and publication (e.g., documentation of UAV surveys and data post-processing methods).

OpenAtlas is used to document the various information layers produced during archaeological surveys and to assist a WebGIS viewer in configuring the related maps. A raster layer representing a vegetation index, managed in the platform and then linked to the investigated site via an Event/Activity (CIDOC-CRM class E5/E7), may be considered as an example. Another example is the classification of entities using Types (class E55) that have been created on the basis of Sardegna Cultura and Nurnet information. The Types characterize all the features present in the site (for instance, "Nuraghe", "Complex Nuraghe", "Complex Nuraghe Bastion", and so on).
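As an illustration of how a processed raster layer might reach this publication stack, the following sketch pushes a GeoTIFF to a GeoServer instance through its standard REST coverage-store endpoint; the host, credentials, workspace, and file names are placeholders, and the actual ArchaeoSardinia ingestion workflow may differ.

import requests

# Placeholder connection details -- not the real ArchaeoSardinia deployment.
GEOSERVER = "http://localhost:8080/geoserver"
AUTH = ("admin", "geoserver")

def publish_geotiff(workspace, store, tif_path):
    """Create or overwrite a GeoTIFF coverage store via GeoServer's REST API."""
    url = f"{GEOSERVER}/rest/workspaces/{workspace}/coveragestores/{store}/file.geotiff"
    with open(tif_path, "rb") as f:
        r = requests.put(url, data=f, auth=AUTH,
                         headers={"Content-Type": "image/geotiff"})
    r.raise_for_status()  # abort on any HTTP error

# e.g. publish_geotiff("archaeosardinia", "ndvi_2021_06", "ndvi_2021_06.tif")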
A screenshot of the ArchaeoSardinia-OpenAtlas archaeological data documentation module is shown in Figure 8.

Data publication is carried out via the WebGIS at https://gislab.crs4.it/archaeowebgis/ (accessed on 15 December 2023). The viewer enables the consultation of the information layers obtained from the surveys at the Nanni Arrù site and is available for use on a wide variety of devices, including smartphones and tablets. It is possible to compare data from drone surveys and geophysical methods by managing the opacity of the layers, even at high zoom levels. Figure 9 shows a screenshot of the ArchaeoSardinia WebGIS.

Results

This section considers the results of interpreting the data obtained from the different instruments used for the surveys.

Analysis of Satellite Images

As mentioned in Section 2.1.1, only indices obtained using the Planet Fusion algorithm have been taken into account. In particular, the trend of the NDVI and MSAVI2 indices over the period April 2018-June 2022 was examined. The MSAVI2 index is used as a variant to extend the application limits of NDVI to areas with a high presence of bare soil. A tool to visually analyze the two indices' trends has been implemented; it shows the daily trend over the period April 2018-June 2022. Figure 10 shows a screenshot of the tool, which can be accessed at https://gislab.crs4.it/archaeosardinia/sat.html (accessed on 15 December 2023).
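Both indices are standard per-pixel band combinations. A minimal sketch, assuming NIR and RED surface-reflectance arrays as inputs (independent of the Planet Fusion processing chain), is the following.

import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def msavi2(nir, red):
    """Modified Soil Adjusted Vegetation Index 2 (no soil-brightness parameter needed)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (2 * nir + 1 - np.sqrt((2 * nir + 1) ** 2 - 8 * (nir - red))) / 2

# Hypothetical surface-reflectance values for a single pixel.
print(ndvi(0.45, 0.12), msavi2(0.45, 0.12))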
The analysis of these indices appears to show anomalies that are repeated over the years and suggest the possible presence of buried structures in areas north-east and south-east of the excavated part of Nanni Arrù. As an example, some of the NDVI and MSAVI2 indices are shown in Figure 11, with an indication of the areas showing anomalies.

As described in Section 2.1.2, in this study both distributed and persistent radar targets were analyzed and made available for consultation through the Rheticus Displacement service. The distribution of PS and DS in the area surrounding the Nanni Arrù site is shown in Figure 12. The PS/DS ground motion map shows that the Nuraghe Nanni Arrù is stable over the monitored time interval (January 2018-May 2023).
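Stability of this kind is commonly summarized by the mean line-of-sight velocity of each scatterer's displacement time series. The following minimal sketch estimates such a velocity by a least-squares linear fit, on synthetic data rather than on the Rheticus output.

import numpy as np

def mean_los_velocity_mm_yr(dates_days, displacement_mm):
    """Least-squares linear trend of a PS displacement time series, in mm/year."""
    t_years = np.asarray(dates_days, float) / 365.25
    slope, _ = np.polyfit(t_years, np.asarray(displacement_mm, float), 1)
    return slope

# Hypothetical time series: noise around zero, i.e. a "stable" scatterer.
days = np.arange(0, 5 * 365, 16)
disp = 0.1 * np.random.default_rng(0).normal(size=days.size)
print(mean_los_velocity_mm_yr(days, disp))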
Analysis from Drone Surveys

Multispectral Sensor

It can be observed that there is clearly lush, circular cistus vegetation inside some towers, and similar cistus are present in an area east of the excavated site (Figure 13); there, no tower is visible, but one could be buried under the ground. As emerged from the historical series of satellite data, these plants appear to have higher vigor than the others nearby.

Initial Electrical Resistivity Tomography

As described in Section 2.3.1, the direct current geoelectric method was used to acquire resistivity data along the profile lines ERT_48_A and ERT_48_B, shown in Figure 4. For the 2D inversion of the experimental data, the ResIPy codes [31] and the commercial software Res2Dinv® (https://landviser.com/software/res2dinv/, accessed on 14 December 2023) [32] were used. Before proceeding with the data inversion, each pseudosection was carefully edited using filtering to eliminate particularly noisy data. This operation was carried out using ProSys® (https://www.altech-ads.com/products/Prosys-OPC/, accessed on 14 December 2023), the software for operating the SyscalPro® instrument.

The resulting resistivity profiles are depicted in Figure 14 and were utilized for the design of the FDEM investigations, namely for the selection of the instrument and the determination of acquisition parameters such as the height of the instrument above the ground surface and the operating frequencies.
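One simple check behind such design choices is whether the low-induction-number assumption holds for the expected ground resistivities. The sketch below computes the induction number (coil separation divided by the electromagnetic skin depth); the resistivity value is an illustrative conductive end-member for the site, not a design output.

import math

MU0 = 4 * math.pi * 1e-7  # H/m

def induction_number(coil_sep_m, freq_hz, resistivity_ohm_m):
    """Ratio of coil separation to skin depth; LIN instruments assume this is well below 1."""
    sigma = 1.0 / resistivity_ohm_m
    skin_depth = math.sqrt(2.0 / (2 * math.pi * freq_hz * MU0 * sigma))
    return coil_sep_m / skin_depth

# Conductive end-member (~4 Ohm*m) with the largest Mini-Explorer spacing at 30 kHz.
print(induction_number(1.18, 30e3, 4.0))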
Both tomography results show electrical resistivity values between approximately 4 Ωm and 50 Ωm. Based on these values, electromagnetic data were simulated for a wide range of potential acquisition parameters using the FDEMtools 3.0 software [32]. From these simulations, it was found that the site's electrical characteristics were optimal for both FDEM instruments, the Mini-Explorer and the GEM2. Furthermore, based on the simulations, the optimal working height was determined to be 30 cm, and the operating frequencies of the GEM2 were set to 5825 Hz, 15,325 Hz, 32,025 Hz, 57,025 Hz, and 80,225 Hz.

Electromagnetic Resistivity and In-Phase Maps

The apparent resistivity maps obtained from the data collected using the Mini-Explorer and GEM2 instruments are depicted in Figures 15 and 16. Both maps show two significant anomalies (see blue circles) in an estimated depth range of 0 to 1.8 m, which deserve closer attention. The tomographies discussed in the next section were conducted with the aim of further investigating these anomalies.
Follow-Up Electrical Resistivity Tomography

Three additional ERT surveys were conducted along the lines indicated in Figure 16, starting from the north-east and continuing toward the south-west. Tomographies ERT_72_A and ERT_72_B, conducted to investigate the electromagnetic anomalies south-east of the nuraghe, are shown in the top and middle sections of Figure 17. Both tomographies reveal: (1) a relatively resistive superficial layer (ranging from 25 to 90 Ωm) with a thickness of about 1 m; and (2) a second, conductive layer with resistivities of less than 5 Ωm, which contains a "tongue" of more resistive material in its intermediate part (in depth). Although the electrical tomography sections do not point to subsurface structures of archaeological relevance, the general trend along the lines, i.e., increasing conductivity in the electromagnetic resistivity and in-phase maps and decreasing resistivity in the electrical tomography sections, is consistent. Also, electrical resistivity tomography ERT_72_C, displayed at the bottom of Figure 17 and targeting the low-conductivity anomaly east of the nuraghe, confirms the trend seen in the electromagnetic resistivity and in-phase maps.

Discussion

The comparison of data obtained from the various survey techniques used in this study highlighted anomalies that could suggest the presence of buried structures. In particular, both the data obtained by the electromagnetic geophysical survey and those obtained by satellite and drone showed anomalies in the southern area close to the excavated part. However, the evidence was not strong enough to come to a final conclusion. Therefore, it would be useful to conduct further electromagnetic tests also in the excavated area; this has not been possible with the geophysical instruments at hand, given the asperity of the site. Future developments of this study include the possibility of mounting the Mini-Explorer electromagnetometer on the drone, as described in [33].

Electromagnetic surveys have also highlighted anomalies in an area east of the excavated part of the Nanni Arrù site characterized by the presence of cistus similar to those present inside some towers in the excavated area, which could suggest possible buried structures.
Further developments of this study also include the application of the proposed methodology to Nuragic sites with different characteristics, such as the type (simple and complex nuraghi, for instance), the presence of surrounding villages, and the involvement in excavation activities.

Figure 3. Google Earth images. Examples of possible soil marks (a) and crop marks (b), pointed out by the arrows.

Table 1. List of indices. In the formulas: NIR indicates the light reflected in the near-infrared spectrum, RED the light reflected in the red range of the spectrum, GREEN the light reflected in the green range of the spectrum, and SWIR the light reflected in the short-wave infrared.

Figure 4. The dotted lines represent the locations of the five 2D ERTs and the light red surface the total area in which 3D FDEM measurements were carried out.

Figure 7. IT platform architecture. The Front-End implements the WebGIS application, accessed by users, that publishes maps using OGC (Open Geospatial Consortium) compliant services provided by the Back-End. The Platform includes a module for archaeological data documentation based on the CIDOC CRM standard (https://www.cidoc-crm.org/, accessed on 15 December 2023) and, in particular, a simplified version of it used in the open source software OpenAtlas (https://openatlas.eu/, accessed on 15 December 2023). Through this form, it is also possible to document any phases of excavation or archaeological surveys of the site under study.
Figure 10. Screenshot of the tool that shows the NDVI and MSAVI2 trends over the period April 2018-June 2022.

Figure 11. NDVI and MSAVI2 indices calculated on the same days in the years 2018-2021. Black dots indicate the Nanni Arrù site; the arrows indicate anomalous areas.

Figure 12. PS/DS ground motion map. The purple pointer indicates the Nanni Arrù site. In the color scale, green indicates stability, red indicates lowering, and blue indicates uplift. The displacement time series graph shows the displacement trend of a selected PS (blue pointer) in descending orbit over the monitored time interval (January 2018-May 2023).

Figure 13. MSAVI2, NDVI, and NDRE indices obtained by the survey conducted on 21 October 2022. Blue arrows indicate the areas where cistus are inside towers; the black arrow indicates similar cistus present where there are no towers.
Figure 15. Electromagnetic map from the Mini-Explorer: electrical conductivity obtained with coils spaced 1.18 m apart in a vertical configuration, corresponding to an investigation depth range of up to 1.8 m. The circles indicate two areas with anomalies to be verified.

Figure 16. Electromagnetic map from the GEM2: quadrature component of the ratio between the secondary magnetic field and the primary magnetic field. The circles indicate two areas with anomalies to be further investigated by additional ERT surveys (ERT_72_A and ERT_72_B indicated by blue crosses, ERT_72_C by red crosses).

Table 3. Geometrical details of the first two geoelectrical surveys.

Table 4. Geometrical details of the last three geoelectrical surveys.
2024-02-09T16:22:20.520Z
2024-02-07T00:00:00.000
{ "year": 2024, "sha1": "51f996d610ba0c4ab8bf4a57674c08468418d932", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2673-7418/4/1/3/pdf?version=1707287007", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "c63b98b59eaf06f2c34a84e10c452a155f7ca035", "s2fieldsofstudy": [ "Environmental Science", "Engineering", "History" ], "extfieldsofstudy": [] }
172087399
pes2o/s2orc
v3-fos-license
Three Ones and Aristotle's Metaphysics

Aristotle's Metaphysics defends a number of theses about oneness [to hen]. For interpreting the Metaphysics' positive henology, two such theses are especially important: (1) to hen and being [to on] are equally general and so intimately connected that there can be no science of the former which isn't also a science of the latter, and (2) to hen is the foundation [archē] of number qua number. Aristotle decisively commits himself to both (1) and (2). The central goal of this article is to improve our understanding of what the Metaphysics' endorsement of their conjunction amounts to. To this end we explore three manners of being one which enter into Aristotle's Metaphysics: I call them unity, uniqueness, and unit-hood. On the view the article defends, it's unity (and not uniqueness) that's at issue in Aristotle's endorsement of (1) and unit-hood (and not uniqueness) that's at issue in his endorsement of (2). The Metaphysics' positive henology as a whole, I suggest, is best interpreted by positing a theory-internal distinction between unity, uniqueness, and unit-hood.

Some Interpretive Questions

In the Platonist philosophical tradition, two basic intuitions about oneness [to hen] loom large. The first is that to hen has some kind of deep connection with being; the second is that to hen has some kind of deep connection with number. Developing these intuitions-and doing so rather differently than Plato and more orthodox Platonists-Aristotle's Metaphysics elaborates and defends the following theses about to hen:

Thesis A. being [to on] and to hen are equally general and so intimately connected that there can be no science [epistēmē] of the former that isn't also a science of the latter and its per se attributes

Thesis B. to hen is the foundation [archē] of number [arithmos] qua number

Aristotle decidedly commits himself to both theses. The central goal of this paper is to improve our understanding of what his commitment to their conjunction amounts to. Suppose we use ontological as a label for the notion of one/oneness [to hen] at issue in Aristotle's endorsement of Thesis A, and arithmetical for the notion of one/oneness at issue in his endorsement of Thesis B. We can then ask what, on Aristotle's view, being one in the ontological sense and being one in the arithmetical sense each amount to, whether-and to what extent-he views them as distinct, and how-if he does think them distinct-these varieties of oneness relate to one another. In the last decades, these questions have not received much sustained attention. But a great many issues in Aristotle, as well as later Aristotelian thought, seem to be tied up with these questions-not least among them, questions about the role of hylomorphism in Aristotle's metaphysical project. Most obviously, our questions bear on the Aristotelian pedigree of the medieval distinction between 'transcendental' and 'quantitative' oneness [Ar.: waḥda, Lat.: unitas]. Now, this latter distinction has its own complex history of interpretations and reinterpretations. 3 And in the interest of prejudging as little as possible and guarding against the conflation of what may turn out to be distinct philosophical issues, I think it safest here to avoid anachronistic application of such scholastic terminology to Aristotle himself.
Thus we speak rather of 'ontological' ones/oneness and 'arithmetical' ones/oneness in what follows; and we pursue our questions using these labels in the exact interpretive sense specified in the paragraph above.

Now, much of Aristotle's Metaphysics is dialogical and aporetic. As with other topics, its various discussions of oneness are aimed at-and conducted from-a variety of theoretical perspectives and often involve playing off different intuitions about being one against each other. 4 But it turns out to be easy enough to collect from the Metaphysics: (a) a critical mass of texts that clearly pertain to Aristotle's own endorsement of Thesis A (and thus ontological oneness) and (b) a critical mass of texts that clearly pertain to Aristotle's own endorsement of Thesis B (and thus arithmetical oneness). What we find in this body of texts, I contend, are two quite incompatible accounts of oneness. I contend, moreover, that in the overall henology of the Metaphysics these two accounts are best viewed as neither competing nor confused, but as intended to conceptualize two different things that Aristotle himself sees as quite distinct. On the interpretation of the Metaphysics' henology I develop below, ontological oneness in Aristotle is unity (and not uniqueness) and arithmetical oneness in Aristotle is unit-hood (and not uniqueness). On Aristotle's own view of them, ontological and arithmetical oneness are quite separate.

This account of Aristotle's henology is far from uncontroversial. It clashes most strikingly with a common interpretation of the Metaphysics' henology on which Aristotle assimilates ontological to arithmetical oneness and/or effectively identifies the former with the latter. We can call such interpretations Assimilationist, and this general line of interpretation Assimilationism about Aristotle's henology. 5

Now, the foundation of Assimilationism is a trio of Metaphysics passages: two from Met. Iota 1, one from Met. Δ.6. The three passages are traditionally interpreted together in a very tight hermeneutic circle. Thus interpreted, they seem to constitute strong evidence for some kind of Assimilationism and strong evidence against my competing account of Aristotle's henology. Yet all three, and two of them in particular, are attested in remarkably different versions in the Byzantine manuscript tradition-our best evidence for what Aristotle actually wrote. This has not, I think, been taken seriously enough; but fully engaging with Assimilationism will require that we do so. For operative among the (alternately reinforced and reinforcing) assumptions in the hermeneutic circle that's given rise to Assimilationism are some highly questionable text-critical judgements about these passages. Thinking through what, on the final analysis, the three passages do or do not tell us about Aristotle's henology will involve consideration of some thorny text-critical and text-genealogical issues. This work will occupy us in the paper's penultimate section: Section 6. Prerequisite for serious engagement with the text-critical and interpretative questions addressed by Section 6 is philosophical consideration of much else in the Metaphysics' henology. Among other things, the more purely philosophical work of the five preceding sections is intended to provide such preparation.

The section that succeeds this one (Section 3 below) offers a synoptic account of ontological oneness in Aristotle, drawing centrally on Met. Γ.2 and the closely connected analysis of unity in Met. Δ.6 1015b16-1016b17. 6
Section 4 turns to Aristotle's positive conception of number [arithmos], developing an interpretation of arithmetical oneness and the sense in which Aristotle himself thinks it true to say that to hen is the foundation of number. Naturally, our central focus there is Met. Iota 1 1052b20-1053b8 and N.1 1087b33-1088a14. Building on the previous sections' work, Section 5 sets out a series of arguments concerning the distinctness of arithmetical and ontological oneness in Aristotle's thought (and the distinctness of both from uniqueness as Aristotle conceives of it). Finally, Section 6 turns to the three passages that motivate Assimilationism and will prima facie seem to pose a serious challenge-indeed, the main challenge-to my interpretation of Aristotle's henology: Met. Iota 1 1052b16-19, Met. Iota 1 1053b4-6, and Met. Δ.6 1016b17-18. Attending to the relevant text-critical problems, Section 6 defends interpretations of the three passages on which they can be seen to cohere well with both my account of Aristotle's henology and their own immediate context. Section 7 is a conclusion.

4 This is especially true of Met. Iota 2. As I plan to discuss in a subsequent publication, I take the argument of that chapter to deploy ideas about both ontological and arithmetical oneness in quick succession, and to do so without argumentatively conflating them.

5 Averroes-e.g. in his Epitome to Aristotle's Metaphysics (English translation in Arnzen 2010) and Long Commentary on Aristotle's Metaphysics (yet to be translated in English: Arabic in Bouyges 1952)-criticizes Avicenna for assimilating (≈what I've called) ontological oneness to (≈what I've called) arithmetical oneness. This doesn't get Avicenna entirely wrong, but Avicenna's considered view is far more complicated-cf. I. 5 was not yet released. Castelli's approach to Aristotle's henology is interesting and interestingly different from my own. I'll be offering more detailed discussion of her work-including our agreements and disagreements-in a later article that takes account of both her books.

6 NB in particular the tight correspondence of Met. Γ.2 1004a25-31 and Δ.6 1016b6-9.

Ontological Oneness: Unity

Aristotle's Metaphysics develops a conception of wisdom [sophia] as an epistēmē: as the mastery of a certain perfected science. So conceived, wisdom will be like other epistēmai in being a kind of impeccable systematic understanding of a subject-matter-a form of perfected knowledge whose characteristic expression is to give the definitive accounts of (subject-matter specific) inexorable phenomena [anagkaia] in terms of their causes [aitia]. But in contrast to other epistēmai, the Metaphysics argues that wisdom will be an especially profound epistēmē due to the abstraction and extreme universality of its subject-matter. For Aristotle thinks the epistēmē most deserving of the name sophia will have to be a 'big picture' epistēmē of reality as a whole. In particular, his Metaphysics argues that true sophia would be an epistēmē that accounts for the most general of all truths by explaining them on the basis of their most primitive causes and ultimate foundations [archai]. To attain this epistēmē of sophia is the ultimate goal of the investigative discipline that Aristotle calls First Philosophy [protē philosophia]. The ultimate goal of Aristotle's Metaphysics is not, of course, to propound any such sophia-Aristotle doesn't claim to have it-but simply to make progress in First Philosophy (so conceived).
In Met. Γ.1 Aristotle famously teaches that the maximally general truths that First Philosophy takes as explananda concern being as such. They are, that is, inexorable truths concerning what holds of (all or certain types of) beings simply insofar as they manifest their associated ways of being. But as Γ.1-2 develops this line of thought, we soon learn that this science of being qua being is (somehow) also a science of oneness qua oneness and what pertains to it per se (1003b33-6, 1004b5-8). For according to Aristotle, there is a type of oneness [to hen] that is as universal a phenomenon as being is-a type of oneness that's (non-accidentally) convertible with being. (Aristotle calls two phenomena convertible iff every case of the first is a case of the second and vice versa.) Met. Γ.2 argues that being and this type of oneness are in some sense 'the same single nature', with the result that this type of oneness is 'nothing different over-and-above being' (1003b22-3, 1003b31). The upshot of this is supposed to be that the envisioned epistēmē of wisdom must be thematically concerned with the explication of this particular phenomenon of oneness and must account for its per se attributes (= whatever holds of things insofar as things somehow manifest this type of oneness).

At issue in these remarks is ontological oneness in the sense of Section 2. And what Aristotle has in mind here clearly isn't uniqueness: i.e. being one in the sense of countable as 'one'. For if it were, then the mathematical epistēmē of number theory [arithmētikē] would (by Aristotle's lights) be part of the metaphysical epistēmē that First Philosophy seeks-and according to Aristotle it most certainly isn't. Nor can ontological oneness be identified with self-sameness. For, in line with other texts, Met. Γ.2 affirms the priority of ontological oneness to sameness: the latter being among the 'per se attributes' pertaining to the former (Γ.2 1004b5-8; cf. Δ.9 1018a7-9). No, for Aristotle ontological oneness is unity. 7 Now, according to Aristotle being and ontological oneness-i.e. being and unity-are not just convertible but convertible per se. 8

7 At issue here is the sense of to hen relative to which Aristotle himself wants to affirm what he says about ontological oneness in Met. Γ.2. On the final analysis, I think it pretty clear that ontological oneness for Aristotle is unity. And to bring out the attractiveness of this interpretation this section develops it in connection with Γ.2, and in some detail. That in the context of Aristotle's other views ontological oneness turns out to be unity seems to me quite hard to deny. All things considered, I have a difficult time seeing what else ontological oneness could possibly be given the various things Γ.2 says about it. Section 5 below will rehearse some further arguments for why ontological oneness in Aristotle cannot be plausibly identified with uniqueness, arithmetic oneness, or unit-hood. While they all deploy ideas introduced in Sections 3-4, the majority of these arguments do not rest on any interpretive identification of ontological oneness with unity. Some strong textual evidence for thinking that ontological oneness in Γ is unity has already been noted: I mean the tight connection between Γ.2 1004a25-31 and Δ.6 1016b6-9 (for Δ.6 1015b16-1016b17 is manifestly about unity). Aristotle's ideas about the relationship between unity and sameness are complex.
7 At issue here is the sense of to hen relative to which Aristotle himself wants to affirm what he says about ontological oneness in Met. Γ.2. On the final analysis, I think it pretty clear that ontological oneness for Aristotle is unity. And to bring out the attractiveness of this interpretation this section develops it in connection with Γ.2, and in some detail. That in the context of Aristotle's other views ontological oneness turns out to be unity seems to me quite hard to deny. All things considered, I have a difficult time seeing what else ontological oneness could possibly be given the various things Γ.2 says about it. Section 5 below will rehearse some further arguments for why ontological oneness in Aristotle cannot be plausibly identified with uniqueness, arithmetic oneness, or unit-hood. While they all deploy ideas introduced in Sections 3-4, the majority of these arguments do not rest on any interpretive identification of ontological oneness with unity. Some strong textual evidence for thinking that ontological oneness in Γ is unity has already been noted: I mean the tight connection between Γ.2 1004a25-31 and Δ.6 1016b6-9 (for Δ.6 1015b16-1016b17 is manifestly about unity). Aristotle's ideas about the relationship between unity and sameness are complex. I won't be discussing them in any detail below, but hope to treat them in a later publication. I'll mention here one further reason for denying that ontological oneness for Aristotle is self-sameness: Met. Z.17 1041a14-20 effectively distinguishes to hen as unity from to hen as self-sameness on the grounds that there can be causal inquiry into the former but no causal inquiry into the latter.

8 Regarding the theses about being and ontological oneness rehearsed in this and the following paragraph see esp. Met. Γ.2 1003b22-34 and Iota 2 1054a13-19.

It is important to appreciate that in assenting to such claims, Aristotle isn't conceiving of unity and being as determinate characteristics or uniform natures in which all things share. For Aristotle thinks it manifest that there are many different ways to be a being and many different ways to be unified. And so 'being' and 'unity' in the paragraph above need to be interpreted such that: X 'is a being' means X exhibits some way of being, and X 'is a unity' means X exhibits some way of being unified. In such contexts, Aristotle will think of being and unity not as 'subjects' [hupokeimena] but as maximally general 'predicables' [katēgorēmata]9-albeit predicables of a very peculiar sort. They will not, he thinks, fall into any of the categories [katēgoriai]: calling X a unity or a being won't express what X is, or a quality of X, or a quantity of X, or…. Nor do being and unity, thus construed, transcend categories in the manner that per accidens compounds like teenager do; nor are they at all property-like since, pace Avicenna (and perhaps Plato), Aristotle thinks it makes no sense to posit intrinsically being-less and unity-less subjects that underlie being and unity. So, according to Aristotle there are different ways to be a being (something that is), and different ways to be a unity (something that's unified). And to say that there are different ways to be X is not simply to say that there are different kinds of X with impressively different essences. Though I'm relabeling it, the distinction I have in mind is Aristotle's own. To see it, contrast (i) the manner in which an isosceles triangle and a scalene triangle are both triangles with (ii) the manner in which (the horse) Rocinante and a photo of Rocinante are both animals. The former two items are different kinds of triangle, but they are not triangles in different ways because (as Aristotle would put it) what it is for each of them to be a triangle is the same. In contrast, while the sentence 'This is an animal' can be truly said of both Rocinante and his photograph, the horse and the photo aren't different kinds of animal. These two (in contrast to Rocinante and Xanthippe) are animals in different ways since what it is for Rocinante to be an animal differs from what it is for the photo to be an animal.10 As a more illuminating example of this different-ways-to-be-X phenomenon, Met. Γ.2 invites us to consider the term 'healthy' as deployed in medicine. (Here, and in what follows, 'medicine' means human medicine.) Now, among the things that doctors know to be healthy there are humans, foods, complexions, lungs, and exercise regimens.
But what it is for a food to be healthy (≈for its consumption to promote health) differs from what it is for a human to be healthy; what it is for a complexion to be healthy (≈for it to indicate health) differs from both, and what it is for a lung (or exercise regimen) to be healthy differs still. Quite evidently, there isn't some single property of healthiness that all such healthy things share. What we have here is rather a plurality of different ways to be healthy. And this case is particularly interesting to Aristotle because if we collect together all such ways of being healthy with which medicine is concerned we'll have a network of distinct properties linked together not only by our language but also in extra-linguistic reality. For, as he interprets the case, the complex disposition whose possession constitutes human health enters into the real definitions of all other such ways to be healthy; and they, in turn, are all (in one manner or another) 'of or related to [human] health' [pros hugieian]. One of the central proposals of the Metaphysics is that medicine, in taking as its subject-matter everything that's healthy, is structurally analogous to First Philosophy in taking as its subject-matter everything that is a being (i.e. everything that is). For Aristotle thinks it difficult to maintain that (e.g.) humans, deaths, numbers, and pleasures all exist (=are beings) in the same way. And there are a great many aporiai that he takes to be best solved using well-motivated distinctions between different ways to be a being. But as with the various ways to be healthy, Aristotle further contends that there's one particular way to be a being that's fundamental and definitionally prior to the rest: the way of being enjoyed by substantial-beings [ousiai]. More precisely, if X is an ousia then what it is for X to be something that is is for X to be an ousia; if X isn't an ousia, it isn't. In the latter case what it is for X to be something that is can differ for different values of X-but it will always involve some sort of relationship to ousia. Reasonably, Aristotle thinks that it's by studying the nature and causes of human health that the field of medicine best advances its understanding of healthy diets, healthy complexions, healthy respirations, etc. And for analogous reasons, he thinks that First Philosophy will best advance its understanding of being in general by privileging foundational studies of the primitive causes and foundations of ousia. Now, as with being, Aristotle thinks that adequate sensitivity to the diversity of the real should compel us to admit that there are many ways to be a unity. To see the plausibility of this line, one might note that often (if not always)11 unity is a matter of some parts constituting a whole. But consider (e.g.) this human, this making of a brisket, this episode of pleasure, this number, the plot of this tragedy, water (the natural kind), and color (the universal). And consider what it is for each of these to be unified-what it is for each of them to have its parts constitute the whole it is. Intuitively, these items would appear to have parts in some strikingly different ways. But then why think they constitute wholes in the same way? Or more concretely, compare the unity of this drop of water with the unity of all that water (cf. Met. Δ.26).
The unity of the drop will be destroyed if we divide its left side from its right with a barrier, but there is no positional rearrangement [metathesis] of all that water that destroys its unity. She who insists that all unities are unified in the same way will need another way to resolve this and a great many other aporiai. But Aristotle responds by distinguishing between two ways of being unified. For the water drop to be unified, he proposes, is for its portions to be continuous. This is why the drop will survive any positional rearrangement [metathesis] that preserves corporeal continuity and none which do not. However, he will add, it's in a different way that all that water is a unity: for it to be unified is not for its portions to be continuous but simply for its portions to exist as what they essentially are (i.e. water). Here as elsewhere, Aristotle is attracted to well-motivated distinctions between ways of being unified where they prove readily intelligible and explanatorily powerful. Aristotle often insists that philosophy respect the radical diversity of the real. And among other things, his division of categories is supposed to capture a dimension of this radical diversity. So it is not surprising that Aristotle is attracted to the view that there are distinct ways of being unified for items in distinct categories. Met. Γ.2 (esp. 1003b33-34) and Iota 2 (1054a14) strongly suggest what Z.4 (1030b10-11) explicitly states: that, with respect to the categories, unity and being are predicated in equally many ways. Suppose we call a way of being unified derivative iff some other way of being unified enters into its real definition, and otherwise call it primitive. Aristotle quite evidently thinks some ways of being unified are primitive while the rest are derivative. Having previously proposed that being is structured focally [pros hen] in the manner that healthy is, Met. Γ.2 (1004a25-31) goes on to add that the same holds for unity. The immediate lesson Aristotle draws from this is that First Philosophy must not only distinguish between different ways to be a unity, but also account for 'how [other ways to be a unity] are formulated in relation to the primitive [way]' (1004a28-30). Aristotle is sometimes read as holding that there's exactly one primitive way of being a unity-that which is characteristic of ousiai. But Met. Δ.6 actually tries to work out the kind of focal analysis that Met. Γ.2 calls for at 1004a25-31. And from that discussion a somewhat more complex picture emerges (Δ.6 1016b6-9):

Most things are said to be united [hen] because they either do or have or undergo or are related to something else united. But things said to be united in the primitive way are those whose ousia is united-and united either by continuity or by a form or by a definition.

Met. Δ.6 had previously analyzed these three aforementioned ways for X's ousia to be united as three distinct ways of being undivided [adiaireton]. So, regarding the focal structure of unity, Aristotle's more considered view would seem to be (1) that every way to be a unity is primitive or derivative, (2) that while there are many derivative ways to be a unity there are several (but perhaps not terribly many) primitive ways, and (3) that the several primitive ways to be a unity are all akin to one another in constituting analogous ways for things' ousiai to be undivided.
With respect to this last point, consider (say) the corporeal undividedness that's the substantial unity of this drop of water and the formal undividedness that's the substantial unity of […]. The basic idea would be that while these two types of undividedness don't amount to the same thing-and while neither of the two is definitionally prior to the other-the two types nonetheless constitute abstract (but non-trivial) analogues of one another. Aristotle's habitual characterization of being unified as a matter of being internally undivided [adiaireton] might seem to suggest that he conceives of unity in fundamentally privative terms. After all, as Aristotle himself notes, the word 'undivided' [adiaireton] is certainly privative in its linguistic form.12 But on Aristotle's considered view, the unity of X's ousia is always a sort of fulfillment [entelecheia] and always an undividedness that makes X something definite. And on Aristotle's considered view, it's not unity but the indefinite manyness to which it's opposed that's the privative phenomenon. (More on this in Sections 5-6.) Descendants of Aristotle's idea that there are different ways to be a unity will be familiar to some readers from its renaissance in contemporary 'neo-Aristotelian' metaphysics. Other features of his account of unity have made much less impact on contemporary debates about part/whole. One in particular warrants special emphasis here. Aristotle thinks that unity comes not only in many varieties but also in degrees: that some things are more unified than others. The thought that being a single thing (countably 'one') comes in degrees verges on incoherence. Bo the dog is a single thing; so is the American Department of Defense. If one counts how many things Obama thought about today, they count equally-neither is more 'one' than the other. But it's both coherent and prima facie reasonable to contend that any live dog is more unified than the American Defense Department.13 On this kind of basis, Aristotle will insist that a dog's wholeness constitutes an achievement that not every unity matches. Heaps and collections are not unified to this degree. Aristotle will explain a dog's high degree of unity by arguing that every (proximate) part of a dog is essentially a part of that dog-that is, dog is prior in definition to all dog-parts. In contrast, Aristotle would claim, the parts of a heap are not prior to the whole they compose. Interestingly, Aristotle thinks animals which can be divided into two animals of the same kind (e.g. worms) are less unified than those which cannot (e.g. humans).14 But Aristotle also thinks a human being less unified than an Unmoved Mover who has neither different parts at different times, nor different parts at different places, nor different parts conceivable through different accounts.15 There are further complexities to Aristotle's theory, and interesting philosophical questions about all of this that I set aside here. Having briefly treated the one that Aristotle takes to be closely connected with being, I turn now to the one(s) that enter into Aristotle's account of number.

12 Prima facie, one might think Aristotle to be saying here that in the pair of contraries {hen-adiaireton, plēthos-diaireton} it's the former that's privative and the latter that's positive. But this would make Aristotle a radical revisionist with respect to the traditional sustoicheiai of contraries in a manner that looks to be inconsistent with numerous texts. And among these texts are passages that occur shortly after the Iota 3 passage just quoted: 1054a29-32, and 1055a33-1055b29. As I read the passage (Iota 3 1054a26-29), Aristotle is trying to explain why it is that our language formulates adiaireton as a privative of diaireton when by nature to hen in the ontological sense is positive and prior to its privative contrary to plēthos. The explanation for this, according to Aristotle, is that what's 'nearest to us' is aisthēsis, and what's diaireton is more easily detectable by aisthēsis than what's adiaireton. I take it that the word logos in 1054a28 means something like 'language' rather than 'definition'. A final remark about {diaireton, adiaireton} as a contrary pair in Aristotle's thought. While I think it fairly clear that, on Aristotle's view, the undividedness which constitutes ontological oneness is a positive and not a privative contrary, this is compatible with Aristotle recognizing other senses of diaireton/adiaireton on which the latter is by nature a privative contrary posterior to the former. In this connection, consider Aristotle's remarks on the intelligibility of geometrical points in De Anima III.6 430b20-21-a text whose parallels to Iota 3 1054a26-29 have been impressed on me by Menn. (One might contend that it's also intelligibility to us that's at issue in the DA III.6 passage; but it's far from obvious that this is so.)

13 One way to see that Met. Δ.6 1015b34-1016b17 and Iota 1 1052a15-1052b1 concern not uniqueness but unity is to note the ubiquity of degree talk (X is more hen than Y) in these passages.

14 On Youth, Old Age, Life and Death: 'animals of a superior constitution do not suffer [multiplication by division] because their nature is one to the highest possible degree [dia to einai tēn phusin autōn hōs endechetai malista mian]'.

15 1016b1-3 characterizes (but does not name) which beings in the universe are 'unitary in the highest degree'.

Arithmetical Oneness, Number, and Unit-hood

Number, like other mathematical concepts, has a history. Thus it's often noted that the mathematicians of Greek antiquity have no notion of negative number or irrational number, that (officially at least) they do not even take fractions to be numbers.
But to understand Aristotle's approach to number we must also take seriously the fact that our present concept of natural number is a fairly recent achievement. For the concept of number [arithmos] one finds in both philosophical and non-philosophical texts of Greek antiquity turns out to be remarkably alien to what now seems the intuitive notion of 'counting number'. Consider, for instance, the following exchange from Plato's Theaetetus (204d: trans. in Burnyeat 1990, lightly modified): […]

From Plato's perspective, the exchange dramatizes a straightforward application of a (if not the) ordinary concept of arithmos. The exchange becomes intelligible if one appreciates that the ancients primarily used the noun arithmos to mean a count or countable multitude, and that it often meant quantifiable amount. (Thus the arithmos of a mile is the 5,280 feet that compose it. The arithmos of this army is, say, a number of men now laying siege to our village: not the cardinality of a set or any kind of 'abstract object'.) As a foundation for a mathematical theory of countable multitudes, Euclid defines number [arithmos] as 'plurality consisting of monads' (Elements VII def. 2).
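Glossed schematically (the notation is mine, not Euclid's, and the list is only shorthand for plural talk, not set theory), the definition treats a number as nothing over and above a finite plurality of units:

\[ n \text{ is an } \textit{arithmos} \iff n \text{ is a plurality } m_1, m_2, \ldots, m_k \text{ of monads, with } k \geq 2. \]

On this gloss there are many twos-any pair of monads is one-rather than any single object answering to 'the number two'.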
This is a significantly different, and less primitive, mathematical concept than that at issue in Frege's attempts to define number. The modern number theorist starts with an object she calls '0' and uses a successor function to define a sequence of objects (0, 1, 2, …) whose third member is plausibly interpreted (given the theory) as answering to the description 'the number two'. The ancient number theorist starts by allowing herself to posit as many mathematical monads (≈position-less mathematical points) as she wants, and develops her 'number theory' [arithmētikē] as a theory of finite pluralities of these monads. In this setting, the smallest 'numbers' (=countable pluralities) are twos; and to get a four you need distinct twos to compose it. Nothing in the theory is readily interpretable as the number two. In an attempt to explicate the (then) contemporary concept of arithmos, Met. Δ.13 (cf. Cat. 6) teaches that a number16 is a delimited multitude [peperasmenon plēthos]: an amount [poson] that's countable [arithmēton] because it admits of some determinate division into finitely many discrete parts. On this conception, anything viewed as a finite plurality of distinct existents will constitute a number. So it's not surprising that the existence of numbers is something that Aristotle takes to be manifest. Indeed, he takes as much to be manifest by sense-perception: numbers being (as we learn in De Anima II.6) among the per se objects of sense-perception common to all senses. In the context of Aristotle's psychology, this latter claim means that even non-rational animals can perceptually discern some of the numbers in their sensible environment. A bird, e.g., will perceive a number in her nest when she sees two white bodies and is thereby alerted that one of her eggs has gone missing. Human beings, thinks Aristotle, differ from non-rational animals in having not only a perceptual discernment of some sensible numbers (say, 3 goats) but also the ability to count-and thereby gain knowledge of-other sensible numbers that our perceptual capacities do not suffice to grasp (e.g. 133 goats). (NB that with Plato and Aristotle, we use 'sensible number' to mean any arithmos that is a plurality of sensible existents. Thus, while Aristotle will contend that animals can-quite literally-see some sensible numbers as the numbers they are, most sensible numbers will be far too large to admit of perceptual discernment.) So, for Plato and Aristotle, some numbers are sensible and corporeal. Now, from Aristotle's perspective it is of great importance to see that every number is a number of things-that every number is a plurality that's determinately many somethings.17 And this, he thinks, is no less true for incorporeal numbers than it is for sensible numbers. He will ultimately argue that numbers in general have reality only as amounts [posa] that more basic entities give rise to. And this means that no number is a substantial-being [ousia] per se and that every number exists in the category of quantity. As for numerical ratios, Aristotle contends that they are not strictly speaking numbers at all but are actually relatives [pros ti] (1092b16-35). Evaluated as an account of our present concept of natural number, this picture will seem basically a nonstarter. But mathematical concepts have histories. And as an account of the (then) contemporary notion of arithmos the view proved highly attractive-and not only to opponents of Platonic metaphysics.
For concerning the sensible numbers with which ordinary reasoners are ubiquitously engaged, ancient Platonists could and in some cases explicitly did accept Aristotle's analysis of them as quantifiable amounts of things: i.e. as non-substances in the Aristotelian category of quantity [poson]. What the ancient Platonist proceeded to insist on was that in addition to these non-substantial numbers [arithmoi] there also exist incorporeal intelligible numbers which are not mere quantities [posa] but separate substantial-beings in their own right. From Aristotle's testimony, it seems that Plato himself (in his mature metaphysics at any rate) posited two different kinds of intelligible numbers as separate substantial-beings: (1) the objects studied by expert mathematicians in number theory [arithmētikē], and (2) the Forms themselves, which Plato now proposed to interpret as numbers that transcend those at issue in number theory. Aristotle tells us in the Metaphysics that when Plato introduced intelligible numbers as separate substantial-beings he went on to argue that such numbers are causes of being. Aristotle adds that Plato had proposed a substantial-being named 'the One' [to hen] as the highest cause and foundation [archē] of being. And we further learn that Plato and his followers had attempted to work out a theory of how to hen functions as a foundation of being by developing the intuition that to hen is a foundation of number. Aristotle spends a good deal of time in the Metaphysics trying to show that this research programme has gone nowhere and is ultimately hopeless. On the final analysis, he contends, neither to hen nor any numbers are separately existing substantial-beings; thus neither can be identified as foundational sources of being in general. Mathematicians, he explains, are methodologically correct to pursue number theory as they in fact do-positing point-like but position-less mathematical monads and speaking about them as if they were non-corporeal substantial individuals. But these theoretical objects are not in fact separate substantial-beings at all. And neither are the pluralities of mathematical monads that get called 'numbers' in the context of mathematical number theory-such aggregates being mere amounts [posa] of monads: abstracted quantities of a sort. The truth of number-theoretic theorems, he argues, gives us no reason to think otherwise. Despite his decisive rejection of Plato's henological programme for First Philosophy, Aristotle agrees with his Platonist interlocutors that there's something importantly right in the thought that to hen constitutes a foundation [archē] for numbers. And in the Metaphysics Aristotle proves eager to explain the sense in which it is indeed true that to hen is the foundation of numbers. Now, in contrast to its English correlate, the Greek verb metrein can mean not only 'to measure' but also 'to count'. Observing that in ordinary Greek hen sometimes functions in a noun-like way and means metron-i.e. 'measure' in the sense of unit-of-measure (unit-of-count)-Aristotle makes the basic proposal that what most deserve to be called the 'foundations' of numbers [arithmoi] are the measuring-units [metra] that make counting [arithmein, metrein] possible. For a number [arithmos], as he explains, is not simply a multitude but is more specifically a determinately countable multitude.
A multitude, however, can only be determinately countable relative to some metron, some unit-of-count, with respect to which the multitude can get correctly or incorrectly counted. It thus seems, to Aristotle at any rate, that a number's being-the-number-it-is must always be founded upon some hen in this sense of metron. And this suggests that a hen in this specific sense is the kind of hen that constitutes a foundation of numbers qua numbers. Aristotle's positive account of arithmetical oneness, of to hen qua foundation of number, gets its most detailed exposition in two texts: Met. Iota 1 (1052b20-1053b8) and N.1 (1087b33-1088a14). Aristotle's strategy in both texts is to explain his proposal in the context of a kind of abstract theory of measurement and quantitative knowledge (i.e. knowledge of quantity qua quantity).18 For, while Aristotle sometimes deploys a narrower notion of measurement on which counting and measuring are contrasted and viewed as different kinds of activities (see e.g. Met. Δ.13), he evidently thinks that on a deeper and more abstract level measuring a magnitude and counting a plurality need to be understood as the same kind of activity. The theory starts from the observation that there are remarkably different kinds of thing that can be understood quantitatively: herds of sheep, harmonic intervals, metric poetry, corporeal magnitudes, weights, speeds, and so on. We develop our quantitative knowledge of the quantities populating these various domains of quantifiables by measuring them. And in every such case this involves deploying one or more domain-specific units-of-measure [metra]. Speaking a bit more precisely, on the abstract theory of measurement Aristotle develops in Met. Iota 1 and N.1, measuring is understood to involve the following ingredients (restated schematically after the quotation below):19

1. a particular quantity q to be measured
2. a genus of quantifiables G to which q belongs
3. one (or more) primitive units-of-measure U_1, U_2, …, where each such U_i marks off equisized quantity-tokens of type U_i in genus G

18 The theory is discussed at much greater length in Sattler (2017). Our two interpretations of the theory harmonize well-though there are differences on details, and some pronounced differences in emphasis (unsurprising given her focus on how the theory connects to issues in natural philosophy, and mine on how it connects to issues in metaphysics). In revising the present article, I've tried to tailor its presentation of Aristotle's abstract measurement theory to facilitate comparison with the interpretation advanced by Sattler in her article.

19 Cf. Sattler (2017).

If q is the height of the statue, G will be length, there will be (say) just one unit-of-measure U and it will be (say) foot. If q is the size of an octave, G will be harmonic magnitude, and (on the harmonic theory Aristotle has in mind in Met. Iota 1 and N.1) our units-of-measure will be two: a diesis-interval of one size and a diesis-interval of another size. If q is a number of goats in a pen, G will be goat pluralities, and U will be goat. To better see how the theory handles the special case of counting, it will be useful to quote Met. N.1 1087b33-1088a14 at length. (Bringing out the intended sense of metron, I render it below as 'unit-of-measure' rather than simply 'measure'.)

It is apparent that to hen signifies a unit-of-measure [metron]; also [that] […]

Echoing Met. Iota 1 (1052b21-4, b31-2), our N.1 text explains that where to hen signifies foundation of number, to hen signifies a metron: a measuring-unit.
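Restated schematically (the notation is mine, not Aristotle's): relative to a unit-of-measure U for a genus G, measurement assigns to a quantity q in G the count of U-tokens that exhaust it,

\[ \mu_U(q) = n \quad \text{iff} \quad q \text{ is exhausted by exactly } n \text{ non-overlapping quantity-tokens of type } U. \]

So, for the examples above, \(\mu_{\text{foot}}(\text{the statue's height}) = 6\) if six foot-lengths make it up, and \(\mu_{\text{goat}}(\text{the plurality in the pen}) = 7\) if the pen holds seven goats. Note too that one and the same plurality can receive different counts relative to different units: four relative to human, two relative to couple.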
A number, on this analysis, is taken to be a multitude partitioned by an associated measuring-unit into a determinate multiplicity of measured-units.21 For instance, says Aristotle, if a number of horses are to be counted, the unit [to hen] will be horse. In contrast, if human being, horse, god are to be counted, the unit [to hen] is presumably animal, and their number will be a number of animals (1088a9-13). If I am counting 'man, pale, walking', the unit [to hen] will be something like kind, and their number will be a number of kinds (1088a13-14). In all such examples, to hen qua foundation of number is the type of thing which the number determined is a number of.22 A comprehensive philosophical examination of these ideas would require a very extended discussion. Here we shall have to pass over more than a few important questions about this theory. But for present purposes, I'd like to emphasize one point in particular made quite manifest by the text above. When the Greek terms hen and metron indicate 'unit' and 'unit-of-measure', they exhibit the same type/token ambiguity as their English correlates ('Yard and meter are two units' vs. 'Two units, stat!'). But in explaining how to hen in the sense of unit functions as a principle of number, N.1 1087b33-1088a14 emphasizes that a unit (a hen) of the relevant sort will always be a repeatable type: 'in every case, the unit-of-measure [metron] must be some same thing that holds of [huparchein] all [the measured things]' (1088a8).23 Thus if a census-taker is determining the size of the populace, the hen which measures the plurality and constitutes the foundation of the number can be neither Callias nor Xanthippe. The hen must be something repeatable: e.g. human being. Note that on this picture, the unit [hen] that gets called the 'foundation' for an arithmos doesn't itself get counted when we count that arithmos. And indeed, I fail to count the arithmos of goats in my field if I include in my count the repeatable type goat.

20 Following a suggestion of Menn's I'm reading ho ti…ho ti in 1088a4-5 where Ross and Jaeger print hoti…hoti.

21 With 1088a6-8 cf. Iota 6 1056b20ff, Iota 1 1053a27-30, Δ.6 1016b18-20.

22 Cf. Physics IV.14 223b13-14.

23 The point that to hen qua foundation of quantitative knowledge must be a repeatable type is made especially clear in this N.1 passage. But it's manifest that Aristotle is working with the same picture in Iota 1. To see this, one can attend to Aristotle's remark at 1053a15: 'the measure is not always one in number but sometimes there are many [measures]'. As Aristotle goes on to explain, at times we use two or more types to measure (e.g., the two diesis intervals in harmonic theory). The pairing of 'not always' with 'sometimes' in a15 is supposed to contrast the kinds of case Aristotle has been discussing with those he is about to discuss. On pain of absurdity, the earlier cases where the measure is 'one in number' must be cases in which the measure is a single type.

That the foundation of a number-a measuring-unit-should thus be external to the numbers it measures is (I think) a point of some importance for Aristotle in both Iota 1 and N.1.24 We noted above that, from Aristotle's testimony, we learn that Plato (and certain of his followers) wanted to posit some particular substantial-being called the One [to hen] as a foundation [archē] of number and of being.
As a transcendent 'one above many', this One was understood to be a unique substance and separate from the intelligible numbers of which it is the foundation. To better understand what has gone wrong here, Aristotle thinks it important to get clear on the extent to which this picture gets something right about to hen as a foundation of number. He takes it to be importantly correct that a totality can only constitute a particular arithmos-a determinately countable plurality-by being related to a unit of count.25 The unit of count in relation to which a totality constitutes some particular determinate arithmos is a hen. But this kind of hen is not among the items that gets counted as 'one' when we use it to count. This hen transcends the numbers of which it is the foundation [archē] as a measure or standard of measurement. In sum, consider the utterance 'I'm holding two coins: one in this hand and one in that.' On Aristotle's view of the utterance, neither 'one' nor 'one in this hand' will pick out a unit: i.e. the foundation of this two. The relevant unit here is coin (or coin I'm holding). And what the utterance characterizes as singularities (in the sense of Section 1) are the two coins.26

The Distinctness of Ontological and Arithmetic Oneness

In Section 2, we noted Aristotle's deep commitment to the following two theses:

Thesis A. being [to on] and to hen are equally general and so intimately connected that there can be no science [epistēmē] of the former that isn't also a science of the latter and its per se attributes

Thesis B. to hen is the foundation [archē] of number [arithmos] qua number

We proceeded to introduce 'ontological' as a label for the notion of one/oneness [to hen] at issue in Aristotle's endorsement of the former thesis and 'arithmetical' for the notion of one/oneness at issue in his endorsement of the latter thesis. Taking stock of our work in Sections 3-4, it will be useful to collect some arguments concerning the distinctness of ontological and arithmetical oneness in Aristotle's thought. First, an 'extensional' argument to the effect that Aristotle could not, by his own lights, have coherently identified ontological and arithmetical oneness. As we've seen, Aristotle maintains that being and ontological oneness are convertible (and convertible per se). From this it follows that by necessity: X is a being iff X is one in the ontological sense. Now, units-arithmetical ones as conceptualized in Metaphysics N.1 and Iota 1-are always repeatable types. But Aristotle holds that some beings are non-repeatable individuals. So insofar as some beings are not units but all beings are ones (in the ontological sense), Aristotle must think that arithmetical oneness is something different from ontological oneness. Now, unit-hood is not something Aristotle discusses in very many texts. And he's never (as far as I'm aware) especially concerned to highlight the fact that mundane individuals cannot be units. But that the First Unmoved Mover is not a unit is a point Aristotle will make. So, in a Met. Λ.7 (1072a31-34) text affirming the simplicity of the First Unmoved Mover, we find Aristotle refusing to call his highest deity hen in the sense of 'unit'. Aristotle explains (1072a32-34): '"unit" [hen] and "simple" [haploun] are not the same; for "unit" indicates metron, but "simple" [indicates] a [thing] itself holding in a certain way [pōs echon auto]'.
From Aristotle's perspective this entails that, contra Plato, the god which is the most fundamental foundation of being is not also a foundation of number. But though the First Unmoved Mover of Metaphysics Λ is not an arithmetical one, Aristotle does maintain that the god is simple and (hence) a unity. Indeed, while Λ.7 1072a32 uses hen to mean 'unit-of-measure' in indicating something which the First Unmoved Mover is not, Λ.8 uses hen as an adjective to mean 'unified' in explaining something this god is (…). A second argument concerning the distinctness of ontological and arithmetical oneness has the added benefit of helping us see why ontological oneness in Aristotle cannot be uniqueness in the sense of Section 1 (=being countably one). We've noted that central to Aristotle's engagement with ontological oneness is his insistence that hen in the ontological sense is a pollachōs legomenon. This is his insistence, to use the language of Section 3, that not everything which is one in the ontological sense is one in the same way. In contrast, for arithmetical oneness and uniqueness Aristotle proposes a quite uniform analysis. Every arithmetical one (qua being an arithmetical one) is a one in the same way; and every countable one (qua being countably one) is one in the same way. There can, of course, be different arithmetical ones-different units-for different numbers X and Y: say if X is these four humans and Y is these two couples. But on Aristotle's view of relatives [ta pros ti], this relativity no more entails that there are different ways to be an arithmetical one than the fact that X and Y are husbands of different people entails that X and Y are husbands in different ways.28 Aristotle will likewise say that this couple is countably one relative to this unit (say: couple) but not countably one relative to this other unit (say: human). And it's by this strategy that he will account for the fact that different things count as 'one' in different counting contexts. But he will also insist that everything which is countably one is countably one in the same way. For denying this, he thinks, will allow for the absurd possibility of countable ones X and Y that fail to be equal to one another and don't collectively constitute a two.29 A third argument pertaining to the distinctness of ontological and arithmetical oneness in Aristotle's thought starts from a passage in Met. Iota 6. As we shall discuss further in Section 6 below, Met. Iota 1 (echoing Δ.6) proposes analyzing distinct ways to be unified as distinct ways to be undivided [adiaireton]. For instance, if the unity of X is corporeal continuity then X's unity (says Aristotle) consists in its parts being undivided from one another in place; in contrast, if X is humankind then X's unity consists in its parts (humans) being undivided from one another in real definition. Developing this picture, Iota 3 points out that oneness in the sense of being unified and undivided [adiaireton] will have being divided [diaireton] as its contrary [enantion].30 This privative contrary of unity Aristotle calls to plēthos, by which he means 'multitudinousness' not in the sense of determinate multiplicity but rather in the sense of the indeterminate manyness of something viewed as dis-unified and lacking integration: a corpse, say. (For according to Aristotle, what a perishing thing-qua perishing-perishes into is per se a multitude [plēthos] but is not per se an arithmos.)
So, to hen in the sense of unity has a privative contrary: namely, multitudinousness [to plēthos] in this sense of indeterminate dis-united dividedness. In contrast, as Aristotle points out in Iota 6, to hen in the sense of unit-hood has no privative contrary. As blind is to sighted, white is to black, and united is to dis-united: unit is to nothing.31 Where 'multitude' [plēthos] has the sense of determinate multiplicity and number, Aristotle says to hen in the sense of unit is still opposed to plēthos. But here the mode of opposition is that of relatives [ta pros ti] (1057a12-17): […]

Aristotle will, of course, use the word adiaireton ('undivided', 'indivisible') not only in connection with unities but also with singularities (=items countable as 'one'). But it's unity that's at issue in this text. And whereas a unity is adiaireton in the sense of being internally undivided and hanging together, for Aristotle a singularity is adiaireton in the sense that (qua singularity) it lacks a proper quantitative divisor and is quantitatively equal to everything else that's also countable as 'one'. Singularities, qua singularities, are thus externally adiaireton. According to Aristotle, the subject-matter of (what he knew as) number theory is things adiaireta in this sense and what pertains to things adiaireta in this sense per se. In contrast, ontological oneness and its per se attributes are studied in First Philosophy, not by mathematicians researching number theory. I'd like to close this section by suggesting that from Aristotle's perspective to conflate ontological oneness with either arithmetical oneness or being countably one would constitute something like a category mistake. As we have seen, an item functions as an arithmetical one by being a unit. But unit-hood, like motherhood, is a characteristic something possesses only relationally. To say X is a unit is to say that X plays a certain role relative to a particular class or range of measurables. In this respect, Aristotle will see 'X is a unit' as a predication in the category of relative. Likewise, depending on how it's viewed, being countably one can be considered either as a relative [pros ti] or as an amount [poson]. Yet according to Aristotle, ontological oneness is importantly extra-categorical. As Aristotle puts it in Iota 2, being and ontological oneness 'accompany the categories equally but are not in any category' (1054a13-16). And if X is a unity, it's not externally but internally that X is a unity.

28 Thematic discussions of the category of relative [pros ti] appear in Cat. 6 and Met. Δ.15. Both chapters discuss the sense in which a metron is pros ti, as do Iota 1 and Iota 6 (in somewhat different ways).

29 Cf. Met. M.7 1082b16-19.

30 On Met. Iota 3 1054a26-29 see note 12 above.

31 Of course, measured has a privative contrary in unmeasured. The point is that measure itself has no such contrary.

The Source of Assimilationism: Three Texts

As in Section 2, let's call 'Assimilationist' any interpretation of the Metaphysics' henology on which Aristotle effectively assimilates ontological to arithmetical oneness. These are interpretations on which Aristotle either identifies ontological with arithmetical oneness or takes ontological oneness to be (somehow) analyzable in terms of arithmetical oneness. While it's easy enough to find scholarly endorsements of Assimilationist theses, these endorsements don't really get coupled with detailed philosophical elaborations of the positive Aristotelian henology that might result.
Indeed, the primary cause for scholarly attraction to Assimilationism is textual rather than philosophical. For scholars like Ross and Menn have thought that there are a handful of Metaphysics texts-three in particular-that simply need to be read as asserting some kind of assimilation of ontological to arithmetical oneness. It is to these three passages that we now turn. But first some preliminary remarks on our books. When the contemporary scholar turns to consult the Metaphysics she opens one (or both) of the two most recent critical editions-those of Ross (1924/1953) and Jaeger (1957). But it is widely agreed among philologists that a new critical edition of Aristotle's Metaphysics is needed. For we today know a great deal more about the transmission history of Aristotle's Metaphysics than Ross and Jaeger did 60+ years ago. And numerous editorial choices Ross and Jaeger made in reconstructing Aristotle's text turn out to rest on incomplete and/or inaccurate information.32 Ross and Jaeger based their editions on the direct testimony of just three Greek manuscripts (E, J, and A^b). Building on the pioneering work of Harlfinger (1979) (who was himself building on Bernardinello (1970)), subsequent research has confirmed that all Greek Metaphysics manuscripts known to be extant ultimately descend from two (no longer extant) late ancient exemplars standardly dubbed α and β respectively. Modulo a good deal of intra-stemmatic contamination, readings from the α-text are best witnessed by manuscripts E, J, V^d, and E^s; readings from the β-text are best witnessed by A^b, M, C, and V^k.33 When they produced their editions, Ross and Jaeger were in no good position to reliably reconstruct the readings of both α and β for textually problematic Metaphysics passages. But given the present state of the art, if the relevant manuscript data is available we often can reliably reconstruct the original α- and β-texts for a given Metaphysics passage.34 This will be important in what follows. For, as mentioned above, the primary cause for scholarly attraction to Assimilationism is a trio of passages-let's call them Passage 1 (Met. Iota 1 1052b16-19), Passage 2 (Met. Iota 1 1053b4-6), and Passage 3 (Met. Δ.6 1016b17-18). Passages 1-3 each seem to say something about 'the essence of oneness'. But two of the passages are highly unstable in the manuscript tradition. Given the differing texts our manuscripts preserve, what Aristotle actually wrote in these two passages is far from obvious. Now, the modern scholarly tradition has tended to read Passages 1-3 together in a tight hermeneutic circle. And among the mutually reinforced and reinforcing interpretive assumptions now operative in this hermeneutic circle are both Assimilationist interpretive ideas and some questionable text-critical judgments about what we have best reason to believe Aristotle actually wrote. These judgments are embodied in the very versions of Passages 1-3 that Ross and Jaeger print. In the light of our philosophical work above, and with attention to the genealogy of the Byzantine manuscript tradition-our best evidence for determining the correct versions of Passages 1-3-this section sketches a case for skepticism about the apparent support Passages 1-3 seem to provide for Assimilationism. In what follows, I will not rehearse fully detailed interpretations of Met. Iota 1 and Δ.6. But I will be defending readings of Passages 1-3 on which they complement rather than tell against my larger interpretive approach to Aristotle's henology.
The readings I will be presenting cannot be dismissed as ad hoc. For the readings are based on text-critical proposals about Passages 1-3 that are no less, if not more, plausible than those adopted by Ross and Jaeger; and they are readings that succeed in making excellent sense of the passages in their immediate context. This, at any rate, is what I intend to argue.

Passages 1-2 (Met. Iota 1 1052b15-19 and 1053b4-6)

Regarding the context of Passages 1-2, the first thing to note is that the two texts occur in Metaphysics Iota 1: a chapter that stretches from 1052a15 to 1053b8. Thus they are embedded in the chapter as follows: […]

For what follows, some remarks on the block of Iota 1 text preceding Passage 1 will prove useful. Met. Iota 1 opens (1052a15-16) with the statement that 'oneness is said in many ways'. For Aristotle, a predicable [κατηγόρημα] X is 'said in many ways' [λέγεται πολλαχῶς] iff X fails to be associated with a unique formal cause Y such that for anything that is X to be X is by definition precisely for it to be Y. So in making this statement, Aristotle is effectively claiming that (as we put it above) there are many different ways to be one-he's claiming that what it is for some things to be one differs from what it is for other things to be one: that τὸ ἑνὶ εἶναι for some things differs from τὸ ἑνὶ εἶναι for other things. This isn't supposed to be news to anyone. For, as Iota 1 reminds us (1052a15-16), Met. Δ.6 has already explained that τὸ ἕν is said in many ways [λέγεται πολλαχῶς]. Having asserted that τὸ ἕν is said in many ways, Aristotle adds that although it's said in more ways [πλεοναχῶς], there are 'four main ways [τρόποι]' of being one exhibited by beings that are 'primitive and called one per se rather than per accidens' (1052a15-19).35 The sequel (1052a19-34) furnishes a sketch of four ways for something's substantial-being [ousia] to be unified, characterizing each such way of being unified as a way of being undivided. And summing up this discussion Aristotle writes (1052a34-1052b1):

λέγεται μὲν οὖν τὸ ἓν τοσαυταχῶς, τό τε συνεχὲς φύσει καὶ τὸ ὅλον, καὶ τὸ καθ' ἕκαστον καὶ τὸ καθόλου, πάντα δὲ ταῦτα ἓν τῷ ἀδιαίρετον εἶναι τῶν μὲν τὴν κίνησιν τῶν δὲ τὴν νόησιν ἢ τὸν λόγον.

Within Iota 1's 1052a19-1052b1 discussion of these four ways to be a unity, Aristotle had interspersed a handful of remarks to the effect that this-or-that primitive entity exemplifies this-or-that way of being a unity (thus 1052a27-28 says that the outermost heavenly sphere is a unity in way (2), and 1052a33-34 says that 'the primitive cause of the oneness of οὐσίαι' is a unity, apparently, in way (4)). But the continuation of the passage quoted above emphasizes the importance of not conflating 'What is X?' questions and 'What things are X?' questions (1052b1-7):

δεῖ δὲ κατανοεῖν ὅτι οὐχ ὡσαύτως ληπτέον λέγεσθαι ποῖά τε ἓν λέγεται, καὶ τί ἐστι τὸ ἑνὶ εἶναι καὶ τίς αὐτοῦ λόγος. λέγεται μὲν γὰρ τὸ ἓν τοσαυταχῶς, καὶ ἕκαστον ἔσται ἓν τούτων, ᾧ ἂν ὑπάρχῃ τις τούτων τῶν τρόπων· τὸ δὲ ἑνὶ εἶναι ὁτὲ μὲν τούτων τινὶ ἔσται, ὁτὲ δὲ ἄλλῳ ὃ καὶ μᾶλλον ἐγγὺς τῷ ὀνόματί ἐστι...

Now, it's necessary to understand that we should be differently receptive to accounts of: What sorts of things are called one? and What is it to be one? (i.e. What is its definition?). For oneness is said in this many ways [i.e. ways (1)-(4) just rehearsed] and each of the things in which any of these ways [of being one] is present will be a one. But to be one is in some cases to be one-or-another of these
[i.e. to be continuous, or to be whole, or…], and in other cases it's to be something else which is also rather close to the name…36

35 Reading πρώτων with Ross and the MSS at 1052a18, and rejecting Sylburg's conjecture printed by Jaeger. Like Δ.6, Iota 1 does not claim to provide an exhaustive classification of per se types of unity: Aristotle simply means to be collecting four particularly salient and important types of per se unity connected with primitive (or purportedly primitive) entities.

36 The passage continues (1052b7-9): […] τῇ δυνάμει δ' ἐκεῖνα, ὥσπερ καὶ περὶ στοιχείου καὶ αἰτίου εἰ δέοι λέγειν ἐπί τε τοῖς πράγμασι διορίζοντα καὶ τοῦ ὀνόματος ὅρον ἀποδιδόντα. The τῇ in τῇ δυνάμει is absent in β-family MSS A^b and M. Assimilationist readers may be tempted to take ἐκεῖνα as referring to the four just-mentioned ways [τρόποι] of being unified. Apparent problems for this suggestion are that 1052b5-6 twice uses τούτων to refer to these four [τρόποι], and that ἐκεῖνα is neuter rather than masculine. It seems to me that the ἐκεῖνα most likely refers to either (a) entities one would enumerate in answering the ποῖά τε ἓν λέγεται; question, or (b) the pair of questions λέγεσθαι ποῖά τε ἓν λέγεται; καὶ τί ἐστι τὸ ἑνὶ εἶναι; whose combined force [δύναμις] is like what would be felt if one were a dialectical respondent who needed to give an account of (say) element both ἐπὶ τοῖς πράγμασι διορίζοντα and τοῦ ὀνόματος ὅρον ἀποδιδόντα. The sense of ἐγγὺς τῷ ὀνόματί ἐστι is far from obvious. Unattested elsewhere in Aristotle and-so far as I can tell-in contemporaneous philosophical prose, the idiom may be related to one we find in fragment 245 of the 4th-cent. BCE comic poet Alexis, in which, after lamenting an aporia about erōs, an unnamed philosophizer declares (245,14-16): καὶ ταῦτ' ἐγώ, μὰ τὴν Ἀθηνᾶν καὶ θεούς, οὐκ οἶδ' ὅ τι ἐστίν, ἀλλ' ὅμως ἔχει γέ τι τοιοῦτον, ἐγγύς τ' εἰμὶ τοὐνόματος.

37 1052b7-1052b15, ὥσπερ...ἁπάντων, explains how τὸ πῦρ στοιχεῖον ἐστι is true in one sense and false in another. The sentence is true if it means 'Fire is an element' and constitutes an answer to the question ποῖα στοιχεῖα ἐστι;. But it's false if the ἐστι in τὸ πῦρ στοιχεῖον ἐστι communicates definitional identity: i.e. what's at issue when one asks τί ἐστι τὸ στοιχείῳ εἶναι;. Other readers of Iota 1, Assimilationist ones in particular, have been tempted to read 1052b7-15 as if the comparison with fire and element were not supposed to explain the distinction of 1052b1-3 (ποῖα X ἐστι; vs. τί ἐστι τὸ X(dat.) εἶναι;) but were rather supposed to explain 1052b5-7 and tell us something about how the different ways of being one alluded to at 1052b5-7 relate to each other. This is implausible for several reasons. Most importantly: both in Aristotle generally and in 1052b7-15 in particular, to be fire is not a way of being an element any more than to be a human or to be a horse is a way of being an animal. Humans and horses, on Aristotle's view, are distinct instances of the kind animal, but what it is for a human to be an animal and what it is for a horse to be an animal are identical. Likewise, what 1052b12-13 says about fire-ὡς μὲν πρᾶγμά τι καὶ φύσις τὸ πῦρ στοιχεῖον-holds also for water. Neither to be water nor to be fire is a way of being an element; both are elements, and elements in the same sense. So where τὸ ἑνὶ εἶναι is sometimes (1052b5) one of the four τρόποι of being one, τὸ πυρὶ εἶναι is never a correct answer to τί ἐστι τὸ στοιχείῳ εἶναι.

Aristotle is here claiming that what it is for X to be one is in some cases to be this, in other cases to be that, and in further cases to be something else. This would seem to suggest that the phenomenon of oneness is simply too diverse to be captured in any single real definition. Indeed, this is exactly what should be the case if τὸ ἕν really is, as Aristotle announced in the first sentence of Iota 1, 'said in many ways'. With that context in view, here is Passage 1 as it reads in the standard editions:

[1.1] For this reason to be one is to be indivisible (being essentially a 'this' [1.2] and capable of existing apart either in place or in form or thought); or perhaps to be whole [1.3] and indivisible; but it is especially to be the first measure of a kind, and above all of quantity.

Scholars have tended to write about Passage 1 as if Aristotle were considering two competing candidates for the essence of oneness: (i) being undivided, and (ii) being a first measure. But this can't be quite right. For the essence of oneness would be the unique formal cause Y such that by definition for anything to be one is precisely for it to be Y. And according to Aristotle, no such Y exists. As the first sentence of Iota 1 declares: oneness is 'said in many ways'; while there can be a real definition of this or that way to be one, τὸ ἑνὶ εἶναι in Passage 1 simply cannot be intended as a name for the essence of oneness. So, in the broader context of Met. Iota 1, the Assimilationist reading of Passage 1 on which Aristotle is definitionally identifying arithmetic oneness with oneness quite generally looks like a non-starter. But the Assimilationist will want to insist that the 'especially' [μάλιστα] in text [1.3] needs to be taken seriously. In this vein, Assimilationist readers of Passage 1 have suggested that someone who answers 'What is it to be one?' by replying 'It's to be the first measure of a kind' has (according to Aristotle) defined the most fundamental variety of oneness to which all other varieties of oneness somehow reduce. Now, if this were what Aristotle wanted to say, writing that τὸ ἑνὶ εἶναι is μάλιστα to be a primary measure would be a very weird way to say it. But arguably, the biggest problem with this suggestion is that Aristotle nowhere explains how any such reduction would go, never tries to motivate the philosophical plausibility of this kind of reduction, and doesn't ever even assert that any such reduction is possible (it's far from obvious that it is). I won't push this line further here. For beyond defending alternative readings of Passages 1-3, my main goal in what follows is to undermine the text-critical and interpretive assumptions about Passages 1-3 that have led scholars to make these kinds of otherwise unmotivated suggestions in the first place. To this end, let us now take a closer look at Passage 1 itself. It bears emphasis that the Greek text for Passage 1 that one reads in Ross' and Jaeger's editions of the Metaphysics-the Greek text which gets translated in standard, modern translations of the text-owes much to several questionable editorial interventions originally proposed in the 19th century. To sharpen our thinking about the relevant text-critical issues, it will be useful to reconstruct the original readings of the (non-extant) α- and β-exemplars from which our extant manuscript sources descend. Using Harlfinger's stemma codicum, and inspecting several manuscripts ignored by Ross and Jaeger, this is readily done:38

Passage 1: α-text […]

Passage 1: β-text […]

We have here two rather different versions of Passage 1.
Apparently persuaded by Bonitz's worry that ἀχωρίστῳ (1052b17) is unlikely Aristotelian Greek, Ross and Jaeger follow Bonitz (1848, 1849) in favoring the testimony of manuscript A^b (β-tradition) at [1.2], thus reading ἰδίᾳ χωριστῷ. Accepting the ἢ καί of the α-version in [1.2], they again follow Bonitz in emending τῷ ὅλῳ to τὸ ὅλῳ (1052b17), and in preferring the β-tradition's ἀδιαιρέτῳ (1052b17-18) over the α-version's διωρισμένῳ. Consider now μάλιστα...πρῶτον in [1.3]. This is the text that, when read in Ross' and Jaeger's editions, seems to say that the essence of unity 'is especially to be a primary measure'. While Ross and Jaeger print μάλιστα δὲ τὸ μέτρῳ εἶναι πρώτῳ, neither the α- nor the β-exemplar actually transmits this. To get the reading printed by Ross and Jaeger one must (i) accept the β-version's τὸ and reject the α-version's τῷ in [1.3], and (ii) manually change μέτρον εἶναι πρῶτον (the reading transmitted by both versions) to μέτρῳ εἶναι πρώτῳ (which is transmitted by neither). This was precisely the suggestion of von Christ in his 1886 edition of the Metaphysics. Indeed, modulo one quite minor difference at 1052b16, the version of Passage 1 printed by Ross and Jaeger is identical with von Christ's 1886 proposed reconstruction. Unfortunately, von Christ, Ross, and Jaeger provide no substantive case for any of these textual interventions. Speaking generally, modulo standard interpretative/linguistic considerations Ross and Jaeger tend to prefer β-tradition readings when textual agreement between the α- and β-traditions is lacking.39 In contrast, contemporary philologists like Primavesi argue for ceteris paribus deference to α-tradition readings in such contexts.40 As for Passage 1, it seems to me that Bonitz, von Christ, Ross, and Jaeger are wrong to prefer the β-version of [1.2] in reading ἰδίᾳ χωριστῷ instead of the α-tradition's ἀχωρίστῳ. The worry that ἀχωρίστῳ is unlikely Aristotelian Greek is readily overcome by attention to Met. Δ.6 1016b2. I know of no commentator who has given an illuminating account of what the text would exactly mean if Aristotle wrote ἰδίᾳ χωριστῷ. But as best as I can tell, ἀχωρίστῳ makes far better philosophical sense in the immediate context.41 As we've seen, earlier in Iota 1 Aristotle had distinguished four types of unity as ways of being undivided [ἀδιαίρετον] (1052a36ff). Reading the α-text of [1.2], Aristotle would be developing this proposal by explicating the undividedness at issue in 1052b16 (τὸ ἀδιαιρέτῳ εἶναι) as a matter of something's being 'un-separated' [ἀχωρίστῳ] in one of four respects: ἢ τόπῳ ἢ εἴδει ἢ διανοίᾳ, ἢ καὶ τῷ ὅλῳ.42 Concerning [1.3], we noted above that where the consensus reading of both α and β in 1052b18 is μέτρον εἶναι πρῶτον, Ross and Jaeger instead print μέτρῳ εἶναι πρώτῳ. In thus printing two datives rather than two accusatives, these editors are rejecting the testimony of the most important direct (and indirect) text-witnesses for [1.3] and adopting what's effectively a conjectured emendation of von Christ. This emendation would seem to be of no little significance. For, as is well known, Aristotle's technical idiom of essentialist definition ('to be X is to be Y') is τὸ X(dat.) εἶναι...ἐστι...Y(dat.) εἶναι. The idiom is ubiquitous in Iota 1 1052b5-19 quite generally, and in Passage 1 (1052b15-19) in particular.
Y(acc.) εἶναι would be a highly irregular way to express a definitional identification in which the definiendum is named by X and the candidate definiens named by Y. And given [1.1]-[1.2]'s preceding series of definientia in the dative, it seems especially unlikely for Aristotle to have written τὸ ἑνὶ εἶναι...ἐστι...μέτρον εἶναι πρῶτον in [1.3] if he wanted to indicate a further definitional identification with μέτρον πρῶτον naming the candidate definiens. In order for Passage 1 to state the definitional thesis 'to be one is…above all to be a first measure…', von Christ's μέτρῳ εἶναι πρώτῳ really does seem to be required. 43 As μέτρον εἶναι πρῶτον in [1.3] is the consensus reading of both α and β, to determine whether we ought to accept von Christ's emendation, sound philological practice requires we ask whether Passage 1 can be made grammatically and philosophically intelligible without the emendation. It turns out that it can, provided, at any rate, we read τῷ μέτρον εἶναι πρῶτον with α rather than τὸ μέτρον εἶναι πρῶτον with β. 44 Now, there are good philological grounds for preferring the α-version's διωρισμένῳ at 1052b17 over the more philosophically sterile ἀδιαιρέτῳ of the β-version. 45 And in fact, it seems to me that the α-version of Passage 1 in general, and [1.3] in particular, makes excellent sense both linguistically and philosophically. Indeed, I think that the α-text of Passage 1 is very much philosophically superior to the reconstructions of Passage 1 proposed by modern editors, and that the α-text of Passage 1 is quite likely what Aristotle actually wrote. What follows is a proposed punctuation for the α-text of Passage 1 together with a translation:

40 Thus Primavesi (2012: 458): In editing the Metaphysics, one is well advised to give preference to the wording of the α-version in passages which are transmitted by both versions, but to examine with particular care the credentials of passages which are transmitted by α alone. 41 Pantelis Golitsis has suggested to me that the β-reading could easily have emerged from the α-reading by a process of mechanical dittography followed by subsequent confusion of the triangular uncial letters, alpha and delta: KAIAXWPISTW ↦ KAIAIAXWPISTW ↦ KAIIΔIAXWPISTW. 42 The intended correspondences with 1052a34-1052b1 would perhaps be: [the correspondence table is not preserved]

Passage 1: α-text and proposed translation
[1.1] διὸ καὶ τὸ ἑνὶ εἶναι τὸ ἀδιαιρέτῳ ἐστὶν εἶναι, ὅπερ τόδε ὄντι [1.2] καὶ ἀχωρίστῳ ἢ τόπῳ ἢ εἴδει ἢ διανοίᾳ ἢ καὶ τῷ ὅλῳ, [1.3] καὶ διωρισμένῳ μάλιστα δὲ τῷ μέτρον εἶναι πρῶτον ἑκάστου γένους καὶ κυριώτατα τοῦ ποσοῦ.
[1.1] So indeed, to be one is to be undivided, to be a being that's precisely a 'this': [1.2] i.e. to be un-separated (either in place, or in form, or in conception, or even with respect to the whole), [1.3] and to be defined, but preeminently [to be one is to be defined] with respect to the existence of a prime measure of a particular genus: and predominantly of quantity.

The translation above is more 'literal' than 'interpretive'. To further explain my proposed interpretation I submit the following five points. (1) When Aristotle uses τόδε to characterize something's manner of being, often the intended contrast is τόδε vs. τοιόνδε. But this is unlikely to be the distinction at issue when Aristotle writes ὅπερ τόδε ὄντι in [1.1]. 46 Following a suggestion of Menn's, I propose taking the intended contrast in [1.1] to be τόδε vs. τάδε: 'for X to be one is for X to be undivided, for X to constitute not a these (τάδε) but a this (τόδε), i.e.
for X to be unseparated…'. (2) I'm taking the καί in καὶ ἀχωρίστῳ as basically epexegetic. (3) διωρισμένῳ is a perfect passive participle of the verb διορίζειν, which means define in the sense of give definition to: i.e. mark out, specify, or distinguish. (In Aristotle, διορίζειν almost never means define in the sense of give a logos of some name or essence). For the sake of literalness (to preserve the participle's verbal features) I've translated διωρισμένῳ as 'to be defined'. But as a matter of interpretation, I think διωρισμένῳ in [1.3] is best understood as meaning to be definite, where something is definite in the relevant sense either per se or with respect to something else that's specified and thereby defined it (i.e. given it definition). (4) I think the καί ('and') in καὶ διωρισμένῳ is best taken in one of two ways: either (a) as a connective linking διωρισμένῳ to the unit ἀχωρίστῳ ἢ... ὅλῳ or (b) as a connective linking διωρισμένῳ to the unit τὸ ἀδιαιρέτῳ...ὅλῳ. On reading (a), Aristotle would intend καὶ διωρισμένῳ to extend and further amplify his explication of the same phenomenon of oneness (i.e. unity) he's so far characterized as ἀδιαιρέτῳ εἶναι, ὅπερ τόδε ὄντι. The thought would be that unity is indeed a matter of un-separatedness (ἀχωρίστῳ εἶναι), things' togetherness in a place, or in a kind, or in some other respect, but that unity is also simultaneously a matter of being definite (διωρισμένῳ εἶναι). 48 On reading (b), Aristotle would be introducing διωρισμένῳ εἶναι as a way of being one that's distinct from (though not necessarily unrelated to) unity. 49 Either way, I take the remainder of [1.3] to concern not unity but another variety of oneness such that (on Aristotle's view) for X to be one in this latter way is for X to be διωρισμένῳ in a very specific sense: i.e. διωρισμένῳ ['marked off'] τῷ μέτρον εἶναι πρῶτον. (5) With μάλιστα δὲ τῷ μέτρον εἶναι...ποσοῦ Aristotle is introducing a new point that's supposed to be accepted not on the basis of anything that's come earlier in the chapter, but on the basis of what will follow. Grammatically, the articular infinitive must be taken as some kind of dative of respect; and it's most easily interpreted as picking up on διωρισμένῳ. Aristotle is asserting, I suggest, that there's an especially prominent sense of 'one' on which for X to be one is for X to be defined (διωρισμένῳ) in the sense of being distinguished or marked out by some (domain-specific) measuring unit. This sense of one is especially prominent or common (μάλιστα can mean both) because of its prevalence in the ubiquitous practices of counting and measurement. And on the theory of counting/measuring developed in the remainder of Iota 1, for X to be counted or measured as one is for X to be defined with respect to the existence of a 'prime measure' (μέτρον πρῶτον), a domain-specific unit of count/measurement. On the reading I've sketched with (1)-(5) above, Passage 1 introduces several different definitional characterizations of several different ways to be a one; and it does so without asserting any claim concerning their analyzability relative to one another. The reading is based on the transmitted α-text which, I contend, has a considerably better claim to represent what Aristotle actually wrote than the text for Passage 1 that Ross and Jaeger print.
As a linguistic matter, to get Passage 1 to say that to be a first measure is (somehow) the best or most fundamental account of τὸ ἑνὶ εἶναι, it seems that we have to emend the transmitted text of [1.3] in the manner proposed by von Christ and uncritically adopted by Ross and Jaeger. The result of doing so, I argue, is a philosophically inferior text. And as the following note explains, even putting interpretive issues aside, text-genealogical considerations counsel against von Christ's proposal. 50 In support of von Christ's emendation the Assimilationist will point to the concluding sentence of Iota 1. This, of course, is our Passage 2. And it is to this latter text we must now turn. For I intend to argue that when carefully read, Passage 2 actually provides strong support for the interpretation of Passage 1 I've sketched above against the Assimilationist alternative.

Passage 2: Ross' translation
Evidently, then, being one in the strictest sense, if we define it according to the meaning of the word, is a measure, and especially of quantity, and secondly of quality.

It will be noted that Ross' translation constitutes a highly interpretive rendering of Passage 2. As I explain below, this translation very much depends on some highly dubious interpretive assumptions. Indeed, the broader context of Met. Iota 1 makes it quite unlikely that Passage 2 should mean what Ross' translation suggests it does. It will be useful to begin with a few linguistic and textual points. (1) In Aristotle, the construction τὸ X(dat.) εἶναι doesn't always mean 'the essence of X', and in Passage 2 τὸ ἑνὶ εἶναι simply cannot mean 'the essence of one' in an unqualified sense. For, to repeat, according to Iota 1, τὸ ἕν is 'said in many ways' [λέγεται πολλαχῶς] and this entails for Aristotle that there's no such thing as the essence of X. (2) Philological information unavailable to Ross and Jaeger supports reading ἀφορίζουσι in Passage 2 rather than ἀφορίζοντι with Jaeger and Ross. 51 But while I proceed on the assumption that ἀφορίζουσι is correct, the interpretive line I develop works equally well for the reading ἀφορίζοντι. (3) The construction governing Passage 2 is ὅτι μὲν οὖν...φανερόν ('Evidently then…', more lit.: 'Thus, it's clear that…'). The tag ὅτι μὲν οὖν...φανερόν is ubiquitous in Aristotle's writing. And Aristotle is clearly deploying it in Passage 2 as he usually does: to mark a concluding summary of a point he thinks he's established or (at least) successfully explained in the preceding discourse. In connection with Ross' interpretation of Passage 2 this last point bears special emphasis. In Passage 2, Aristotle is asserting that the previous discussion has successfully 'made clear' something, namely, that τὸ ἑνὶ εἶναι μάλιστά ἐστι κατὰ τὸ ὄνομα ἀφορίζουσι μέτρον τι, καὶ κυριώτατα τοῦ ποσοῦ, εἶτα τοῦ ποιοῦ. So if the latter is to mean something along the lines of 'being one in the strictest sense, if we define it according to the meaning of the word, is a measure…', then we'd like to know where in the preceding discussion Aristotle thinks he has established, or explained, or motivated, or given considerations in favor of such a remarkable claim. The stretch of Iota 1 preceding Passage 1 does nothing of the sort. And neither does the stretch of Iota 1 that intervenes between Passage 1 and Passage 2. Indeed, as far as I can see, there isn't any passage in the corpus that really does this. The portion of Iota 1 that intervenes between Passage 1 and Passage 2 can be outlined (pretty uncontroversially I think) as follows:
1. 1052b20-31: elaboration of what it means to say that τὸ ἕν is a primitive measure of quantity (≈ it means τὸ ἕν is a foundation for quantitative knowledge); that it is qua counting unit that τὸ ἕν is principle of number
2. 1052b31-1053a14: that we deploy a ἕν (a primitive unit-measure) in order to develop quantitative knowledge of various domains: lengths, weights, speeds, etc.; that in each such case our unit-measure must be indivisible in some sense; that not all measures are intrinsically indivisible in the same sense
3. 1053a14-30: that to develop knowledge of quantities in any given domain of quantifiables we apply a domain-specific unit-measure; that quantitative knowledge of some domains involves more than one measure
4. 1053a31-1053b3: the sense in which it's true to say that episteme and perception are measures, or even that 'man is the measure' with Protagoras
In our discussion of Passage 1, we noted that Iota 1 advises us not to overlook the difference between 'What sorts of things are ἕν?' and 'What is it for something to be ἕν?' (1052b1-3). As the above outline suggests, and the reader can readily verify, the stretch of text that intervenes between Passage 1 and Passage 2 says nothing (or next to nothing) about definitional questions at all but much about measures and the sorts of things called ἕν because they are measures. Now, to the question of Passage 2's meaning. Note that the word which Ross translates 'define' is the compound verb ἀφορίζειν. Presumably because of the verb's root, scholars have tended to follow Ross in assuming that ἀφορίζειν in Passage 2 means define in the specialized philosophical sense: i.e. 'giving a logos of what something is or what something means.' But the assumption is highly dubious. For this is not the usual meaning of ἀφορίζειν in the Stagirite or other writers. Aristotle himself has a fairly stable lexicon of verbs for defining in the philosophical sense (ὁρίζειν, δηλοῦν, σημαίνειν, ἀποδιδόναι) he doesn't much deviate from.

50 Suppose von Christ's hypothesis that Aristotle originally wrote τὸ μέτρῳ εἶναι πρώτῳ at [1.3] were correct. How, then, would the erroneous α- and β-readings arise? Firstly, a β-scribe would have to change τὸ μέτρῳ εἶναι πρώτῳ to τὸ μέτρον εἶναι πρῶτον. So much is not implausible. But when it comes to explaining the appearance of τῷ μέτρον εἶναι πρῶτον in the α-tradition, it's quite difficult to see how conscious intervention or standard scribal error can get you from an original τὸ μέτρῳ εἶναι πρώτῳ to τῷ μέτρον εἶναι πρῶτον. In contrast, on the hypothesis that Aristotle in fact originally wrote the α-exemplar's τῷ μέτρον εἶναι πρῶτον, it's easy to account for how τὸ μέτρον εἶναι πρῶτον would come to be recorded in the β-tradition. Indeed, erroneous changes of case in article + εἶναι constructions are very common in the β-tradition (see e.g. 1016b18, 1031b9, 1053b4, 1029b18, b19, b21-22).
51 In printing the ἀφορίζοντι in Passage 2, Ross and Jaeger are following the testimony of Ab (the lone β-family manuscript they consult), and rejecting the α-reading: ὅ ἀφορίζουσι. From the perspective of Harlfinger's stemma this proves questionable. Besides Ab, there are but two independent β-witnesses extant for Passage 2: the 14th-cent. manuscript M and the 15th-cent. contaminated manuscript C. At the relevant text location, M reads ἀφορίζουσι and C (likely from α-contamination) reads ὅ ἀφορίζουσι.
Moreover, ἀφορίζειν turns out to be used around 50 times in the Corpus Aristotelicum. And a study of the verb's other occurrences in Aristotle's writings reveals that the Stagirite nowhere else uses ἀφορίζειν as a verb for formulating a (real or nominal) definition. 52 When ἀφορίζειν does appear in Plato or Aristotle, it just about always means to mark off (some X) from (some Y). 53 And I suggest that this is precisely what the verb means here. Interpreting the participle personally, 54 I propose translating Passage 2 as follows:

Passage 2: proposed text and translation
ὅτι μὲν οὖν τὸ ἑνὶ εἶναι μάλιστά ἐστι κατὰ τὸ ὄνομα ἀφορίζουσι μέτρον τι, καὶ κυριώτατα τοῦ ποσοῦ, εἶτα τοῦ ποιοῦ, φανερόν.
Thus, it's clear that for those who mark off by the name ['one'], to be one is above all a certain measure, and predominantly of quantity, then of quality.

Given the semantic closeness of the two verbs, it's natural to connect the active ἀφορίζουσι in Passage 2 with the passive διωρισμένῳ in Passage 1. I suggest the basic line of thought here is the following. When we call X one [ἕν], we might be asserting X's holding together in itself. But we also call things one to mark them off from others: 'this is one, that's one, they are two'. In the stretch of Iota 1 between Passage 1 and Passage 2 (1052b20-1053b3) Aristotle has conducted a focused discussion of measures from which it has emerged that there's a special sense of ἕν that applies to items like inch and gram: a sense on which τὸ ἕν names a unit of measure which can give rise to a number by partitioning a totality into determinately many discrete διωρισμένα. I submit that in Passage 2 Aristotle is merely declaring that 1052b20-1053b3 has successfully elucidated a sense of ἕν on which it signifies measure. We remarked above that the construction ὅτι μὲν οὖν...φανερόν is here deployed, as it frequently is, to mark a concluding summary of what Aristotle thinks he has achieved in the preceding discourse. The discussion that precedes Passage 2 doesn't argue that other senses of ἕν reduce to this one or even really raise this kind of issue. So simply as a matter of interpreting Passage 2 in context, it seems to me that something like the deflationary reading of the passage given above is to be preferred over the kind of Assimilationist reading embodied in Ross' translation. The α- and β-versions of Passage 3 diverge rather sharply. Prima facie, both versions look like ungrammatical gibberish. The reconstructed text for Passage 3 that Ross prints, a text that seems to have been originally proposed by von Christ, is grammatically impeccable Aristotelian Greek. But it is far from obvious that Ross' interpretation of the linguistically smooth text he prints makes good philosophical sense within either the broader henology of Aristotle's Metaphysics, or even the local context of Met. Δ.6. As usual, we can distinguish (a) the version of Passage 3 that Aristotle actually wrote, (b) Ross' proposed reconstruction of Passage 3 (the text printed in his and von Christ's editions), and (c) the interpretation Ross assigns to (b). In what follows, I will first argue that given the immediate context of Passage 3 it's quite unlikely that the intended meaning of (a) was something along the lines of (c). Next, I sketch an alternative interpretation of Passage 3 based on an alternative text-critical proposal somewhat similar to Jaeger's. My basic approach to Passage 3 takes its start from an easy observation.
While, as we've noted, the text of Passage 3 (=1016b17-18) is quite unstable in the manuscript tradition, the block of text which immediately follows Passage 3, in particular 1016b18-23, is textually stable and is clearly supposed to explain (or provide warrant for) whatever it is that Aristotle did say in Passage 3. (To see this, note the γάρ...οὖν...γάρ particle structure of 1016b18-21). To determine whether Ross' textual and interpretive proposals for Passage 3 are plausible, let us then consider 1016b18-23. Now, 1016b18-23 constitutes a highly compressed presentation of a train of thought familiar from Met. Iota 1 and Nu 1. There are different kinds of thing that can be understood quantitatively: harmonic intervals, metric poetry, corporeal magnitudes, the lengths of changes. To get knowledge of the quantities populating these different domains [γένη] of quantifiables we must use one or more domain-specific units of measure. 1016b18-23 argues that these domain-specific units of measure can be viewed as domain-specific epistemic primitives: foundations for quantitative knowledge in their associated domains. And the evident point of this is to elucidate the sense in which Aristotle thinks it true that τὸ ἕν is a foundation [ἀρχή] of number (NB the οὖν in 1016b20 and that epistemic primitive is one of the recognized senses of ἀρχή distinguished in Δ.1). Thus 1016b18-23, which (again) is supposed to somehow justify or explain whatever it is Aristotle is saying in Passage 3. If we are to accept Ross' text-critical and interpretative proposals on which Passage 3 concerns the essence of oneness quite generally, we need a good account of how this could make sense in the immediate context. But I see no plausible way to read 1016b18-23 as an attempt to support any such thesis about the essential core of oneness in general. Pace Ross, these lines do nothing to explain or help motivate the view that the other forms of oneness discussed in Δ.6 all (somehow) reduce to arithmetical oneness. In fact, 1016b18-23 seems rather to have been written in order to support and elaborate upon a far weaker thesis: the claim that there's a type of oneness such that for X to be one in this way is for X to be a foundation of number. Moreover, if the authentic Passage 3 (1016b17-18) was really supposed to mean what Ross takes it to mean, we'd expect it to exercise at least some conceptual influence on the remainder of Δ.6 outside of 1016b17-31's excursus on arithmetical oneness. But even this expectation isn't met. The notion of measure, for instance, plays no discernible role in Δ.6 outside of 1016b17-31, not even in the chapter's concluding remarks on the senses of 'many' [τὰ πολλά] (1017a3-6). 57 We noted above that in contrast to Passage 3 (1016b17-18) the text of 1016b18-23 is textually stable, and moreover that 1016b18-23 is clearly supposed to explain (or provide warrant for) whatever thesis Aristotle means to commit himself to in Passage 3. Now, while 1016b18-23 does not support the thesis that Ross takes Passage 3 to assert, it does support the weaker thesis that being a foundation of number is a way of being one. Moreover, the hypothesis that the authentic Passage 3 merely asserted this weaker thesis coheres well with the broader context of Met. Δ.6. 58 This suggests we would do well to ask whether Passage 3 can plausibly be interpreted along these lines. And in this connection, it's striking that if we attend to Alexander of Aphrodisias' lengthy comments on Met. Δ.6 we find this very kind of reading of Passage 3 (see esp.
In Met. 368,15ff.). In contrast to modern scholars like Ross and Tarán, Alexander neither takes Passage 3 to assert that arithmetical oneness is the essence of oneness quite generally, nor does he take Passage 3 to claim for arithmetical oneness any kind of analytical priority over the remaining varieties of oneness. In fact, these latter two readings aren't even raised by Alexander as misinterpretations of Passage 3 his students need to be warned against. Alexander discusses 1016b19-23 as if it were clear that Passage 3 asserts that being a foundation of number constitutes merely a way for something to be ἕν. 59 It's far from obvious that the reconstruction of Passage 3 proposed by Ross is correct. And since the publication of Ross' edition, several alternative reconstructions of Passage 3 have been proposed. 60 Most notable is that of Jaeger.

Passage 3: Jaeger's text
τὸ δὲ ἑνὶ εἶναι ἀρχὴ <τοῦ> τινί ἐστιν ἀριθμῷ εἶναι.

While Ross' text stays fairly close to the β-version of Passage 3, what Jaeger prints is identical with the α-version save for his insertion of a τοῦ. Jaeger himself proposes to interpret τινι as modifying ἀριθμῷ (Jaeger 1917). But as Tarán and others have rightly objected, it would be a bit strange for Aristotle to be saying here that τὸ ἑνὶ εἶναι is a foundation not of ἀριθμός but rather of τό τινι ἀριθμῷ εἶναι. Be this as it may, Jaeger's proposed text need not be interpreted as Jaeger himself suggests. In particular, it seems to be linguistically possible to interpret Jaeger's text by construing τινι with ἀρχή so that the meaning would be: for X to be a one 'is a foundation for something [τινι] of its being a number'. But this latter reading is linguistically easier on the following reconstruction, which I myself favor.

Passage 3: Proposed text with proposed translation
τὸ δὲ ἑνὶ εἶναι ἀρχή τινί ἐστιν <τοῦ> ἀριθμῷ εἶναι.
And to be a one is a foundation for something of its being a number.

Like Jaeger's text for Passage 3, the text I'm proposing is identical with the α-version save for supplying a τοῦ. From a text-genealogical perspective, this reconstruction seems to be no less plausible than Jaeger's or Ross'. 61 And as the translation above shows, my proposed reconstruction is readily interpretable as communicating the weak thesis that a way to be a ἕν is to be a foundation of number. Given the immediate context of Δ.6 and 1016b18-23 in particular, the statement of such a thesis in Passage 3 would make a good deal of sense. So much cannot be said for Ross' interpretation of the text that he proposes, or Jaeger's interpretation of the text he proposes. To conclude, many alternative reconstructions for the Greek of Passage 3 are possible. What matters most for present purposes are two points. First, there are very good reasons to be skeptical that the Passage 3 Aristotle actually wrote was supposed to say what Ross takes his reconstruction to say: that the essence of oneness quite generally is to be one in the arithmetical sense. Secondly, existing text-critical evidence can be marshaled to support attractive, alternative reconstructions of Passage 3 on which the text effectively says that a way for X to be ἕν is for X to be ἕν in the arithmetical sense.
60 For a rather swift discussion of some options see the paper of Tarán mentioned in note 56.
61 For, on the hypothesis that ἀρχή τινί ἐστιν τοῦ ἀριθμῷ εἶναι is what Aristotle originally wrote, the erroneous emergence of the α- and β-texts for 1016b18 is readily intelligible.
One easy genealogy is:
ἀρχή τινί ἐστιν τοῦ ἀριθμῷ εἶναι
↓
ἀρχή τινί ἐστιν ἀριθμῷ εἶναι (α-version)
↓
ἀρχή τινί ἐστιν ἀριθμοῦ εἶναι
↓
ἀρχῇ τινί ἐστιν ἀριθμοῦ εἶναι (β-version)
Commenting on Met. Δ.6, Alexander paraphrases Passage 3: τὸ ἑνὶ εἶναι τὸ ἀρχὴ ἀριθμοῦ εἶναι ἐστιν (In Met. 368,15-16). In light of the work of Primavesi and Kotwick, the erroneous emergence of the β-text might also be accounted for by positing the influence of this paraphrase in the production of β.

Conclusion

Aristotle manifestly views ontological oneness as a central explanandum for positive First Philosophical research. The explication of its causes and foundations is, he thinks, a component of genuine sophia, the metaphysical science which First Philosophy seeks. Discussions of arithmetical oneness, in contrast, seem to enter into Aristotle's Metaphysics primarily for negative reasons connected to the Stagirite's rejection of erroneous First Philosophical explanantia posited by his contemporaries. A mastery of the 'philosophy of number', I submit, is simply not a component of sophia as Aristotle understands it. It will be noted that I've not adduced any particular Aristotelian text whose very point is to elucidate the distinction between ontological and arithmetical oneness. 62 This is correct. For, what I've effectively been arguing is that if we closely attend to what Aristotle says about ontological and arithmetical oneness, it becomes difficult to make sense of his views without attributing to him a sharp theory-internal distinction between the two. The case is similar, I suggest, to that of eidos in the sense of species (opposed to individual and genus) and eidos in the sense of form (opposed to matter and matter-form composite). While there is no text in which Aristotle himself formulates the species/form distinction, we today find near universal agreement on attributing to Aristotle a theory-internal distinction between species and forms. What brought scholars to insist on this distinction in interpreting Aristotle was careful study of the philosopher's diverse uses of eidos-concepts. I contend that analogous considerations strongly motivate insisting on a (theory-internal) unity/uniqueness/unit-hood distinction in Aristotle, and denying that Aristotle identifies ontological and arithmetical oneness. Ignoring these latter distinctions, no less than ignoring the species/form distinction, has been and will continue to be a source of significant mistakes.
Virtual Reality Aided Therapy towards Health 4.0: A Two-Decade Bibliometric Analysis

Health 4.0 aligns with Industry 4.0 and encourages the application of the latest technologies to healthcare. Virtual reality (VR) is a potentially significant component of the Health 4.0 vision. Though VR in health care is a popular topic, there is little knowledge of VR-aided therapy from a macro perspective. Therefore, this paper aimed to explore the research on VR-aided therapy, thus providing a potential guideline for the future application of therapeutic VR in healthcare towards Health 4.0. A mixed research method was adopted, comprising the use of a bibliometric analysis (a quantitative method) to conduct a macro overview of VR-aided therapy, the identification of significant research structures and topics, and a qualitative review of the literature to reveal deeper insights. Four major research areas of VR-aided therapy were identified and investigated, i.e., post-traumatic stress disorder (PTSD), anxiety and fear related disorder (A&F), diseases of the nervous system (DNS), and pain management, including related medical conditions, therapies, methods, and outcomes. This study is the first to use VOSviewer, a commonly used software tool for constructing and visualizing bibliometric networks, developed by the Center for Science and Technology Studies, Leiden University, the Netherlands, to conduct bibliometric analyses on VR-aided therapy from the perspective of the Web of Science core collection (WoSc). This objectively and visually shows research structures and topics, therefore offering instructive insights for health care stakeholders (particularly researchers and service providers), such as integrating more innovative therapies, emphasizing psychological benefits, using game elements, and introducing design research. The results of this paper facilitate achieving the vision of Health 4.0 and illustrate a two-decade (2000-2020) map of the pre-life of the Health Metaverse.

Introduction

Health is a state of complete physical, mental, and social well-being [1] that has been given particular attention following the establishment of the Millennium Development Goals [2]. In September 2015, the General Assembly of the United Nations adopted the Sustainable Development Goals (SDGs) to ensure healthy lives and promote well-being for all ages [3]. Concurrently, there has been increasing demand for guidance regarding the pathways and resources needed to realize health-related SDGs [4]. Health care (HC) is typically considered a chief determinant in promoting the health of people around the world. The share of HC in global revenue has been slowly but steadily increasing in the past few decades, and cross-country evidence has shown that health investment yields substantial health returns [5]. However, there are massive conflicting goals in HC, including accessibility, profitability, quality, and cost containment. Therefore, this paper was aimed at exploring the research on VR in aiding therapy, thus providing a potential guideline for the future application of therapeutic VR in healthcare towards Health 4.0.

Materials and Methods

A mixed research method was adopted for this research, which comprised the use of a bibliometric analysis (a quantitative method) to conduct a macro overview of VR-aided therapy, the identification of significant research structures and topics, and a qualitative review of the literature to reveal deeper insights.
Bibliometric methods can enable quantitative analyses of written publications [28]; they enable researchers to base their findings on aggregated bibliographic data produced by other scientists to obtain insights about research structures, social networks, and topical interests [29]. One branch of bibliometrics is science mapping, which aims to visually reveal the structures and dynamic changes in a field of scientific research [30]; it can provide a spatial representation through physical proximity and relative locations to show how disciplines, fields, specialties, and individual papers or authors are related [31]. Hence, science mapping is useful to explore the macro picture of a research field. In this paper, two kinds of science mapping methods were employed, namely bibliographic coupling analysis (BCA) and term co-occurrence analysis (TCA). Both were implemented through the VOSviewer software, a commonly used tool for constructing and visualizing bibliometric networks, developed by the Center for Science and Technology Studies, Leiden University, the Netherlands. VOSviewer pays more attention to graphical representation compared to other tools, which is beneficial for displaying large bibliographic maps in an easy-to-understand manner [32]. Figure 1 illustrates the research protocol, which encompasses four stages: (1) the Web of Science core collection (WoSc) was chosen for collecting relevant publications using the query "virtual reality" AND "therapy". Web of Science is a commonly used database in many disciplines, particularly in the medical and health domain [33], and is therefore suitable for this research. Articles and review articles written in English and published before the year 2021 were selected. The bibliometric data of the search results were exported through the export function of WoSc, with the record content set to full record and cited references and the file format set to tab-delimited (Win, UTF-8). At the same time, available relevant articles were categorized using Excel. (2) VOSviewer was used to conduct the BCA, which clusters publications based on their bibliographies. It is mechanical, with no subjective scientific knowledge or judgment of the content [34], and it has a higher accuracy of cluster analysis than other bibliometric methods [35]. (3) A term co-occurrence map was created with VOSviewer, which analyzed the occurrence frequency and co-occurrence of terms from the titles and abstracts of the articles. These visual data could be assigned meanings by the researchers to identify latent patterns and pose new questions for further analysis [36]. (4) Interpretable terms from the term co-occurrence map were selected, de-duplicated, and categorized; then, a literature review was conducted based on the two quantitative analyses to obtain more detailed data on the field.
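To make stage (2) of the protocol concrete, the following minimal Python sketch shows how bibliographic coupling strengths, the quantity that BCA clusters on, could be computed from a WoS tab-delimited export. This is an illustration rather than the study's actual procedure; the file name wos_export.txt is hypothetical, while UT (accession number) and CR (cited references) are standard WoS field tags.

```python
import csv
from itertools import combinations

def load_references(path):
    """Read a WoS tab-delimited export and map each article to its set of cited references."""
    refs = {}
    with open(path, encoding="utf-8-sig", newline="") as f:
        reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
        for row in reader:
            # UT is the WoS accession number; CR lists cited references separated by "; "
            cited = {r.strip() for r in (row.get("CR") or "").split(";") if r.strip()}
            refs[row["UT"].strip()] = cited
    return refs

def coupling_strengths(refs):
    """Bibliographic coupling: link two articles by the number of references they share."""
    links = {}
    for (a, refs_a), (b, refs_b) in combinations(refs.items(), 2):
        shared = len(refs_a & refs_b)  # purely mechanical, content-blind comparison
        if shared:
            links[(a, b)] = shared
    return links

refs = load_references("wos_export.txt")  # hypothetical export of the collected articles
links = coupling_strengths(refs)          # O(n^2) pairs, trivial for a corpus of 271 articles
print(f"{len(refs)} articles, {len(links)} coupling links")
```

The pairwise comparison mirrors the "mechanical" character of BCA noted above: no judgment about the content of the articles enters; only their bibliographies do.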
Results of the Bibliographic Coupling Analysis (BCA)

VOSviewer was used to generate a bibliographic coupling map of VR and therapy after setting the minimum number of citations of an article to five. Of the total 271 articles, 207 met this threshold, and the most extensive set of related articles consisted of 205 articles. In Figure 2, there are circles with text labels (author and year) and lines; every circle with a text label represents an article, and the lines represent the relationships between articles. The bibliographic coupling map has three visualization views, namely network visualization (Figure 2), density visualization (Figure 3), and overlay visualization (Figure 4). In the network visualization, the size of each circle with a text label depends on the number of citations of the article, since citations largely represent the influence of an article [37], and the distance between circles stands for the affinity of articles [38]. As shown in Figure 2, VOSviewer divided these articles into seven clusters with different colors. Table 1 lists the representative articles of each cluster, showing the authors, years, research methods, research subjects, medical conditions, and therapies. Each cluster has been named in line with the 11th revision of the International Classification of Diseases (ICD-11) provided by the World Health Organization [39]. Cluster 1 concerns eating disorders (ED); Cluster 2 is mainly focused on pain management (PM); Cluster 3 concerns diseases of the nervous system (DNS); Cluster 4 looks into post-traumatic stress disorder (PTSD); and Cluster 5, Cluster 6, and Cluster 7 are associated with anxiety and fear related disorder (A&F).

Table 1. The representative articles of each cluster in Figure 2 for virtual reality (VR) aided therapy (devised by the authors). [The body of the table is not preserved.]

In the density visualization (Figure 3), the colors denote the number of nearby articles and their weights [38]. The more intensively an area has been researched, the closer the color is to red. Only four areas, namely A&F, DNS, PTSD, and PM, have formed relatively intensive research areas, as marked in Figure 3. In addition, the density view shows the affinity of research areas through positional distance. As shown in Figure 3, PTSD research is close to A&F, while the PM and DNS research areas are relatively independent and far away from PTSD and A&F. This is consistent with ICD-11 [39], since both PTSD and A&F belong to mental, behavioral, or neurodevelopmental disorders, whilst DNS and PM are related to neurology and rehabilitation. In the overlay visualization (Figure 4), the color of the articles, which transitions from purple to yellow, is determined by the publication year. The older the research is, the closer the color is to purple; the newer the research is, the closer the color is to yellow. Since the earliest article meeting the threshold is the one labeled 'Hoffman (2000)' [40], indicating the first published VR and therapy study in 2000, Figure 4 shows the 20-year development of VR-aided therapy from 2000 to 2020. It was found that the earliest applications of VR for therapy in the last two decades were in PTSD [41], PM [40], and DNS [42]. However, recent studies have been more focused on A&F, as shown by the distribution of yellow in Figure 4. VOSviewer allows users to set the color range of articles. Four limited year ranges were set, i.e., 2000 to 2004, 2005 to 2009, 2010 to 2014, and 2015 to 2020, to explore the chronological development of VR-aided therapy. In the visualization of the four ranges, as shown in Figure 5, yellow or purple stands for research beyond the specified range, and green represents contemporary research within the specified year range. It was found that the focus of research changed over the long research period: VR-aided therapy was mainly used to treat PTSD in the first five years of this century, i.e., from 2000 to 2004; in the following five years, studies began to shift from PTSD to A&F; from 2010 to 2014, the studies were mainly distributed across the two fields of A&F and DNS; and in the most recent five years, the majority of the studies concentrated on A&F.
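The two thresholds reported above (at least five citations, then restriction to the most extensive set of related articles) can be reproduced with a short sketch. This is a reconstruction of the filtering step, not VOSviewer's actual code; the networkx dependency and the input structures (citation counts per article, and the coupling links computed earlier) are assumptions.

```python
import networkx as nx

def largest_coupled_set(citation_counts, links, min_citations=5):
    """Filter articles by citation count, then return the largest connected
    component of the bibliographic coupling network."""
    kept = {a for a, n in citation_counts.items() if n >= min_citations}
    g = nx.Graph()
    g.add_nodes_from(kept)
    for (a, b), weight in links.items():
        if a in kept and b in kept:
            g.add_edge(a, b, weight=weight)
    return max(nx.connected_components(g), key=len)

# In the study, these two steps reduced 271 articles to 207 meeting the
# citation threshold, of which 205 formed the largest connected set.
```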
Results of Terms Co-Occurrence Analysis (TCA)

For this section, the authors used VOSviewer to extract the terms from the titles and abstracts of the collected articles, and then they calculated their occurrences and co-occurrences using the binary counting method. Regarding occurrences, Table 2 lists the 10 most influential terms in virtual reality-aided therapy via VOSviewer: virtual reality exposure therapy, exposure therapy, virtual reality therapy, test, PTSD, rehabilitation, control group, anxiety disorder, post-traumatic stress disorder, and phobia.

Table 2. The most frequent terms on the theme of VR and therapy in the Web of Science core collection (WoSc) database calculated by VOSviewer (devised by the authors). [Only the last two rows of the table are preserved.]
Rank | Term | Occurrences
9 | Post-traumatic stress disorder | 28
10 | Phobia | 28

During the TCA, the minimum number of occurrences of a term was set to five. Out of the 6497 terms, 393 met the threshold, and VOSviewer selected the 60% most relevant terms, thus resulting in 236 terms used to create the term co-occurrence map of VR-aided therapy, as shown in Figure 6. Similar to Figure 2, Figure 6 shows circles with text labels and lines, where a circle with a text label represents a term and a line represents the relationship between terms. VOSviewer divided these terms into five clusters, as marked in Figure 6. There is a cluster in the middle of the map mainly related to technology, including the terms virtual reality technology, VR system, and fMRI, which is surrounded by four clusters corresponding to the four areas determined in the density visualization of the BCA (Figure 3), namely PTSD, A&F, DNS, and PM. Figure 6 shows that the link between A&F and PTSD is strong, but they are rarely directly associated with DNS and PM.
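The binary counting behind the term map can be sketched as follows: a term is counted at most once per article, terms below the minimum of five occurrences are dropped, and co-occurrence is the number of articles in which two surviving terms appear together. The substring matching below is a deliberate simplification of VOSviewer's noun-phrase extraction, and all names are illustrative.

```python
from collections import Counter
from itertools import combinations

def term_cooccurrence(documents, vocabulary, min_occurrences=5):
    """documents: title+abstract strings; vocabulary: candidate terms.
    Binary counting: repeats within a single document count only once."""
    occurrences = Counter()
    per_doc_terms = []
    for text in documents:
        lowered = text.lower()
        present = {t for t in vocabulary if t in lowered}  # crude term matching
        per_doc_terms.append(present)
        occurrences.update(present)

    kept = {t for t, n in occurrences.items() if n >= min_occurrences}
    cooccurrences = Counter()
    for present in per_doc_terms:
        for pair in combinations(sorted(present & kept), 2):
            cooccurrences[pair] += 1
    return occurrences, cooccurrences
```

On top of this thresholding, VOSviewer keeps only the 60% most relevant of the surviving terms, which is how the 393 terms above were reduced to the 236 shown in Figure 6.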
To explore the four application areas of VR-aided therapy in more detail, all terms in Figure 6 were scrutinized, and the terms with explanatory significance were selected. Then, the terms with the same meanings, such as virtual reality therapy and VRT, were amalgamated. It was found that there were four kinds of terms in each area; specifically, there were terms about medical conditions, therapies, research methods, and outcomes. To put it more clearly, the medical condition terms refer to the illness, symptoms, and patients, such as PTSD. The therapy terms refer to the treatments that were used in the articles, such as virtual reality exposure therapy. The outcome terms refer to the therapeutic effects and other benefits, such as the term significant change. The terms that come from the titles and abstracts of the articles are classified in the tables in the following sections. These terms provide more specific knowledge, compared to the BCA, and aid in the deep exploration of the research content of this field to identify a micro qualitative analysis pattern. The full classification of terms is presented in the following sections, which consider the micro qualitative analysis.

Micro Qualitative Analysis

For this section, the results from the previous two quantitative analyses were integrated and explored more deeply based on the related articles. Specifically, four major applications of VR-aided therapy were identified through the bibliographic analysis in Section 3.1, namely PTSD, A&F, DNS, and PM. Then, in Section 3.2, term occurrence analysis was used to map out the terms of each application, and it was found that there were four categories, namely therapy, research method, medical condition, and outcome. To align with the aforementioned results, the qualitative analysis was divided into four parts corresponding to the four major applications. For each part, the classification of terms from Figure 6 is shown first, which reflects the specific details, followed by a further review of the related articles. This review is supported by the quantitative analyses, for which it is supplementary.

Post-Traumatic Stress Disorder (PTSD)

Post-traumatic stress disorder is perhaps the most common psychiatric disorder to occur after an encounter with a traumatic event, and it can result in a major public health burden because of the lack of a universally effective treatment [43]. Terms associated with PTSD from Figure 6 are categorized in Table 3.

Table 3. Categorized terms associated with post-traumatic stress disorder from Figure 6 (devised by the authors). [The body of the table is not preserved.]
The medical condition terms, e.g., IRAP, combat, veteran, and active duty soldier, in Table 3 indicate that PTSD research has been focused on war trauma. The therapy terms, e.g., exposure therapy, imaginal exposure, and prolonged exposure, listed in Table 3 suggest that therapies for PTSD are mostly related to exposure, as exposure therapy (ET) is one of the most recognized trauma-related therapies for symptom remission. ET involves graded exposure to situations leading to a fear response, enabling an individual to become desensitized to fear cues [44]. ET integrated with VR, namely virtual reality exposure therapy (VRET), is considered an alternative therapy that allows patients to experience presence in a computer-generated, three-dimensional environment that is immersive and interactive, helping to minimize avoidance behavior strategies and promote the emotional participation of patients [45]. As the outcome terms significant reduction and effective treatment suggest, numerous studies have reported that VR is useful for PTSD. For example, Wood et al. found that VRET can alleviate the measurable physiological responses and PTSD symptoms of veterans who participated in the war on terror. A similar effect was found in the therapy of active service members (both males and females) [48,50,68]. Moreover, Difede et al. [51] claimed that VR is effective for survivors of terrorist attacks, and that it can help those who cannot engage with traditional imagination therapy. Furthermore, Walsh et al. [69] developed an exposure program comprising a virtual driving game and virtual environment, and they found that it was feasible to use such a computer-generated environment to treat PTSD from traffic accidents, even when the patient's condition was complicated by depressive symptoms. Hence, the therapeutic effects of VR for various kinds of PTSD have been proven. Interestingly, engagement with VR has been claimed to improve the satisfaction of patients [52]. However, the meta-analysis of Kothgassner et al. [66] showed only moderately positive outcomes of VRET to treat PTSD, which could be explained by differences in individuals' capacity for immersion. The outcome term potential benefits in Table 3 shows that VR could benefit therapies for PTSD. Rothbaum et al. [46] reported that VR could scale up therapists' choices, enable the creation of customized virtual environments, and establish shared experiences. MR makes it easier to transmit and control corresponding stimuli to patients and make them participate more actively in therapy, which will activate more traumatic memories and increase the possibility of eliminating conditioned fear [65]. Therefore, VR-aided therapy may be helpful for PTSD patients who are resistant to traditional exposure. In addition, VR can maintain long-term contact with stimuli in ways that are safer and more feasible than other exposure therapies [70]. Further, VRET has not shown iatrogenic or negative results: the curative effect remains unchanged at long-term follow-up, and the recurrence rate is low (4.5%) [55]. Deng et al. [67] argued that VRET has a sustained therapeutic effect and does not entail the so-called dependence or withdrawal reactions seen with drugs, and that there could be a positive correlation between VRET dose and efficacy response, which suggests that VR-aided therapy may benefit from frequent use.
Anxiety and Fear Related Disorder (A&F)

Clinically, fear can be regarded as the response to a specific cue, while anxiety is a more long-lasting phenomenon that is not specific to overt cues; both are debilitating situations that affect a significant number of individuals [71]. Terms associated with A&F from Figure 6 are categorized in Table 4.

Table 4. Categorized terms associated with anxiety and fear related disorder from Figure 6 (devised by the authors). [Only the medical condition column is preserved.]
Medical condition terms: avoidance, depressive symptom, distress, panic disorder, paranoia, phobia, public speaking anxiety, sad, social anxiety, social anxiety disorder, social phobia, social situation, specific phobia, spider

For A&F therapies, VR is as effective in inducing emotional responses as reality, and its application is extremely valuable in exposure treatment [72], as reflected by the therapy terms vivo exposure and VR exposure in Table 4. However, there have also been several studies on VR-based cognitive behavior therapy (VR-CBT) [73][74][75][76][77], since it is the standard evidence-based psychological treatment for anxiety disorders [78]. Some other therapies have also appeared in this field, including mindfulness therapy [79] and the talking cure [80]. Additionally, some studies highlighted game elements in the therapy process [80][81][82][83]. The outcome terms effective treatment and significant reduction in Table 4 echo the effectiveness of VR to some extent. Several studies reported that VR-based therapy could provide a long-term therapeutic effect for anxiety disorders. In Wallach et al.'s [74] randomized clinical trial, VR-CBT was as helpful as traditional CBT for public speaking anxiety and significantly more effective than a waiting-list control in anxiety reduction and subjects' self-ratings of anxiety during a behavioral task. A follow-up study showed that the improvement of symptoms was maintained for one year [75]. Similarly, VRET is believed to have an effectiveness comparable to exposure group therapy for treating social fears, and the improvement has been maintained for one year [93]. In addition, VR can play a role in the treatment of phobias. The virtual bungee jumping environment developed by Jang et al. [123] provided a preliminary illustration of the effectiveness of VRET in treating acrophobia. Triscari et al. [73] highlighted that VR-based treatment can significantly improve the symptoms of flight phobia, and that the effect could last for one year when associated with high measurement scores. The behavior and attitudes of the patients toward actual flight were changed after the therapy. Interestingly, VR is also helpful for phobias of small animals, such as the game developed by Lindner et al. [81,82] for spider phobia. However, the outcomes of VR-based therapy differ across medical conditions. In some specific disorders, VR can achieve outstanding results. The meta-analysis of Cardo et al. [113] showed that VRET was more effective than any other traditional evidence-based therapy for fear of flying. However, for more general conditions, Carl et al. [108] concluded that VRET is just as helpful as in vivo exposure for many anxiety-related diseases, although it exerted a significantly larger effect than control conditions, and the effect was only medium to large when treating specific phobias, social and performance anxiety, post-traumatic stress disorder, and panic disorder. In contrast, VR is inferior to traditional therapy in some conditions.
For example, Wechsler et al. [109] demonstrated that the therapeutic outcome of VRET for agoraphobia is significantly lower than that of in vivo exposure. Additionally, Jiang et al. [124] argued that VR cannot be a standard treatment for blood-injection-injury phobias since it has no impact on patients' confidence in dealing with the feared situation. Despite the discrepancy of effects, VRET is regarded as an acceptable and effective substitute for in vivo exposure, and it is believed that it will improve with advancements in technology and procedure [109]. Due to these differences in effectiveness, further research on VR for therapy is suggested to determine practical enhancements for anxiety and phobias and to identify clinical predictors of improvement [118]. Most studies have focused on the sense of presence, which refers to the awareness of being in an environment, either real or virtual [125]. Presence has been found to influence the anxiety experience of a virtual environment, and it has a relationship with fear factors within VR. However, there is a lack of evidence to support an association between presence and therapeutic effects, which means that presence seems to be a necessary condition of the therapeutic effect but not strong enough to influence the therapy results [115,126]. A study based on self-reported results regarding public speaking anxiety [127] suggested that maximizing presence may only increase the experienced fear, while maximizing involvement, or attentional focus, may lead to better treatment responses. It seems that presence is a requirement for a successful outcome because it induces anxiety and fear [105]. In a study of VR-CBT [92], the contributions of immersion and presence to therapy results were questionable, though the study results supported a significant correlation between patient expectations and most changes in therapeutic effects; i.e., the work alliance evaluated by patients had a medium effect in explaining patients' rational and irrational modifications and changes in anxiety symptoms, and the therapist's performance also significantly impacted the changes in anxiety symptoms. Hence, how presence affects the experiences of patients and therapy results is debatable, while other factors, such as participation, the expectations of patients, and the work alliance, can influence the outcome of therapy to different extents. The outcome terms, e.g., low cost, availability, and wide range, in Table 4 convey the benefits of VR in A&F. Cost-effectiveness is a remarkable feature of therapeutic VR, especially in the treatment of fear of flying, since VR can generate more gradual therapy settings (sequence and intensity) and efficiently create exceptional exposure (different flight destinations, crew members, and weather conditions) that can be endlessly reused in therapy [117]. Malbos et al. [83] claimed that VRET is a more economical treatment for claustrophobia in comparison to in vivo exposure. As the cost of a complete VR system has been substantially reduced, the technology has become more available [75]. Since it allows for individualized, progressive, controllable, and immersive exposure and is accessible for therapists and patients, the attitudes of therapists and patients toward VR have changed and are no longer the main obstacle to implementing next-generation VR in routine clinical practice [99,128].
Moreover, the compliance with VR-based therapy is superior to that with traditional treatment; VR-based therapy does not show noticeable adverse effects compared with in vivo exposure, and VR exposure and in vivo exposure therapy have shown similar dropout rates [111]. Additionally, the deterioration rate of VR-based therapy is consistent with or lower than that of other treatments [110]. However, more knowledge is needed to promote VR for therapy, including how to achieve greater clinical acceptability, better strategies for exposure, and the avoidance of recurrence [105]. Diseases of the Nervous System (DNS) Diseases of the nervous system include dyskinesia, neurocognitive impairment, cerebrovascular disease, and cerebral palsy [39]. Terms associated with DNS from Figure 6 are categorized in Table 5. Table 5. Categorized terms associated with diseases of the nervous system (DNS) from Figure 6 (devised by the authors). (Terms from this table, such as balance, motor, gait, conventional therapy, virtual reality therapy, randomized controlled trial, positive effect and significant improvement, are cited in the text below.) The motor function and balance of patients are the focuses of DNS research, using medical condition terms such as balance, motor, and gait. The therapy terms conventional therapy and virtual reality therapy refer to the two most discussed therapies in the DNS field. VR-based therapies are mainly integrated with conventional therapy, such as movement [129-131], serious game [132-137], mirror [138-140], and gesture therapy [141,142]. As shown in Table 5, the main research method used for VR for DNS is the randomized controlled trial [129,143-147], with which most studies compared VR-aided therapy with conventional rehabilitation; one study [147] assessed the impact of adding psychological training. In addition, systematic review [133,148,149] and meta-analysis [135,150,151] studies in the field have concerned the evaluation of VR and conventional rehabilitation therapies, while literature reviews have provided some information about the benefits of VR [152,153]. In terms of measurement methods, the Berg Balance Scale, the Fugl-Meyer Assessment and the Stroke Impact Scale are the most used in balance, function, and quality-of-life assessments [136]. An advanced method for DNS research is functional magnetic resonance imaging (fMRI), which can be used to explore the neuroplasticity of neural rehabilitation patients. Orakpo et al. [18] argued that the promising behavioral improvements demonstrated in clinical trials must be associated with an understanding of the cortical and subcortical changes that form the biological basis of rehabilitation. The compatibility between VR and such imaging technology has enabled researchers to present multimodal stimuli with high ecological validity while recording changes in brain activity, which is also beneficial for therapists [154]. The observed neural reorganization results have provided convincing evidence for the effectiveness of VR rehabilitation. For example, You et al. [155] used fMRI to detect neuroplasticity changes in the sensorimotor cortex of an eight-year-old cerebral palsy patient during VR therapy via three different motor games, and these changes seemed to be closely related to the enhancement of age-appropriate motor skills in the affected limbs, which supported the effectiveness of VR in treating children with hemiplegic cerebral palsy based on the principle of neuroplasticity. Orihuela-Espina et al.
[142] conducted a study on upper limb hemiplegia caused by stroke in order to quantify the neural plasticity changes induced in patients by VR gesture therapy, finding that prefrontal cortex and cerebellum activity were the driving forces of rehabilitation. However, the impacts of VR-based therapy and conventional therapy on neuroplasticity have been found to be similar but not identical. This may be because enhanced visual and tactile feedback in VR affects the higher-order somatosensory and visuomotor areas [156]. The terms in Table 5 indicate that VR is effective for DNS. For example, Okmen et al. [145] studied children with cerebral palsy and found that VR contributed significantly to the success of treatment in improving their motor function. In addition, VR rehabilitation has been claimed to dramatically improve the balance and gait of Parkinson's patients compared to traditional physical therapy [143]. Further, Mohammadi et al. [151] systematically reviewed comparative studies on the efficacy of VR-enhanced conventional therapy versus conventional therapy alone, and they showed that VR-enhanced traditional therapy was more effective in improving the balance of post-stroke patients. The outcome terms positive effect and significant improvement in Table 5 also indicate that VR is an effective tool for DNS. However, further high-quality research is needed, and challenges remain, such as the content of intervention measures and the generalizability of research results at different stages of disease; it is still unclear which type, timing, setting, and duration of VR-based therapy is most effective [157,158]. The outcome terms motivation, participation, and mobility may indicate that VR is beneficial in DNS rehabilitation. VR can improve the motivation and emotions of patients. For instance, in addition to improving postoperative balance ability, VR significantly enhanced patients' motivation, participation, and cooperation in Sharar et al.'s study [159] on children with cerebral palsy. Additionally, Wille et al. [134] highlighted that VR, as part of a pediatric interactive therapy system for dyskinesia, can facilitate the improvement of patient participation by giving patients greater freedom of self-control, reduce the cost of therapy by using a group therapy environment, and help with the objective evaluation of the progress of a patient's condition through game score and difficulty level settings. This also shows that VR is customizable. VR can provide a functional, purposeful, and motivating context for rehabilitation and therapies, is easily graded, and is easily documented [160]. Therefore, VR allows patients to enjoy customized therapies. For therapists, VR can aid in the observation of patients' conditions, which is useful for research work. Moreover, the application of VR for DNS treatment has great mobility. Such an intervention could be conducted at home, where it can become a family-based complementary treatment [139], and it will become a fundamental aspect of personalized therapy and telemedicine [153]. Pain Management (PM) Uncontrolled pain has a universal and potentially negative effect on quality of life [161]. Brennan et al. [162] argued that pain management should be recognized as a fundamental human right. Formal pain management comprises improving the understanding of pain generation mechanisms and the use of analgesic drugs [163], and there are more alternative therapies than ever.
The medical condition term burn patient suggests that burn pain is a focus of VR in PM. Terms associated with PM from Figure 6 are categorized in Table 6. Table 6. Categorized terms associated with pain management (PM) from Figure 6 (devised by the authors). Medical Condition: Burn patient, chronic, female, pain. Therapy: Attention, distraction, exercise, immersive virtual reality, physical therapy, VR session. Method: Case study. Outcome: Average, preliminary evidence, range, side effect. The therapy terms immersive virtual reality and distraction in Table 6 suggest that immersive VR can be used as a distraction therapy. For instance, a study by Huang et al. [164] provided VR equipment to subjects suffering from pain and immersed them in a virtual landscape trip. In addition, VR can be used as an adjunct to psychotherapy [165] or physical therapy [166]. The authors of most studies used case studies [167-170] to explore the use of VR, and several studies used randomized controlled trials [164,166,171,172] to find differences in VR effects. Interestingly, there was also a meta-analysis [173] of the analgesic effects of VR in burn patients undergoing dressing changes in physical therapy. There is some evidence supporting the application of VR in PM. For example, Hoffman et al. [40] conducted a study of physical therapy integrated with VR for adult burn patients, and VR was found to reduce pain to a statistically significant extent according to self-reported measurements. Pain reduction also appeared in the study of Schmitt et al. [166], where VR was used as a distraction therapy for pediatric burns and was claimed to remarkably improve the interest level of patients. Additionally, VR has been used for a wide range of painful medical conditions and has achieved positive results. For example, Hoffman et al.'s [170] preliminary evidence showed that VR can be used as an auxiliary, non-drug pain relief technique for multiple blunt traumas. Moreover, a review [173] showed moderate evidence for VR treatment reducing both pain and functional impairment in patients with acute pain. Furthermore, the pain-alleviating effect of VR is relatively stable across populations, since Sharar et al. [174] showed that pain relief is not influenced by age, gender, or race, and that the effectiveness of VR is unrelated to the sense of presence. Meanwhile, VR's analgesic effect does not become ineffective with repeated use [169]. Nevertheless, VR cannot replace traditional analgesic methods. For instance, there is no evidence for the continued effectiveness of VR for chronic pain, although it has short-term effects, suggesting that using VR as a non-opioid treatment for chronic pain is not yet feasible [167]. Likewise, in a study exploring the impact of immersive VR on the intravenous self-sedation needs of patients undergoing artificial joint replacement under regional anesthesia, immersive VR was well-tolerated but did not reduce the overall sedation needs of patients undergoing joint replacement surgery [164]. VR has various advantages in PM, as well as great acceptance. Ford et al. [175] evaluated the views of key stakeholders (i.e., patients and providers) on the feasibility, acceptability, and effectiveness of using low-cost VR technology in routine burn care, and the quantitative and qualitative results unanimously supported applying low-cost VR technology in burn clinics. Their study also highlighted that VR is cost-effective for pain distraction.
Currently, highly immersive VR is available and affordable, and more patients can use it for pain control, potentially at home [168]. VR is practical for various other uses, too. More patient groups, such as those with chronic back pain [176], fibromyalgia [177], and cancer [165], can receive VR treatment. Discussion In this paper, VR-aided therapy has been investigated, and a BCM was created to visually map out the research landscape in the field. Four major areas were identified: A&F, PTSD, DNS, and PM. A&F and PTSD are related to psychology and psychiatry, while DNS and PM are associated with neurology and rehabilitation. The network visualization of VR-aided therapy revealed that recent studies have mainly focused on A&F and PTSD, which is in line with Thurner et al.'s [178] study indicating that psychology and psychiatry are the medical fields where VR studies are growing, thus suggesting that VR has exceptional strength in these fields. Although the importance of rehabilitation is not highlighted in this paper because few related studies appeared in the network visualization, a previous study [179] claimed that VR, when used as a tool for evaluation and treatment, has drawn attention in this area. The application of VR in HC from the perspective of therapy has also been investigated, showing that VR for use in therapy is a relatively young field with no standard approach. David et al. [78] claimed that VR is not a new school of therapy but a tool to facilitate traditional therapy, although it has numerous unique characteristics. Accordingly, there are considerable differences between VR-based therapies [170], which may include interventions from ET, CBT, movement therapy, game-based therapy, mirror therapy, gesture therapy, and distraction therapy. It can be inferred from the current situation that VR has excellent scalability and the potential to integrate with more novel therapies. For example, Hacmun et al. [180] suggested that art therapy with VR could improve medical and HC services. Moreover, methods for research on therapeutic VR have been investigated in this paper, revealing that case studies and randomized controlled trials are the most frequently mentioned methods. However, there are difficulties in conducting further systematic reviews or meta-analyses, as Botella et al. [61] argued that some studies have failed to use VRET in accordance with the clinical guidelines for evidence-based PTSD intervention. In addition, as mentioned in Section 3.3.1, the influence of experimental methods and measurements is a considerable reason for the observed disparity of results. Accordingly, as Birckhead et al. [181] stressed, it is necessary to first build consensus regarding the scientific framework for VR therapy development and evaluation and then provide a methodological framework for clinical research according to the opinions of an international working group. The overall results of the therapeutic effect of VR are optimistic and can be used for evidence-based HC design. An interesting finding is that the psychological effects of VR seem to be beneficial to both mental and physical health. For example, the integration of VR with traditional rehabilitation techniques can improve psychological adaptation, making it an essential contribution to cerebral palsy therapy [182]. In addition, Park and Park [147] highlighted that incorporating psychological training into VR therapy could provide more remarkable improvements during the therapy process for upper extremity rehabilitation.
It seems that the psychological benefits may be a mediating factor of therapeutic effects, improving patients' quality of life. As pointed out in a mental health report issued by the United Kingdom's Department of Health [183], good mental health is paramount to physical health and other outcomes. Unfortunately, the results of the network visualization in this paper show that the link between physical and psychological research is weak, which may require the further introduction of medical theories such as psychosomatic medicine, the branch of medicine concerning the interaction between psychosocial and biological factors in the process of disease [184]. In future research on therapeutic VR, studies should not ignore the impact of psychological effects on final therapy outcomes, and HC service providers should better learn how psychological impacts improve HC services. Furthermore, this paper indicates that VR is advanced in customization, compliance, cost, accessibility, motivation, and convenience. A key question is how to use these features to achieve a better outcome. Several successful therapies have applied game elements to the process, resulting in serious games [69,81,82,136,155]. These methods can be called gamification, a conceptual framework for applying game elements and techniques to optimize a process in a non-game context, and they can motivate players to perform challenging tasks with game mechanics, dynamics, and components [185]. In the context of HC, Pereira et al. [185] noted that gamification is beneficial for users' emotional experiences, sense of identity and social positioning, cognition, social skills, and psychomotor skills. It seems that VR with gamification may maximize the value of therapy; VR provides appropriate personalized simulation, adjustable stimulation, and repeatable and multi-person participation, and gamification can improve motivation, guide learning, change cognition, and provide accurate data [186-189]. Udara and Alwis [188] hypothesized that the extensive use of augmented reality and VR in such gamification solutions will become more visible in the future; they could comprise a feasible way to realize the vision of Health 4.0. Hence, researchers could attempt to add more game elements into VR therapy and clarify their practical effects, as done by Muangsrinoon and Boonbrahm [189], who studied the use of points, feedback, levels, leaderboards, challenges, badges, avatars, competition, and cooperation in the use of VR for HC. The four year ranges identified across two decades (2000 to 2020), as shown in Figure 5, which illustrate the progress of VR-aided therapy from treating PTSD to treating A&F, could be regarded as the early development (pre-life) stages of the Health Metaverse. Healthcare providers and patients can communicate well in the virtual world through gamification, which has shown potential to monetize digital health when integrated with other emerging technologies, such as Blockchain and Non-Fungible Tokens, helping future consumers to actively manage their health and make smarter health decisions [190]. Indeed, the result of this paper is a map of the pre-life of the Health Metaverse; Chen and Zhang [191] highlighted that the innovative use of medical VR will be an important part of the Health Metaverse. It is predictable that the arrival of the Health Metaverse will be accelerated by technology integration and the pandemic.
Therefore, future research could be devoted to exploring the function of VR-aided therapy in the Health Metaverse, helping it achieve greater sustainability and social significance in line with the features of the metaverse [192]. Although VR can bring outstanding outcomes [6], including therapeutic effects and benefits, there are challenges in the application of VR. Sharma et al. [193] stated that VR has technical, physical, privacy, behavioral, and investment risks, but the most significant problem is obtaining funds, which depends on the level of public acceptance. Additionally, the profit produced by the HC industry seems to be relatively low, which is not attractive to most game developers [189]. Moreover, the barriers and facilitators of VR in healthcare include technology developments, end-user capabilities, and clinical settings [194]. Furthermore, gaps in population diversity, such as acceptance among elderly patients [195], need more research [26]. Therefore, studies on therapeutic VR must consider more stakeholders, e.g., patients, therapists, developers, and designers, and more dimensions, e.g., production, service, and therapy circumstances. A possible solution could be the use of design research methods, which can notably contribute to future HC. This could bring opportunities for health communication, prototyping, co-design, digital design, salutogenic design, and holistic design [196]. Some studies have mentioned design-based research, such as user-centered design [197], usability [198], and innovative methods [24]. The results of these studies encourage the development of design research in VR for therapy, which is a significant part of the HC system's design. Conclusions The purpose of this study was to explore an overview and details of VR and therapy and to guide VR application in HC, which was achieved with bibliometric analyses of publications from WoSc. The bibliometric analyses of articles and terms showed the potential to explore VR-aided therapy from the perspective of HC, which provides reliable macro-level knowledge, since HC is a broad field that is hard to summarize with other methods, such as literature reviews. In addition, a variety of visual maps can help healthcare stakeholders overcome the limitations of a single medical condition with multi-perspective insights, which could facilitate the development of VR-aided therapy and HC. This paper makes the following contributions to the field. First, this study was the first attempt to use VOSviewer to conduct bibliometric analyses of VR and therapy based on WoSc, a widely used database for science and technology as well as the medical and health domains, which objectively and visually shows research structures and research topics in contrast to traditional literature reviews. Secondly, this study was the first systematic investigation of the research status of VR-aided therapy incorporating articles from two decades (2000 to 2020), which not only provides a panorama of this field but also shows the details of four major research areas, i.e., PTSD, A&F, DNS, and PM, including medical conditions, therapies, methods, and outcomes. This could bridge the gap in knowledge of VR and therapy, as well as inspire future studies. Thirdly, this paper presents a discussion of the use of VR to aid therapy from a holistic point of view, which provides the foundation for future studies on the Health Metaverse.
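To make the term-mapping approach concrete, the short Python sketch below shows, in minimal form, the kind of term co-occurrence counting that underlies network visualizations such as the one used here. It is purely illustrative: the records, terms and threshold are hypothetical, and the authors' actual analysis was performed with VOSviewer on Web of Science data.

```python
# Minimal, illustrative term co-occurrence counting, the kind of analysis
# that VOSviewer automates. The records below are hypothetical examples,
# not the authors' actual data.
from itertools import combinations
from collections import Counter

# Each record is the set of indexed terms extracted from one publication.
records = [
    {"virtual reality", "exposure therapy", "social anxiety"},
    {"virtual reality", "exposure therapy", "specific phobia"},
    {"virtual reality", "cerebral palsy", "motor function"},
    {"virtual reality", "pain", "distraction", "burn patient"},
]

# Count how often each pair of terms appears together in the same record.
cooccurrence = Counter()
for terms in records:
    for a, b in combinations(sorted(terms), 2):
        cooccurrence[(a, b)] += 1

# Keep only links above a minimum strength, mirroring VOSviewer's thresholds.
MIN_LINK_STRENGTH = 1  # hypothetical cutoff
edges = [(a, b, w) for (a, b), w in cooccurrence.items() if w >= MIN_LINK_STRENGTH]

for a, b, w in sorted(edges, key=lambda e: -e[2]):
    print(f"{a} -- {b}: link strength {w}")
```

The resulting weighted edge list is what a mapping tool lays out spatially and clusters into the research areas discussed above.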
This paper highlights that VR-aided therapy is effective for various medical conditions and that VR has advantages in customization, compliance, cost, accessibility, motivation, and convenience that highlight its potential in HC. These advantages enable VR technology to be integrated into a variety of therapies and help traditional therapies overcome the limitations of physical factors, which is important in the current context of the coronavirus disease 2019 (COVID-19) pandemic. Additionally, a VR environment can easily stimulate people's emotional responses; it is an excellent medium for combining psychotherapy and physical therapy and for providing richer patient data to HC professionals, which could improve the relationship between therapists and patients. Furthermore, VR can be integrated with emerging methods, such as gamification and user-centered design, to provide customized therapy services for patients and improve the quality of, and satisfaction with, the healthcare system. Overall, these potentials of VR help achieve the vision of Health 4.0 and even the exciting future Health Metaverse. This paper offers instructive insights for HC stakeholders, particularly researchers and service providers, regarding the integration of more innovative therapies, psychological benefits, game elements, and design research. Further, this paper raises questions that need further investigation regarding the standardization of research methods and the difficulties of VR application in the context of HC, among others. However, several limitations of this study need to be considered. In this paper, WoSc was adopted for collecting data. Additionally, the authors used two types of publications, i.e., peer-reviewed articles and reviews, to ensure the high quality of the included publications. In the future, more databases, e.g., PubMed, and types of publications, e.g., conference papers, can be used in this area of study, which may provide updated knowledge regarding VR and therapy.
2022-01-31T16:06:44.493Z
2022-01-28T00:00:00.000
{ "year": 2022, "sha1": "592f2d1b109a10c5cff66f093a0beef6b9a46849", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-4601/19/3/1525/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d47620db254fcd40256da8272b62339bf75dd660", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257386609
pes2o/s2orc
v3-fos-license
Increased fat mass negatively influences femoral neck bone mineral density in men but not women Background Obesity is known to be a protective factor against osteoporosis. However, recent studies have shown that excessive adiposity may be detrimental for bone health. Objective To determine the association of lean mass (LM) and fat mass (FM) with bone mineral density (BMD) in Thais. Methods Bone density studies of consecutive patients of Srinagarind Hospital, Khon Kaen, Thailand between 2010 and 2015 were reviewed. LM, FM, lumbar spine (LS) and femoral neck (FN) BMD were measured. Lean mass index (LMI) and fat mass index (FMI) were calculated [LMI = LM (kg)/height (m)², FMI = FM (kg)/height (m)²] and analyzed to determine the association with LS and FN BMD using multiple regression analysis. This study was approved by the institutional ethics committee (HE42116). Results A total of 831 participants were included. The mean ± SD age was 50.0 ± 16.3 years. In men, LMI (per 1 kg/m² increase) was positively correlated with FN BMD (g/cm², β 0.033) and LS BMD (g/cm², β 0.031), after adjusting for age, height and FMI, whereas FMI (per 1 kg/m² increase) was negatively correlated with FN BMD (g/cm², β -0.015) but not with LS BMD (g/cm², β 0.005) after adjusting for age, height and LMI. In women, both LMI and FMI were positively correlated with LS BMD (g/cm², LMI: β 0.012; FMI: β 0.016) and FN BMD (g/cm², LMI: β 0.034; FMI: β 0.007) with age, height, LMI and FMI included in the model. Conclusion Our findings indicate that FM has a sex-specific influence on BMD in Thais. Introduction Osteoporosis, a condition characterized by bone fragility secondary to low bone mass and loss of connectivity and structural integrity of bone tissue, is the most common metabolic bone disease, affecting over 200 million people worldwide (1,2). It is estimated that one in every three women over the age of 50 years and one in every five men will suffer from fragility fractures as a result of osteoporosis during their lifetime (3). Traditional risk factors for osteoporosis include advanced age, female sex, family history, low calcium intake, malabsorption, vitamin D deficiency, lack of physical activity, weight loss, smoking, excessive alcohol use, and the presence of chronic inflammatory diseases (4). On the other hand, increased body weight and obesity have long been thought to be protective factors against osteoporosis (4,5). Interestingly, recent evidence suggests that excess fat mass (FM) may be detrimental for bone health, as recent studies have found an inverse relationship between FM and bone mineral density (BMD), whereas previous studies found the opposite (6-9). Given the inconsistencies in the data, it is assumed that the relationship between FM and BMD is complex and differs across sexes and BMD measurement sites (5,6,10). Therefore, we aimed to investigate the association of lean mass (LM) and fat mass (FM) with lumbar spine (LS) and femoral neck (FN) BMD in Thai men and women. Study population Bone density studies of male and female consecutive community-dwelling patients aged 20-90 years were retrospectively reviewed from the medical record database of Srinagarind Hospital, Khon Kaen, Thailand between 2010 and 2015. Participants aged 20 to 90 years who underwent BMD testing at both the lumbar spine and the hip were included in this study.
Patients with one of the following exclusion criteria were excluded: history of fragility fractures at any site; history of traumatic fractures of the spine or femur; use of medications that may affect bone metabolism, except calcium and vitamin D; history of any spinal surgery; lumbar scoliosis greater than 20 degrees; two or more non-assessable lumbar vertebrae; early or surgical menopause; and Z-score outside the range of ± 2.0 at either the lumbar spine, total proximal femur, or the femoral neck. This study was reviewed and approved by the Khon Kaen University Human Research Ethics Committee in accordance with the Helsinki Declaration and the Good Clinical Practice Guidelines (Reference No. HE42116). Study measurements Demographic data were collected, including age, body weight, and height, and body mass index (BMI) was calculated. Lumbar spine (LS) and femoral neck (FN) BMD, lean mass (LM) and fat mass (FM) were measured using dual-energy x-ray absorptiometry on a Lunar Prodigy bone densitometer (GE Healthcare, Madison, WI). Lean mass index (LMI) and fat mass index (FMI) were calculated [LMI = LM (kg)/height (m)², FMI = FM (kg)/height (m)²] and were analyzed to determine the association with LS and FN BMD using multiple regression analysis. Statistical analysis Comparisons of participants' characteristics between males and females were performed using the independent sample t-test for continuous parametric data, the Mann-Whitney U-test for continuous non-parametric data and the Chi-square test for categorical data. Comparisons of participants' characteristics among groups with different LMI and FMI were performed using one-way ANOVA followed by post-hoc LSD and Bonferroni tests for continuous parametric data. Pearson correlation analysis was used to determine the univariate association of age with LM, FM, LMI, FMI and FN and LS BMD. Linear regression analysis was performed to determine univariate and multivariate associations of LMI and FMI with FN and LS BMD. Logistic regression analysis was used to determine unadjusted and adjusted odds ratios (OR) and 95% confidence intervals (CI) representing the association of LMI and FMI with osteoporosis at the FN and LS. Statistical significance was defined as p-value <0.05. SPSS version 27 (SPSS Inc., Chicago, IL) was used to perform the statistical analysis. Data illustrations were generated using GraphPad Prism software 9.4.0 (GraphPad, La Jolla, CA, USA). Association of lean mass index and fat mass index with femoral neck and lumbar spine bone mineral density Regression coefficients of LMI and FMI on FN and LS BMD are demonstrated in Table 2. In male participants, FN BMD (g/cm²) was positively correlated with LMI (per 1 kg/m² increase; β 0.033, 95% CI 0.024 to 0.041) and inversely correlated with FMI (per 1 kg/m² increase; β -0.015, 95% CI -0.022 to -0.007), after adjusting for age and height with LMI and FMI included in the same model (Model 3, Table 2), whereas LS BMD was associated only with increased LMI (per 1 kg/m² increase; β 0.031, 95% CI 0.021 to 0.040) but not FMI (per 1 kg/m² increase; β 0.005, 95% CI -0.003 to 0.014), with adjustment for the same variables (Model 3, Table 2). Association of lean mass index and fat mass index with osteoporosis at femoral neck and lumbar spine Multivariate logistic regression analyses of the association of LMI and FMI with the presence of osteoporosis at the FN and LS (defined by a BMD T-score ≤ -2.5) are demonstrated in Table 3.
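As a hedged illustration of the index calculation and regression modelling just described, the minimal Python sketch below reproduces the general shape of the analysis. The column names and input file are hypothetical, and the study itself used SPSS rather than this code.

```python
# Illustrative sketch only: LMI/FMI computation and "Model 3"-style adjusted
# regressions as described in the Methods. Column names and the input file
# are hypothetical; the original analyses were performed in SPSS.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("bmd_cohort.csv")  # hypothetical dataset

# Body-composition indices: mass (kg) divided by height (m) squared.
df["lmi"] = df["lean_mass_kg"] / df["height_m"] ** 2
df["fmi"] = df["fat_mass_kg"] / df["height_m"] ** 2

# Linear model (fit separately by sex): FN BMD on LMI and FMI together,
# adjusted for age and height, mirroring Model 3 of the paper.
men = df[df["sex"] == "M"].copy()
ols = smf.ols("fn_bmd ~ lmi + fmi + age + height_m", data=men).fit()
print(ols.params[["lmi", "fmi"]])           # beta per 1 kg/m^2 increase
print(ols.conf_int().loc[["lmi", "fmi"]])   # 95% confidence intervals

# Logistic model: odds of FN osteoporosis (T-score <= -2.5), same adjustment.
men["fn_op"] = (men["fn_tscore"] <= -2.5).astype(int)
logit = smf.logit("fn_op ~ lmi + fmi + age + height_m", data=men).fit()
print(np.exp(logit.params[["lmi", "fmi"]])) # odds ratios per 1 kg/m^2
```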
In male participants, LMI (per 1 kg/m² increase) was statistically significantly associated with decreased odds of FN osteoporosis (OR 0.466, 95% CI 0.305 to 0.711) but not of LS osteoporosis, after adjusting for age, height and FMI (Model 3, Table 3). FMI (per 1 kg/m² increase) was statistically significantly associated with increased odds of FN osteoporosis (OR 2.037, 95% CI 1.132 to 3.666) after adjusting for age, height and BMI (Model 2, Table 3), but the association became non-significant in the model adjusting for age, height and LMI (Model 3, Table 3). Discussion In this cross-sectional study of 333 men and 498 women, we found that increased LM had a positive effect on LS and FN BMD in both men and women. On the other hand, we revealed a sex-specific association between FM and BMD, as increased FM had a negative effect on FN BMD and no significant effect on LS BMD in men but a positive effect on FN and LS BMD in women. Furthermore, the subgroup analysis revealed that FM was positively associated with LS BMD only among women with low LM. The results of our study confirm the previously reported positive impact of LM on BMD in multiple studies (7,11-13). More importantly, our results support the recent observation from the NHANES 2011-2018 database that increased FM was negatively associated with total body BMD, particularly in men (0.13 lower T-score per 1 kg/m² increase in FMI), which is contrary to the result of a prior meta-analysis of 44 studies demonstrating a positive association between FM and BMD (6). Notably, our findings underscore that increased FM in men may selectively affect FN BMD, rather than LS BMD, which suggests that high body fat may selectively affect cortical bone rather than trabecular bone. Although the exact underlying mechanism of the negative impact of FM on BMD in men, but not in women, remains unclarified, it is thought to involve the effects of obesity-related hormonal changes on the skeleton (14,15). First, obesity and increased fat mass are known to cause decreased testosterone levels, an anabolic hormone that stimulates bone formation, in men due to suppression of the hypothalamic-pituitary-testicular axis and insulin resistance-associated reductions in sex hormone binding globulin (16,17). Therefore, men with increased fat mass could have lower testosterone levels, which may explain the observed sex-specific association between fat mass and lower FN BMD. It is, however, unclear why this would selectively affect FN BMD but not LS BMD. Additionally, it should be noted that obesity is associated with increased estrogen concentrations among males and that estrogen is protective against osteoporosis in both sexes (18,19). Data on sex hormone concentrations would have been valuable to identify the potential explanations for our observations. Another explanation could be the difference in visceral and subcutaneous fat proportions between men and women, as previous studies have suggested that increased visceral fat may have a detrimental effect on BMD compared to subcutaneous fat due to its associated low-grade chronic systemic inflammation (increased interleukin-6 and tumor necrosis factor-α) (5). Data on body fat distribution and inflammatory markers would have been valuable to explain the difference in the results between men and women. Unfortunately, such data were not available in our study. Other possible explanations for the inverse association between FM and BMD involve leptin, insulin resistance, vitamin D status and lifestyle factors.
Leptin-deficient and leptin receptor-deficient mice have been shown to have increased bone formation, suggesting a negative effect of the increased leptin of obesity on bone formation (20). Insulin resistance may also play a role in triggering bone loss, although previous studies have shown mixed results (21,22). Furthermore, vitamin D deficiency is well known to be associated with increased FM and obesity and therefore could mediate this association (23,24). Finally, increased FM may reflect a sedentary lifestyle and lack of physical activity, which can be associated with decreased mechanical load on the skeleton and low cortical BMD (25,26). This could particularly explain our observation of the inverse association between FMI and FN BMD in men. Interestingly, we found that FM was positively correlated with both LS and FN BMD in women with low LM, but not in those with high LM. This suggests that LM and sex could be effect modifiers of the association between FM and BMD, which may explain the discrepancy in the results among prior studies (6-9). The positive effect of FM on BMD could be due not only to increased mechanical load on the skeleton, but also to increased estrogen produced by adipose tissue, especially in postmenopausal women (15). This study has certain limitations that should be acknowledged. First, data were collected retrospectively, and thus factors relating to how DXA examinations were acquired may not have been adequately controlled, despite the established standard practice protocols in our institution. Examinations were done by several technologists, which could have some effect on the precision of the data (27), but this would, on the other hand, permit better generalizability of our findings (e.g., our results are generalizable regardless of the experience level or other characteristics of the technologist). In addition, causal associations cannot be concluded with certainty, as this study is cross-sectional by design. Data on potential confounders and mediators, such as medical comorbidities, functional status, physical activity, vitamin D status, fat distribution, sex hormones and inflammatory markers, were also not available in this study. Further prospective cohort studies with more robust adjustments are needed to confirm our observations. Conclusion Our results indicate a sex-specific influence of fat mass on BMD in Thais. Increased lean mass had a positive association with LS and FN BMD in both men and women. On the other hand, increased fat mass had a negative association with FN BMD and no significant association with LS BMD in men, but a positive association with FN and LS BMD in women. Further prospective cohort studies are needed to establish the causality of these associations. Data availability statement The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. Ethics statement The studies involving human participants were reviewed and approved by the Khon Kaen University Human Research Ethics Committee. The patients/participants provided their written informed consent to participate in this study.
2023-03-08T16:03:49.355Z
2023-02-28T00:00:00.000
{ "year": 2023, "sha1": "46fefb8108c301c28e3dd44517899cce5d38460b", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fendo.2023.1035588/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1289b4dbe6c57c6c116c27c9ce5dffd61dd649fa", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
210877225
pes2o/s2orc
v3-fos-license
Meshless Local Petrov–Galerkin Formulation of Inverse Stefan Problem via Moving Least Squares Approximation Abstract: In this paper, we study the meshless local Petrov–Galerkin (MLPG) method based on the moving least squares (MLS) approximation for finding a numerical solution to the Stefan free boundary problem. Approximation of this problem, due to the moving boundary, is difficult. To overcome this difficulty, the problem is converted to a fixed boundary problem, which consists of an inverse and nonlinear problem. In other words, the aim is to determine the temperature distribution and the free boundary. The MLPG method using the MLS approximation is formulated to produce the shape functions. The MLS approximation plays an important role in the convergence and stability of the method. The Heaviside step function is used as the test function in each local quadrature domain. For the interior nodes, a meshless Galerkin weak form is used, while the meshless collocation method is applied to the boundary nodes. Since MLPG is a truly meshless method, it does not require any background integration cells. In fact, all integrations are performed locally over small sub-domains (local quadrature domains) of regular shapes, such as intervals in one dimension, circles or squares in two dimensions and spheres or cubes in three dimensions. A two-step time discretization method is used to deal with the time derivatives. Several numerical experiments show that the proposed method is accurate and stable even under large measurement noise. Introduction A free boundary problem (FBP) is a partial differential equation with initial and boundary conditions in which a part of the boundary of the domain, called a free boundary, is unknown at the outset of the problem. FBPs have many applications in science and engineering. FBPs usually arise in phase-separation problems, in which the free boundary can be either stationary or moving. The Stefan problem is one kind of free boundary problem, which describes the process of melting and solidification. In this paper, the numerical solution of these problems is considered. The determination of the temperature function and the position of the free boundary is desired [1-5]. The existence and uniqueness of the solution to these problems are investigated in References [2,3,5]. In recent years, several methods have been employed for solving Stefan problems numerically, such as the homotopy analysis method [6,7], the Lie-group shooting method [8], finite difference and finite element methods [9] and the variational iteration method [10]. Grzymkowski and Slota [11,12] applied the Adomian decomposition method (ADM) to solve one-phase Stefan problems, and Slota [13] used the homotopy perturbation method for one-phase inverse Stefan problems. In Reference [14], the method of fundamental solutions was applied to the one-dimensional Stefan problem. For many years, the finite element method (FEM) has been considered a standard and effective technique for numerically solving many applied problems in science and engineering [15,16]. Due to several limitations, these techniques cannot solve some of the complex problems of today's world. For this reason, the development and formulation of new and effective numerical techniques has in recent years been an interesting field for engineers and mathematicians. In recent years, meshless methods have gained considerable attention in engineering and applied mathematics.
Flexibility and simplicity are the advantages of these methods. Meshless methods overcome the shortcomings of mesh-based techniques [17]. In these methods, a system of algebraic equations is created using a set of scattered nodes (called field nodes) within the domain and on its boundary to represent (but not to discretize) the whole domain of the problem and its boundary; therefore, it is not necessary to use a predefined mesh for the domain discretization. Meshless methods are generally divided into three categories. The first category includes methods that use integration and are based on weak forms of PDEs, such as the element-free Galerkin method [18-24]. The second category comprises methods based on the strong forms of PDEs, which do not require integration; for example, the meshless collocation method based on radial basis functions (RBFs) [25-28] is in this category. The third category is a set of methods based on the combination of weak forms and strong forms. The meshless methods based on the strong form are truly meshless methods, and their implementation is usually simple. They are also computationally efficient. In spite of several advantages, they also have some shortcomings, such as numerical instability and lower accuracy. Meshless weak-form methods use either global or local weak forms. The stability and accuracy of these methods make them more attractive. In global weak-form methods, numerical integrations are carried out on global background cells when solving the algebraic equations. Meshless local weak-form methods, in contrast, do not require any background integration cells for the field nodes. The meshless local Petrov-Galerkin (MLPG) method [29-40] is based on the local weak form of PDEs. In the MLPG method, the numerical integrations are performed over a local small sub-domain defined for each node. The local sub-domains usually have a regular shape, such as an interval, circle, square, sphere, cube, and so forth. The moving least squares (MLS) approximation has an important role in the MLPG method. By considering a local sub-domain for each field node, the MLS approximates the unknown function at each field node. In this paper, a kind of MLPG method using the MLS approximation is applied for numerically solving the Stefan problem. The layout of the paper is as follows. In the next section, we give the formulation of the inverse Stefan problem. Section 3 briefly describes the MLS approximation. In Section 4 we present the time discretization of the problem. In Section 5 the local weak form formulation of the discretized problem is presented. We present the MLPG discretization of the problem in Section 6. Numerical examples are given and solved to observe the performance of the proposed method in Section 7. At last, we give a conclusion in Section 8. Statement of the Problem Consider the following heat conduction equation

$$\frac{\partial v}{\partial t} = \alpha \frac{\partial^2 v}{\partial y^2}, \qquad 0 < y < s(t), \quad 0 < t \leq T, \qquad (1)$$

subject to the initial condition

$$v(y, 0) = f(y), \qquad 0 < y < s_0, \quad s_0 = s(0), \qquad (2)$$

and boundary condition

$$v(0, t) = g(t), \qquad (3)$$

where $\alpha$ denotes the thermal diffusivity and $v(y, t)$, $t$ and $y$ denote the temperature, time and spatial location, respectively. On the free boundary $y = s(t)$, the conditions

$$v(s(t), t) = h(t), \qquad (4)$$

$$\kappa \frac{ds(t)}{dt} = \beta \frac{\partial v}{\partial y}\big(s(t), t\big), \qquad (5)$$

hold, where $\beta$ is the thermal conductivity, $\kappa$ is the latent heat of fusion per unit volume and $h(t)$ is the temperature of the phase change.
Equation (4) represents the continuity of temperature and Equation (5) is the Stefan condition. In this problem, we try to find $v(y, t)$, the temperature distribution in the given domain, and $s(t)$, the free boundary. This is a nonlinear problem due to the Stefan condition [4]. By using a change of variables, the free boundary problem is transformed into a fixed boundary problem. Let

$$x = \frac{y}{s(t)}, \qquad u(x, t) = v\big(x\, s(t), t\big). \qquad (6)$$

Then Equations (1)-(5) are changed to the transformed system (7)-(11) on the fixed domain $0 < x < 1$. In this paper, an approach based on the MLPG method and MLS approximations is applied to Equation (7), subject to the initial condition (8) and the over-specified boundary conditions (9)-(11). The MLS Approximation Technique In this section, the formulation of the MLS approximation is explained. The trial function at each node is represented by the MLS approximation. Consider the sub-domain $\Omega_s$, with boundary $\partial\Omega_s$, of the problem's global domain $\Omega$ around a point $x$. In fact, $\Omega_s$ is the domain of definition (or support) of the MLS approximation for the trial function at $x$. Let $\mathbf{q}^T(x) = [q_1(x), q_2(x), \ldots, q_m(x)]$ be a complete monomial basis in the space coordinate $x$. For example, the linear basis in one dimension is

$$\mathbf{q}^T(x) = [1, x], \qquad (12)$$

and the quadratic basis is

$$\mathbf{q}^T(x) = [1, x, x^2]. \qquad (13)$$

For all $x$ belonging to $\Omega_s$, the MLS approximation $u^h(x)$ of $u$ in $\Omega_s$, over a set of scattered nodes $x_i$ ($i = 1, 2, \ldots, n$) located in $\Omega_s$, is given as

$$u^h(x) = \mathbf{q}^T(x)\,\boldsymbol{\lambda}(x), \qquad (14)$$

where $\boldsymbol{\lambda}^T(x) = [\lambda_1(x), \lambda_2(x), \ldots, \lambda_m(x)]$ is a vector of coefficients. In order to determine the unknown coefficient vector $\boldsymbol{\lambda}(x)$, we define a functional $I(\boldsymbol{\lambda}(x))$ as follows:

$$I(\boldsymbol{\lambda}(x)) = \sum_{i=1}^{n} w_i(x)\,\big[\mathbf{q}^T(x_i)\,\boldsymbol{\lambda}(x) - \hat{u}_i\big]^2 = \big[\mathbf{Q}\,\boldsymbol{\lambda}(x) - \hat{\mathbf{u}}\big]^T \mathbf{W}(x)\,\big[\mathbf{Q}\,\boldsymbol{\lambda}(x) - \hat{\mathbf{u}}\big], \qquad (15)$$

where the matrices $\mathbf{Q}$ and $\mathbf{W}(x)$ in Equation (15) are defined as

$$\mathbf{Q} = \begin{bmatrix} \mathbf{q}^T(x_1) \\ \vdots \\ \mathbf{q}^T(x_n) \end{bmatrix}_{n \times m}, \qquad \mathbf{W}(x) = \mathrm{diag}\big(w_1(x), \ldots, w_n(x)\big).$$

In the above relations, $w_i(x)$, $i = 1, 2, \ldots, n$, is the weight function corresponding to the node $x_i$, so that for each $x$ in the support of $w_i(x)$ we have $w_i(x) > 0$; $n$ is the number of nodes in $\Omega_s$ for which the weight functions $w_i(x) > 0$; and $\hat{\mathbf{u}}^T = [\hat{u}_1, \hat{u}_2, \ldots, \hat{u}_n]$ is the vector of fictitious nodal values. It is necessary to mention that $\hat{u}_i$, $i = 1, 2, \ldots, n$, are not in general equal to the nodal values $u_i$, $i = 1, 2, \ldots, n$, of the unknown trial function $u^h(x)$ (Figure 1). The stationarity of $I(\boldsymbol{\lambda}(x))$ in Equation (15) leads to

$$\mathbf{F}(x)\,\boldsymbol{\lambda}(x) = \mathbf{G}(x)\,\hat{\mathbf{u}}, \qquad (16)$$

where $\mathbf{F}(x)$ and $\mathbf{G}(x)$ are matrices defined as

$$\mathbf{F}(x) = \mathbf{Q}^T \mathbf{W}(x)\,\mathbf{Q}, \qquad \mathbf{G}(x) = \mathbf{Q}^T \mathbf{W}(x).$$

The MLS approximation is well-defined only when the matrix $\mathbf{F}$ in Equation (16) is non-singular, that is, if and only if the rank of $\mathbf{Q}$ equals $m$. A necessary condition for a well-defined MLS approximation is that at least $m$ weight functions are non-zero (i.e., $n > m$) for each sample point $x \in \Omega$. Computing $\boldsymbol{\lambda}(x)$ from Equation (16) and substituting it into Equation (14) gives

$$u^h(x) = \boldsymbol{\psi}^T(x)\,\hat{\mathbf{u}} = \sum_{i=1}^{n} \psi_i(x)\,\hat{u}_i, \qquad (19)$$

where

$$\boldsymbol{\psi}^T(x) = \mathbf{q}^T(x)\,\mathbf{F}^{-1}(x)\,\mathbf{G}(x).$$

The function $\psi_i$ is usually called the shape function of the MLS approximation corresponding to the nodal point $x_i$. The partial derivative of $\psi_i(x)$ with respect to $x$ follows by differentiating $\boldsymbol{\psi}^T(x) = \mathbf{q}^T(x)\,\mathbf{F}^{-1}(x)\,\mathbf{G}(x)$, where $(\cdot)_{,x}$ denotes the derivative with respect to $x$. In this paper, a Gaussian weight function with shape parameter $c$ is used. The Time Discretization of the Problem We use finite difference approximations for the time derivative operators, with time levels $t_k = k\,\Delta t$, $k = 0, 1, \ldots, M$, and $\Delta t = T/M$. Also, by using the Crank-Nicolson technique, we obtain the corresponding approximations of the remaining terms. Using the above approximations, Equations (7) and (11) can be written in semi-discrete form. Supposing that $\lambda = 2/\Delta t$, we obtain the discretized equations. The Local Weak Form Formulation Let $\Omega_q^i$ be a sub-domain associated with the nodal point $x_i$, $i = 1, 2, \ldots, N$ (called a local quadrature cell), in the global domain $\Omega$. The cells $\Omega_q^i$, $i = 1, 2, \ldots, N$, overlap each other and their union covers the whole global domain $\Omega$. In this paper, the $\Omega_q^i$ are intervals centered at $x_i$ of radius $r_q$.
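Before moving to the weak form, the following minimal Python sketch makes the MLS construction above concrete by evaluating the 1-D shape functions with the quadratic basis (13). Since the paper's explicit weight formula is not reproduced here, the truncated Gaussian below (with shape parameter c and support radius r_s) is an assumed form; the node spacing and evaluation point are illustrative only.

```python
# Minimal 1-D MLS shape-function evaluation with quadratic basis q(x) = [1, x, x^2].
# The truncated-Gaussian weight is an ASSUMED form: the paper specifies a
# Gaussian weight with shape parameter c, but not necessarily this exact one.
import numpy as np

def weight(x, xi, c, rs):
    """Truncated Gaussian weight on a support of radius rs (assumed form)."""
    d = abs(x - xi)
    if d >= rs:
        return 0.0
    g = lambda r: np.exp(-(r / c) ** 2)
    return (g(d) - g(rs)) / (1.0 - g(rs))

def mls_shape_functions(x, nodes, c, rs):
    """Return indices and values psi_i(x) = [q^T(x) F^{-1}(x) G(x)]_i."""
    q = lambda s: np.array([1.0, s, s * s])              # quadratic basis, m = 3
    idx = [i for i, xi in enumerate(nodes) if abs(x - xi) < rs]
    Q = np.array([q(nodes[i]) for i in idx])              # n x m
    W = np.diag([weight(x, nodes[i], c, rs) for i in idx])
    F = Q.T @ W @ Q                                       # m x m, must be nonsingular
    G = Q.T @ W                                           # m x n
    psi = q(x) @ np.linalg.solve(F, G)                    # shape functions at x
    return idx, psi

# Illustrative setup: 101 uniform nodes on [0, 1] (h = 0.01), c = 1.1h,
# support radius r_s = 4 r_q = 2.8h, matching the parameters reported below.
nodes = np.linspace(0.0, 1.0, 101)
h = nodes[1] - nodes[0]
idx, psi = mls_shape_functions(0.305, nodes, c=1.1 * h, rs=2.8 * h)
print(sum(psi))   # partition of unity: should be ~1
```

Because the basis contains the constant function, the MLS shape functions reproduce constants exactly, which the final partition-of-unity check verifies.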
By applying the MLPG method, the local weak form is obtained over the local quadrature cells $\Omega_q^i$. For each node $x_i \in \Omega_q^i$, the local weak form of Equation (29) is written as Equation (31), where the Heaviside step function [41,42] $\nu$ is used as the test function. Applying integration by parts to Equation (31), the local weak form (33) is obtained, where $\partial\Omega_q^i$ is the boundary of $\Omega_q^i$. MLPG Discretization In this section, we obtain a system of algebraic equations from the discretization of Equation (33) by employing the MLS approximation. Consider $N$ regularly spaced points $x_i$, $i = 1, 2, \ldots, N$, in the domain of the problem and on its boundary such that $x_{i+1} - x_i = h$. Suppose that $u(x_i, t_k)$ is known and $u(x_i, t_{k+1})$ is unknown for $i = 1, 2, \ldots, N$. In order to determine the $N$ unknown quantities $u(x_i, t_{k+1})$, we need $N$ equations. For interior nodes $x_i$ of the domain $\Omega$, substituting the MLS approximation (19) into Equation (33) gives the discrete equations (34). For the boundary nodes $x = 0$ and $x = 1$ we impose condition (35), together with Equation (30), whose discrete form is written as Equation (36). The matrix form of Equations (34)-(36) for all $N$ nodal points can be represented as Equations (37) and (38). Setting $\mathbf{U} = (u_i)_{N \times 1}$, Equations (37) and (38) yield a linear system of equations. According to the boundary conditions (35) and (36), this system is solved at each time step; at the first step, when $k = 0$, the initial conditions supply the required starting values. Numerical Experiments In this section, we test the described meshless method with the following examples. In the numerical computations, the input Cauchy data are perturbed with noise, where $\delta$ denotes the level of noise and $R(i)$ are random numbers in $[-1, 1]$. In these examples, the domain integrals are approximated using the 4-point Gaussian quadrature rule. In order to investigate the accuracy of the computed approximations and the efficiency of the presented method, the root mean square (RMS) error for $u$ and the absolute error for $s$,

$$\text{Absolute error}_s = |s_{\text{exact}}(t_j) - s_{\text{approx}}(t_j)|,$$

are applied. In implementing the meshless local weak form, each local quadrature domain $\Omega_q^i$ is taken as an interval centered at $x_i$ of radius $r_q = 0.7h$, where $h = x_{i+1} - x_i$, $i = 0, 1, 2, \ldots, N-1$. Also, the radius of the support domain $\Omega_s$ is $r_s = 4r_q$, and the quadratic basis (13) is used in Equation (14). The results of using the proposed method are obtained with $\Delta t = 0.01$, $h = 0.01$. Figure 2 presents the RMS error for $u(x, t)$ and the absolute error for $s(t)$ versus the shape parameter $c$ at $t = 1$. For other values of $t$ the results are almost the same. In this example, the interval $(0.0097, 0.01155)$ is suggested for choosing $c$. It is necessary to note that ill-conditioning occurs as $c$ increases. Hereafter, we fix it at $c = 1.1h = 0.011$. In Figure 3, the RMS error versus $N$ is plotted at $t = 1$. It can be seen in this figure that the error values decrease as $N$ increases. Values of the RMS error for $u(x, t)$ and the absolute error for $s(t)$ with $\delta = 0$ and $\delta = 0.1$ are presented in Table 1. The table shows that the numerical results are more accurate when there is no noise in the input data. Under a noise level $\delta = 0.1$, the numerical result obtained by the MLPG method is also acceptable. The exact solution, numerical solution and absolute errors for $u(x, t)$ and $s(t)$ are plotted in Figures 4-6.
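The two error measures just described can be computed directly. The sketch below assumes the standard RMS definition (the displayed RMS formula did not carry over from the source) and uses the analytical solution of Example 3 below, with a stand-in array in place of actual MLPG output.

```python
# Error measures used in the experiments: RMS error for u over the grid and
# the pointwise absolute error for the free boundary s. The standard RMS
# definition is assumed; "approx" arrays are hypothetical stand-ins for
# MLPG output.
import numpy as np

def rms_error(u_exact, u_approx):
    """Root mean square error over all sample points (assumed definition)."""
    diff = np.asarray(u_exact) - np.asarray(u_approx)
    return np.sqrt(np.mean(diff ** 2))

def abs_error_s(s_exact, s_approx):
    """Absolute error |s_exact(t_j) - s_approx(t_j)| at each time level."""
    return np.abs(np.asarray(s_exact) - np.asarray(s_approx))

# Example with the analytical solution of Example 3 below:
# u(x, t) = exp(1 - x(0.1t + 1) + 0.1t).
x = np.linspace(0.0, 1.0, 101)
t = 0.5
u_exact = np.exp(1.0 - x * (0.1 * t + 1.0) + 0.1 * t)
u_approx = u_exact + 1e-4 * np.random.uniform(-1, 1, x.size)  # stand-in output
print(rms_error(u_exact, u_approx))
```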
Example 2. In this example, we consider the problem (1)-(5) with $\alpha = 1$, $\beta = -1$, $\kappa = 1$, $T = 1$. The analytical solutions of the problem are given by $v(y, t) = \exp(1 - y + t)$, $s(t) = t + 1$, and hence $u(x, t) = v(x\,s(t), t) = \exp(1 - x(t + 1) + t)$. The approximation results of the proposed method are obtained with $\Delta t = 0.01$, $h = 0.01$. Figure 7 presents the RMS error for $u(x, t)$ and the absolute error for $s(t)$ versus the shape parameter $c$ at $t = 1$. For other values of $t$ the results are similar. In this example, the interval $(0.0097, 0.01155)$ is also suggested for choosing $c$. It is necessary to note that ill-conditioning occurs as $c$ increases. Hereafter we fix it at $c = 1.1h = 0.011$. In Figure 8, the RMS error versus $N$ is plotted at $t = 1$. We observe in this figure that the error values decrease as $N$ increases. Values of the RMS error for $u(x, t)$ and the absolute error for $s(t)$ with $\delta = 0$ and $\delta = 0.1$ are presented in Table 2. We see that the numerical results are more accurate when there is no noise in the input data. Under a noise level $\delta = 0.1$, the numerical result obtained by the MLPG method is also acceptable. The exact solution, numerical solution and absolute errors for $u(x, t)$ and $s(t)$ are plotted in Figures 9-11, respectively. Example 3. In this example, we compare the average absolute errors of the present method with those of the Adomian decomposition method and the fourth-order Runge-Kutta method obtained in Reference [43]. We consider the problem (1)-(5) with $\alpha = 0.1$, $\kappa = 10$, $\beta = -1$, $T = 0.5$. The analytical solutions of the problem are $v(y, t) = \exp(1 - y + 0.1t)$, $s(t) = 0.1t + 1$, and $u(x, t) = v(x\,s(t), t) = \exp(1 - x(0.1t + 1) + 0.1t)$. Tables 3 and 4 show the results of calculations related to the reconstruction of the moving boundary and the temperature distribution using the present method, the Adomian decomposition method and the fourth-order Runge-Kutta method. It can be seen that the procedure presented in this paper is useful and efficient in finding solutions of the considered problem. Conclusions In this paper, a kind of MLPG method using the MLS approximation to represent the trial function at each field node is applied for numerically solving a nonlinear one-phase Stefan problem. The nonlinearity of this problem is due to the Stefan condition. The free boundary problem is transformed into a fixed boundary problem by a change of variables. In the presented method, all integrations are performed over small local quadrature domains, so no background integration cells are required. In the proposed method, the shape functions are produced by the MLS approximation technique. A two-step time discretization method is used to approximate the time derivative operators. The Heaviside step function was used as the test function in the local weak form of the MLPG method. Numerical results show that the proposed method is accurate and stable even under large measurement noise. Author Contributions: All authors contributed equally to this work.
2019-12-12T10:20:51.896Z
2019-12-10T00:00:00.000
{ "year": 2019, "sha1": "96b5f32541d1e7057c727427d8cb5f7c3e7c584a", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/mca24040101", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "a609ef8ff7b5660f711e645f7d132b17f77677eb", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Mathematics" ] }
13060912
pes2o/s2orc
v3-fos-license
The importance of interactions between atrial fibrillation and heart failure Heart failure (HF) and atrial fibrillation (AF) are amongst the commonest cardiovascular conditions encountered in clinical practice and frequently coexist. Over the last decade, they have evolved into global cardiovascular epidemics. This, in turn, has huge clinical and economic implications. There is ample evidence that AF and HF have a mutually deleterious effect on each other. AF is not only a marker of HF severity but also affects HF prognosis independently. This article presents the close pathophysiological relationship between AF and HF and the adverse prognostic consequences of this bi-directional interaction. The scope of various therapeutic modalities and their potential impacts are discussed briefly. INTRODUCTION Heart failure (HF) and atrial fibrillation (AF) are amongst the commonest cardiovascular conditions encountered in clinical practice and frequently coexist. Up to 40% of patients with HF either have or go on to develop AF, and approximately 40% of patients with AF present with (or develop) HF. 1 Heart failure predicts the development of AF and, conversely, the presence of AF predicts the development of HF. Both are increasingly prevalent phenotypic manifestations of a multitude of different primary or secondary cardiac pathologies (see Figure 1). The prevalence of HF and AF has steadily increased over the years. This is partly due to an ageing population and partly because more effective therapies have improved outcomes associated with other cardiovascular conditions (such as myocardial infarction). In Europe, an estimated 30 million people have HF, with 1 in 5 lifetime odds of developing the condition. 2 Similarly, 2% of Europeans have AF, with a projected prevalence of 14-17 million by the year 2030. 3 The lifetime risk of AF is 1 in 4 (as derived from the community-based cohorts of the Framingham and Rotterdam studies). a) Prognosis The presence of AF is associated with an increased risk of mortality in patients with HF. 4 This adverse prognosis is observed in patients with left ventricular (LV) systolic dysfunction (HF with reduced ejection fraction, HFREF) as well as in those with preserved left ventricular function (i.e., HF with preserved ejection fraction, HFPEF). For instance, the SOLVD (Studies Of Left Ventricular Dysfunction) trial demonstrated that even in asymptomatic patients with an LV ejection fraction of <35%, mortality was 34% when AF was present and 24% when it was not. The mortality with new-onset AF (12%) was also greater than with persistent AF (7%). 5 Sub-group analysis of the CHARM (Candesartan in Heart Failure Assessment of Reduction in Mortality and morbidity) study revealed that AF has an independent and deleterious effect on long-term all-cause cardiovascular mortality in HF patients. The absolute mortality risk was highest in patients with LVEF <35%; however, those with HFPEF had the highest relative risk of death (HR 1.37, 95% CI 1.06 to 1.79) in contrast to HFREF (HR 1.22, 95% CI 1.04 to 1.43). 6 Similarly, a meta-analysis by Mamas et al. (using data derived from 16 studies and incorporating over 50,000 patients) showed that AF has a negative impact on total HF mortality, with an odds ratio of 1.40 (95% CI 1.32-1.48, P<0.0001) in randomised trials and 1.14 (95% CI 1.03-1.26, P<0.05) in observational studies. Again, this was applicable to HFREF as well as HFPEF patients. 4
4 We have previously summarised the scientific evidence showing the negative prognostic effect of AF in HF patients (see Tables 1 and 2). 7 However, it is not entirely clear whether AF per se is the cause of increased mortality or merely a marker of more advanced HF. b) Symptoms of HF AF is more likely to occur in patients with more severe HF symptoms (approximately 10% in NYHA class I versus 50% in NYHA class IV). Prolonged exposure to AF with a fast ventricular response also contributes to LV systolic dysfunction (a tachycardiomyopathy). c) Stroke risk AF confers a greater degree of stroke risk in HF patients, as the presence of HF carries a weighting of 1 point in the CHA2DS2-VASc risk stratification tool for AF and stroke. The risk of stroke is equivalent in HFREF and HFPEF alike, at a rate of up to 4.4 per 100 patient-years. 8 TREATMENT OF AF IN PATIENTS WITH HF Treatment algorithms for both are extensively discussed in guidelines elsewhere 9 and are beyond the scope of this article. However, the main principles of the treatment of AF in patients with HF can be summarised as follows: Rate or rhythm control The commonest form of rate control is with AV nodal blocking agents such as beta blockers, rate-controlling calcium channel antagonists (provided LV systolic function is preserved) and digoxin. 9 Management of the two conditions occurring together is extrapolated from trials in AF that contain between 20% and 30% of patients with HF and HF trials that contain between 10% and 30% of patients with AF. The optimum target rate to achieve in AF is, however, difficult to determine with any degree of precision. Current guidelines define adequate rate control in atrial fibrillation as maintenance of the ventricular rate response between 60 and 80 beats/min at rest and between 90 and 115 beats/min during moderate exercise. Special considerations for AF ventricular rate control in HF include: • The optimum heart rate suggested in the AF-CHF trial (in which rate versus rhythm strategies were compared in HFREF patients) was 80 bpm at rest and <110 bpm with exertion. This trial also demonstrated no significant benefit of rhythm compared to rate control. 11 • Beta blockers are the most commonly indicated. 9 It is unclear whether the beneficial prognostic effects of beta blockers in sinus rhythm in patients with HFREF are generalizable to similar patients with AF, 10 but currently beta blockers remain the preferred rate control agent in HFREF. • Digoxin is suggested in patients unable to tolerate beta blockers; this may include patients with acute decompensated HF in whom the negative inotropic effect of beta blockers may exacerbate congestion. • The role of cardiac glycosides in HF and AF has lately become controversial due to reports suggesting increased mortality in HF and AF patients on digoxin therapy. 11 This evidence is based on observational studies and post hoc analyses and should therefore be viewed with these limitations in mind. So far, the DIG trial is the only randomised controlled trial (RCT) of digoxin in patients with HF and sinus rhythm, but it did not present a direct comparison between digoxin and other rate control agents; there is no RCT of digoxin in patients with AF. Moreover, there is the confounding bias that physicians are likely to prescribe digoxin in patients who are more unwell and may have a higher overall mortality anyway.
Nevertheless, it would be prudent to say that, until more robust data are available, digoxin should be used with caution in HF patients, keeping serum levels below 1.2 ng/mL. 12 On the other hand, rhythm control strategies (using anti-arrhythmic medications) have shown no benefit over rate control in terms of mortality, stroke prevention or hospitalisation. However, they can be reserved for patients who are intolerant of rate control medications or remain symptomatic despite adequate rate control. A number of trials have looked at rate versus rhythm control. The AF-CHF (Atrial Fibrillation in Congestive Heart Failure) trial prospectively randomized 1376 HFREF patients to amiodarone or rate control medication. The cohort was followed up for 3 years looking at all-cause mortality, stroke and HF admission. The difference in mortality between the two arms was not significant (27% and 25% respectively). In addition, there was increased morbidity from torsades and bradycardia in the rhythm control arm. 13 Other rate-versus-rhythm trials have reported consistent findings. The latter, however, were not exclusive to HF patients and arguably were weighted towards a younger patient cohort who may be more symptomatic as well as in an earlier phase of the disease and thus likely to gain more from rhythm control as compared to a rate control strategy. It follows, therefore, that rhythm control should be reserved for patients who are particularly symptomatic with AF despite adequate rate control. Since medications used for rhythm control (e.g. class Ia and class Ic agents, dronedarone) are all associated with increased mortality in HFREF, any rhythm-control technique that obviates the need for antiarrhythmic agents offers a clear advantage. Pulmonary vein isolation using various catheter ablation techniques has emerged as an encouraging option in this regard. Its role in HF has shown early promise but remains to be defined, 16 particularly in terms of long-term prognosis. To date, a number of small trials have shown very promising results 17,18 (as risks of the procedure are not increased and mortality appears to be improved), while larger randomised trials are underway. 19,20 Depending upon the results, AF ablation may potentially become an important first-line option in patients with HF. Finally, surgical ablation techniques (such as the Cox maze procedure) are available to patients undergoing cardiac surgery. They have been shown to be safe and effective, including in patients with HF. Pacing strategies can also be employed, both in patients with AF and a slow ventricular response (bradycardia pacing with subsequent rate control medications) and in patients with refractory AF and a fast ventricular response (pacemaker implantation with subsequent AV node ablation). This "pace-and-ablate" strategy, however, does not eliminate AF per se. Studies conducted so far are small, non-randomised and mostly from single centres. Results are promising but further, larger trials are warranted. Lastly, the development of newer anti-arrhythmic drugs such as selective atrial-specific ion-channel blockers (e.g., vernakalant) may offer an advantage over previously available ones, but their role in the HF population needs further study. b) Thromboprophylaxis Patients with AF and HF have their risk of stroke and systemic embolism doubled as compared to either condition alone.
21 Hence, thromboprophylaxis with oral anticoagulants is of paramount importance and has been shown to be safe and effective. 22 Warfarin is roughly three times as effective as aspirin, and novel oral anticoagulants (NOACs) are at least as good as warfarin. NOACs include inhibitors of either thrombin (dabigatran) or activated factor X (apixaban, edoxaban and rivaroxaban). Results from the major trials (RELY, ARISTOTLE, ENGAGE AF-TIMI 48 and ROCKET, respectively) have been encouraging. 23 Dabigatran was the first NOAC introduced into clinical practice. Comparison with warfarin in the RELY trial showed that the 110 mg twice daily dose is superior for bleeding but non-inferior for thromboembolic protection, while 150 mg twice daily is non-inferior for bleeding but superior for thromboembolic protection. Apixaban, on the other hand, has been shown to be superior to warfarin in efficacy and associated with less gastrointestinal, intracranial and other major bleeding. Other factor X inhibitors are non-inferior to warfarin and associated with less intracranial and other major bleeding but higher gastrointestinal haemorrhage. Rivaroxaban and edoxaban have the added advantage of being once daily as well. Importantly, several NOAC reversal agents are currently in development, and idarucizumab (a fully humanized monoclonal antibody fragment) has recently received global approval as a specific reversal agent for dabigatran. 24 Finally, a small number of patients are unable to receive oral anticoagulation due to drug intolerance or bleeding contra-indications. Left atrial appendage occlusion devices (such as the Watchman device) hold promise in such cases but need further experience and long-term data. 25 CONCLUSIONS The prevalence of both AF and HF is increasing and they frequently co-exist. Concurrence of AF and HF is associated with a higher risk of morbidity and mortality than either condition alone. There is no clear-cut evidence so far that rhythm control is superior (in terms of long-term mortality) to rate control. Radiofrequency catheter intervention techniques are promising, and it is hoped that larger trials (looking at outcome data) will help incorporate these into the standard management algorithm. Finally, novel oral anticoagulants are a welcome addition to the therapeutic armamentarium available to the clinician. Future insights into mechanisms of disease and the development of new therapeutic modalities continue to hold promise in this challenging field.
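As a brief aside to the CHA2DS2-VASc weighting mentioned under stroke risk above, the point scheme itself is simple enough to sketch in code. The following Python fragment is a minimal illustration of the widely published scoring components, not clinical software; the function and parameter names are ours rather than anything defined in the article.

```python
def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 stroke_or_tia, vascular_disease):
    """Minimal CHA2DS2-VASc stroke-risk score for a patient with AF."""
    score = 0
    score += 1 if chf else 0               # C: congestive HF / LV dysfunction
    score += 1 if hypertension else 0      # H: hypertension
    if age >= 75:                          # A2: age >= 75 scores two points
        score += 2
    elif age >= 65:                        # A: age 65-74 scores one point
        score += 1
    score += 1 if diabetes else 0          # D: diabetes mellitus
    score += 2 if stroke_or_tia else 0     # S2: prior stroke/TIA/embolism
    score += 1 if vascular_disease else 0  # V: vascular disease
    score += 1 if female else 0            # Sc: female sex category
    return score

# Example: a 70-year-old woman with HF and hypertension scores 4.
print(cha2ds2_vasc(age=70, female=True, chf=True, hypertension=True,
                   diabetes=False, stroke_or_tia=False,
                   vascular_disease=False))
```

The example illustrates the point made in the text: the HF component alone contributes 1 point, so HF patients who develop AF carry a non-zero score before any other risk factor is counted.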
2018-04-03T00:36:17.947Z
2016-06-01T00:00:00.000
{ "year": 2016, "sha1": "5e31d9872a3b31defffef1b9253a78837fa669a1", "oa_license": null, "oa_url": "https://www.rcpjournals.org/content/clinmedicine/16/3/272.full.pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "9656d7f63ea3e7a8524f9de926c1a4787a723cf9", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
83118672
pes2o/s2orc
v3-fos-license
Induction of Autophagic Cell Death by Targeting Bcl-2 as a Novel Therapeutic Strategy in Breast Cancer Breast cancer is the second leading cause of tumor-related death in women in Western countries. It has been estimated in recent reports that this year about 207,000 women will be diagnosed with breast cancer and about 40,000 will die of it in the US (Jemal et al., 2010). While 5-year survival in early stages is about 90%, it is as low as 15% in the metastatic stage. Despite the fact that there are many agents to treat breast cancer, most of the tumors ultimately become unresponsive to these systemic therapeutics (Alvarez et al., 2010). Therefore, new therapeutic strategies, either alone or in combination with conventional therapies, are required to improve the survival rates of breast cancer patients. In contrast to apoptosis, autophagic cell death does not involve classic DNA laddering and is believed to be a result of an extensive autophagic degradation of intracellular content (Lockshin RA, Zakeri Z, 2007). Studies showed that cytotoxic signals can induce autophagy in cells that are resistant to apoptosis (apoptosis defective), such as those expressing high Bcl-2 or Bcl-XL, those lacking Bax and Bak, or those being exposed to pan-caspase inhibitors, such as zVAD-fmk (Shimizu et al, 2004). The proapoptotic Bcl-2 family member proteins Bak and Bax regulate the intrinsic apoptotic pathway by causing mitochondrial outer membrane permeabilization and cytochrome c release. Bax and Bak (-/-) knockout fibroblast cells have been shown to be resistant to apoptosis and to undergo autophagic cell death after the induction of death stimuli such as starvation, growth factor withdrawal, chemotherapy (etoposide) or radiation (Moretti et al, 2007). The evidence suggests that autophagy leads to cell death in response to several compounds, including rottlerin (Akar et al, 2007), cytosine arabinoside (Xue et al, 1999), etoposide and staurosporine, as well as growth factor deprivation (Xue et al, 1999). A link between autophagy and related autophagic cell death has been demonstrated using pharmacological (e.g., 3-MA) and genetic (silencing of ATG5, ATG7 and Beclin-1) approaches for suppression of autophagy. For example, the knockdown of ATG5 or Beclin-1 in cancer cells containing defects in apoptosis led to a marked reduction in autophagic cell death (and autophagic response) in response to cell death stimuli, with no sign of apoptosis (Akar et al, 2007). Studies also suggest that apoptosis and autophagy are linked by effector proteins (e.g., Bcl-2, Bcl-XL, Mcl-1, ATG5, p53) and common pathways (e.g., PI3K/Akt/mTOR, NF-kB, ERK) (Akar et al, 2007; Yousefi et al, 2006; Shimizu et al, 2004; Akar et al, 2008). Overall, there is evidence that autophagy may function as a type II PCD in cancer cells in which apoptosis is defective or hard to induce. Therefore, it is reasonable to propose the notion that the induction of autophagic cell death may be used as a therapeutic strategy to treat cancer (Dalby et al, 2010; Ozpolat et al, 2007).
Targeting autophagy as a novel cancer therapy Autophagy can be used as a new therapeutic strategy either by inducing autophagic cell death or by inhibiting protective autophagy, depending on the context (Dalby et al, 2010). Apoptosis defects such as a lack of caspase 3, or apoptosis resistance such as overexpression of anti-apoptotic proteins, lead to resistance to chemotherapy, radiotherapy or some other anticancer agents. Upregulation of the expression of several antiapoptotic Bcl-2 family protein members, including Bcl-2 and Bcl-XL, prevents cells from undergoing apoptosis induced by death ligands or chemotherapeutic drugs (Bardeesy & DePinho, 2002; Simoes-Wust et al., 2002). Either the induction or the inhibition of autophagy can provide therapeutic benefits to patients, and the design and synthesis of modulators of autophagy may provide novel therapeutic tools and may ultimately lead to new therapeutic strategies in cancer. Defects in apoptosis lead to increased resistance to chemotherapy, radiotherapy, some anticancer agents and targeted therapies. Therefore, induction of autophagic cell death may be an ideal approach in those cancers that are resistant to apoptosis induced by anticancer therapies (e.g., chemotherapy, radiation). As explained in the previous section, cancer cells can undergo autophagic cell death when their apoptosis is inhibited or they are resistant to therapy-induced apoptosis (e.g., in response to DNA-damaging agents such as etoposide), suggesting that autophagic cell death can be induced as an alternative cell death mechanism when cells fail to undergo apoptosis. Therefore, induction of autophagic cell death may serve as a novel therapeutic tool to eliminate cancer cells with defective apoptosis, which is the case in many advanced, drug-resistant and metastatic cancers (Dalby et al, 2010). We have recently demonstrated that the inhibition of some protein kinases (e.g., PKCδ in pancreatic cancer) or the targeting of key proteins that are involved in the suppression of autophagy (e.g., Bcl-2, TG2) can trigger autophagic cell death without any other treatment (Akar et al., 2008; Akar et al., 2007; Ozpolat et al., 2007). On the other hand, because a number of cancer therapies, such as radiation therapy, chemotherapy and targeted therapies (e.g., imatinib), induce autophagy as a protective resistance mechanism for cancer cell survival, the inhibition of autophagy can be used to enhance the efficacy of anticancer therapies.
Bcl-2 The Bcl-2 proto-oncogene encodes a 26-kDa protein and is overexpressed in 40-80% of breast cancer patients and more than half of all human cancers (Hellemans et al., 1995; Oh et al., 2011). Bcl-2 belongs to a gene family consisting of several anti-apoptotic (such as Bcl-2, Bcl-XL, Mcl-1) and pro-apoptotic members (such as Bax, Bak, Puma). The balance between pro- and anti-apoptotic proteins determines the cell's fate, to survive or die. Some studies suggest that enhanced Bcl-2 expression is associated with improved survival in human colon cancer (Buglioni et al., 1999; Meterissian et al., 2001) and breast cancer (Cheng et al., 2004). The role of Bcl-2 in cancer cells was shown to be related to its ability to promote tumorigenesis through interfering with apoptosis and autophagy (Reed, 1995; Oh et al., 2011). It has been demonstrated that Bcl-2 overexpression leads to an aggressive tumor phenotype in patients with a variety of cancers, as well as to the resistance of cancer cells against chemotherapy, radiation, and hormone therapy (Bishop, 1991; Reed, 1995). Recently, Buchholz et al. found that 61% of the breast cancer patients treated at MD Anderson Cancer Center were Bcl-2 positive and that they had a poor response to chemotherapy compared to those with less Bcl-2 expression (Buchholz et al., 2005). The figure below summarizes the novel functions of Bcl-2 in cancer cells, including metastasis, survival and tumor progression; these functions will be explained in the following sections. Overall, Bcl-2 overexpression confers drug resistance, an aggressive clinical course, and poor survival in patients (Patel et al., 2009; Pusztai et al., 2004). Bcl-2 as an inhibitor of apoptosis and autophagy Bcl-2 family proteins work in pairs with their proapoptotic counterparts; for example, Bcl-2 heterodimerizes with Bax, and Bcl-XL with Bak. Proapoptotic members of this family are mostly localized to the cytosol. Following a death signal, the proapoptotic members undergo a conformational change that enables them to target and integrate into membranes, particularly the mitochondrial outer membrane (Gross et al., 1999). Anti-apoptotic Bcl-2, in contrast, is predominantly a mitochondrial protein, and it can prevent the mitochondrial changes that take place during apoptosis, including loss of mitochondrial membrane potential, release of the mitochondrial proteins cytochrome c and apoptosis-inducing factor (AIF), and opening of the mitochondrial permeability transition pore, a large-conductance pore that forms in mitochondria after necrotic and apoptotic signals, after which cytochrome c is released and caspases 9 and 3 are activated (Gross et al., 1999). Therefore, downregulation of Bcl-2 reduces the apoptotic threshold and leads to the induction of apoptosis. Tormo et al. have shown the induction of apoptosis by lipid-incorporated Bcl-2 antisense in transformed follicular lymphoma cells (Tormo et al., 1998). siRNA-based inhibition of Bcl-2 also increased apoptosis in MCF7 breast cancer cells (Lima et al., 2004). Regulation of autophagy and apoptosis through the crosstalk between Bcl-2 and Bcl-XL may determine the predominant response to anticancer therapies.
Recently, Pattingre et al. have reported that stable transfection of Bcl-2 in HT29 colon carcinoma cells inhibited starvation-induced autophagy and decreased the association of Beclin-1 and Vps34 and the magnitude of Beclin-1-associated class III phosphoinositide 3-kinase activity (Pattingre & Levine, 2006). The proposed mechanism is that Beclin-1 has a BH3 domain that is required to bind to Bcl-2 and Bcl-XL for Bcl-2-mediated inhibition of autophagy (Boya & Kroemer, 2009) (see the Figure) (Dalby et al, 2010). It has been shown that the pharmacological BH3 mimetic ABT737 competitively inhibited the interaction between Beclin-1 and Bcl-2/Bcl-XL, antagonized autophagy inhibition by Bcl-2/Bcl-XL and hence stimulated autophagy (Maiuri et al., 2007). A recent study demonstrated that the anti-autophagic property of Bcl-2 is a key feature of Bcl-2-mediated tumorigenesis (Oh et al, 2011). MCF7 cells expressing a Bcl-2 mutant defective in apoptosis inhibition but competent for autophagy suppression grew in vitro and in vivo as efficiently as cells expressing wild-type Bcl-2. The growth-promoting activity of this Bcl-2 mutant is strongly correlated with its suppression of autophagy in xenograft tumors, suggesting that the oncogenic effect of Bcl-2 arises from its ability to inhibit autophagy but not apoptosis. Recent studies also suggested that silencing of Bcl-2 by siRNA induced autophagic cell death (up to 55%) in the estrogen receptor (+) MCF7 breast cancer cell line, but not apoptotic cell death (Akar et al., 2008). An increase in autophagy, with an increased number of puncta in GFP-LC3-transfected cells, increased LC3-II formation and acridine orange accumulation in autophagosomes, as well as induction of autophagy genes (e.g., ATG5 and BECN1), was observed in response to Bcl-2 silencing. We further blocked autophagy with siRNA against ATG5 (autophagy-related gene 5), and inhibition of ATG5 significantly blocked Bcl-2 siRNA-induced cell killing, confirming autophagic cell death (Akar et al., 2008). Induction of the autophagic cell death pathway by Bcl-2 silencing is most likely related to the caspase-3 deficiency of MCF-7 cells, which presents a higher threshold for the induction of apoptosis; additionally, we reported that doxorubicin induces autophagy and apoptosis. These findings led to the hypothesis that apoptosis-resistant cancer cells can be killed by autophagic cell death as an alternative death mechanism, and that this strategy may be used as a therapeutic intervention, with targeted silencing of genes to induce autophagic cell death. It is important to recognize the conditions and genetic makeup of the cells in order to induce autophagic cell death. Furthermore, doxorubicin at a high dose (IC95) induced apoptosis, but at a low dose (IC50) it induced only autophagy and Beclin-1 expression. In addition, when combined with chemotherapy (doxorubicin), therapeutic targeting of Bcl-2 by siRNA induced significant growth inhibition (83%) and autophagy in about 80% of the MCF-7 breast cancer cells (Akar et al., 2008). We also found that in vivo targeted silencing of Bcl-2 by systemically administered nanoliposomal Bcl-2 siRNA induced autophagy and tumor growth inhibition in mice bearing MDA-MB231 tumors (Tekedereli et al, in press). These results provided the first evidence that targeted silencing of Bcl-2 induces autophagic cell death in breast cancer cells and that Bcl-2 siRNA may be used as a therapeutic strategy, alone or in combination with chemotherapy, in breast cancer cells that overexpress Bcl-2.
Bcl-2 induces cell proliferation and cell cycle progression We have shown that silencing Bcl-2 decreased clonogenicity and inhibited cell proliferation, either alone or in combination with doxorubicin, a widely used anti-cancer agent, in the estrogen receptor (+) MCF7 breast cancer cell line (Akar et al., 2008). We did not observe growth inhibition in Bcl-2-negative MDA-MB-453 cells after treatment with the siRNA, suggesting that Bcl-2 siRNA specifically inhibits growth of Bcl-2-overexpressing breast cancer cells (Akar et al., 2008). We also showed that Bcl-2 knockdown inhibited clonogenicity and cell proliferation in estrogen receptor (-) MDA-MB-231 cells (unpublished data). Emi et al. reported 50-70% proliferation inhibition by Bcl-2 antisense oligonucleotides (ASO) in BT-474 and ZR-75-1 breast cancer cells. They also showed that pretreatment with Bcl-2 antisense led to a 2.5- to 10-fold increase in sensitivity to chemotherapy with either doxorubicin, mitomycin C, docetaxel or paclitaxel in MDA-MB-231, BT-474 and ZR-75-1 breast cancer cell lines in vitro (Emi et al., 2005). Inhibition of Bcl-2 expression by ASO has been shown to inhibit colony formation in AML progenitor cells (Konopleva et al., 2000). Inhibition of Bcl-2 by ASO led to arrest in the G1 phase of the cell cycle in the PC3 prostate cancer cell line (Anai et al., 2007). Some other recent studies in breast cancer experimental models have also demonstrated that in vitro and in vivo downregulation of Bcl-2 by ASO enhanced the sensitivity to chemotherapeutic drugs such as doxorubicin, paclitaxel, mitomycin C and cyclophosphamide, suggesting that downregulation of Bcl-2 may be a useful strategy to prevent drug resistance and enhance chemosensitivity (Emi et al., 2005; Tanabe et al., 2003). In melanoma, lymphoma and breast cancer xenografts, pretreatment with Bcl-2 antisense enhanced the antitumor activity of various chemotherapeutic agents such as cyclophosphamide, dacarbazine and docetaxel (Nahta & Esteva, 2003). George et al. reported that Bcl-2 siRNA combined with taxol (100 nM) increased apoptotic cells in the TUNEL assay to up to 70%, compared to 30% in the taxol-alone (100 nM) group, in human glioma cells (George et al., 2009). There are several conflicting studies on the effect of Bcl-2 on cell proliferation. Huang et al. (Huang et al., 1997) have shown that Bcl-2 mutated in the BH4 domain, which did not interfere with the ability of Bcl-2 to inhibit apoptosis, led starved quiescent cells to enter the cell cycle much faster than cells expressing the wild-type protein upon stimulation with cytokine or serum. It has also been suggested that whereas Bcl-2 deficiency caused accelerated cell cycle progression, increased levels of Bcl-2 led to a retarded G0-to-S transition in T cells (Linette et al., 1996). It has also been reported that Bcl-2 delayed cell cycle progression by regulating S phase in ovarian carcinoma cells (Belanger et al., 2005). On the other hand, it has been reported that downregulation of Bcl-2 expression by anti-sense oligonucleotide did not change prostate cancer cell proliferation (Anai et al., 2007). Lima et al.
have reported that inhibition of Bcl-2 by siRNA led to a decrease in viable cells when compared to the control group. However, when they further analyzed the cells with a BrdU proliferation assay, there was no significant difference between the groups, and they concluded that the decreased cell number was due to the spontaneous induction of apoptosis in the MCF7 breast cancer cell line (Lima et al., 2004). Holle et al. used a T7 promoter-driven siRNA expression vector system that targets Bcl-2 mRNA in MCF-7 human cancer cells, and inhibition of Bcl-2 expression inhibited cell proliferation and induced apoptosis (Holle et al., 2004). Bcl-2 induces angiogenesis and metastasis Recent studies suggested that Bcl-2 plays roles in metastasis, angiogenesis and autophagy. It is now established that angiogenesis plays an important role in the growth of solid and hematological tumors. Bcl-2 has been shown to induce VEGF expression, which plays a main role in angiogenesis by regulating differentiation, migration and proliferation of endothelial cells through interaction with its receptors. Moreover, VEGF has also been recently shown to be a survival factor for both endothelial and tumor cells, preventing apoptosis through the induction of Bcl-2 expression (Biroccio et al., 2000; Fernandez et al., 2001; Iervolino et al., 2002; Nor et al., 2001). Anai et al. (Anai et al., 2007) recently showed for the first time that knock-down of Bcl-2 by ASO leads to inhibition of angiogenesis in human prostate tumor xenografts. Bcl-2 ASO decreases rates of angiogenesis and proliferation by inducing G1 cell cycle arrest and apoptosis. This was the first study to show that therapy directed at Bcl-2 affects the tumor vasculature. An increase in the angiogenic potential of tumor cells after Bcl-2 transfection was also observed using different in vivo assays. In addition, Bcl-2 overexpression increases VEGF promoter activity through the HIF-1α transcription factor (Iervolino et al., 2002). Indeed, Bcl-2 increases nuclear factor κB (NF-κB) transcriptional activity in the MCF7 ADR line (Ricca et al., 2000). Since NF-κB signaling blockade has been demonstrated to inhibit in vitro and in vivo expression of VEGF, it is possible that Bcl-2 affects VEGF expression through modulation of the activity of NF-κB or other transcription factors. Bcl-2 overexpression increases the metastatic potential of the MCF7 ADR breast cancer cell line by inducing cellular invasion and migration, in vitro and in vivo (Del Bufalo et al., 1997; Ricca et al., 2000). It has also been shown that Bcl-2 is involved in the tumorigenicity, invasion, migration, and metastasis of different tumors (Takaoka et al., 1997; Wick et al., 1998). In glioma cell lines, Bcl-2 expression has been shown to correlate with matrix metalloproteinase-2 (MMP-2), and therefore with invasiveness (Wick et al., 1998). On the other hand, the in vivo aggressiveness of tumors derived from cells overexpressing Bcl-2 is much greater than that of tumors derived from cells which do not; this has been attributed to the anti-apoptotic properties of Bcl-2 (Fernandez et al., 2001). Zuo et al.
demonstrated a decrease in epithelial markers such as desmoglein-3, zonula occludens-1, cytokeratin and E-cadherin, an increase in mesenchymal markers such as N-cadherin, vimentin and fibronectin, and a transition from a cobblestone to a scattered appearance with increased Bcl-2 expression. Therefore, they suggested that Bcl-2 overexpression induced epithelial-to-mesenchymal transition and enhanced the mobility and invasive character of HSC-3 human squamous carcinoma cells by promoting persistent ERK signaling and elevating MMP-9 production. Wang et al. have shown that a small-molecule Bcl-2 inhibitor, TW-37, led to increased apoptosis and decreased MMP-9 and VEGF gene transcription and activity, and consequently inhibited tumor growth in a pancreatic cancer model (Wang et al., 2008). It has been shown that Bcl-2 upregulation in tumor-associated endothelial cells was sufficient to enhance tumor progression in vivo (Nor et al., 2001). The same group also showed that Bcl-2 expression was significantly elevated in tumor blood vessels from head and neck cancer patients as compared to control samples; when they compared Bcl-2 expression in tumor blood vessels from lymph node-positive and lymph node-negative cancer patients, they found that lymph node-positive patients had a significantly higher number of Bcl-2-positive blood vessels (Kumar et al., 2008). They showed in human head and neck cancer specimens that Bcl-2 expression in tumor-associated endothelial cells was directly linked to metastasis; they further found, in an in vivo SCID mouse model, that tumors with Bcl-2-expressing endothelial cells showed a significant increase in lung metastasis, suggesting that Bcl-2 expression mediated metastasis through increases in angiogenesis, tumor cell invasion and blood vessel leakiness (Kumar et al., 2008).
Bcl-2 as a candidate for targeted therapy in breast cancers The anti-apoptotic and anti-autophagic protein Bcl-2 has been proposed as an excellent therapeutic target in various cancers to overcome resistance to conventional therapies and enhance the effects of these therapies. Previous studies suggested that downregulation of Bcl-2 by ASO enhances sensitivity to chemotherapeutic drugs, such as doxorubicin, paclitaxel, mitomycin C and cyclophosphamide, in breast cancer experimental models, suggesting that downregulation of Bcl-2 may be a useful strategy to prevent drug resistance and enhance chemosensitivity (Emi et al., 2005; Tanabe et al., 2003). In clinical studies, the Bcl-2-specific ASO oblimersen yielded somewhat disappointing results and toxicity (Tanabe et al, 2003). siRNA has been shown to be 10- to 100-fold more potent than ASO at inducing degradation of its target mRNA, shutting down protein expression (Bertrand et al, 2002). Efficient in vivo delivery of siRNA-based therapeutics into tumors remains a great challenge. Traditionally, cationic (positively charged) liposomes have been used as nonviral delivery systems for oligonucleotides (e.g., plasmid DNA, ASO, and siRNA). However, their effectiveness as potential carriers for siRNA has been limited due to toxicity. We recently developed non-toxic, neutrally charged 1,2-dioleoyl-sn-glycero-3-phosphatidylcholine (DOPC)-based nanoliposomes (mean size 65 nm), leading to significant and robust target gene knockdown in animal models of human tumors (Landen et al, 2005). We found that liposomal siRNA targeting Bcl-2 led to 73% and 61% inhibition of Bcl-2 protein expression on day 4 and day 6, respectively, in MDA-MB-231 tumors in mice (Tekedereli et al, in press), indicating that Bcl-2 siRNA therapeutics can successfully inhibit overexpressed proteins as an in vivo therapeutic modality in cancer. Bcl-2 expression in prognosis of breast cancer patients Overexpression of Bcl-2 occurs in about 40 to 80% of human breast tumors (Doglioni et al., 1994; Hellemans et al., 1995; Joensuu et al., 1994) and confers drug resistance, an aggressive clinical course, and poor survival in patients (Reed, 1995). Recently, Buchholz et al. found that 61% of breast cancer patients are Bcl-2 positive and that patients with positive Bcl-2 expression had a poor response to chemotherapy compared to those with less Bcl-2 expression (Buchholz et al., 2005). Because most Bcl-2-positive breast cancers express estrogen and/or progesterone receptors and respond to hormonal therapy, Bcl-2 does not seem to be an independent prognostic marker in short-term (5-year) follow-up (Joensuu et al., 1994). Bcl-2 also fails to maintain its prognostic relationship in breast cancer when considered in multivariate analyses and long-term follow-up studies (Daidone et al., 1999; Joensuu et al., 1994). Lack of Bcl-2 expression was associated with a higher probability of complete pathological response to doxorubicin-based chemotherapy (Pusztai et al., 2004). Antiestrogens such as tamoxifen and ICI 164384 promote apoptosis by downregulating Bcl-2 without affecting Bax, Bcl-XL or p53 (Kumar et al., 2000), and Bcl-2 upregulation plays a role in resistance to estrogens (Teixeira et al., 1995). Overall, these data suggest that tumors with decreased levels of Bcl-2 had a better response to chemotherapy and hormonal therapy, and that targeting Bcl-2 is a viable strategy.
Concluding remarks Apoptotic (type I) and autophagic (type II) programmed cell death play crucial roles in such physiological processes as development, homeostasis and the elimination of unwanted or cancerous cells. Autophagy is characterized by the sequestration of cytoplasmic contents through the formation of double-membrane vesicles (autophagosomes). Subsequently, the autophagosomes merge with lysosomes and digest the sequestered organelles, leading to cell death if autophagy is induced excessively. In contrast to apoptosis, autophagic cell death is caspase-independent and does not involve classic DNA laddering (Ng & Huang, 2005). Inhibition of autophagy can be used as a therapeutic strategy where autophagy is induced as a protective mechanism, while induction of autophagic cell death can be used as a therapeutic strategy where apoptosis is defective or anti-apoptotic proteins are overexpressed.
2017-09-15T21:03:58.735Z
2012-02-29T00:00:00.000
{ "year": 2012, "sha1": "4712a0fad3beea35fb4717b70616fe155df168c7", "oa_license": "CCBY", "oa_url": "https://www.intechopen.com/citation-pdf-url/29437", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "89e54b3b5d7d2e160832306723f1ea557420aa72", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
232775826
pes2o/s2orc
v3-fos-license
A Missense Mutation in the KLF7 Gene Is a Potential Candidate Variant for Congenital Deafness in Australian Stumpy Tail Cattle Dogs Congenital deafness is prevalent among modern dog breeds, including Australian Stumpy Tail Cattle Dogs (ASCD). However, in ASCD, no causative gene has been identified so far. Therefore, we performed a genome-wide association study (GWAS) and whole genome sequencing (WGS) of affected and normal individuals. For GWAS, 3 bilateral deaf ASCDs, 43 herding dogs, and one unaffected ASCD were used, resulting in 13 significantly associated loci on 6 chromosomes, i.e., CFA3, 8, 17, 23, 28, and 37. CFA37 harbored a region with the most significant association (−log10(9.54 × 10^−21) = 20.02) as well as 7 of the 13 associated loci. For whole genome sequencing, the same three affected ASCDs and one unaffected ASCD were used. The WGS data were compared with 722 canine controls and filtered for protein coding and non-synonymous variants, resulting in four missense variants present only in the affected dogs. Using effect prediction tools, two variants remained with predicted deleterious effects within the Heart development protein with EGF like domains 1 (HEG1) gene (NC_006615.3: g.28028412G>C; XP_022269716.1: p.His531Asp) and Kruppel-like factor 7 (KLF7) gene (NC_006619.3: g.15562684G>A; XP_022270984.1: p.Leu173Phe). Due to its function as a regulator in heart and vessel formation and cardiovascular development, HEG1 was excluded as a candidate gene. On the other hand, KLF7 plays a crucial role in the nervous system, is expressed in the otic placode, and is reported to be involved in inner ear development. 55 additional ASCD samples (28 deaf and 27 normal hearing dogs) were genotyped for the KLF7 variant, and the variant remained significantly associated with deafness in ASCD (p = 0.014). Furthermore, 24 dogs with heterozygous or homozygous mutations were detected, including 18 deaf dogs. The penetrance was calculated to be 0.75, which is in agreement with previous reports. In conclusion, KLF7 is a promising candidate gene causative for ASCD deafness. Introduction Deafness can cause several inconveniences for dogs (Canis familiaris, CFA), as more attention is required to avoid undetected danger. Deaf dogs are not suitable as working dogs because their training is more challenging than for normal hearing dogs. In addition, they are more likely to be startled and show more tendency to bite [1]. More than 100 modern dog breeds have been reported to be affected by congenital deafness [2]. Hence, deafness seems to be a common disorder among dogs, particularly in breeds such as the Dalmatian, Bull Terrier, English Setter, English Cocker Spaniel, and Australian Cattle Dog [3]. Hearing loss or deafness can be categorized mainly by five criteria in dogs, including (1) cause (genetic or nongenetic, inherited or acquired) and (2) association with other diseases. The Australian Stumpy Tail Cattle Dog (ASCD) is a unique breed with a natural bob-tail, which should be distinguished from the Australian Cattle Dog breed. ASCD is alert, watchful and obedient, and talented in working and controlling cattle. It has been recognized as a standardized breed since 1963 by the Australian National Kennel Council. For a long time, general opinion held that the origins of the Australian Stumpy Tail Cattle Dog arose from European herding dogs and the Australian Dingo.
However, recently it has been suggested that the ancestors of the Australian Stumpy Tail Cattle Dog and the Australian Cattle Dog, sharing a common origin, arrived in Australia with early free settlers, as their unidentified companions, between 1788 and c. 1800 (Clark, Noreen R. A Dog for the Job. (in prep. 2020)). Each pup should undergo a BAER test because this breed has a high deafness prevalence (https://www.akc.org/dog-breeds/australian-stump-tail-cattle-dog/ (accessed on 24 March 2021)). A research study of 315 ASCDs showed the incidence of congenital sensorineural deafness was 17.8% [12]. There was no evidence that congenital sensorineural deafness in ASCD has a left/right asymmetry or a sex-specific pattern, but there was a significant correlation between red (over blue) coat color and deafness [12]. No unique causative variants have been identified so far for any dog breeds, possibly in part due to the fact that deafness appears to be a comparatively heterogeneous disease as described above. In addition, there are several hypotheses about the inheritance pattern of congenital sensorineural deafness (reviewed by [1]). In Border Collies, for instance, Ubiquitin Specific Peptidase 31 (USP31) and RB Binding Protein 6 (RBBP6) have been associated with adult-onset deafness [13], whereas in the Doberman Pinscher, an insertion in Protein Tyrosine Phosphatase Receptor Type Q (PTPRQ) and a missense variant in Myosin VIIA (MYO7A) have been shown to be causative for a form of deafness that includes vestibular disease [14,15]. Although chromosomes 2 (CFA2), 6, 14, 17, 27, and 29 have been associated with hearing loss in Dalmatians, no causative variants have been identified so far [16]. In ASCD, congenital sensorineural deafness has been linked to a chromosomal region on CFA10 [12]. However, within a potential candidate gene Sry-related Hmg-box gene 10 (SOX10) located in this region, no causative alterations were detected. A recent genome-wide association study (GWAS) reported 14 chromosomes that were significantly associated with deafness in three canine breeds, and CFA3 was significantly associated with bilateral deafness in Australian Cattle Dogs [17]. In this study, three suggestive candidate genes near significantly associated regions were detected in these three dog breeds, including ATPase Na+/K+ Transporting Subunit Alpha 4 (ATP1A4), Transformation/Transcription Domain Associated Protein (TRRAP), and Potassium Inwardly Rectifying Channel Subfamily J Member 10 (KCNJ10) [17]. However, none have been convincingly identified as causative mutations. To extend the identification of potential candidate genes causing deafness in ASCD, we performed a genome-wide association study and whole genome sequencing (WGS) in deaf ASCD. We identified a unique missense variant in the Kruppel-like factor 7 (KLF7) gene significantly associated with deafness in ASCDs. This variant was absent in 722 dogs of bioproject PRJNA448733 (see below). As KLF7 plays an important role in the nervous system, is expressed in the inner ear, and seems to be involved in inner ear development [18,19], it was a convincing candidate for ASCD deafness. Ethical Statement The collection of dog blood samples was done by S. Sommerlad at the time of BAER testing. The collection of samples was approved by the "Niedersächsisches Landesamt für Verbraucherschutz und Lebensmittelsicherheit" (33.19-42502-05-15A506) according to §8a Abs. 1 Nr. 2 of the TierSchG.
All ASCDs were tested and sampled under approval of The University of Queensland's Animal Ethics Committee. Phenotyping and Samples Fifty-nine Australian Stumpy Tail Cattle Dogs (Table S1) from a previous study [12] were used in this study. BAER testing was performed on all 59 dogs [20]; 28 were normal hearing dogs and 31 were diagnosed as deaf, of which 10 were bilaterally deaf, 12 were left-sided deaf, and 9 were right-sided deaf (Table S1). Three bilaterally deaf ASCDs (#217, #253 and #330) and one control dog with normal hearing (#326) were used for next generation sequencing. Dog #326 was a littermate of #330. These four dogs were female and red in color; all but #330 had a speckled coat. DNA was extracted using a salting-out method as described [12]. All samples were pseudonymized using internal IDs. Furthermore, data from two repositories were used in this study. One repository contains Variant Call Format (VCF) data of 722 canine individuals (https://www.ncbi.nlm.nih.gov/bioproject/PRJNA448733 (accessed on 24 March 2021)) [21]. It consists of 144 established breeds, 11 samples with mixed breed, 26 samples with unknown breed status, 104 village and feral dogs from different regions, and 54 wild canids from six species. An additional dataset consisted of 590 samples including 582 dogs from 126 breeds and 8 wolves (https://www.ebi.ac.uk/ena/data/view/PRJEB32865 (accessed on 24 March 2021)) [22]. Next Generation Sequencing and Variant Calling A total of 1.0 µg DNA per ASCD sample was used as input material for the DNA library preparations. Sequencing libraries were generated using the NEBNext ® DNA Library Prep Kit following the manufacturer's recommendations, and indices were added to each sample. The genomic DNA was randomly fragmented to a size of 350 bp by shearing; the DNA fragments were then end polished, A-tailed, and ligated with the NEBNext adapter for Illumina sequencing, and further PCR enriched with P5 and indexed P7 oligos. The PCR products were purified (AMPure XP system), and the resulting libraries were analyzed for size distribution on an Agilent 2100 Bioanalyzer and quantified using real-time PCR. For #217, #253, #326 and #330, totals of 599,770,692, 723,624,660, 743,641,356 and 620,101,998 raw reads were obtained, respectively. Corresponding coverages were around 40× (paired-end reads, 2 × 150 bp). Next Generation Sequencing Data Analysis for Identification of Associated Variants Data after variant calling were analyzed with SNP & Variation Suite 8.8.3 (Golden Helix Inc., Bozeman, MT, USA). SNPs and indels were set to missing with read depth ≤ 10, genotype quality ≤ 15, or alt read ratios ≥ 0.15 for Ref_Ref, outside 0.3 to 0.7 for Ref_Alt, or ≤ 0.85 for Alt_Alt. Variants were analyzed using autosomal recessive and dominant models, respectively. In the autosomal recessive filtering model, the 3 deaf ASCDs were set as Alt_Alt and the control ASCD as Ref_Ref or Alt_Ref. In the autosomal dominant filtering model, the 3 deaf ASCDs were set as Alt_Alt or Alt_Ref and controls as Ref_Ref. To further narrow the range of candidate variants, we compared the common variants of the deaf ASCDs with 722 canine genomes to identify private variants. The shared variants in the three deaf dogs were filtered with BCFtools 1.9 using the 'isec' option. Private variants were annotated using SnpEFF software [31] to determine high (loss of function) and moderate (missense) impact variants (Ensembl transcripts release 101). These functional variants were further checked with the Integrative Genome Viewer (IGV) software to obtain true high-quality variants [32].
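The genotype-level quality rules and the two inheritance-model filters described above lend themselves to a compact sketch. The code below is a minimal illustration in Python using pysam; the authors actually used SNP & Variation Suite and BCFtools, the file name is a placeholder, and a standard GT/DP/GQ/AD FORMAT layout is assumed.

```python
import pysam  # illustrative stand-in for SNP & Variation Suite / BCFtools

CASES = ["217", "253", "330"]   # bilaterally deaf ASCDs
CONTROL = "326"                 # normal-hearing littermate of #330

def field(sample, key):
    """Return a FORMAT field value, or None if absent."""
    try:
        return sample[key]
    except KeyError:
        return None

def clean_genotype(sample):
    """Apply the paper's QC rules; return a genotype tuple or None (missing)."""
    gt = field(sample, "GT")
    if gt is None or None in gt:
        return None
    dp, gq, ad = field(sample, "DP"), field(sample, "GQ"), field(sample, "AD")
    if dp is None or dp <= 10 or gq is None or gq <= 15:
        return None
    if ad and sum(ad) > 0:
        alt_ratio = ad[1] / sum(ad)
        if gt == (0, 0) and alt_ratio >= 0.15:
            return None                    # too many alt reads for Ref_Ref
        if sorted(gt) == [0, 1] and not 0.3 <= alt_ratio <= 0.7:
            return None                    # Ref_Alt outside expected ratio
        if gt == (1, 1) and alt_ratio <= 0.85:
            return None                    # too few alt reads for Alt_Alt
    return gt

def recessive(gts):
    """Cases Alt_Alt; control Ref_Ref or Ref_Alt."""
    return (all(gts.get(c) == (1, 1) for c in CASES)
            and gts.get(CONTROL) in {(0, 0), (0, 1), (1, 0)})

def dominant(gts):
    """Cases carry at least one Alt allele; control Ref_Ref."""
    return (all(gts.get(c) is not None and 1 in gts[c] for c in CASES)
            and gts.get(CONTROL) == (0, 0))

vcf = pysam.VariantFile("ascd_joint_calls.vcf.gz")  # hypothetical path
for rec in vcf:
    gts = {name: clean_genotype(s) for name, s in rec.samples.items()}
    if recessive(gts) or dominant(gts):
        print(rec.chrom, rec.pos, rec.ref, rec.alts)
```

Filtering against the 722-genome control set then amounts to dropping any variant at which a control genome carries the alternate allele, which is what the BCFtools 'isec' step accomplishes.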
Variant effects were predicted by SIFT [33], PolyPhen-2 [34], and PROVEAN [35]. Genotyping of KLF7 Variant in Australian Stumpy Tail Cattle Dogs Targeted genotyping of the KLF7 missense variant was performed in 59 ASCDs by PCR amplification using primers cfa_KLF7_Ex3_F (5′-AGACTCTCTCAGCCGTGGAT-3′) and cfa_KLF7_Ex3_R (5′-GGCCAACTTGTACCACTACCT-3′), resulting in a 295 bp fragment. Genotyping of the PCR products was performed by RFLP analysis after cleavage with the restriction enzyme HinP1I (NEB). The wild type allele was cleaved into two fragments, 236 bp and 59 bp, while the homozygous mutant remained uncut. Allele and genotype frequency distributions were tested using Fisher's Exact Test in these 59 ASCDs. Allelic and genotypic odds ratios were calculated according to [36]. Investigation of Human Deafness Genes in 3 Deaf Australian Stumpy Tail Cattle Dogs Human hearing loss or deafness genes were queried using the online software GLAD4U with "hearing loss" and/or "deafness" as keywords [37]. After combining the three query results, 346 genes were chosen for further analysis (Table S3). The variants in these gene regions (including 1000 bp up- and downstream regions) were extracted by BCFtools from the VCF files of the three deaf ASCDs and annotated by SnpEFF software. Variants with high (loss of function) and moderate (missense) impacts were selected for further analysis (Ensembl transcripts release 101). The genotype information of the chosen variants was further checked in the 722 canines. Genome Wide Association Analysis The analysis was done using three bilaterally deaf female dogs from three different litters. The hearing status of the individuals determined using BAER is shown in Tables 1 and S1. The three affected ASCDs were compared with 44 control dogs. Thirteen SNPs on 6 chromosomes (CFA3, 8, 17, 23, 28, 37) were identified above the Bonferroni significance level. The QQ-plot indicated that some associations might be due to population substructure. Associated SNPs are shown in Figure 1 and summarized in Table 2. The majority of the significantly associated SNPs (7/13) were located on CFA37, including SNP chr37:44793 (position according to CanFam3.1) with the highest −log10 p-value = 20.02. A search for large structural variants (SVs) flanking the significantly associated regions on CFA3, 8, 17, 23, 28, and 37 using IGV was unsuccessful. Whole Genome Sequencing Reveals Four Potential Variants To further locate the candidate variants, next generation sequencing was performed in 3 deaf ASCDs (#217, #253, #330) and 1 normal hearing ASCD (#326). After quality control, a total of 4,208,002 SNPs and 2,298,760 indels were detected. In line with previous deafness studies, the sequence data were initially analyzed using a recessive model of inheritance. Using this model, 129,383 SNPs and 51,942 indels were detected. Using only variants that had been annotated and verified as mRNA transcripts (Ensembl release 101), 338 SNPs and 523 indels remained (Table S4). After filtering these variants against the 722 dog database, none of the homozygous Alt_Alt genotypes were exclusively present in the deaf ASCDs (Table S4). As there were no reports of such a high prevalence of deafness in the 722 control dogs, and it can be assumed that the majority of the controls were hearing, these variants were presumably not causative. As no associated variants were found using the recessive inheritance model, a dominant inheritance model was applied.
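Returning to the PCR-RFLP genotyping assay described at the start of this section, the expected banding pattern can be checked in silico. The following Biopython sketch is a minimal illustration under stated assumptions: the 295 bp amplicon sequence is not given in the paper, so the input file is a placeholder for a sequence extracted from CanFam3.1 around NC_006619.3:g.15562684.

```python
from Bio.Seq import Seq
from Bio.Restriction import HinP1I  # recognition site GCGC, cutting G^CGC

# Placeholder input: the amplicon produced by cfa_KLF7_Ex3_F/R.
amplicon = Seq(open("klf7_amplicon_295bp.txt").read().strip())

fragments = HinP1I.catalyse(amplicon)   # in-silico digest
sizes = sorted(len(f) for f in fragments)
print(sizes)
# Reported pattern: [59, 236] for the wild-type (G) allele and [295]
# (uncut) for the mutant (A) allele; a heterozygote shows all three
# bands (59/236/295) on the gel.
```

This mirrors the gel interpretation used in the paper: the G>A substitution abolishes the single HinP1I site in the amplicon, so loss of cutting reports the mutant allele.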
In this analysis, private variants only present in the three deaf ASCDs (Alt_Alt and Alt_Ref) compared to the 722 controls (Ref_Ref) were filtered, resulting in 270,980 SNPs and 351,927 indels. After quality control and functional annotation, 167 protein-changing variants (58 SNPs and 109 indels) remained (Table S5). These variants were further filtered against #326 (the normal hearing littermate of #330), assuming that this dog should be homozygous wild type under the supposed model. After this step, four missense variants remained as potential causative candidates (Table 3). Within the 722 control dogs, no homozygous Alt_Alt or heterozygous carriers were detected for these 4 missense variants. In an additional dataset consisting of 590 dog samples, only two heterozygous individuals (Brussels Griffon dogs) were identified for the Microtubule associated protein 6 (MAP6) gene variant. To deduce which of the variants could be causative for deafness, protein function prediction tools were used. As shown in Table 4, only the variants in Heart development protein with EGF like domains 1 (HEG1) and KLF7 were predicted to be deleterious by at least two of the prediction tools. To further confirm the causative potential of the two remaining variants, their amino acid conservation was analyzed in the same 7 species. The missense variant in the HEG1 gene (NC_006615.3: g.28028412G>C) resulted in an amino acid exchange of p.His531Asp (XP_022269716.1). In the KLF7 gene (NC_006619.3: g.15562684G>A), the variant led to an exchange of p.Leu173Phe (XP_022270984.1). Especially in KLF7, the amino acid position seems to be highly conserved across several different species, as shown in Figure 2 (alignments according to [38]; sequence logos are shown according to [39]). Genotyping of KLF7 Variant in ASCDs To verify the association of the KLF7 variant with ASCD congenital deafness, 27 normal hearing and 28 deaf ASCDs (21 unilaterally and 7 bilaterally deaf dogs) were used to investigate the KLF7 variant genotype distribution. As summarized in Table 5, 59 ASCDs including the 4 whole genome sequenced dogs were used to check the association of the KLF7 missense variant with ASCD deafness. Four dogs were homozygous carriers (A_A) and 14 were heterozygous (A_G) among the 31 deaf ASCDs. Within the 28 normal hearing ASCDs, 5 heterozygous and one homozygous carrier were detected. The penetrance of ASCD deafness was calculated to be 0.75. As determined by Fisher's exact test, homozygosity for the KLF7 variant was significantly associated with congenital deafness (p = 0.014). The odds ratio for A_A was 6.8 (95% CI [0.68, 67.25]), i.e., homozygous carriers are 6.8 times more likely to be deaf than wild type. Discussion Deafness is a common disorder among dogs, and the observed prevalence is highest in Dalmatians (29.9%) [3] and 17.8% in ASCD [12]. Even selective breeding based on deafness phenotyping decreased the prevalence in Dalmatians only to 17.8% [40]. Several other dog breeds also show rather high prevalence rates (>10%), e.g., the Australian Cattle Dog and Bull Terrier [3]. To accelerate the decline in the overall prevalence of congenital sensorineural deafness, it would be important to identify the genetic cause of the disorder to enable informed breeding. We used four ASCD DNA samples from a previous study of deafness in Australian Stumpy Tail Cattle Dogs for GWAS and WGS analysis. The previous study used a genome screen with 325 microsatellites (290 were used for linkage mapping) to determine a significantly linked deafness region on CFA10 [12].
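The penetrance and association figures quoted above can be reproduced directly from the genotype counts in Table 5. The SciPy sketch below is our reading rather than the authors' exact computation; in particular, the paper does not spell out which contingency table yielded p = 0.014, so the Fisher test shown (homozygous A_A versus wild-type G_G) was chosen because it reproduces the reported odds ratio of 6.8.

```python
from scipy.stats import fisher_exact

# Genotype counts from Table 5 (31 deaf, 28 normal-hearing ASCDs).
deaf    = {"A_A": 4, "A_G": 14, "G_G": 13}
hearing = {"A_A": 1, "A_G": 5,  "G_G": 22}

# Penetrance: fraction of A-allele carriers (A_A or A_G) that are deaf.
carriers_deaf = deaf["A_A"] + deaf["A_G"]                        # 18
carriers_all = carriers_deaf + hearing["A_A"] + hearing["A_G"]   # 24
print("penetrance =", carriers_deaf / carriers_all)              # 0.75

# Altered allele (A) frequency over all 59 dogs (118 alleles).
alt_alleles = 2 * (deaf["A_A"] + hearing["A_A"]) + deaf["A_G"] + hearing["A_G"]
print("allele frequency =", alt_alleles / 118)                   # ~0.2458

# Odds of deafness for A_A homozygotes versus wild-type G_G dogs.
table = [[deaf["A_A"], deaf["G_G"]],
         [hearing["A_A"], hearing["G_G"]]]
odds_ratio, p_value = fisher_exact(table)
print(odds_ratio)  # ~6.8, matching the reported genotypic odds ratio
```

Multiplying the allele frequency by the penetrance, as done in the discussion below, then gives 0.2458 × 0.75 ≈ 18.4%, close to the previously reported breed-wide deafness frequency of 17.8%.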
However, SOX10, the only potential candidate gene in this region, had to be excluded, as it did not harbor any causative variants. Another promising candidate in the CFA10 region, i.e., Trio- and F-actin-binding protein (TRIOBP), also had to be excluded. In the above-mentioned study, deafness was reported to be inherited in an autosomal recessive manner with incomplete penetrance [12]. As shown before, GWAS with multiple breeds can improve the accuracy of causative variant mapping [41,42]. Our analysis provided evidence for at least six highly associated chromosomal regions. However, due to the small number of affected dogs, some associated regions might have resulted from the close relationship of the dogs. This can be seen in the QQ-plot, which showed convincing evidence for an association with some indication of population substructure. In our study, more than half of the significantly associated SNPs (7 out of 13) were located on CFA37, including the most significantly associated SNP (chr37:44793, p = 9.54 × 10^−21). In a recent study of Dalmatian deafness, signals were also detected in this region [17]. However, there was no associated peak on CFA37 reported in the previous microsatellite-based study in ASCD. A possible explanation could be that there were only five microsatellite markers on CFA37, one of which had a low degree of polymorphism (3 alleles, PIC 0.5) [12]. This might have been insufficient to detect associations on this chromosome. An alternative explanation is that ASCD deafness may be heterogeneous. There may be more than one variant causing congenital deafness in this breed, and using limited family associations may reveal private mutations. Further genotyping analysis in a wider range of affected (28) and unaffected (27) ASCDs revealed that the KLF7 missense variant was still significantly associated with congenital deafness (Table 5). Furthermore, the penetrance of deafness in ASCD calculated based on the KLF7 variant was 0.75, which was in agreement with the previously calculated penetrance of 0.72 [12]. The altered allele (A) frequency is 24.58% (Table 5). If we take penetrance into consideration, the deafness frequency is (24.58% × 0.75) = 18.4%, which is also close to the previously reported overall ASCD breed deafness frequency of 17.8% [12]. Several homozygous wild type individuals were detected among the deaf ASCDs, suggesting additional genetic risk factors. This was not surprising, as canine congenital deafness seems to be a complex disorder and different regions have been detected in other GWASs for deafness so far [17]. According to our GWAS, functional relationships with deafness of genes near the significantly associated loci on most chromosomes were unapparent (Table 2). Only the region on CFA37 was further supported by WGS. In the initial GWAS, 651 variants on chromosome 37 (between CFA37:7217 and CFA37:30803691) were identified (Figure 1). Variant CFA37:15503029T>C, with a p-value of 8.61 × 10^−6, was only 12,534 bp distant from KLF7. To evaluate LD over-pruning and potential effects on resolution, we repeated the GWAS using less stringent pruning parameters (-indep 1000 5 4). This increased the number of associated variants to 60,746. In agreement with the previous analysis, a variant with −log10 p-value = 14.68 at position CFA37:15463045 remained in the vicinity of KLF7 (Table S6), and a significantly associated region spanning from CFA37:15463045 to CFA37:16433709 was detected harboring KLF7 (CFA37:15515563-15607345).
As expected, a further reduction of pruning stringency resulted in more chromosomal regions above the significance threshold (Figure S1). However, especially on CFA10, no significantly associated variants were identified. In addition, we applied whole genome sequencing to the deaf dogs and used a large number of available canine whole genome sequence data as controls to improve the accuracy and efficiency of causative variant identification. Several GWAS of canine complex hereditary deafness failed to identify causative variants, with the exception of two associated genes (MYO7A, PTPRQ) causative for a specific form of canine congenital bilateral deafness with vestibular disease [14,15]. For next generation sequence analysis in the present study, functional variants within coding regions were primarily considered due to their direct impact on protein function [43]. We filtered all variants using an autosomal recessive model; however, no functional variants fulfilled this mode of inheritance. Again, the chromosomal region 1 Mb up- and downstream of SOX10 (CFA10:25680441-27690530) was checked using IGV, but no deafness-associated variants, including larger structural variants, were identified. After WGS analysis and variant effect prediction, only two missense variants within HEG1 and KLF7 remained. HEG1 is involved in cardiovascular development [44] and therefore seemed unlikely to be involved in the development of deafness. However, the candidate variant (NC_006619.3: g.15562684G>A) in KLF7 (CFA37:15515563-15607345) was close to the significantly associated SNP CFA37:16399127 (p = 2.66 × 10^−8) (Table 2). KLF7 is a zinc finger transcription factor that has been reported to play a role in the nervous system and is vital for neuronal morphogenesis, functioning in axon outgrowth [18]. KLF7 was suggested to have potential functions in murine neurogenesis, such as neuronal differentiation and maturation [45]. KLF7 was also found to promote axon regeneration [46]. Furthermore, KLF7 is required for the development of sensory neurons [47], and it has been reported to play roles in neurotransmission and synaptic vesicle trafficking [48]. These two processes have important influences on the auditory system, and therefore disruption of KLF7 could lead to hearing impairment and dysfunction [49]. Indeed, KLF7 was confirmed to be expressed in the otic placode, which will develop into the ear, indicating that KLF7 could have an effect on ear development [19]. KLF7 was also identified as a fibroblast growth factor (FGF)-responsive factor in ear progenitor induction processes, which implies it may be involved in early ear induction [50]. KLF7 has been considered a high-quality candidate gene for human branchio-oto-renal syndrome, which is an autosomal dominant disease with hearing loss as one clinical sign [51]. KLF7 was the nearest gene (at a distance of 50,519 bp) to one significant signal in a GWAS of adult hearing difficulty [52]. One recent GWAS of hearing-related traits with up to 330,759 individuals (UK Biobank) revealed 31 significant genomic risk loci for adult hearing difficulty; KLF7 was also significantly associated [53]. Furthermore, the protein sequence segments surrounding the KLF7 variant are much more conserved than those of HEG1 among the same 7 species (Figure 2). Recently, KLF7 has been reported to directly regulate GATA Binding Protein 3 (GATA3) expression [54]. GATA3 is expressed in the otic placode and is involved in inner ear development [55].
Though the interaction between KLF7 and GATA3 was reported in chicken adipogenesis, KLF7 is quite conserved among several species (Figure 2). Knockdown of Paired Box Protein Pax-2 (PAX2), an inner ear development gene, led to a significant up-regulation of both KLF7 and GATA3 expression [19], which implies that KLF7 and GATA3 are probably involved in the same pathway. Furthermore, GATA3 is the causative gene for human hypoparathyroidism, deafness, and renal dysplasia (HDR) syndrome [56]. Therefore, KLF7 could interact with GATA3 during the development of the inner ear, and defects in KLF7 could affect normal GATA3 expression patterns in the otic placode. This may be a potential cause of hearing loss in ASCD cases. The incomplete penetrance presented by the KLF7 variant in deafness may be related to its role as a transcription factor that is involved in a specific part of the hearing pathway. Our findings could provide clues for the functional analysis of KLF7 in inner ear development, and functional analysis of KLF7 regarding ear development may provide further evidence for its role in deafness.

Another intriguing possible pathway is suggested by the finding of a KLF binding site upstream of the M promoter of microphthalmia-associated transcription factor (M-MITF) that induces gene expression changes in humans [57]. Although the aforementioned study was related to melanoma development, M-MITF has been identified as the locus responsible for white coat patterning in dogs [58]. Hereditary deafness has been reported to be associated with white pigmentation in several species, e.g., by affecting M-MITF isoform expression in pigs [59] and cows [60] as well as humans [61]. Canine deafness has also been linked with white pigmentation due to the merle and piebald loci [62]. Congenital sensorineural deafness in the English Bull Terrier is predominantly found in individuals with white coat color [63]. Similarly, congenital hereditary sensorineural deafness in the Australian Cattle Dog was negatively associated with bilateral facial masks, and individuals with pigmented body patches showed a lower risk of deafness [64]. An inverse association of pigmented head patches and congenital sensorineural deafness was also observed in Dalmatians, while, on the other hand, a positive correlation was detected with blue irises [65-71]. In ASCD, congenital sensorineural deafness was moderately significantly associated with red/blue coat color, but not with speckling or facial masks [12]. However, no functional alterations in genes related to coat color or pigmentation were detected after filtering for the case-control setting in the present study. Thus far, no causative variants within genes involved in pigmentation have been identified in canine deafness. Some pigmentation genes have actually been excluded as candidates in different dog breeds, e.g., c-Kit (KIT) and melanocyte protein 17 (SILV) [72,73]. An alternative explanation is that deafness caused by dysfunction of other biological processes, such as ear development and morphogenesis, may be more common. This is highly relevant to the Gene Ontology (GO) category analysis of potential canine hereditary deafness genes [2], and KLF7 has been reported to participate in inner ear development processes [50]. There is good evidence here that the KLF7 variant contributes to deafness, but the genotyping data support the view that this is a multigene/multifactorial disease, and so this is one contributing mutation.
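The autosomal recessive filtering described earlier (which, in this study, no functional variant fulfilled) amounts to keeping variants where all affected animals are homozygous for the alternate allele while no control is. A minimal Python sketch follows; the variant IDs, sample names, and genotype counts are invented for illustration, and a real pipeline would parse genotypes from a VCF (e.g., with cyvcf2 or pysam), which is assumed rather than shown.

    # Keep variants where every case is homozygous alternate (allele count 2)
    # and no control is; genotypes are simplified to alt-allele counts 0/1/2.
    def recessive_candidates(variants, cases, controls):
        """variants: iterable of (variant_id, {sample: alt_allele_count})."""
        hits = []
        for variant_id, genotypes in variants:
            if (all(genotypes.get(s) == 2 for s in cases)
                    and all(genotypes.get(s, 0) < 2 for s in controls)):
                hits.append(variant_id)
        return hits

    # Invented example data, not the study's genotypes:
    variants = [
        ("chr37:g.15562684G>A", {"deaf1": 2, "deaf2": 2, "ctrl1": 1, "ctrl2": 0}),
        ("chr10:g.1234567C>T", {"deaf1": 1, "deaf2": 2, "ctrl1": 0, "ctrl2": 0}),
    ]
    print(recessive_candidates(variants, ["deaf1", "deaf2"], ["ctrl1", "ctrl2"]))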
Conclusions

In summary, a missense variant within the KLF7 gene has been identified as significantly associated with congenital deafness in Australian Stumpy Tail Cattle Dogs. As the KLF7 gene has been reported to be expressed in the inner ear and associated with human hearing difficulties, our findings could provide clues for further elucidating novel genetic causes of human hearing loss.
Painful temporomandibular disorders, self-reported tinnitus, and depression are highly associated

1 Universidade Estadual Paulista, Faculdade de Odontologia de Araraquara, Araraquara SP, Brazil; 2 Universidade de São Paulo, Faculdade de Medicina, São Paulo SP, Brazil. Correspondence: Giovana Fernandes; Faculdade de Odontologia de Araraquara, UNESP; Rua Humaitá 1680; 14801-903 Araraquara SP, Brasil; Email: giovana_fernandes@hotmail.com. Conflict of interest: There is no conflict of interest to declare. Support: Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES). Received 29 May 2013; received in final form 25 June 2013; accepted 02 July 2013.

ABSTRACT

Objective: The aim of this study was to investigate the association among painful temporomandibular disorders (TMD), self-reported tinnitus, and levels of depression. Method: The sample consisted of 224 individuals with ages ranging from 18 to 76 years. The Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Axis I was used to classify TMD, and Axis II was used to assess self-reported tinnitus and to score levels of depression. The odds ratio (OR) with 95% confidence interval (CI) was applied. Results: The presence of painful TMD without tinnitus was significantly associated with moderate/severe levels of depression (OR = 9.3; 95% CI: 3.44-25.11). The concomitant presence of painful TMD and tinnitus self-report increased the magnitude of the association with moderate/severe levels of depression (OR = 16.3; 95% CI: 6.58-40.51). Conclusion: Painful temporomandibular disorders, high levels of depression, and self-reported tinnitus are deeply associated. However, this association does not imply a causal relationship.

Tinnitus is defined as a phantom sensation because sound is perceived in the absence of a physical sound source.1 It is clinically heterogeneous, reflecting multiple etiologies, and its complexity is related to its biological and psychological components.2 Studies have observed tinnitus complaints more often in patients with temporomandibular disorders (TMD) than in those without TMD,3,4 and tinnitus patients have more TMD signs and symptoms.5 Furthermore, signs of TMD may be a risk factor for the development of tinnitus.6

Temporomandibular disorders refer to a group of conditions characterized by signs and symptoms including pain in the temporomandibular joint (TMJ) and/or masticatory muscles, and/or TMJ sounds, and deviations or restriction in mandibular range of motion.7

The first hypothesis for the association between tinnitus and TMD was described by Costen in 1934.8 Since then, several hypotheses have been proposed,9,10 but at present the most accepted hypothesis is based on neural plasticity, especially the plasticity of somatosensory inputs to the cochlear nucleus.11,12
Interestingly, high levels of depression have been associated with both conditions, tinnitus and TMD. As regards TMD, the literature suggests an association between this dysfunction and depression in different groups of TMD patients,13,14 and the presence of chronic pain may be an important factor in this association.14 On the other hand, tinnitus is considered a debilitating disorder, significantly associated with higher levels of depression,5,15 and this overall association was independent of the presence or absence of other health conditions.16 Based on these statements, we hypothesized a possible triple association among chronic painful TMD, tinnitus, and depression. Therefore, the aim of the present cross-sectional, clinically based study was to investigate a possible association among these three entities, assessing the odds of occurrence of moderate/severe depression levels in patients with or without painful TMD and tinnitus.

METHOD

The sample consisted of 233 individuals: 183 adults consecutively recruited among patients with the chief symptom of orofacial pain, who sought care at a university-based specialty clinic (Universidade Estadual Paulista, Brazil). Fifty individuals without a history of facial pain and without TMD were also identified and selected among patients seeking routine dental treatment at the same university.

Exclusion criteria were the presence of odontalgia, neuropathy, intra-oral lesions, any chronic pain syndrome, impairments in cognition or language, age under 18 years, and presence of TMD pain for less than six months, which, according to the International Association for the Study of Pain (IASP),30 is considered acute pain. Among the 233 individuals examined, nine were excluded based on these criteria. No individuals refused to participate in the study.

This study was approved by the Research Ethics Committee, Araraquara School of Dentistry (Universidade Estadual Paulista, Brazil). A signed term of free and informed consent was obtained from each participant.

TMD evaluation

A standardized diagnostic protocol was applied equally to all patients by a single experienced and trained dentist, in accordance with the following instruments:

Orofacial Pain Clinic Protocol: first, all participants were interviewed and systematically examined. Cervical, cranial, facial, dental, and other oral structures were evaluated. The objective was to detail the chief complaint, general pain characteristics (location, intensity, quality, duration, time of pain worsening, aggravating and alleviating factors), and medical history. The American Academy of Orofacial Pain (AAOP) diagnostic criteria7 were applied for the differential diagnosis of other conditions that may mimic TMD.
Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD), Portuguese version.18-21 The RDC/TMD is widely used, and details of its application are described elsewhere. In brief, questionnaire assessment and physical examination allow a dual-axis approach to TMD assessment. The questionnaire was self-reported; study investigators were available when clarification was needed but did not interfere with the responses. Axis I allows the physical pathology classification of TMD into 3 subtypes and 8 subgroups. Axis II of the RDC/TMD assesses another domain, the psychological status. The Axis II depression instrument has clinically relevant and acceptable psychometric properties for reliability and validity and is used for identifying TMD patients with high levels of distress, pain, and disability.22 The RDC/TMD was applied to confirm and classify the absence or presence of TMD. Based on Axis I of the RDC/TMD, patients were stratified according to their TMD status into: (1) no painful TMD (only disc displacement with reduction or disc displacement without reduction, with or without limited opening, or non-TMD and/or osteoarthrosis) and (2) painful TMD (myofascial pain with or without limited opening, and/or TMJ arthralgia and/or osteoarthritis).

Self-reported tinnitus

The RDC/TMD questionnaire allowed self-reported tinnitus to be obtained by means of the question "Do you have noises or ringing in your ears?" The study could be considered blind, since the RDC/TMD questionnaire is a self-reported instrument and the examiner did not know the answers about tinnitus and depression grades while performing the physical examination for TMD.

Data analysis

The sample was stratified according to the absence or presence of painful TMD and tinnitus to verify the association between them, and also their isolated associations with levels of depression. Furthermore, the sample was stratified according to the simultaneous presence of painful TMD and tinnitus to study the association with levels of depression. Statistical analyses were performed using SPSS 15.0 for Windows and GraphPad InStat 3.06. The chi-square test and odds ratio (OR) with 95% confidence interval (CI) were applied, and the significance level adopted was 0.05 (a worked sketch of the OR calculation is given after the summary of findings below).

DISCUSSION

Several studies have explored the association between high levels of depression and tinnitus, and between high levels of depression and TMD, separately. To our knowledge, however, there is a lack of research investigating multiple associations. Tinnitus, TMD, and depression are highly prevalent and have great impact on individuals' lives; thus, the findings of the present study contribute to contemporary knowledge.

The most important findings were: (1) there was an association between painful TMD and tinnitus self-report; (2) an association was also found between painful TMD and moderate/severe levels of depression; (3) with regard to tinnitus, there was a significant association with moderate/severe levels of depression; and (4) in patients with concomitant painful TMD and tinnitus self-report, the magnitude of the association with moderate/severe levels of depression was higher than that observed in patients with painful TMD without tinnitus self-report.
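For reference, odds ratios and confidence intervals of the kind reported in this study can be computed from a 2x2 contingency table as sketched below (Python; the CI uses the standard Woolf/logit method, and the counts shown are hypothetical placeholders, since the underlying tables are not reproduced here).

    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """OR and 95% CI (Woolf/logit method) for a 2x2 table:
        a = exposed cases, b = exposed non-cases,
        c = unexposed cases, d = unexposed non-cases."""
        or_ = (a * d) / (b * c)
        se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
        lo = math.exp(math.log(or_) - z * se_log_or)
        hi = math.exp(math.log(or_) + z * se_log_or)
        return or_, lo, hi

    # Hypothetical counts for illustration only (not the study's data):
    # painful TMD + tinnitus versus moderate/severe depression.
    or_, lo, hi = odds_ratio_ci(a=40, b=20, c=10, d=80)
    print(f"OR = {or_:.1f}, 95% CI: {lo:.2f}-{hi:.2f}")  # prints the OR and its 95% CI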
A fair number of studies have demonstrated an association between TMD and tinnitus.3-6 The most plausible hypothesis for this association is that painful somatosensory stimuli originating from the face and TMJ (via the trigeminal nerve and spinal trigeminal tract) and neck (via the C2 dorsal root) could increase the activity of the cochlear nucleus (CN) in the auditory pathways, since it receives an excitatory projection from the somatosensory pathway. Increased activity in the CN (through pain impulses from the somatosensory pathway) would then be relayed to higher auditory centers, resulting in tinnitus.12 In spite of the lack of details about the onset of tinnitus and other possible contributory factors in the patients' lives, in the present study painful TMD may be considered a potential risk factor for tinnitus.

Recently, imaging studies have also demonstrated high activation of non-auditory, limbic brain structures, such as the hippocampus and amygdala, in tinnitus patients,23 especially in the subcallosal area, in which there was a significant decrease of gray-matter volume.24 The subcallosal area contains dopaminergic and serotonergic neurons and is associated with reduced serotonin levels in the brain. Serotonergic dysfunction is implicated in a number of clinically relevant conditions, including depression.25

An interesting finding of this study is precisely this association, in which moderate/severe levels of depression are associated with tinnitus self-report. The relationship between high levels of depression and tinnitus is likely to be bidirectional. It seems that tinnitus leads to an increased level of depression, but also that depression decreases tolerance to tinnitus, which in turn may lead to a "vicious circle".16

Interestingly, from the psychological aspect, the perception of tinnitus has many characteristics in common with the perception of chronic pain. The same "vicious circle" applies: TMD is associated with an increased level of depression, and this, in turn, can be associated with painful TMD,13,14 as may be seen in the results of the present study. In this multiple association, it may be suggested that two sources act on the limbic system, accentuating the levels of depression. This increase could worsen the severity of TMD and tinnitus. Furthermore, more severe TMD could act in the maintenance of tinnitus. Higher degrees of TMD and tinnitus severity would lead to even higher levels of depression. This "vicious circle" may explain why painful TMD patients with tinnitus more often present grades 2, 3, and 4 on the chronic pain scale than patients with painful TMD without tinnitus, as they would have greater pain intensity and disability.

This multiple association may make the patient with pain and tinnitus more biologically and psychologically complex, and this should be reflected in the diagnosis and therapy.

This study has several limitations. Firstly, the study sample consisted of adults who sought care at a university-based orofacial pain specialty clinic and is therefore not representative of the general population. Secondly, the reference group was small, which probably tends to inflate the overall odds ratios, and patients were not evaluated for diagnoses of otologic diseases. Future studies using control groups adjusted for age and sex, and with the participation of an otolaryngologist, are highly recommended.
As regards interpretation of the results in terms of a potential causal link between tinnitus, TMD, and depression, caution is needed in a cross-sectional study. One cannot draw conclusions about whether depression is the consequence of the clinical symptoms or the expression of an underlying psychological risk factor for TMD or tinnitus. Obviously, a causal relationship among the entities cannot be established and goes far beyond what the study design can affirm. In order to establish a causal relationship, appropriate studies with a longitudinal design are necessary. However, this study reflects an important characteristic of the sample: high percentages of individuals with chronic painful TMD and tinnitus self-report also presenting high levels of depression.

The strengths of this study include the use of the RDC/TMD, which are internationally accepted criteria,18 including the Axis II depression instrument with acceptable psychometric properties.22 Moreover, data were collected by one trained, experienced, and blinded researcher. These characteristics increase the reliability of the collected data.

In conclusion, the present study shows that tinnitus self-report, chronic painful TMD, and high levels of depression are deeply associated. However, this association does not imply a causal relationship. It is important for clinicians to understand this concept to avoid overly simplistic strategies when diagnosing and managing tinnitus, since tinnitus is viewed as a complex, multidimensional process in which various physical, psychosocial, and environmental factors are of the utmost importance. The interaction between otolaryngologists and dentists is strongly recommended when evaluating and managing patients suffering from chronic painful TMD and tinnitus.

Table 2. Association between tinnitus self-report and painful TMD.
Table 3. Association between painful TMD and levels of depression (Research Diagnostic Criteria for Temporomandibular Disorders, Axis II).
Table 4. Association between tinnitus self-report and levels of depression (Research Diagnostic Criteria for Temporomandibular Disorders, Axis II).
Table 5. Association between painful TMD and tinnitus self-report with levels of depression (Research Diagnostic Criteria for Temporomandibular Disorders, Axis II).
Locoregional Recurrence Risk in Breast Cancer Patients with Estrogen Receptor Positive Tumors and Residual Nodal Disease following Neoadjuvant Chemotherapy and Mastectomy without Radiation Therapy

Among breast cancer patients treated with neoadjuvant chemotherapy (NAC) and mastectomy, locoregional recurrence (LRR) rates are unclear in women with ER+ tumors treated with adjuvant endocrine therapy without postmastectomy radiation (PMRT). To determine if PMRT is needed in these patients, we compared LRR rates of patients with ER+ tumors (treated with adjuvant endocrine therapy) with those of women with non-ER+ tumors. 85 consecutive breast cancer patients (87 breast tumors) treated with NAC and mastectomy without PMRT were reviewed. Patients were divided by residual nodal disease (ypN) status (ypN+ versus ypN0) and then stratified by receptor subtype. Among ypN+ patients (n = 35), five-year LRR risk in patients with ER+, Her2+, and triple negative tumors was 5%, 33%, and 37%, respectively (p = 0.02). Among ypN+/ER+ patients, lymphovascular invasion and grade three disease increased the five-year LRR risk to 13% and 11%, respectively. Among ypN0 patients (n = 52), five-year LRR risk in patients with ER+, Her2+, and triple negative tumors was 7%, 22%, and 6%, respectively (p = 0.71). In women with ER+ tumors and residual nodal disease, endocrine therapy may be sufficient adjuvant treatment, except in patients with lymphovascular invasion or grade three tumors, where PMRT may still be indicated.

Introduction

Traditionally, postmastectomy radiation (PMRT) decisions have been guided by pathologic findings in breast cancer patients treated with initial surgery. In this setting, data from several studies have led to guidelines identifying the patients most likely to benefit from PMRT: those with primary tumors greater than five centimeters, four or more positive lymph nodes (pN2), or one to three positive lymph nodes (pN1) with high-risk features such as extracapsular extension (ECE) and lymphovascular invasion (LVI) [1-3]. However, these same recommendations do not necessarily apply to patients treated with neoadjuvant chemotherapy (NAC), where the initial extent of disease is unknown and can be modified in as many as 80% of patients [4].

There are no published randomized trials to guide the use of PMRT in women treated with NAC [5]. Retrospective studies have suggested that both advanced initial clinical stage and residual pathologic nodal disease (ypN) are associated with a higher risk of locoregional recurrence (LRR) in women treated with NAC [6-11]. However, there are many instances in which the initial clinical stage is unclear despite physical exam and modern imaging. The inaccuracies of physical exam are best demonstrated by the results of the National Surgical Adjuvant Breast and Bowel Project (NSABP) B-04, which found that 40% of clinically node negative (cN0) patients on physical exam were actually pathologically node positive (pN+), while 25% of cN+ patients were actually pN0 [12]. Modern imaging has resulted in only modest improvements in detection of axillary nodal metastases, with broad sensitivity and specificity ranges reported for ultrasound (43.5-86.2% and 40.5-86.6%, resp.) [13-16], magnetic resonance imaging (MRI) (36-78% and 78-100%, resp.) [16-20], and full-body fluorodeoxyglucose (FDG) positron emission tomography (PET)/computed tomography (CT) (20-100% and 75-100%, resp.) [21].
Therefore, clinical staging may not accurately reflect the extent of disease prior to NAC and may lead to under- or overtreatment with PMRT. Furthermore, other studies have indicated that residual nodal response following NAC plays a larger role in determining LRR risk than initial clinical stage or primary breast tumor response (ypT status) [9,22]. Patients with a complete nodal response to NAC were found to have a very low risk of LRR despite initially presenting with locally advanced disease [23,24]. Therefore, ypN status is arguably a more robust and consistent predictor of LRR in the NAC setting. Nevertheless, as there is heterogeneity in the risk of LRR among pN1 patients, there is also potentially a spectrum of LRR risk among ypN1 patients.

Few studies have examined the impact of receptor status on LRR risk in ypN+ or ypN0 patients. The LRR risk is unclear in patients with estrogen receptor positive (ER+) tumors and ypN+ disease who are treated with adjuvant endocrine therapy without PMRT. The Early Breast Cancer Trialists' Collaborative Group meta-analysis demonstrated that the addition of PMRT significantly improved 15-year breast cancer-specific survival in patients with a greater than 10% LRR risk [25]. The Athena Breast Health Network thus adopted an absolute LRR risk threshold of 10% before recommending PMRT in patients treated with NAC [26]. The aim of our research was to compare LRR risk among breast cancer patients with ER+ tumors (treated with adjuvant endocrine therapy) and those with non-ER+ tumors following NAC and mastectomy without PMRT. Given the shortcomings of initial clinical staging, we also sought to identify additional objective pathological factors that contribute to a five-year LRR risk of greater than 10%.

2.1. Patient Population. At our institution, NAC is typically administered in patients with large primary tumor to breast size ratios, locally advanced or initially unresectable breast cancers, and/or triple negative and Her2+ tumors. Following approval from the institutional review board, the medical records of breast cancer patients treated with modern anthracycline- and/or taxane-based NAC between 1997 and 2011 were reviewed. 553 breast cancer patients (with noninflammatory, nonmetastatic cancer) were identified. After excluding patients who underwent mastectomy and PMRT (n = 295) or those who underwent breast-conservation therapy (lumpectomy and radiation therapy) (n = 173), a total of 85 patients (87 breast tumors) who underwent NAC, mastectomy, and lymph node evaluation without PMRT were identified. Receptor status was determined from immunohistochemistry (IHC) testing of the biopsy specimen. Fluorescence in situ hybridization (FISH) testing was typically performed in cases of 2+ Her2 positivity. Tumors were considered Her2-positive (+) with either 3+ overexpression on IHC or gene amplification on FISH.

2.2. Treatments. Seventy-nine percent of patients received anthracycline-based chemotherapy; 64% of these patients received both an anthracycline and a taxane. The remaining patients (21%) received taxane-based chemotherapy, with the most common regimen consisting of docetaxel and cyclophosphamide for four cycles. Among patients with Her2+ breast cancers, 43% received trastuzumab. Following NAC, all patients underwent mastectomy: 66% underwent modified radical mastectomy, 21% underwent simple mastectomy with sentinel lymph node biopsy alone, and 13% underwent nipple-sparing mastectomy with sentinel lymph node biopsy alone.
The median number of lymph nodes dissected in patients who underwent axillary dissection and sentinel lymph node evaluation alone was 12 (range: 1-38) and 3 (range: 1-10), respectively. Following surgery, 10% of patients received adjuvant chemotherapy. Adjuvant endocrine therapy was received by all estrogen receptor positive (ER+) patients, consisting of tamoxifen alone, an aromatase inhibitor alone, or both (i.e., tamoxifen followed by an aromatase inhibitor) in 61%, 20%, and 19% of patients, respectively.

2.3. Data Collection and Statistical Analysis. Patients were divided by ypN status (ypN+ versus ypN0) and then stratified by receptor subtype (ER+/Her2− versus Her2+ versus triple negative). Pathologic factors, including biopsy grade within the primary, presence of LVI in the surgical specimen, and lymph node ratio (defined as the total number of positive lymph nodes divided by the total number of dissected lymph nodes in ypN+ patients), were analyzed in the various ypN patient and receptor subgroups to determine their additional impact on LRR risk. Institutional breast pathologists evaluated the biopsy specimen as well as the surgical specimens following neoadjuvant chemotherapy. LVI was assessed in the peritumoral tissue on hematoxylin and eosin-stained sections and identified as carcinoma cells present within an endothelium-lined lymphatic space or blood vessel (confirmed via a combined keratin/D2-40 cocktail assay). Biopsy grade was based on a three-tiered system which considered mitotic activity, tubule/gland formation, and nuclear pleomorphism. Scores were allocated to these features and then totaled to assign the grade (3-5 equals grade 1, 6-7 equals grade 2, and 8-9 equals grade 3). The presence of ECE and close or positive margins following mastectomy were not evaluated for their relationship with LRR due to the small numbers in our cohort (see Table 1).

Table 2: Initial clinical stage by receptor subtype (number of tumors).

    Clinical stage   ER+/Her2−   Her2+   Triple negative
    IA-IB            3           2       6
    IIA              14          11      5
    IIB              22          5       10
    IIIA-IIIC        0           4       1
    Unknown          2           1       1

LRR was defined as tumor recurrence in the ipsilateral chest wall or regional lymph nodes (axilla, internal mammary, supraclavicular, or infraclavicular fossa) at any time point (with or without distant metastases). Time to LRR and time to follow-up were calculated from the date of diagnosis. Actuarial rates of LRR were calculated using the Kaplan-Meier method. The log-rank test was performed to evaluate the impact of receptor class on LRR, with statistical significance defined as a p value of ≤0.05 (a minimal sketch of this survival analysis follows the results summary below). Statistical analyses were performed using SPSS software (IBM SPSS version 19.0, Chicago, IL) and SAS software (SAS 9.3, Cary, NC).

Results

Patient and tumor characteristics are detailed in Table 1. The median age at diagnosis was 48 years (range: 30-87 years). Following NAC and mastectomy, 35 (40%) and 52 patients (60%) had positive and negative nodes, respectively. Among all tumors, receptor status was ER+/Her2− in 41 (47%), Her2+ in 23 (26%), and triple negative in 23 (26%). The median follow-up period was 52.6 months (range: 5.4-201.0 months). The initial clinical stage of the patients is presented in Table 2. Patients with clinical stage III breast cancers had significantly poorer five-year LRR than patients with stage I-II breast cancers (34% versus 8%, p < 0.01), but none of these patients were hormone receptor positive. Although advanced clinical stage/tumor size has previously been associated with LRR, there are clinical scenarios in which the initial clinical stage may be unclear [9,10].
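The actuarial LRR estimates above come from the Kaplan-Meier method with log-rank comparisons across receptor subtypes. A minimal sketch of that workflow is shown below, using Python with the open-source lifelines package rather than the SPSS/SAS tools the authors used; the follow-up times, events, and subtypes are invented for illustration.

    import pandas as pd
    from lifelines import KaplanMeierFitter
    from lifelines.statistics import multivariate_logrank_test

    # Invented illustrative data, not the study's cohort.
    df = pd.DataFrame({
        "months_to_lrr_or_censor": [12, 60, 34, 8, 49, 22, 55, 15],
        "lrr_event": [0, 0, 1, 1, 0, 1, 0, 1],   # 1 = locoregional recurrence
        "subtype": ["ER+", "ER+", "Her2+", "TN", "ER+", "Her2+", "TN", "TN"],
    })

    kmf = KaplanMeierFitter()
    for subtype, grp in df.groupby("subtype"):
        kmf.fit(grp["months_to_lrr_or_censor"], grp["lrr_event"], label=subtype)
        # 1 - survival at 60 months approximates the five-year actuarial LRR risk
        print(subtype, 1 - kmf.predict(60))

    result = multivariate_logrank_test(
        df["months_to_lrr_or_censor"], df["subtype"], df["lrr_event"])
    print("log-rank p =", result.p_value)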
The focus of this study was to determine whether objective pathological factors could unequivocally guide clinicians on LRR risk, particularly in patients with hormone receptor positive tumors treated with adjuvant hormone therapy.

Discussion

Our findings indicate that receptor and ypN status may identify groups of patients in whom PMRT may be omitted after NAC and mastectomy. Patients with ypN+/ER+ tumors had a significantly lower LRR risk than women with ypN+/triple negative or ypN+/Her2+ tumors. Based on a previously established LRR risk threshold of less than or equal to 10% [26,27], our results suggest that ypN+/ER+ patients without LVI or grade 3 disease have a five-year LRR risk of 5% and may be sufficiently treated with adjuvant endocrine therapy, avoiding PMRT. Among ypN0 patients, ER+ or triple negative (without high-grade disease) status was also associated with a low five-year LRR risk (7% and 6%, resp.), although in the ypN0/triple negative patients, grade 3 disease increased the five-year LRR risk to 13%.

While the significance of ypN stage has been previously illustrated [6-9, 11, 26, 28-30], the influence of receptor status on LRR risk in the NAC setting has not been fully investigated. A recent combined analysis of NSABP B-18 and B-27 demonstrated that in patients treated with NAC and mastectomy, clinical tumor size, clinical nodal status, and pathological primary and nodal response (after NAC) were significant predictors of LRR [9]. Notably, rates of LRR were significantly above 10% for all subsets of patients with ypN1 disease. However, neither NSABP B-18 nor B-27 contained information on receptor status, and tamoxifen was administered based on patient age rather than receptor status. We chose to focus our analysis on objective factors easily ascertained from pathology reports in order to simplify clinical decisions regarding PMRT in NAC breast cancer patients, as there are many situations in which the initial clinical stage is not clear. By incorporating receptor status along with appropriate endocrine therapy, we were able to identify a patient cohort with ypN1 disease who may not carry a greater than 10% LRR risk.

Our results are consistent with the recently proposed low-risk group suggested by a breast cancer physician panel of the Athena Breast Health Network [26]. Based on available literature and clinical case scenarios, the authors applied the American College of Radiology (ACR) Appropriateness Criteria modified Delphi methodology (for establishing expert consensus) to identify patients treated with NAC for whom PMRT may be safely omitted. Their low-risk group (corresponding to a less than or equal to 10% LRR risk) included patients with ypN0 (including triple negative status) tumors and those with ypN1, ER+ disease, age greater than or equal to 35 years, and no presence of LVI or ECE. Tumor grade was not included in their analysis, but high-grade disease on biopsy appeared to predict a higher LRR risk in our patients with ypN+/ER+ and ypN0/triple negative status.

Compared to patients with ypN+/ER+ tumors, patients with ypN+ disease and triple negative or Her2+ tumors had a considerably higher LRR risk (greater than 30%) regardless of other pathological factors. These findings may be influenced by the inherent association of receptor status with tumor biology and the more aggressive and advanced disease seen in women with Her2+ or triple negative disease at diagnosis (Table 2).
As illustrated in a recent meta-analysis [31], patients with Her2+ and triple negative tumors are expected to achieve much higher pathological complete response rates compared to women with ER+ tumors. Thus, residual disease in patients with ER+ tumors may more accurately represent the initial disease extent. For this reason, ypN1/ER+ patients may forgo PMRT in the absence of factors, such as LVI or high-grade disease, which are associated with high LRR risk in pN1 patients treated with initial surgery [32,33]. Residual nodal disease following NAC in patients with triple negative and Her2+ tumors is, however, concerning, as it may be indicative of even greater nodal burden prior to NAC and/or tumor resistance to systemic therapy. Therefore, when these patients do not develop a complete nodal response to NAC, they are at significant risk for LRR.

Another factor which may influence LRR risk is the size of the residual primary tumor (ypT stage), especially in relation to pathological complete response (pCR) (ypT0/is ypN0), which is highly predictive of recurrence-free survival [34]. The small percentage of pCRs (15%) in our cohort did not afford a meaningful analysis. Among the entire cohort, ypT stage was not a predictor of LRR, regardless of achievement of a pCR. Our results did suggest a trend toward decreased 5-year LRR risk for residual primary tumor sizes of ≤2 cm (including ypT0, ypTis, and ypT1) compared with residual primary tumor sizes of >2 cm (ypT2 or greater) (10% versus 21%, p = 0.06). This result is consistent with prior literature [6]. Although ypT stage is an important factor, multiple studies have demonstrated that ypN status is a stronger predictor of LRR risk [9] and even overall survival [22].

Several strengths and limitations of this study warrant consideration. The majority of our ypN+/ER+ patients (92%) had fewer than four positive lymph nodes, and thus PMRT may still be needed in patients with ypN2 disease regardless of receptor status. Furthermore, only four subjects had positive nodes with ECE, and 12 had close or positive margins following mastectomy. These small patient numbers precluded meaningful analysis with either of these pathologic factors, and therefore our findings may not be broadly applied to patients with these tumor characteristics. Moreover, only 43% of the patients with Her2+ tumors in our study received trastuzumab, as they were treated before this became standard of care. Among patients with Her2+ tumors in our study, the 5-year LRR risk for patients treated with and without trastuzumab was 12% (n = 10) and 32% (n = 13), respectively (p = 0.49). Although this difference was not statistically significant, our data and others suggest that trastuzumab may play an important role in LRR control [35]. However, even in patients treated with trastuzumab, the LRR rate was still above 10%, indicating that these patients may still derive benefit from PMRT. Lastly, our cohort of NAC patients treated with mastectomy and without PMRT is relatively large when compared with prior institutional series [30]. Furthermore, our study is unique in that all patients with ER+ tumors received endocrine therapy.

Conclusion

In conclusion, this study represents the first effort to examine the influence of receptor and nodal status on LRR risk among patients treated with NAC and mastectomy.
Among patients with ER+ tumors and residual nodal disease, the risk of LRR is low, and endocrine therapy appears to be sufficient adjuvant treatment, except in women with LVI or grade three tumors, where PMRT may still be warranted. These observations corroborate and complement a recently proposed low-risk group of NAC breast cancer patients who may forgo PMRT [26]. Conversely, PMRT appears warranted in women with ypN+ disease and Her2+ or triple negative tumors, who are at high risk of LRR regardless of other clinicopathologic factors. Thus, particularly in the setting of uncertain clinical stage, receptor and ypN status may help guide PMRT decisions. Our results must be validated in future prospective studies.

Disclaimer

The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Implementing specific oral tolerance induction to milk into routine clinical practice: experience from first 50 patients

Background: Although the natural history of cow's milk allergy is to resolve during childhood or adolescence, a small but significant proportion of children will remain allergic. Specific oral tolerance induction to cow's milk (CM-SOTI) provides a treatment option in these children with continuing allergy, with high success rates. However, current sentiment limits widespread availability, as existing reports advise that it is too soon to translate CM-SOTI into routine clinical practice. Methods: In January 2007 we implemented a slow up-dosing CM-SOTI program. Eligible subjects were identified at routine visits to our children's allergy clinic. Persisting cow's milk allergy was confirmed from recent contact symptoms or a positive baked milk challenge. As allergic symptoms are common during CM-SOTI, families were provided with ready dietetic access for advice on dosing and symptom treatment. Subjects were continuously monitored at subsequent clinic visits or, where no longer followed in clinic, by telephone, for a median of 49 months. Results: The first 50 subjects (35 males) treated ranged in age from 5.1 to 15.8 years (median 10.3 years). Full tolerance (250 mL) was achieved in 23 subjects, 9 without any symptoms, and a further 9 achieved partial tolerance with continued ingestion. Eighteen children failed to achieve any regular milk ingestion: 11 because of persistent or significant symptoms, whilst 8 withdrew against medical advice. Allergic symptoms were predominantly mild to moderate in severity, although 2 cases needed treatment with inhaled salbutamol and a further 2 required intramuscular adrenaline. Clinical tolerance, both full and partial, persists beyond 5 years. Conclusion: We have demonstrated that a CM-SOTI program can be successfully and safely implemented as routine clinical practice, with acceptable compliance during prolonged home up-dosing, despite frequent allergic symptoms, and for up to 4 years after starting treatment. CM-SOTI can thus be put into practice more widely where there is appropriate support.

Introduction

The conventional management of food allergy, dietary avoidance, is no longer regarded as the only acceptable treatment option. It leaves affected individuals at constant risk of the physical and psychological consequences of allergic reactions through inadvertent contact. This is particularly pertinent where the culprit allergen is commonly encountered in processed foods and thus difficult to avoid in everyday life, where the allergy is likely to persist, or where the allergic reactions are potentially severe.1 Also, as food allergy is increasing in prevalence and persisting longer, it poses increasing health and safety challenges to parents and carers.2

As the natural history of cow's milk allergy (CMA) is resolution during childhood or adolescence, current management is allergen avoidance followed by gradual reintroduction until tolerance develops.3 There is, however, a small but significant proportion of affected individuals in whom the allergy will persist. These are more commonly children with other food allergies, frequently to eggs, who are therefore subject to wider dietary restrictions.4 These individuals would thus be ideal candidates for treatment of their CMA.
Specific oral tolerance induction (SOTI) is one such treatment option, whereby tolerance is achieved by oral exposure to increasing doses of the specific food allergen.5 There are now a number of reports on SOTI to milk (CM-SOTI), although with widely varying up-dosing regimens in both the rate and concentration of dose increases. Nonetheless, all studies demonstrated the same three patterns of response: failure, with persistent symptoms to even trace contact; partial tolerance, raising the reaction threshold dose; and full tolerance, allowing free contact. The frequency of the latter ranged from 62% to 80%. In addition to these high success rates, CM-SOTI was also shown to be a safe treatment, as side effects or allergic symptoms, although commonly reported in all studies, were predominantly mild to moderate in severity, affecting mostly the skin and gastrointestinal system.6-21 Despite these favorable reports, most authors feel that it is too early to translate CM-SOTI into clinical practice.22

The objectives of any treatment for food allergy should be to attain lasting tolerance, both immunological and psychological, in the affected individual. Unlike immunotherapy, where immune tolerance, and hence symptom relief, persists well beyond completion of the treatment program, in SOTI regular exposure is necessary to maintain immune modulation.23 This may be stressful to the child and his or her family, as omission of the daily dose of milk can lead to recurrence of symptoms.24 However, a study with long-term follow-up of more than 4 years showed continuing dietary compliance, with consequent maintenance of tolerance, in most patients.25

In 2007, in the Children's Allergy Service of the Leicester Royal Infirmary, we introduced a CM-SOTI clinical program using a slow up-dosing home administration protocol.10 The objective of the study was to demonstrate the short- and long-term effectiveness and safety of CM-SOTI as part of a clinical service. This report presents our experiences with the first 50 patients enrolled.

Patient enrollment and inclusion criteria

We started our CM-SOTI program in January 2007 after obtaining approval from our institution's clinical ethics committee. Children with persistent CMA were identified and recruited from the allergy clinic. Inclusion criteria for recruitment were children over 5 years old with a history of acute-onset CMA who had experienced symptoms within the past 2 months following accidental CM exposure, or children who reacted to an open oral baked milk challenge (Figure 1). A challenge test was performed to confirm persistence of CMA where there were no recent contact symptoms. We chose baked milk biscuits as the challenge food, as this form of CM is less allergenic than raw milk.26 Subjects who failed the challenge were enrolled for CM-SOTI. Where the subject completed the challenge without an allergic reaction, he or she continued with regular baked milk contact at home under dietetic supervision, with the advice to increase contact as tolerated.27 If the subject failed to progress past milk biscuits, CM-SOTI was considered at a later stage.

Clinical data recorded included concomitant food allergies and other atopic conditions, initial allergic symptoms and the most severe reactions and the age at which these symptoms occurred, and skin prick test results28 from clinic allergy assessments.
Allergic symptoms were graded as mild-moderate (involving the skin or gastrointestinal tract) or serious (involving the respiratory or cardiovascular systems).

Oral tolerance induction protocol

When CM-SOTI was initiated, the subject was seen in our hospital medical daycare ward jointly by the clinician and the dietitian. A history was obtained and a clinical examination performed to exclude any recent illness that would delay the program. Written informed consent was obtained, and antihistamines and an adrenaline autoinjector were prescribed, with instructions and training to treat any allergic reactions. The family was then instructed on the up-dosing protocol and supplied with measuring pipettes for accurate dosing. They were advised that during up-dosing all other forms of dairy had to be avoided, and that if an illness occurred (eg, fever, common cold), appropriate treatment should be given but dose increases postponed. The subject received the first dose in hospital and was monitored for 1 hour. All subsequent doses were ingested at home. As allergic symptoms are common during up-dosing, we ensured easy telephone access for the families for advice on CM dose adjustment and treatment (Figure 2).

We used the three-stage, 67-day, slow up-dosing protocol described by Staden et al.9 In the first stage, a 1% solution (one drop of milk diluted with 99 drops of water) is used. The subject ingests 1 drop as the first dose, equivalent to about 0.02 mg of milk. There are eleven subsequent incremental steps in this stage to a top dose of 20 drops (0.33 mg). In the second stage, a 10% solution (one drop of milk diluted with ten drops of water) is used, starting at three drops (0.50 mg) and increasing over the subsequent eight steps to 20 drops (3.3 mg). The third stage, which uses pure milk, starts at step 22, equivalent to day 22 if up-dosing is uninterrupted. The rate of up-dosing accelerates as tolerance increases, with, for example, a dose of 20 drops (33 mg) at step 30, 13 mL (429 mg) at step 40, 27.5 mL (908 mg) at step 50, and 150 mL (4,950 mg) at step 60. The targeted top dose of milk ingested daily is 250 mL (8,250 mg). (A sketch reproducing this dose arithmetic follows the Statistical analysis section below.)

Patient follow-up and outcome

Parents were encouraged during up-dosing to contact the dietitian if the subject developed allergic symptoms. Once the subject achieved a stable ingestion dose of CM, as assessed by tolerance without symptoms, regular follow-up was maintained where possible. Subjects were reviewed at least every 6 months, either in clinic, usually for concomitant allergic conditions, or via telephone by the dietitian where clinic follow-up was no longer required. Outcomes were classified as full tolerance, partial tolerance, or treatment failure according to the individual's daily milk ingestion at the end of the up-dosing period. Subjects with full tolerance were able to ingest the targeted top dose of 250 mL of CM daily without symptoms, while those with partial tolerance could continually ingest smaller quantities. Those who failed either experienced symptoms that precluded any progression of up-dosing or, after achieving some partial tolerance, declined to continue when symptoms developed or for personal reasons.

Statistical analysis

The statistical analysis was performed using chi-square analysis for categorical comparisons and the Wilcoxon matched-pairs signed-ranks test for continuous variables. Differences were considered significant when P<0.05 (Social Science Statistics).29
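The per-step milk doses quoted in the protocol above are internally consistent with a milk content of about 33 mg/mL (250 mL = 8,250 mg) and roughly 0.05 mL per drop (20 drops of neat milk = 33 mg). Neither constant is stated explicitly by the authors, so the short Python sketch below treats both as inferred assumptions when reproducing the published figures.

    # Reproduces the per-step milk doses quoted in the protocol; the two
    # constants are inferred from the published figures, not stated directly.
    MILK_MG_PER_ML = 33.0   # 250 mL -> 8,250 mg and 13 mL -> 429 mg both imply 33 mg/mL
    DROP_ML = 0.05          # 20 drops of neat milk -> 33 mg implies 0.05 mL per drop

    def dose_mg(drops=0, ml=0.0, dilution=1.0):
        """Milk dose in mg for a given number of drops or volume at a dilution."""
        volume_ml = drops * DROP_ML + ml
        return volume_ml * dilution * MILK_MG_PER_ML

    print(round(dose_mg(drops=1, dilution=0.01), 2))    # stage 1 start: ~0.02 mg
    print(round(dose_mg(drops=20, dilution=0.01), 2))   # stage 1 top:   0.33 mg
    print(round(dose_mg(drops=20, dilution=0.10), 1))   # stage 2 top:   3.3 mg
    print(round(dose_mg(ml=13.0)))                      # step 40:       429 mg
    print(round(dose_mg(ml=250.0)))                     # target:        8,250 mg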
Patient demographics and allergic presentations

The 50 subjects (35 boys) enrolled in the CM-SOTI program ranged in age at the onset of treatment from 5.1 to 15.8 (median 10.3) years. Other atopic conditions were reported in 41 subjects, with asthma in 25, rhinitis in 26, and eczema in 31. Concomitant food allergies were present in 35 subjects: one other food allergy in 17 subjects, two in eleven subjects, and three or more in seven subjects. The most common other food allergies were to eggs and nuts. Allergic symptoms reported in the most severe reactions from previous milk ingestion affected the skin (erythema, urticaria, swelling, eczema) or gastrointestinal tract (vomiting, diarrhea, abdominal distension, abdominal pain) only in 31 subjects, and the respiratory (wheeze, cough, stridor, difficulty breathing) and cardiovascular (pallor) systems, either alone or in addition, in the remaining 19. Nineteen participants were therefore considered to have experienced serious symptoms, and in eight of these subjects symptoms had worsened in severity from their initial presentation. Allergic status was assessed at the clinical encounter with skin prick tests; 17 participants tested negative, and positive tests ranged in wheal size from 3 mm to 9 mm. Subjects with concomitant asthma or other allergies were more likely to achieve full tolerance (Table 1).

Oral tolerance induction outcomes

Twenty-three subjects accomplished full tolerance, eleven of whom completed the up-dosing protocol in the target 67 days, that is, without needing to slow up-dosing because of allergic symptoms. Nine participants had no symptoms. One subject experienced tingling of his tongue that passed spontaneously with 20 drops of 10% (day 21) and again at 3 mL neat (day 33), and another participant had mild diarrhea with seven drops of 1% (day 7). One subject withdrew because of repeated symptoms of oral swelling on three attempts at four drops of neat milk; after a 3-month hiatus, the subject restarted treatment and achieved full tolerance in 67 days.

Nine subjects achieved partial tolerance and agreed to continue with regular ingestion at their individually determined lower doses. These ranged from 30 drops of 10% to 100 mL of neat milk, equivalent to 4.2 mg and 3,400 mg, respectively (Table 2). Where subjects were on low doses requiring dilutions, the

The remaining 18 subjects failed to achieve any regular milk ingestion. The family was advised to discontinue the program in eleven cases because of persistent or significant symptoms. Seven subjects withdrew against medical advice, even though in one case (MW) the subject demonstrated considerable tolerance (Table 3).

Allergic symptoms were common, affecting 41 (82%) participants; they were predominantly mild to moderate in severity (Table 4) and mostly resolved with dose adjustment and antihistamines. Inhaled salbutamol was necessary to manage respiratory symptoms in two subjects, and two other subjects required intramuscular (IM) adrenaline for anaphylaxis. The events leading up to anaphylaxis requiring adrenaline were as follows: Subject ST was highly atopic, with asthma and eczema and multiple food allergies, including allergy to milk. He achieved tolerance at 30 mL of neat milk daily and had been taking that dose for 3 weeks. The day after missing his daily dose, he took his usual dose and shortly thereafter undertook exercise. Subject RE also had asthma.
In the weeks leading up to his reaction, his asthma control had deteriorated, but his mother had not realized that this might be associated with milk contact and had not notified the dietitian.

Follow-up

Continuous contact and consequent follow-up have been maintained with all but two families where the subject achieved full tolerance, and with all families with partial tolerance. All subjects in the full tolerance group continue to ingest milk and other dairy products freely. The length of follow-up of those with full tolerance ranged from 4 to 64 months (median 49 months), with ten followed for >48 months, six for 24-47 months, and five for <24 months; for those with partial tolerance it ranged from 12 to 57 months (median 49 months), with five followed for >48 months, one for 24-47 months, and three for <24 months.

Discussion

CM-SOTI is now an established treatment for persistent CMA. However, as with any recently introduced therapeutic modality, concerns arise about protocols, efficacy, safety, and long-term effects, and consequently whether the treatment can be implemented as routine practice outside closely monitored research conditions. Current opinion favors limiting the implementation of CM-SOTI, with many protocols advocating up-dosing only in hospital, as either day case or inpatient procedures.9,15,16 We present our experience of implementing a slow home up-dosing CM-SOTI program in our routine clinical practice.

In immune modulation, whether allergen immunotherapy (AIT) or SOTI, the rates at which treatment doses are increased are arbitrary but are limited by side effects that prevent achievement of the target dose. In hymenoptera subcutaneous immunotherapy, for example, the duration of up-dosing ranges from 3.5 hours to 15 weeks.30,31 Similarly, there are rush13,14,19 and slow9,12,21 up-dosing protocols in CM-SOTI, with pros and cons to both. Rush protocols require hospital visits or admission for up-dosing and carry a greater risk of serious side effects, but supervised up-dosing encourages compliance and provides medical support when the patient is most at risk of an allergic reaction.15 Slow protocols are time-consuming and possibly onerous, with the consequent risk of poor compliance or protocol violations, but are less likely to trigger serious allergic reactions.16 Available resources, particularly the costs and availability of day case or inpatient hospital beds, could influence the choice of protocol. We have successfully implemented a home up-dosing CM-SOTI program. We chose the protocol because it afforded us the independence to initiate treatment in individual patients at their convenience, not the institution's, especially in the winter months when elective admissions are often deferred because of a lack of available hospital beds.

In our clinical program, we have so far achieved a success rate of full and partial tolerance of 64% in our first 50 patients. Our patient group were beyond the age at which natural tolerance is most likely to develop and hence had confirmed persistent milk allergy.3 The rates of success previously reported ranged widely, from a similar two-thirds to as high as 100%.8,12,19 Explanations for the differences in outcome may be the sample size (CM-SOTI treatment groups ranged from 4 to 30 subjects),12,13 pretreatment tolerance,15,16 individual severity,14 and age range, with many series including children under 5 years of age8,10-14,17,19 and one series in under 5-year-olds exclusively.20
Pretreatment tolerance varied widely, from only trace contact triggering symptoms14 to the ability to tolerate up to 100 mL of raw milk.16 We included among those who failed to achieve any tolerance subjects who withdrew for personal rather than medical reasons, even though in some instances they were able to ingest significant amounts of milk (up to 100 mL in one case). This was a higher dropout rate than previously reported.17,19-21 Possible explanations are that families were not subject to the same scrutiny as in clinical trials, that the longer protocol is not patient-friendly, or that this is a larger cohort and so more representative of actual population compliance.

Allergic symptoms commonly occur during CM-SOTI up-dosing. Most are mild and respond to antihistamines and dose adjustment, although they may occasionally be serious and require treatment with IM adrenaline. In accord with the experiences of others, we also noted a high prevalence of side effects or allergic symptoms, affecting 41 (82%) patients. IM adrenaline was used in only two participants: one patient was highly atopic with multiple other food allergies, and the other had, as determined by further examination, very severe milk allergy with specific immunoglobulin E to casein of >100 kU/L. In response, we withdrew both patients from the program. In previous studies that include data on treatment, IM adrenaline was used in nine instances, three of which were at home.15-18 Severe reactions are thus more frequent in hospital rapid up-dosing programs.15 However, there is a highly reactive population of patients at risk of anaphylaxis irrespective of the speed of up-dosing. In our study, no clinical parameters, including previous symptom severity, differentiated between subjects who did or did not achieve full tolerance; the latter had more frequent and severe allergic symptoms. Longo et al14 have also demonstrated successful tolerance induction in a group of children selected for treatment because they presented with very severe cow's milk-induced reactions. The severity, therefore, of preceding allergic reactions does not seem to be a useful guide in predicting severe adverse reactions in CM-SOTI. Vásquez-Ortiz et al21 identified criteria associated with adverse reactions: symptom severity at baseline food challenge, cow's milk specific immunoglobulin E >50 kU/L, and cow's milk skin prick test >9 mm wheal. Their clinical criterion seems to contrast with our experience, possibly because nearly all (96.3%) of their study patients were in the severe baseline group. They concluded that CM-SOTI was insufficiently safe in 25% of children. Further studies need to be conducted to identify these at-risk individuals, to alert clinicians to provide closer supervision and support before instituting treatment and thereby enhance safety, particularly in home up-dosing programs.

SOTI contrasts with AIT, the only other immune modulation treatment modality in routine clinical practice, in two respects. Firstly, SOTI requires continued allergen exposure to maintain its effect, while AIT remains effective up to at least 10 years after completion of the 3-year treatment program.9,24,32 The ultimate measure, therefore, of SOTI success is long-term compliance and whether allergic symptoms, particularly severe symptoms, occur. No subject reported use of emergency care. The second difference between SOTI and AIT, particularly pollen AIT, is the clinical end-point.
In SOTI, success is measured by the total absence of symptoms, while AIT is regarded as successful if the treatment effects are significantly better than placebo, even though the patients still experience symptoms. 32,33 Where pollen counts are low, all AIT-treated patients would be expected to be free of symptoms. This could be considered equivalent to partial tolerance in SOTI, so in this respect, both full and partial tolerance should be regarded as treatment success.

In conclusion, we have demonstrated that a CM-SOTI program can be successfully and safely implemented as routine clinical practice, with acceptable patient compliance during up-dosing, when allergic symptoms are common, and for up to 4 years after initiation of treatment. While there remains a risk to individual patients of severe allergic reactions, we believe that the benefit of CM-SOTI is greater than the continuing risks of persistent allergy. We propose therefore that CM-SOTI, like AIT, could be more widely offered to patients, once they have been made aware of its risks, with the protection of available emergency medication and physician support.
Fluorinated alcohols: powerful promoters for ring-opening reactions of epoxides with carbon nucleophiles

Ring-opening reactions of epoxides with carbon nucleophiles are valuable transformations for constructing functionalized carbon-carbon bonds. Epoxide ring-opening methods typically require Lewis acidic additives and/or strong nucleophiles to overcome the activation barrier for these reactions. Fluorinated alcohol solvents present a desirable alternative, enhancing the efficacy of these reactions with weak and neutral carbon nucleophiles by promoting electrophilic activation of the epoxide. We present here a thorough review of the literature regarding epoxide ring-opening reactions with carbon nucleophiles in fluorinated alcohol solvents, concluding with a few recent examples with aziridines.

Introduction

1. General

Nucleophilic additions to epoxides are a common theme in chemical reactivity, ranging from preparations of poly(ethylene glycol) from ethylene oxide 1 to DNA alkylations by carcinogenic epoxide metabolites of polycyclic aromatic hydrocarbons. 2 Despite the ring strain of the three-membered ring, epoxides are generally stable to long-term storage, owing to the thermodynamic strength of the carbon-carbon and carbon-oxygen bonds. Therefore, many reactions of carbon nucleophiles with epoxides require either Lewis acidic reagents or catalysts to activate the latent electrophilicity of the epoxide, and/or highly nucleophilic main group organometallics, such as organolithium or organomagnesium compounds, which are also strongly basic.

Our interest in this topic arises from previous work from our laboratory involving epoxide electrophiles with carbon nucleophiles. These transformations have used strong Lewis acids, such as trimethylsilyl triflate (TMSOTf) for the intramolecular tricyclization of diepoxyenolsilane 1 to tricyclic ketone 2, 3 and boron trifluoride-tetrahydrofuran (BF3-THF) for the intermolecular addition of alkyne 3 to epoxide 4 to produce alkynyl alcohol 5 (Scheme 1). 4 Both examples required careful control of reaction time and temperature to attain the optimized yields.

In the past dozen years, other laboratories have reported several classes of ring-opening reactions of epoxides with carbon nucleophiles, using fluorinated alcohol solvents to promote these reactions under milder conditions or with greater efficiency than previously reported, including some transformations similar to those depicted in Scheme 1. The substantive 2004 review of Bégué et al. described fluorinated alcohol solvents activating ring-opening reactions of epoxides with several classes of heteroatom nucleophiles, including amines, thiols, and carboxylic acids. 5-12 This review focuses on ring-opening reactions of epoxides with carbon nucleophiles promoted by the fluorinated alcohol solvents 1,1,1,3,3,3-hexafluoro-2-propanol (HFIP) and 2,2,2-trifluoroethanol (TFE, Figure 1). Our review concludes with a few examples of ring-opening reactions of aziridines with carbon nucleophiles in fluorinated alcohol solvents. To maintain focus, we have not extended this review beyond the heterocyclic epoxide and aziridine electrophiles, even though the analogous ring-opening reactions of aryl-substituted cyclopropanes with electron-rich aromatic nucleophiles in HFIP may be mechanistically related. 13
Nonafluoro-tert-butyl alcohol (NFTB) promotes epoxide ring-opening with oxygen nucleophiles, including regioselective cascade cyclizations of polyepoxides terminated by alcohols, 14 but we have not yet uncovered examples of NFTB with carbon nucleophiles adding to epoxides.

The electron-withdrawing fluoroalkyl groups are responsible for the low nucleophilicity and enhanced Brønsted acidity of fluorinated alcohol solvents. To illustrate, fluorinated alcohols exhibit enhanced acidity (pKa, Table 1, entry 1) and strong hydrogen bond donating ability (entry 3), especially with ethereal oxygens. 22,23 This results in an aggregation-induced decrease in the σ*(OH) orbital energy (Figure 2). For HFIP in the solid state, this complexation takes a helical form. 23 In addition, HFIP is a strongly ionizing solvent, and is more than five orders of magnitude less nucleophilic than ethanol (entries 5-7). 20,21,24,25 Fluorinated alcohol solvents are more expensive than their non-fluorinated congeners. However, bulk prices are currently as low as $100 USD per kilogram, with TFE less expensive than HFIP. The low boiling points are desirable for recovering and recycling fluorinated alcohol solvents but present an upper limit on reaction temperature under refluxing conditions. 26,27

Figure 2. Aggregation of HFIP, depicting hydrogen bond donation with 1,4-dioxane.

Fluorinated alcohol solvents are about one order of magnitude more toxic than ethanol or 2-propanol, with LD50 values ranging from 300-600 mg/kg in mice (Table 2). 28,29 The toxicity of TFE arises from metabolic oxidation pathways. 30 Most biological studies focus on the in vivo production of TFE and HFIP as metabolites from fluorinated anesthetics and other fluoroorganic drugs. 31

2. Ring-opening Reactions with Neutral Carbon Nucleophiles, with Fluorinated Alcohol Solvents as Substitutes for Lewis Acid Promoters or Catalysts

2.1. Intermolecular carbon-carbon bond-forming reactions

The alkylation of indoles with epoxides has typically required Lewis acid catalysis to activate epoxide C-O bond cleavage. In 2008, Westermaier and Mayr reported that indoles 6a-6d reacted with equimolar (R)-styrene oxide (7) in TFE solvent, without additional Lewis acid, to provide the alkylated products 8a-8d (Table 3). 32 The alkylations proceeded with regioselective addition at the benzylic carbon, and with complete stereospecificity, corresponding to inversion of configuration at the chiral carbon. For example, the parent indole (6a) reacted with styrene oxide (7) to give good yields of 8a at room temperature in TFE, with only trace amounts of the trifluoroethoxy byproduct 9 (entry 1). This reaction proceeded more rapidly and cleanly at reflux (entry 2). The corresponding reaction of 6a in aqueous acetone or aqueous ethanol solvent gave a lower yield of product 8a and required significantly longer reaction times (entries 3, 4). Methyl-substituted indoles 6b and 6c also gave good yields of the corresponding alkylated products 8b and 8c in TFE (entries 5-7) vs. other solvents (entries 8, 9). However, electron-withdrawing substituents on the indole diminished nucleophilicity, so that 5-bromoindole 6d required 72 hours for partial conversion, with the yield of 8d diminished by the competing reaction of TFE with the epoxide (entry 10).

Table 3. TFE-promoted alkylations of indoles 6 with (R)-styrene oxide (7).

The regioselectivity and stereospecificity outcomes were consistent with the fluorinated alcohol solvent stabilizing partial positive charge on the benzylic position in the transition state for the alkylation reaction (Figure 3).
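Both the "five orders of magnitude" nucleophilicity comparison noted above and the partial-cation picture of Figure 3 are conventionally expressed through the extended Grunwald-Winstein treatment of solvolysis rates. As a brief aside (a standard textbook relation and reference state, not values taken from this review):

\[
\log\frac{k}{k_0} = l\,N + m\,Y
\]

Here k is the rate constant in the solvent of interest, k_0 is the rate constant in the conventional reference solvent (80% aqueous ethanol), N is the solvent nucleophilicity, Y is the solvent ionizing power, and l and m are the substrate sensitivities to each. A solvent whose N value lies roughly five units below that of ethanol, as cited for HFIP, suppresses nucleophilic solvent capture by a factor on the order of 10^5 (for l near 1), while its large Y value accelerates ionization-driven pathways; together these effects rationalize why TFE and HFIP favor carbon-carbon bond formation with the external nucleophile over formation of solvent adducts such as 9.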
Westermaier and Mayr established that the scope of epoxide substrates with indoles was relatively limited: the reaction of 6c with trans-stilbene oxide (11) provided 13 in good yield, but the corresponding reaction with cis-stilbene oxide (12) proceeded slowly, albeit with stereospecificity, to generate the expected diastereomer 14 (Scheme 2). The aliphatic epoxide 1,2-epoxyhexane (15) underwent indole alkylation only sluggishly, at the unsubstituted carbon, to produce 16, with the competing addition of TFE giving byproduct 17.

Sun, Hong, and Wang extended the alkylation of indole (6a) to spiroepoxyoxindole 18, exploring several conditions with fluorinated alcohol solvents (Table 4). 33 TFE promoted the reaction even at room temperature (entry 1), with a better yield and shorter reaction time upon warming (entry 2). The more acidic and highly ionizing solvent HFIP gave a considerably faster reaction, albeit with a slight loss of regioselectivity (entry 3). However, selectivity was regained in water containing some HFIP (9:1 ratio) to provide 19 in excellent yield, with carbon-carbon bond formation at the more substituted position (entry 4). Other organic solvents such as dichloromethane, dimethyl sulfoxide, methanol, tetrahydrofuran, or toluene did not give product 19.

Westermaier and Mayr also described alkylations of pyrroles with styrene oxide (7) in TFE, affording regioisomer mixtures and double alkylation products with simpler pyrroles. Conversely, 1,2,5-trimethylpyrrole (20) selectively produced 21 as its major product (Scheme 3). 32 Li and Qu subsequently reported alkylation of 1,3,5-trimethoxybenzene (22) in HFIP, with the electron-rich aromatic compound out-competing the HFIP solvent to favor the aromatic alkylation product 23 over the solvent addition product 24. 34 Chiral non-racemic styrene oxide (R)-7 stereospecifically led to both products 21 and 23, with inversion of configuration. The yields were substantially lower with 1,4-dimethoxybenzene (35%) and with anisole (15%).

The three-atom + two-atom annulations of epoxides with alkenes to form tetrahydrofurans have typically required transition metal catalysts, likely operating by radical or Lewis acid processes. 35,36 However, Llopis and Baeza have reported catalyst-free conditions, simply by warming in HFIP solvent. 37 Although the scope is limited to aryl-substituted epoxides, the yields are modest, and the diastereoselectivity is generally low, the metal-catalyzed versions of this transformation also share these limitations. 35,36
The reactions of styrene (25) and alpha-methylstyrene (26) with racemic styrene oxide (7) in HFIP solvent exemplify these results (Scheme 4). With chiral non-racemic styrene oxide (R)-7, the reaction with styrene (25) gives racemic product 27, whereas only partial racemization occurs upon forming tetrahydrofuran 28 from alpha-methylstyrene (26) (not shown). Annulations with ethyl 3-methyl-3-phenylglycidate (30), commercially available as a mixture of diastereomers, give higher diastereoselectivities upon reaction with styrene (25) and 1,1-diphenylethene (29). The regioselectivity is consistent with the less substituted carbon of the alkene adding to the phenyl-substituted carbon of the epoxide, with carbon-oxygen bond formation at the phenyl-substituted carbon arising from the alkene reactant. The yields of tetrahydrofuran products are diminished by competing dimerizations and trimerizations of the aryl alkene, 38 and by solvent addition to the epoxide, forming byproducts including ether 24 (see Scheme 3 for structure).

Scheme 4. Three-atom + two-atom annulations of aryl-substituted epoxides with aryl alkenes.

2.2. Intramolecular carbon-carbon bond-forming reactions

Li and Qu reported the intramolecular alkylation of epoxides tethered to electron-rich aromatic rings, using fluorinated alcohol solvents to activate the epoxide. 34 Although the literature includes several Lewis acid-catalyzed methods for the cycloisomerization of 33 to 34, these workers explored the effects of highly ionizing solvents in the absence of Lewis acids (Table 5). In contrast to methanol or water (entries 1, 2), TFE promoted cyclization in excellent yield (entries 3, 4). HFIP was an even more effective promoter, affording the cyclized product in only five minutes at reflux (entries 5, 6). Nucleophilic addition occurred with high regioselectivity for the 6-endo mode of cyclization, and with inversion of configuration at the benzylic carbon, to provide the trans-disubstituted benzopyran 34 from the trans-disubstituted epoxide 33. The significant decrease in reaction time between TFE and HFIP is consistent with the increased acidity and ionizing power of HFIP, rather than hydrogen bond donation, which diminishes at higher temperature. 39,40

The epoxide substrate 35, with an acid-sensitive benzylic ether, provided a valuable demonstration of the power of this HFIP-promoted transformation (Scheme 5). The previous synthesis of compound 36, closely corresponding to the catechin natural products, required a specialized combination of Lewis acid and hydrogen bond donor catalysts (AuCl3 / AgOTf / thiourea). 41 In contrast, HFIP solvent promoted the slow but clean conversion of epoxide 35 into benzopyranol 36, arising from 6-endo-mode nucleophilic addition to the unsubstituted carbon of the epoxide. 34

The Magauer laboratory reported the acid-catalyzed cycloisomerizations of neopentyl epoxides tethered to electron-rich aromatic rings. 42 In the course of cyclization of substrate 37 to tetralin product 38, a methyl group underwent a 1,2-alkyl shift. Cyclizations were unsuccessful or proceeded in low yield in most solvents (Table 6, entry 1 for a representative example) but improved in fluorinated alcohol solvents, with HFIP outperforming TFE (entries 2 vs. 3). The optimized conditions used sulfuric acid in HFIP at 0 °C (entry 5). HFIP forms hydrogen bonds with the conjugate base of sulfuric acid, increasing its Brønsted acid activity. The phenyl substrate 39a also produced the corresponding tetralin 40a (Table 7, entry 1). 42
The reaction conditions tolerated aryl ether tethers in 39b to form chromane 40b (entry 2). With electron-donating substituents, cycloisomerization favored the para-isomers 40c-d with varying levels of regioselectivity (entries 3, 4). Aromatic rings with strongly electron-withdrawing substituents gave lower yields or did not cyclize. Neopentyl epoxide substrates containing cycloalkyl rings showed divergent behavior, depending on the degree of ring strain (Table 8; footnote a: substrates and products are racemic). 42 The cyclobutyl and cyclopentyl substrates 41a-41b favored the corresponding ring-expansion fused products 42a and 42b (entries 1, 2), whereas the cyclohexyl substrate 41c produced exclusively the spiro isomer 43c arising from a 1,2-methyl shift. The partitioning of the mechanistic pathways leading to products 42 and 43 is consistent with carbenium ion intermediates, 42 and, as depicted in Figure 4, the pathways diverge from a common carbenium ion intermediate.

Biomimetic polyene-epoxide polycyclizations have typically required Lewis acid promoters or catalysts. 46,47 In contrast, the Qu laboratory observed slow conversion of epoxydiene 46 when dissolved in HFIP, producing a mixture of tetracyclic product 47, the oxabicyclo[2.2.1]heptane byproduct 48, and an inseparable mixture of partially cyclized dienes 49 (Table 9, entry 1; footnote a: n.r. = not reported). 48 Remarkably, epoxydiene 46 was inert in other fluorinated alcohol solvents (entries 2, 3). p-Toluenesulfonic acid (p-TSA) rapidly catalyzed the reaction of 46, but gave a mixture favoring the partially cyclized byproducts 49 (entry 4). The reaction rate dramatically increased, with improved selectivity for tetracyclic product 47, upon gradual addition of epoxydiene 46 to HFIP solutions of soluble organic salts with fluorine-containing non-nucleophilic anions (entries 5-7). Tetraphenylphosphonium tetrafluoroborate (Ph4PBF4) in HFIP gave the best yield of compound 47 (entry 7). Although excess water hydrolyzed the epoxide of 46 to form a diol, up to 20 equivalents of water were compatible with tricyclization (entry 8). Deliberately adding catalytic hydrogen fluoride to HFIP gave a similar enhancement in the reaction rate (entry 9), supporting a proposal that BF4- and PF6- provided trace amounts of HF. However, replacing HFIP with dichloromethane, while including Ph4PBF4 and HF as additives, gave no reaction (entry 10). The authors concluded that HFIP played an essential role in promoting the polycyclization, likely by stabilizing the fluoride conjugate base.

These scientists then applied these conditions to the cyclization of squalene oxide (50), the biosynthetic precursor of lanosterol and other steroid natural products (Scheme 6). 48-53
The formation of compound 52 is consistent with a mechanism involving concerted cyclization of the three alkenes closest to the epoxide, thereby generating a tertiary carbenium ion 51. The authors proposed that the cations, including the protonated epoxide and the tricyclic intermediate cation 51, were stabilized by the non-nucleophilic solvent HFIP and/or the non-nucleophilic tetrafluoroborate anions. From 51, an intramolecular cascade of face-selective 1,2-hydride and 1,2-methyl migrations followed by deprotonation generated the principal product 52.

In summary, although most epoxide alkylations are limited to electron-rich aromatic compounds, fluorinated alcohol solvents effectively replaced the Lewis acidic reagents and catalysts customarily used for these transformations. The intramolecular alkylations of epoxides tethered to polyenes have demonstrated the powerful combination of additive Brønsted acid sources with HFIP.

3. Ring-opening Reactions with Organopalladium Intermediates arising from Directed C-H Functionalization

Directed C-H functionalization of aromatic rings has traditionally required strongly basic reagents, such as tert-butyllithium combined with N,N,N',N'-tetramethylethylenediamine (TMEDA). Regioselectivity ortho to a Lewis basic directing group (DG) arises from coordination with the electropositive metal, bringing the electronegative alkyl ligand into proximity to the aromatic C-H bond. 54,55 The resulting functionalized aryllithium intermediates react with many electrophiles, and the literature documents several examples with epoxides. 56 However, transition-metal catalysts offer milder conditions for directed C-H functionalization. The literature provides several examples in which HFIP has favored palladium acetate-catalyzed directed metalations of benzene rings, coupled with in situ alkylation of epoxides, presumably also activated by HFIP for nucleophilic addition (Figure 5). 57-61 A variety of Lewis basic directing groups (DG) are effective, including 2-pyridyl (54) and a variety of carbonyl- or carboxyl-derived compounds 55-59.

The methods published to date have several common features:
• All use palladium acetate as the catalyst,
• The methods show broad scope with many substituents R1 on the benzene ring, and
• A variety of monosubstituted and 1,1-disubstituted epoxides give good yields (Figure 6).

These methods also share common substrate limitations:
• To date, heterocyclic aromatic rings are not functionalized under these conditions,
• 1,2-Disubstituted epoxides generally do not react, or give substantially lower yields, and
• No examples have been reported with styrene oxide (7) or other aryl epoxides.

3.1. Scope of reactions and conditions

In 2015, the Kuninobu and Kanai laboratories collaboratively reported the regioselective alkylation of 2-phenylpyridine (54) and derivatives with epoxides including phenyl glycidyl ether (60), catalyzed by palladium acetate (Scheme 7). 57
The initial solvent choice, acetic acid, gave low yields due to acid-promoted decomposition of the epoxide. Diluting acetic acid with HFIP provided the substituted phenethyl alcohol 65 in excellent yield, provided that two equivalents of epoxide were used at room temperature, as the epoxide decomposed under these conditions at higher temperatures. These workers established that HFIP alone was not sufficient to promote this transformation. In addition to the 54 and 60 example, they also reported the analogous transformation with the N-methoxyamide 66 and methyl glycidate (63). In this example, the lactone ring of product 67 is presumably formed via acid-catalyzed intramolecular transacylation after the carbon-carbon bond-forming step.

Later in 2015, the Yu laboratory disclosed the directed alkylation of benzoic acids, including meta-toluic acid (68), with a broad scope of epoxide substrates (Scheme 8). 58 Essential components of this reaction system included palladium acetate, potassium acetate, and HFIP solvent. Cesium acetate led to lower yields, and little or no product was formed when using sodium acetate or lithium acetate. Yields increased from 75% to 99% with the mono-N-protected amino acid ligand N-acetyl-tert-leucine. In a highly optimized example with benzyl glycidyl ether (61), product 69 was isolated in 99% yield. At room temperature, the reaction proceeded more slowly and required higher catalyst loading, but produced the intermediate hydroxyacid 70. This compound underwent HFIP-promoted lactonization at reflux to form 69. By avoiding acetic acid as a cosolvent, a broad range of epoxide substrates were compatible with HFIP, even under reflux. This carboxyl directing group method was compatible with cyclohexene oxide (71), producing trans-fused 72 in satisfactory yield.

In 2020, the Cheong and Lee laboratories collaboratively published the corresponding directed alkylations with equimolar amounts of epoxides, using N-acyl aniline derivatives as the directing groups, including acetanilide (58), the corresponding N,N-dimethylurea 59, and 1-phenylpyrrolidin-2-one (77) (Scheme 10). 60 Notably, the Lewis basic oxygens of these directing groups were one atom further removed from the benzene carbon undergoing C-H functionalization, yet the optimized conditions were similar to those reported for most of the other directing groups.

3.2. Mechanistic proposals

Wang, Kuninobu, and Kanai reported the relative rates of reaction of 2-phenylpyridine (54) vs. 54-d5 with phenyl glycidyl ether (60), measuring a primary kinetic isotope effect kH/kD = 2.6, indicating that the rate-determining step was C-H bond activation (Scheme 11). 57 These scientists prepared a plausible dimeric palladacycle intermediate 79, but this palladacycle did not promote the ring-opening reaction with epoxide 60. They speculated that oxidation to Pd(IV) might be required for alkylation of epoxides.
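For context on this interpretation, a standard zero-point-energy argument (a textbook estimate, not a calculation reported by these authors) sets the semiclassical ceiling for a primary kinetic isotope effect at room temperature:

\[
\frac{k_\mathrm{H}}{k_\mathrm{D}} \lesssim \exp\!\left(\frac{hc\,(\tilde{\nu}_\mathrm{CH}-\tilde{\nu}_\mathrm{CD})}{2k_\mathrm{B}T}\right) \approx \exp\!\left(\frac{hc \times 800\ \mathrm{cm^{-1}}}{2k_\mathrm{B} \times 298\ \mathrm{K}}\right) \approx 7
\]

using representative stretching wavenumbers of roughly 3000 cm^-1 for C-H and 2200 cm^-1 for C-D. The observed value of 2.6 therefore lies comfortably in the primary range (secondary isotope effects are typically below about 1.4), consistent with C-H cleavage in the rate-determining step, while falling short of the semiclassical maximum, as is common for nonlinear or unsymmetrical hydrogen-transfer transition states.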
The Fang laboratory explored these results via density functional theory (DFT) studies. 61 These scientists found that the lower energy pathway involved coordination of the epoxide to a mononuclear palladacycle to form intermediate 80, followed by oxidative addition of the coordinated epoxide to give the Pd(IV) metalloxetane 81 (Figure 7, part a). The catalytic cycle concluded with proton transfer to form intermediate 82, followed by reductive elimination to yield the palladium complex with the directing group, 83. In contrast, mechanisms involving only Pd(II) intermediates (Figure 7, part b) were found to be higher in energy.

In contrast, the Yu laboratory conducted a stoichiometric experiment with the meta-toluic acid-derived palladacycle 85, which reacted with benzyl glycidyl ether (61) to produce the same compound 69 arising from catalytic conditions (Scheme 12). Moreover, the trans-stereochemistry of 72, arising from the reaction with cyclohexene oxide (71), suggested that the arylpalladium intermediate 85 reacted with inversion of configuration at the reactive carbon of the epoxide, without requiring a change in oxidation state from Pd(II). 58 In summary, we note that the carboxylic acid readily forms the anionic carboxylate under the reaction conditions. This makes the attached aryl ligand in Pd(II) complex 85 more nucleophilic for alkylation with epoxides. Neutral directing groups may uniquely require a mechanism involving Pd(IV) for epoxide alkylations.

4. Ring-opening Reactions with Terminal Alkyne Nucleophiles

A pair of collaborative studies from the laboratories of Sedaghat and Khalaj have described three-component coupling / cyclization methods, combining terminal alkynes, epoxides, and the active methylene compounds malononitrile (89) or dimethyl malonate (90) (Table 10). 62,63 Several Cu(I) catalysts gave good to excellent yields of the highly functionalized pyrans 91 or 92, corresponding to the active methylene reactant. In all cases, the best yields arose with HFIP as solvent, although satisfactory results were also reported with polyethylene glycol 400 (PEG 400, entries 5, 13). Both solvents can activate the electrophilic epoxide by hydrogen bonding with the epoxide oxygen; however, these workers did not propose a role for HFIP in alkyne activation. The non-nucleophilic nature of HFIP apparently prevented the competing addition of solvent to the epoxide, even in the presence of tertiary amine. The stereochemistry of the trisubstituted alkenes of 91-92 was not established in all cases, although the 1H and 13C NMR data suggested that the isolated products may correspond to only one alkene stereoisomer.

• This suggests that HFIP facilitated the copper-mediated formation of a copper acetylide intermediate, which then added to a hydrogen-bonded complex of the epoxide with HFIP, giving 98.

b) The combination of 4-phenylbut-3-yn-1-ol (99) and dimethyl malonate (90) catalyzed by (IPr)CuCl, in the absence of base and in the polar aprotic solvent N,N-dimethylformamide (DMF), gave the malonate addition product 100, albeit in modest yield. The catalytic loading of (IPr)CuCl was not specified in this experiment.

• The (IPr)CuCl catalyst may have deprotonated the malonate (the imidazolium cation has pKa 21.1) 64 and promoted regioselective malonate addition to the alkyne, giving 100, in a step that did not require HFIP.

c) The combination of alkynyl alcohol 99 and dimethyl malonate (90) catalyzed by (IPr)CuCl in the presence of tertiary amine and HFIP produced the dihydropyran 101 in good yield.
• HFIP alone may have promoted the final intramolecular transacylation and tautomerization; the role of the tertiary amine in this scenario was unclear.

In summary, the combination of the moderately acidic HFIP solvent with a basic tertiary alkylamine, without solvent addition to the epoxide, is quite interesting, as similar conditions promote hexafluoroisopropoxide nucleophilic addition to phosgene and thionyl chloride electrophiles. 65,66 This work merits additional investigation and optimization, particularly the direct addition of alkyne to epoxide in the absence of an active methylene compound.

5. Ring-opening Reactions of Aziridines with Carbon Nucleophiles in Fluorinated Solvents

This review concludes with extensions of two approaches described earlier in this review, applied to aziridine electrophiles and promoted by HFIP. In 2019, the Zhao laboratory reported palladium-catalyzed C-H functionalization of 3-methoxybenzoic acid (102) and other arylcarboxylic acids, reacting with a relatively broad range of N-tosylaziridines, including the monosubstituted 103, producing the protected beta-arylethylamine 105, an important substructure in medicinal chemistry (Scheme 13). 67 The principal competing process was ring-opening of the aziridine by the carboxylic acid, which was suppressed by decreasing the cesium carbonate loading to substoichiometric amounts. The conversion increased with 2,4,6-trimethylbenzoic acid (104) as a substoichiometric additive. A solvent screen revealed that a protic alcohol solvent was required, with HFIP giving the best yields.

In 2019, Samzadeh-Kermani described an organocatalytic synthesis of tetrahydropyridone imines, including compound 107 (Scheme 14). 68 The carbon nucleophile was an aryl or alkyl isonitrile, with cyclohexyl isonitrile (106) as a representative case. Several Lewis acids gave competing isomerization of the monosubstituted aziridine 103 to an N-tosylimine, but tetrabutylphosphonium acetate in refluxing HFIP promoted aziridine ring-opening with nucleophilic addition of the isonitrile. The base for deprotonating malononitrile (89) may have been the anionic N-tosylamide from ring opening, or the acetate counteranion. Nucleophilic addition of the dinitrile-stabilized carbanion to the alkylnitrilium cation from the initial isonitrile addition step explains the remaining carbon-carbon bond-forming step. The author proposed that the tetrabutylphosphonium cation may coordinate with one of the nitriles to promote intramolecular nucleophilic addition of the tosylamide to close the tetrahydropyridine ring.
Conclusions

This review describes the benefits of fluorinated alcohol solvents in promoting the ring-opening reactions of epoxides and aziridines with carbon nucleophiles. The advances presented herein fall into two categories:
• significant electrophilic activation, due to the formation of complex structures and aggregations, such as the formation of an activating HFIP complex with epoxides, thereby allowing reactions with weak and neutral nucleophiles; and
• safety and environmental benefits, especially where fluorinated alcohol solvents replace Lewis acid reagents, and even more so when the solvent is recycled.

We anticipate that other researchers will find that fluorinated alcohol solvents enable other synthetically valuable transformations that have not been previously developed. The role of these solvents in activating C-H bonds is not well established, warranting further investigation to increase our understanding of fluorinated alcohol solvents. Additionally, there is clearly room for significant future work in aziridine ring-opening reactions in fluorinated alcohols, an area with potential for synthesizing pharmaceutical substances.

Figure 3. The electrophilic aromatic alkylation mechanism promoted by hydrogen bonding and the ionizing power of TFE.
Scheme 12. A stoichiometric experiment with palladacycle 85 from meta-toluic acid, and a mechanistic proposal based on the stereochemistry of 72.
Scheme 13. Palladium-catalyzed C-H functionalization of an arylcarboxylic acid with addition to an N-tosylaziridine, promoted by HFIP.
Table 2. Toxicity of HFIP and TFE.
Table 5. Solvent screening for cycloisomerization of epoxide 33 (a: remainder was diol from epoxide hydrolysis).
Lectin Receptor-like Kinase Signaling during Engineered Ectomycorrhiza Colonization

Mutualistic association can improve a plant's health and productivity. G-type lectin receptor-like kinase (PtLecRLK1) is a susceptibility factor in Populus trichocarpa that permits root colonization by a beneficial fungus, Laccaria bicolor. Engineering PtLecRLK1 into non-host plants likewise permits L. bicolor root colonization, similar to Populus trichocarpa. The intracellular signaling reprogrammed by PtLecRLK1 upon recognition of L. bicolor, which allows for the development and maintenance of symbiosis, is yet to be determined. In this study, phosphoproteomics was utilized to identify phosphorylation-based signaling pathways associated with PtLecRLK1 recognition of L. bicolor in transgenic switchgrass roots. Our findings show that PtLecRLK1 in transgenic plants modifies the chitin-triggered plant defense and MAPK signaling, along with a significant adjustment in phytohormone signaling, ROS balance, endocytosis, cytoskeleton movement, and proteasomal degradation, in order to facilitate the establishment and maintenance of L. bicolor colonization. Moreover, protein–protein interaction data implicate a cGMP-dependent protein kinase as a potential substrate of PtLecRLK1.

Introduction

Plants coevolve simultaneously with diverse microbial communities [1-4] and establish molecular mechanisms to either permit or prevent the establishment of a particular microorganism [5,6]. Because microbial interactions can benefit plant sustainability and productivity, it is important to understand the genetic and environmental factors that determine interactions and their outcome on plants and their surrounding environments. Understanding the ecological and evolutionary principles governing these interactions provides an opportunity to engineer microbes and plants to achieve more sustainable and productive ecosystems [7] and to mitigate risks associated with introducing microbes into non-native environments [8,9]. Quite remarkably, recent studies have shown that a single plant host gene can be genetically engineered to selectively prevent [10] or permit [11,12] colonization by a particular fungus. How these 'susceptibility factors' evolved to functionally override all other levels of plant immunity is poorly understood. In a recent study, we applied quantitative trait locus (QTL) mapping in poplar, which is an important biofeedstock for pulpwood, lumber, and bioenergy, and identified a susceptibility factor implicated in fungal root colonization [12]. It was determined that Populus trichocarpa encodes a G-type lectin receptor-like kinase (PtLecRLK1) that permits root colonization by Laccaria bicolor, a beneficial ectomycorrhizal (ECM) fungus that provides poplar with soil nutrients and water in exchange for carbon. Most intriguingly, genetically engineering PtLecRLK1 into non-host plants (Arabidopsis and switchgrass) fully permits L. bicolor root invasion and the establishment of intracellular hyphae (referred to as the Hartig net), a prerequisite for symbiosis [11,12]. Upon fungal recognition, plasma membrane (PM)-localized receptor-like kinases (RLKs) trigger coordinated signaling pathways for an extensive new transcriptional program in the plant host, particularly in the root, for cellular remodeling and metabolic alterations to accommodate the growing interaction [13-17].
A multi-omics assessment of PtLecRLK1 transgenic switchgrass roots identified dramatic changes in host transcription and translation, and the concurrent changes in metabolite abundance, that occurred with L. bicolor colonization [11]. Engineering PtLecRLK1 into a switchgrass plant changes its susceptibility to L. bicolor by reprogramming the expression of transcripts, proteins, and metabolites associated with intracellular transport, nutrient assimilation, carbohydrate metabolism, cell cycle and wall organization, and defense-related processes. Yet, despite this advancement, it remains to be determined how PtLecRLK1 recognition of L. bicolor alters intracellular signaling to reprogram the host for the development and maintenance of symbiosis. Therefore, the goals of this study were to identify phosphorylation-based signaling associated with PtLecRLK1 transgenic switchgrass roots and to develop a conceptual model for the relevant signaling pathways. To this end, phosphoproteomics data were generated for wild-type and PtLecRLK1 transgenic switchgrass roots two months post-inoculation with L. bicolor.

Plant Fungal Growth and Proteomics Sample Preparation

Switchgrass PtLecRLK1 transgenic lines were generated as described previously [11]. Transgenic and wild-type switchgrass were co-cultured with L. bicolor liquid inoculum. Two months post-inoculation, root tissues were collected for mass spectrometry, with at least three biological replicates. Each replicate was flash-frozen and ground under liquid nitrogen. Samples were processed for mass spectrometry measurement as described previously [11]. Briefly, the samples were dissolved in lysis buffer containing 4% SDS in 100 mM ammonium bicarbonate (ABC) buffer along with 1X Halt Protease Inhibitor Cocktail (Thermo Scientific; Waltham, MA, USA). The sample mixture was subjected to boiling, sonication, and centrifugation. The supernatant was collected and reduced with 10 mM dithiothreitol for 30 min and subsequently alkylated with 30 mM iodoacetamide in the dark for 15 min. The proteins were then isolated through a chloroform-methanol protein extraction protocol outlined previously [18] and reconstituted in 4% sodium deoxycholate (SDC) solution. The protein concentration was quantified using a NanoDrop instrument (Thermo Scientific). The proteins were then digested using two consecutive aliquots of sequencing grade trypsin, for three hours and then overnight at 37 °C, at a ratio of 1:75 (trypsin to sample protein). Once digestion was complete, SDC was removed through precipitation with 1% formic acid and washing with ethyl acetate. The resulting peptides were lyophilized via SpeedVac (Thermo Scientific), desalted on Pierce peptide desalting spin columns (Thermo Scientific), and resuspended in 0.1% formic acid. A portion of the tryptic peptides (15 µg) was allocated for the previously published proteomics measurement [11], while the remaining peptides were lyophilized and then resuspended in the manufacturer-recommended buffer for phosphopeptide enrichment. Phosphopeptide enrichment was carried out using phosphopeptide enrichment kits (Catalog number: A32992). Finally, the enriched phosphopeptides were lyophilized and then resuspended in 0.1% formic acid for phosphoproteomics measurement.

LC-MS/MS Analysis and Proteome Database Search

All samples were analyzed using an RSLCnano UHPLC system (Thermo Scientific) coupled with a Q Exactive Plus mass spectrometer (Thermo Scientific).
The peptides were separated using a biphasic column (strong cation exchange and reversed phase) connected to a nanospray emitter with a 75 µm inner diameter that was filled with 25 cm of 1.7 µm Kinetex C18 resin. For the phosphoproteome measurement, a single 1 µg injection of phospho-enriched peptides was analyzed with a 180 min gradient at a salt cut of 500 mM ammonium acetate. The Thermo Xcalibur software was used to acquire MS data in data-dependent acquisition (DDA) mode with MS2 acquisition set at top 10. All mass spectrometer data were processed in Proteome Discoverer 2.4 using MS Amanda [19] and Percolator [20]. MS data were searched against the P. virgatum and L. bicolor reference proteome databases from the DOE Joint Genome Institute (JGI), supplemented with the transgene and common laboratory contaminant sequences. The MS Amanda parameters for phosphopeptide identification were set as follows: MS1 tolerance = 10 ppm; MS2 tolerance = 0.02 Da; missed cleavages = 2; static modification = carbamidomethyl (C, +57.021 Da); dynamic modifications = oxidation (M, +15.995 Da) and phosphorylation (S/T/Y, +79.966 Da). At both the peptide and PSM levels, the false discovery rate (FDR) was set to 1%.

Data Analysis

To perform differential abundance analysis on phosphorylated peptides, the peptide table was exported from Proteome Discoverer, and peptides carrying a phosphorylation modification were extracted from it. These data were log2 transformed and LOESS normalized using the previously published InfernoRDN tool [21]. Additionally, the data matrix was mean-centered across all conditions. Only peptides present in at least two out of three replicates (in any experimental condition) were deemed valid for further analysis. Missing data were imputed using Perseus software [22], with random numbers drawn from a normal distribution with parameters width = 0.3 and downshift = 2.8. The resulting matrix was subjected to Welch's t-test followed by Benjamini-Hochberg FDR correction to evaluate differentially abundant proteins between the experimental groups. Finally, the differentially abundant phosphopeptides were mapped to their respective proteins to identify differentially abundant phosphoproteins. (A minimal sketch of this pipeline is given below.)
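To make the sequence of steps above concrete, here is a minimal sketch in Python of the same workflow (log2 transform, normalization, replicate filtering, Perseus-style downshifted-normal imputation, Welch's t-test with Benjamini-Hochberg correction). The column names, the per-sample median normalization standing in for InfernoRDN's LOESS step, and the example thresholds are illustrative assumptions, not the exact implementation used in this study.

    # differential_phospho.py - illustrative sketch of the differential-abundance
    # test; not the exact InfernoRDN/Perseus workflow used in the study.
    import numpy as np
    import pandas as pd
    from scipy.stats import ttest_ind
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(0)

    def preprocess(df, wt_cols, tg_cols):
        """Log2-transform, normalize, filter, and impute a raw intensity table."""
        data = np.log2(df[wt_cols + tg_cols].replace(0, np.nan))
        # Per-sample median alignment (simple stand-in for LOESS normalization).
        data = data - data.median(axis=0) + data.median(axis=0).mean()
        # Keep peptides observed in >=2 of 3 replicates in at least one condition.
        keep = (data[wt_cols].notna().sum(axis=1) >= 2) | \
               (data[tg_cols].notna().sum(axis=1) >= 2)
        data = data[keep]
        # Perseus-style imputation: draw from N(mean - 2.8*sd, (0.3*sd)^2) per column.
        for col in data.columns:
            mu, sd = data[col].mean(), data[col].std()
            n_missing = int(data[col].isna().sum())
            data.loc[data[col].isna(), col] = rng.normal(mu - 2.8 * sd, 0.3 * sd, n_missing)
        return data

    def welch_bh(data, wt_cols, tg_cols, q_cut=0.05, min_lfc=1.0):
        """Welch's t-test per peptide with Benjamini-Hochberg FDR correction."""
        _, p = ttest_ind(data[tg_cols], data[wt_cols], axis=1, equal_var=False)
        _, q_vals, _, _ = multipletests(p, method="fdr_bh")
        lfc = data[tg_cols].mean(axis=1) - data[wt_cols].mean(axis=1)
        return pd.DataFrame({"log2FC": lfc, "p": p, "q": q_vals,
                             "significant": (q_vals < q_cut) & (lfc.abs() > min_lfc)})

The q < 0.05 and |log2FC| > 1 defaults mirror the cutoffs quoted in the Results; significant peptides would then be mapped back to their master protein accessions, as described above.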
Bimolecular Fluorescence Complementation (BiFC) Assay

The BiFC assay was performed in Populus protoplasts as described by Zhang et al., 2020 [23]. In brief, the CDSs of PtLecRLK1 and its candidate substrate proteins were cloned into CFPc (pUC119-CD3-1068) and VENUSn (pUC119-CD3-1076) vectors through Gateway cloning, respectively. A total of 10 µg of CFPc-PtLecRLK1 plasmid and 10 µg of VENUSn-substrate plasmid were co-transfected into Populus protoplasts. After 18-20 h of dark incubation, the reconstituted YFP signals were detected with a Zeiss LSM 710 (Jena, Germany) confocal microscope. ZEN software 2012 SP5 (Jena, Germany) was used for image processing.

Results and Discussion

Transgenic expression of PtLecRLK1 can convert non-host plant species into a host of L. bicolor. These transgenic plants can develop a hyphal network between plant cells, improve a plant's fitness in marginal growth conditions, and downregulate pathogenic defense [11]. These findings imply the potential of engineering mycorrhizal symbiosis to improve plant health or productivity using PtLecRLK1. To uncover how PtLecRLK1 regulates this beneficial plant-fungal interaction, we performed phosphoproteomics analysis in transgenic (host) and wild-type (non-host) switchgrass. Because this plasma-membrane receptor is predicted to recognize fungal-cell-wall-derived ligands to suppress plant immunity for long-term colonization, we posit that the resulting signaling cascades are not transient but persistent. Therefore, we sought to characterize the resulting changes in phosphorylation signaling associated with established mycorrhization 2 months post-inoculation.

Across the experimental conditions, 284,588 peptide spectrum matches (PSMs) were identified, of which 75% had phosphorylation evidence (Figure 1A). These PSMs were mapped to 5140 phosphopeptides at 4469 unique modification sites across 2760 phosphoproteins (Supplemental Table S1). A majority (87%) of these sites belong to serine, 12% belong to threonine, and the remainder belong to tyrosine (Figure 1A). Most modification sites had a localization probability score of >90% (Figure 1A). A Welch's t-test with an FDR correction at q < 0.05 and an absolute log2 fold change greater than 1 was implemented to identify phosphopeptide abundances that differed between transgenic PtLecRLK1 roots and wild-type (WT) roots during L. bicolor interaction. This quantitative analysis identified 1257 differentially abundant phosphopeptides (Figure 1B), of which 610 and 650 phosphopeptides were significantly up- and down-regulated, respectively, in transgenic PtLecRLK1 roots compared to WT (Figure 1B) (Supplemental Tables S2 and S3). These phosphopeptides correspond to 603 and 647 differentially abundant phosphoproteins, respectively. The interpretation of quantitative phosphoproteomics can be challenging because differential phosphorylation events could be confounded by simultaneous changes in protein abundance. Therefore, proteins previously determined to be differentially regulated in this pairwise comparison [11] were compared against the proteins with a significant change in phosphorylation. This comparison identified 73 phosphorylated proteins that were also observed to have regulated protein abundances, suggesting that the majority of the differentially phosphorylated proteins are regulated exclusively at the post-translational level (Figure 1C). These 73 proteins, impacted by several levels of regulation, were excluded from the additional analyses. The KEGG enrichment analysis identified MAPK signaling, endocytosis, and phosphatidylinositol signaling as enriched pathways at FDR 0.05 among the proteins that were uniquely regulated at the post-translational level.
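For readers unfamiliar with how such pathway enrichment is typically scored, the generic formulation (a standard hypergeometric test, stated here as background rather than as a detail reported by the authors) asks how surprising it is to observe k pathway members among the n regulated proteins, given K pathway members among all N quantified proteins:

\[
P(X \ge k) = \sum_{i=k}^{\min(n,K)} \frac{\binom{K}{i}\binom{N-K}{n-i}}{\binom{N}{n}}
\]

One such p-value is computed per KEGG pathway, and the resulting set of p-values is corrected for multiple testing (for example, by the Benjamini-Hochberg procedure) before applying an FDR cutoff such as the 0.05 threshold quoted above.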
In general, a large number of the phosphorylation modifications occurred on proteins and residues that have been previously implicated in plant defense and symbiosis (Supplementary Table S1). For instance, we observed a change in phosphorylation for CERK1, which is one of the most studied RLKs in fungal recognition [14,24,25] because it recognizes the chitin found in most fungal cell walls. In Arabidopsis, AtCERK1 has been mostly studied for its role in defense-related chitin recognition, where chitin recognition results in AtCERK1 phosphorylation at amino acids S266, S268, S270, S274, and T519 [24]. In our study, LC-MS/MS measurements identified a phosphorylation in the AtCERK1 homolog (Pavir.6NG335100), and this modification was only observed in transgenic roots colonized by L. bicolor (Figure 2A). Sequence alignment analysis shows that the identified S19/T20 phosphorylation aligns well with site S274 from AtCERK1 (Figure 2B). Chitin-triggered plant defense mediated by CERK1 leads to a MAPK signaling cascade, and our analysis identified several phosphorylated proteins involved in the MAPK signaling cascade that were only observed in transgenic PtLecRLK1 roots during L. bicolor interaction (Figure 2A). In general, this observation suggests that chitin-triggered plant immunity through CERK1 is active. It is plausible that these molecular signatures result from a higher amount of chitin being exposed to plant root cells due to enhanced root colonization of transgenic PtLecRLK1 plants. Alternatively, it is possible that CERK1 plays an active role in mediating L. bicolor symbiosis within transgenic PtLecRLK1 roots. Recently, OsCERK1 was implicated in a symbiotic relationship and was shown to be necessary for promoting the colonization of AM fungi during symbiosis [15,26]. Unlike Arabidopsis, the rice and switchgrass CERK1 homologs lack the LysM domains necessary for chitin recognition. Therefore, it is plausible that the observed phosphorylation alters a coreceptor specific to enabling symbiosis [15,26]. Because we have previously shown that transgenic PtLecRLK1 Arabidopsis roots can be colonized by L. bicolor, the presence or absence of the CERK1 LysM domain is less likely to be a crucial aspect of this engineered symbiosis, and further work is needed to determine the impact of the observed protein modification.
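A site-level correspondence such as the S19/T20-to-S274 mapping above is commonly checked by pairwise alignment followed by coordinate mapping. The following is a minimal sketch of that generic procedure using Biopython's global aligner; the gap scores and the short placeholder sequences are illustrative assumptions, not the actual AtCERK1 and Pavir.6NG335100 sequences or the alignment tool used in this study.

    # map_site.py - map a residue position from one protein onto another via
    # global pairwise alignment (illustrative; placeholder sequences).
    from Bio import Align

    def map_position(query, target, query_pos):
        """Return the 0-based target position aligned to query_pos, or None if gapped."""
        aligner = Align.PairwiseAligner()
        aligner.mode = "global"
        aligner.open_gap_score = -10
        aligner.extend_gap_score = -0.5
        aln = aligner.align(query, target)[0]  # best-scoring alignment
        q_blocks, t_blocks = aln.aligned       # coordinates of matched blocks
        for (qs, qe), (ts, te) in zip(q_blocks, t_blocks):
            if qs <= query_pos < qe:
                return ts + (query_pos - qs)
        return None  # the queried site falls in a gap

    # Placeholder fragments; real use would load the full protein sequences.
    seq_a = "MKLKISLVLLAFLLLSASS"  # hypothetical fragment
    seq_b = "MKLSISLALLTFLLSTASS"  # hypothetical fragment
    print(map_position(seq_a, seq_b, 3))  # where does residue 4 of seq_a land?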
The substrate(s) of PtLecRLK1 are currently unknown. To identify putative downstream targets, protein-protein interaction (PPI) information was collected for PtLecRLK1 (Potri.T022200; v3.1) from the STRING database [27,28]. A cGMP-dependent protein kinase (PKG) (Potri.018G084900) was the only PPI reported (Figure 3A). The homolog of this protein in switchgrass (Pavir.1NG172300) was uniquely phosphorylated in transgenic PtLecRLK1 roots during L. bicolor colonization. To further assess whether this PKG is a substrate protein of PtLecRLK1, a bimolecular fluorescence complementation (BiFC) assay was performed in poplar protoplasts, and the assay suggests that PtLecRLK1 and this PKG interact with each other (Figure 3B). In plants, the role of PKG remains poorly understood. Unlike the PKGs expressed in animals, those encoded in plant genomes are structurally unique because they contain an additional type 2C protein phosphatase (PP2C) domain [29]. PP2C-containing proteins are frequently shown to play crucial roles in biotic and abiotic stress responses, plant immunity, and plant development [30]. Recently, the Arabidopsis homolog of this PKG protein was described as an interacting protein of the calcium-associated protein kinase 1 (CAP1) and was associated with ammonium-regulated root hair growth [31].

Interestingly, four ammonium transporters (i.e., two isoforms of AMT1-1, Pavir.1KG399605 and Pavir.7KG243500, and two isoforms of AMT2, Pavir.9KG091401 and Pavir.9NG008902) were significantly decreased in phosphorylation abundance in transgenic PtLecRLK1 switchgrass roots when compared to WT. These AMT proteins are dynamically regulated, existing in either an active or an inactive transporter state, and their activity is controlled by the phosphorylation of a conserved threonine residue in the C-terminus [32] (Figure 2B). Phosphorylation of this threonine negatively correlates with root ammonium uptake [32]. The decreased phosphorylated protein abundance of all AMTs suggests that transgenic PtLecRLK1 roots increase the uptake of ammonium during L. bicolor colonization. Inside the plant root cell, ammonium is assimilated into glutamine with the help of glutamine synthetase (GS; Pavir.9KG542200) (Figure 2B), and glutamine acts as a key nitrogen (N) donor for cellular N metabolism and storage. Phosphorylation of GS has been shown to substantially decrease GS activity [33]. Intriguingly, our analysis showed a significant decrease in the phosphorylation of GS in L. bicolor-inoculated transgenic plants compared to WT, suggesting higher GS activity in the transgenic plants. Regulation of glutamine in transgenic PtLecRLK1 roots is further corroborated by the previous metabolomics analysis, which showed an increased glutamine abundance in transgenic PtLecRLK1 switchgrass roots colonized by L. bicolor [11]. As such, these results lend support to L. bicolor playing a role in host ammonium acquisition and nitrogen metabolism, which is to be anticipated for ECM symbiosis, and provide insights into the concomitant cellular reprogramming post-invasion.

Figure 2. (B) Simplified model showing the regulation of ammonium uptake through the ammonium transporter via phosphorylation: phosphorylation of the conserved threonine negatively correlates with root ammonium uptake. A simplified sequence alignment for AMT1-1, with the conserved threonine site, is shown on the right. (See Supplemental Table S3 for gene alias information.)
To further advance our phosphorylation network, PPI information was then collected from the STRING database for PKG. In contrast to our PtLecRLK1 search, there is a much larger number of probable substrates (71 interacting partners identified in poplar with a STRING experimentally and co-expression determined score of >0.90), and this suggests that PKG may be a hub kinase for downstream signaling (Supplementary Table S4). Interestingly, among those predicted substrates is a recently discovered susceptibility gene expressed in wheat that has been exploited by a fungal pathogen, resulting in stripe rust infection [10]. It has been shown that fungal invasion results in the phosphorylation of the TaPsIPK1 protein, which then enters the nucleus and phosphorylates TaCBF1d to increase fungal susceptibility [10]. Remarkably, inactivating this susceptibility factor has been shown to confer robust rust resistance in a field trial without a negative impact on growth and yield [10]. The switchgrass homologs of TaPsIPK1 (Pavir.1KG067500) and TaCBF1 (Pavir.2NG380900) were found to be uniquely phosphorylated in L. bicolor-inoculated transgenic switchgrass (Figure 2A,B). In addition to these notable changes in phosphorylation status, our global analysis identified many other differentially abundant phosphoproteins related to plant defense, phytohormone signaling (such as brassinosteroid signaling and the ethylene response), ROS balance, endocytosis, cytoskeleton movement, and proteasomal degradation (Figure 2). Although it is outside the scope of this brief research communication to describe the implications of these findings in detail, future studies can be targeted to interrogate the functional relevance of these pathways in depth for plant-fungal symbiosis.
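The STRING lookups described in this section can be reproduced programmatically. Below is a minimal sketch against STRING's public REST interface; the endpoint layout and parameter names follow STRING's documented API as best recalled here, the taxon identifier 3694 (Populus trichocarpa) is an assumption, and note that the required_score parameter filters STRING's combined score, whereas the study filtered on the experimental and co-expression evidence channels, so post-filtering on those columns would be needed to match the analysis exactly.

    # string_partners.py - fetch high-confidence interaction partners from STRING.
    # Illustrative sketch; endpoint/parameters per the public STRING REST API.
    import requests

    def interaction_partners(identifier, species, min_score=900, limit=100):
        """Query STRING for interaction partners above a combined-score cutoff."""
        url = "https://string-db.org/api/tsv/interaction_partners"
        params = {
            "identifiers": identifier,    # e.g., the PKG gene model queried above
            "species": species,           # NCBI taxon id (assumed 3694 for poplar)
            "required_score": min_score,  # 0-1000; 900 corresponds to score > 0.90
            "limit": limit,
        }
        resp = requests.get(url, params=params, timeout=30)
        resp.raise_for_status()
        lines = resp.text.strip().split("\n")
        header = lines[0].split("\t")
        return [dict(zip(header, row.split("\t"))) for row in lines[1:]]

    # Example call (identifier and taxon are assumptions, for illustration only):
    # partners = interaction_partners("Potri.018G084900", species=3694)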
Conclusions

PTMs, such as phosphorylation, represent a unique layer of regulation utilized by plants to adjust the molecular pathways necessary to either permit or prevent the establishment of a particular microorganism. Overall, this phosphoproteomics study facilitated the identification of phosphorylation-based signaling pathways relevant to PtLecRLK1 recognition of L. bicolor. This rich dataset, along with our previously published multi-omics data, has helped to provide a more detailed understanding of how PtLecRLK1 reprograms molecular pathways to facilitate the establishment and maintenance of L. bicolor colonization. Moreover, we detected an interaction with a putative PtLecRLK1 substrate that represents an exciting candidate for further interrogation of this signaling cascade. More broadly, this dataset can be used as a valuable resource for future research focusing on cross-species comparisons to determine whether the PtLecRLK1-adjusted molecular pathways are conserved across multiple plant species. In practice, a deeper understanding of plant-fungal signaling pathways will be necessary to selectively engineer beneficial symbiosis while, figuratively speaking, leaving the door closed for pathogens.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/cells12071082/s1. Table S1: Verbose result of identified peptides and proteins at a 1% false discovery rate; raw abundance values are provided for each peptide. Table S2: Significantly regulated (FDR < 0.05; log2 FC > 1) phosphopeptides and associated master protein accessions. Table S3: Functional annotation information for regulated phosphoproteins. Table S4: Interacting partners of the cGMP-dependent protein kinase (PKG) identified in poplar with a STRING experimentally and co-expression determined score of >0.90.
2023-04-06T15:09:42.722Z
2023-04-01T00:00:00.000
{ "year": 2023, "sha1": "5e7c4b757819447084d26055e85910dfe0cf616c", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4409/12/7/1082/pdf?version=1680588688", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3fad5a1dcf7ab6fea6b801f02254821d8aa0209e", "s2fieldsofstudy": [ "Environmental Science", "Biology", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
100316709
pes2o/s2orc
v3-fos-license
Utilization of annealed aluminum hydroxide waste with incorporated fluoride for adsorptive removal of heavy metals • A more significant decrease of S BET and pH iep is observed at higher AT. • Incorporated fluoride inhibits the removal of Cd and As as compared to pristine Al(OH) 3 . • Elevated AT benefits Cd adsorption whereas it inhibits the removal of As(III) and As(V). • Annealing these aluminas at elevated AT may control fluoride leaching. The removal of fluoride by Al hydroxide [Al(OH) 3 ] adsorption and aluminum (Al) coagulation produces the spent wastes Al(OH) 3 -F ads and Al(OH) 3 -F coag . This study prepared annealed Al(OH) 3 -F ads , Al(OH) 3 -F coag , and pristine Al(OH) 3 at annealing temperatures (AT) of 200 °C, 600 °C, and 900 °C, and compared their removal behaviors towards cadmium [Cd(II)], arsenite [As(III)], and arsenate [As(V)]. Annealing treatment decreased their BET surface area (S BET ) and isoelectric point (pH iep ), and a more significant extent of decrease was observed at elevated AT. The incorporation of fluoride lowered their efficiency towards the removal of Cd(II), As(III), and As(V) as compared to Al(OH) 3 . The elevated AT benefited Cd(II) adsorption whereas it inhibited the removal of As(III) and As(V) by any of the obtained aluminas, owing to the shift of pH iep to lower pH ranges at elevated AT. The adsorption of Cd(II) increased whereas that of As(III) and As(V) decreased with elevated pH, with electrostatic interactions playing a determining role. The release of fluoride from Al(OH) 3 -F ads and Al(OH) 3 -F coag did occur, and it could be controlled by annealing them at elevated AT. The annealed aluminas showed good affinity towards different heavy metals and may be reclaimed for the treatment of industrial wastewater. Introduction The discharge of toxic heavy metals such as cadmium (Cd), arsenic (As), lead (Pb), chromium (Cr), and mercury (Hg) has caused serious pollution in rivers, soils, and underground waters in the past several decades in China. Long-term exposure to heavy metals can result in chronic damage to blood composition, lungs, brains, kidneys, liver, and other vital organs, and their mixtures show more significant toxicity than the metals existing separately [1]. Heavy metals also have acute toxicity, and high levels of inorganic arsenic, at concentrations above 60 mg/L in water, can even be fatal. Industrial wastewaters are important heavy metal sources, and the control of point sources is easier and more cost-effective compared to the remediation of 'non-point' polluted water environments. However, the intentional or unexpected discharge of wastewater with high levels of heavy metals sometimes occurs in developing countries such as China. To avoid this as much as possible, stringent regulation of pollutant-producing factories is crucially important. On the other hand, the development of low-cost and easy-to-operate technologies also plays an important role. In the past several decades, various methods such as flocculation [2], adsorption [3][4][5], ion exchange [6], reverse osmosis [7], electrodialysis [8], and precipitation [9] have been developed for the removal of heavy metals. However, the widespread application of some technologies has been restricted by high costs, the production of excessive amounts of concentrated solutions, the disposal of secondary sludge, and so on. To enable long-term stable operation of treatment facilities, it is important to decrease the treatment costs as much as possible.
On the other hand, some industries produce a great deal of solid waste [10][11][12], and its safe disposal has raised great concern. However, if these spent wastes can be reused for the treatment of wastewater, this not only achieves the reclamation of spent wastes but also reduces the cost of heavy metal removal. This concept would be extremely attractive and cost-effective from an engineering point of view, and the utilization of industrial wastes for the adsorption of heavy metals has been widely investigated in the past decade [10][11][12]. The wide occurrence of fluorosis creates the need for the removal of fluoride from drinking water. Aluminum (Al) based coagulants and adsorbents have been widely used for defluoridation [13,14], inevitably producing a great deal of discharged sludge and spent adsorbents globally. These inorganic solid wastes with high levels of Al and fluoride should be disposed of properly to avoid secondary pollution in soils, underground water, and so on. The traditional disposal methods for sewage sludge include farmland application, landfilling, and incineration, whereas for these inorganic sludges and adsorbents, strategies such as landfilling, stabilization, pond disposal, and soil disposal are feasible [15]. In our previous study, in situ prepared AlOxHy, which can be freshly coated onto porous carriers to achieve adsorbent granulation, was developed and its fluoride removal efficiency was well evaluated [16]. Once eventually exhausted, the spent AlOxHy with adsorbed fluoride, expressed as Al(OH) 3 -F ads , may be safely disposed of by cheap and convenient means such as solidification in roadbeds or encapsulation in cement-lime mixtures. Coagulation is also a cost-effective defluoridation method, and Al-F complex formation is involved in the fluoride removal [13]. The wide application of coagulation for defluoridation is restricted by the large-scale production of spent sludge, expressed as Al(OH) 3 -F coag . The reclamation of these two solid wastes not only minimizes their discharge into water environments but is also economically valuable. Our previous study indicated that freeze-dried Al(OH) 3 -F ads and Al(OH) 3 -F coag are potentially attractive for the removal of heavy metals such as arsenic [17]. However, the experimental freeze-drying procedure is not practically feasible from an engineering point of view. Generally, most discharged sludge is dewatered by filter pressing, and the moisture content can be as high as 70% or more. For the long-distance transportation of spent sludge with high moisture content, the cost is much too high and restricts its large-scale reclamation in areas distant from landfills. To promote their reclamation, this spent sludge may be used as a raw material to prepare commercial adsorbents after being annealed. This economically valuable strategy reduces the moisture content, decreases the transportation cost, and expands the application range as much as possible. However, the adsorption efficiency of this spent sludge after thermal pretreatment should be well evaluated. Based on these considerations, we first prepared two solid wastes obtained from the defluoridation processes of adsorption and coagulation, respectively expressed as Al(OH) 3 -F ads and Al(OH) 3 -F coag ; pristine Al(OH) 3 was also prepared for comparison.
The materials were annealed at different temperatures, and were then characterized by XPS, FTIR, and XRD to illustrate the effects of thermal treatment on their characteristics. Finally, the adsorptive behaviors of Cd(II), As(III), and As(V) on these solid wastes were evaluated in terms of adsorption kinetics and the effect of pH, and the mechanisms involved were proposed accordingly. Preparation of Al(OH) 3 -F solid wastes The adsorbents Al(OH) 3 -F ads , Al(OH) 3 -F coag , and pristine Al(OH) 3 were prepared according to the methods described in our previous study [17]. Briefly, the pristine Al(OH) 3 was prepared by the stoichiometric reaction between AlCl 3 and NaOH. To prepare the Al(OH) 3 -F coag , NaF solution was added into the AlCl 3 solution prior to dosing the NaOH solution, whereas NaF solution was introduced after the hydrolysis of Al 3+ ions to produce Al(OH) 3 -F ads . To obtain their powders, the suspensions were filtered through 0.45-μm membranes, washed with distilled water, and then freeze-dried. Annealing treatment The Al(OH) 3 -F ads , Al(OH) 3 -F coag , and Al(OH) 3 powders were respectively annealed in an oven using a heating rate of 8 °C/min up to 200 °C, 600 °C, or 900 °C, and then the annealing temperature (AT) was kept constant for 2 h. The obtained samples were stored in a desiccator before use. Batch adsorption experiments The initial concentrations of Cd(II), As(III), or As(V) were adjusted to 70 mg/L by diluting their stock solutions, and the pH was adjusted thereafter. The species distribution of Cd(II), As(III), and As(V) over the wide pH range from 3 to 8, as calculated with the Visual MINTEQ software, is illustrated in Fig. S1. NaNO 3 solution was added to provide a background ionic strength of 0.01 M. Batch adsorption kinetic experiments were conducted in beakers with magnetic stirring (350 rev min −1 ) at pH 7.0 ± 0.2 after dosing 200 mg adsorbent into 1000-mL solution (25 ± 1 °C). The pH was adjusted during adsorption using 0.1 M NaOH and 0.1 M HNO 3 to minimize pH variation. Sampling was carried out at intervals during adsorption, and the obtained samples were filtered through 0.45-μm filter membranes and kept at 4 °C for further analysis.
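Batch kinetic data of the kind collected here are commonly screened against pseudo-first-order and pseudo-second-order models. The sketch below shows a minimal pseudo-second-order fit; the time points and uptake values are placeholders, not data from this study, and SciPy's curve_fit is used simply as one reasonable tool for the job.

```python
import numpy as np
from scipy.optimize import curve_fit

# Pseudo-second-order model: q(t) = k*qe^2*t / (1 + k*qe*t)
def pso(t, qe, k):
    return k * qe**2 * t / (1.0 + k * qe * t)

# Placeholder kinetic data (hypothetical, NOT from the paper):
# contact time in min, uptake q in mg/g for a 200 mg/L dose at pH 7.
t = np.array([5, 10, 20, 40, 60, 120, 240, 480], dtype=float)
q = np.array([12, 21, 33, 47, 55, 66, 71, 73], dtype=float)

(qe, k), _ = curve_fit(pso, t, q, p0=[q.max(), 1e-3])
q_pred = pso(t, qe, k)
r2 = 1 - np.sum((q - q_pred)**2) / np.sum((q - q.mean())**2)
print(f"qe = {qe:.1f} mg/g, k = {k:.2e} g/(mg*min), R^2 = {r2:.3f}")
```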
2019-04-08T13:04:11.087Z
2016-09-05T00:00:00.000
{ "year": 2016, "sha1": "a1fc000cf45d991efad40afb00bbeeb97decc3ec", "oa_license": "CCBYNCSA", "oa_url": "https://ir.rcees.ac.cn/bitstream/311016/36088/1/Utilization%20of%20annealed%20aluminum%20hydroxide%20waste%20with%20incorporated%20fluoride%20for%20adsorptive%20removal%20of%20heavy%20metals.pdf", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "8908a94ee29db5e31defd748fdff75b4492949bf", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Chemistry" ] }
91958087
pes2o/s2orc
v3-fos-license
Timing seed germination under changing salinity: a key role of the ERECTA receptor-kinases Appropriate timing of seed germination is crucial for the survival and propagation of plants, and for crop yield, especially in environments prone to salinity or drought. Yet, how exactly seeds perceive changes in soil conditions and integrate them to trigger germination remains elusive, especially once non-dormant. Here we report that the Arabidopsis ERECTA (ER), ERECTA-LIKE1 (ERL1) and ERECTA-LIKE2 (ERL2) leucine-rich-repeat receptor-like kinases synergistically regulate germination and its sensitivity to salinity and osmotic stress. Loss of ER alone, or in combination with ERL1 and/or ERL2, slows down the initiation of germination and its progression to completion, or arrests it altogether until better conditions return. That function is maternally controlled via the embryo-surrounding tissues, primarily the properties of the seed coat determined during seed development on the mother plant, that relate to both seed coat expansion and subsequent differentiation, particularly the formation of its mucilage. Salt-hypersensitive er, er erl1, er erl2 and triple mutant seeds also exhibit increased sensitivity to ABA during germination, and under salinity show an enhanced upregulation of the germination repressors and inducers of dormancy ABA-insensitive-3, ABA-insensitive-5, DELLA-encoding RGL2 and Delay-Of-Germination-1. These findings reveal a novel role of the ERECTA kinases in the sensing of conditions at the seed surface and the integration of developmental and stress signalling pathways in seeds. They also open novel avenues for the genetic improvement of plant adaptation to harsh soils. Highlight The ERECTA family of receptor-kinases regulates seed germination under salinity, through mucilage-mediated sensing of conditions at the seed surface, and interaction with secondary dormancy mechanisms. The Arabidopsis ERECTA gene family (ERf) encodes three closely related LRR-RLKs - ER, ERL1 and ERL2 - known to synergistically regulate many aspects of plant development and morphogenesis, with prominent roles in organ shape, stomatal patterning, cell proliferation and meristematic activity (Torii et al., 1996; Shpak et al., 2004; 2005; Pillitteri et al., 2007), beyond a role in leaf heat tolerance (Shen et al., 2015). We earlier reported a role of ERECTA [...]. Seeds germinated synchronously on NaCl-free media, but according to significantly different kinetics when challenged with salinity stress (Fig. 8C). Remarkably, for each cross, F1 seed [...], in particular the seed coat. Supporting this, when excised from their covering layers, "naked" [...]. Considering what properties of the seed coat the ERf might control to influence germination in a salinity-dependent manner, we first tested for a role in seed coat permeability. To that end, seeds were incubated in tetrazolium red, a cationic dye classically used to detect seed coat defects and abnormal permeability (Wharton, 1955; Molina et al., 2008). Similar staining and tetrazolium salt reduction rates were observed across lines, except for significant increases in er erl1 and, to a small extent, in erl1 seeds (Fig. 9A), suggestive of increased seed coat permeability or NADPH-dependent reductase activity in these two mutants. We thus next measured seed sodium contents after 24 h stratification with or without salt. They showed no significant genetic variation (Fig. 9B).
These results indicate that the observed differential germination response to salt among erf seeds cannot be ascribed to differences in seed coat permeability and accumulation of sodium ions per se. During seed coat differentiation on the mother plant, the specialised epidermal cells secrete mucilage [...]; seeds impaired in mucilage extrusion have been reported to be more sensitive to low water potential during germination (Penfield et al., 2001; Yang et al., 2010). This prompted us to next examine mucilage release by WT and erf seeds upon imbibition. We collected the loosely adhering mucilage, which can easily be detached from the seed surface, as opposed to the inner, cell wall-bound fraction. Large genetic variation was observed in the amounts recovered, but this scaled with genetic variation in seed size (Fig. 9C). Salinity caused a large increase in mucilage extrusion [...]. Recent studies suggest the importance for germination of the mucilage's physico-chemical properties and attachment to the seed, rather than simply its amount (Rautengarten et al., [...]), leading us to compare their ratios across the full spectrum of lines (Fig. 9D). This revealed dramatically increased GalUA/Gal ratios in er erl1, er erl2 and er erl1/seg erl2 mucilage compared to WT and other lines (P=0.027), and a trend for higher rhamnose to xylose ratios in mutant mucilage other than erl1 erl2, especially in er erl1 and er erl1/seg erl2 mucilage (P=0.08). These results suggest that the ERf plays a role in the control of mucilage composition [...]. To test this and probe causality, we took an indirect, holistic approach and compared the germination kinetics of intact seeds and demucilaged seeds, deprived of the shell of loosely adherent mucilage extruded during imbibition. Demucilaged seeds systematically germinated more slowly than intact seeds on salt-free media (Fig. 9E), as is common. Under saline conditions, this was also the case for WT, erl1, erl2 and erl1 erl2 seeds but, strikingly, in er erl1, er erl2 and er erl1/seg erl2 seeds, mucilage removal had the opposite effect [...]. These data demonstrate a critical role of the seed's water-soluble mucilage in mediating the salinity-dependent function of the ERf in controlling the completion of seed germination. Although appearing as distinct layers upon imbibition, mucilage and cell walls are tightly bound. The suberised seed coat and underlying endosperm constitute a mechanically strong barrier that needs to be weakened to enable radicle emergence. The micropylar endosperm that surrounds the radicle tip is thought to be the major source of mechanical resistance to radicle protrusion [...]; it was enhanced in er erl1/seg erl2 seeds compared to WT. [...] dehydration, and the embryo becomes quiescent. Germination brings that embryo from a highly resilient to a highly vulnerable state, in direct contact with the outer environment, and to a point of no return. How seeds monitor conditions in their immediate surroundings to optimise the timing of germination initiation and its completion is mostly unknown. In this study, we show that the Arabidopsis ERECTA family acts to control the timing of seed germination according to external salinity and osmotic levels (Fig. 1; Fig. 4). Loss of ER [...] and osmotic stress, while not compromising seed viability, as germination readily resumes upon the return of favourable conditions (Fig. 6).
The ERf-mediated sensing of changing [...]. It will be intriguing to unravel the downstream cascade. The salt-hypersensitive er erl1, er erl2 and er erl1/seg erl2 seeds show enhanced sensitivity to exogenous ABA, and enhanced upregulation of ABI3, ABI5 and RGL2 under saline conditions compared to wild type (Fig. 7). [...] salinity (Lopez-Molina et al., 2001; 2002). Here we find that loss of the ERf sensitises seed germination to salinity and frequently arrests it, and that this arrest is reversible, with germination readily resuming upon stress release and progressing to completion as fast as in seeds never exposed to stress (Fig. 6). Moreover, arrested seeds show an upregulation of the DOG1 gene (Fig. 7), a major controller of coat- and endosperm-mediated dormancy [...] ABI5, and appears to be an agent of environmental adaptation of germination among [...]. While the promotion of fast germination under stress may be seen as desirable, it also exposes the newly germinated seedling to risks of death should adverse conditions persist or worsen, as the embryo becomes directly exposed to the external environment with all its reserves already burnt. In such circumstances, germination delay or arrest could then be a useful protective [...] stress release, a mixed response that may balance risks of death against loss of fitness or of the ability to complete the life cycle in time. In conclusion, plants must be endowed with a "surveillance" system for the perception [...] with developmental pathways. This study illuminates a key role of the ERf in that elusive [...] for unravelling the mechanisms seeds have evolved to control germination and tune it to local conditions for maximising chances of survival. Supplementary Table S1. List of genotyping and RT-qPCR primers. We thank Josephine Ginty and Kefan Peng for assistance with seed permeability assays, and Torii for seeds of proERf:GUS reporter lines; the Nottingham Arabidopsis Stock Centre and the SALK Institute for mutant seeds; and the Australian National University for funding.
Figure 8. The ERf function in seed germination sensitivity to salinity is maternally controlled and shows partial overlap with an ERf function in the determination of seed size. Different letters indicate significant differences by two-way ANOVA and Tukey HSD pair-wise tests (P < 0.001). C, Time-course of germination for WT and er erl1 seeds, and F1 seeds generated from their reciprocal crosses. Similar results were obtained from crosses between WT and er erl2 flowers (data not shown). D, Size of F1 seeds from reciprocal crosses between WT and er erl1+/-erl2 flowers (n=86 to 143 seeds per cross). Different letters indicate significant differences by one-way ANOVA and Tukey HSD pair-wise tests (P < 0.001). C-D, Crosses were made between flowers at similar positions on the main inflorescence; seeds were harvested at the same time, 3 weeks after crossing. Figure 9. A, Seed coat permeability to tetrazolium red (n = 4 seed pools of 50 mg each; some s.e.m. are hidden by symbols). * denotes statistical significance (P < 0.001) by two-way ANOVA and Scheffe post-hoc test. B, Seed sodium content 24 h post-stratification on 0 mM or 150 mM NaCl media (n=3 seed pools). Letters indicate significant differences by two-way ANOVA and Tukey HSD pair-wise tests (P = 0.42 and 0.39 for genotype effect under control and salt treatment, respectively). C, Correlation between mass of water-soluble mucilage per seed and seed size. Means and s.e.m. (n = 4 seed pools per genotype, 40 mg seeds per pool, average seed weight and area determined on sub-aliquots; experiment replicated 3 times). Regression lines: 0 mM NaCl, y=36.6x-0.20, r2=0.84; 150 mM NaCl, y=36.3x+0.002, r2=0.81. Similar results were obtained with size expressed as area. D, GalUA/Gal and Rhm/Xyl ratios. Letters beside points indicate statistical significance of differences in GalUA/Gal (P<0.05) by one-way ANOVA and Tukey post-hoc tests, compared to all unlabelled data points. P=0.08 for differences in Rhm/Xyl between er erl1/seg erl2 and WT. E, Testa rupture (TeR) and endosperm rupture (EnR) T50 values for intact seeds and "demucilaged" seeds. Mean values per genotype (n = 3 plates; 30 seeds per genotype per plate). Labelled points denote genotypes where removal of the outer water-soluble mucilage significantly advanced germination on 150 mM NaCl media. The 1:1 line represents the bisectrix, where mucilage removal is neutral. F, TCH3 gene expression in WT and er erl1/seg erl2 dry and imbibed seeds during the three germination phases; n=4 seed pools per genotype and NaCl condition, of 300 seeds each.
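The T50 values reported in panel E (the time for half the seeds to complete testa or endosperm rupture) can be obtained from a germination time course by linear interpolation between the two scoring times that bracket 50%. A minimal sketch, with hypothetical scoring data rather than values from this study:

```python
import numpy as np

def t50(times_h, pct_germinated):
    """Interpolate the time at which 50% germination is reached."""
    t = np.asarray(times_h, dtype=float)
    g = np.asarray(pct_germinated, dtype=float)
    if g.max() < 50:
        return np.nan  # never reaches 50%
    i = np.argmax(g >= 50)  # first scoring time at/above 50%
    if i == 0:
        return t[0]
    # linear interpolation between the bracketing observations
    return t[i-1] + (50 - g[i-1]) * (t[i] - t[i-1]) / (g[i] - g[i-1])

# Hypothetical scoring of endosperm rupture every 12 h
times = [0, 12, 24, 36, 48, 60, 72]
wt    = [0,  5, 30, 62, 85, 95, 98]
mut   = [0,  0, 10, 28, 45, 58, 70]
print(f"WT T50 = {t50(times, wt):.1f} h, mutant T50 = {t50(times, mut):.1f} h")
```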
2019-04-03T13:11:59.430Z
2019-03-13T00:00:00.000
{ "year": 2019, "sha1": "b1379f17a0a2b032e03f9ef8e92e8677cfc3dfd5", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/jxb/article-pdf/70/21/6417/30959015/erz385.pdf", "oa_status": "GREEN", "pdf_src": "BioRxiv", "pdf_hash": "16d978acb91f5faf5cbb5d28a46c3c7a99407992", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Biology" ] }
152946
pes2o/s2orc
v3-fos-license
Japanese familial case with metaphyseal dysplasia, Schmid type, caused by the p.T555P mutation in the COL10A1 gene Introduction Metaphyseal dysplasia, Schmid type (MS, OMIM #156500) is a bone dysplasia with autosomal dominant inheritance. MS belongs to the "Metaphyseal dysplasia" group in the "Nosology and Classification of Genetic Skeletal Disorders" (1). Its clinical symptoms include coxa vara, bowed legs, short stature, short limbs, and a normal facial appearance. An orthopedic operation can be conducted for coxa vara, depending on its severity. Among MS patients, mutations of the COL10A1 gene have been identified (2). The COL10A1 gene encodes the type X collagen α1 chain; three type X collagen α1 chains form a homotrimer. Type X collagen is expressed exclusively in hypertrophic chondrocytes during endochondral ossification, and the role of type X collagen is not entirely clear. A large proportion of the pathogenic mutations in the COL10A1 gene are located in the carboxyl-terminal non-collagenous (NC1) domain and at glycine 18, located at the boundary between the signal peptide domain and the N-terminal non-collagenous (NC2) domain (3). This report describes a familial case of MS carrying a previously unreported mutation, p.T555P, located in the NC1 domain of the type X collagen α1 chain. Case Report A 1-yr 11-mo-old boy was referred to our hospital because of coxa vara noted at 1 yr and 6 mo of age. He was born by vaginal delivery at 39 wk. Asphyxia was not observed. His height and weight at birth were, respectively, 49.0 cm (0.00 SD) and 2506 g (-1.17 SD). His height at first arrival at our hospital was 77.5 cm (-2.40 SD) in a standing position. In the supine position, five fingers could be inserted between his knees. His facial appearance was normal. Metaphyseal flaring and fraying were evident on radiographs (Fig. 1-a and Fig. 1-b). The patient showed no abnormalities in biochemical analyses, including serum levels of calcium, inorganic phosphate, and parathyroid hormone, and alkaline phosphatase activity. His father was 175 cm tall (+0.72 SD) and had normal proportions. His mother showed short stature (140.0 cm, -3.42 SD) and a normal facial appearance. She had undergone osteotomy for the correction of coxa vara when she was 6 yr of age.
Based on the clinical symptoms shown by the patient and his mother, we diagnosed the patient as having MS. His height decreased from -2.40 SD at first arrival to -3.51 SD at 4 yr and 5 mo of age (Fig. 1-c). At 5 yr of age, abduction osteotomy was conducted to correct the coxa vara. Methods To analyze the COL10A1 gene, genomic DNA was extracted from whole blood using a QIAamp DNA Blood Mini Kit (Qiagen Inc., Tokyo, Japan) after informed consent was obtained from the patient's guardian. PCR was conducted using the standard PCR method. The following primer pairs were used: G18 forward, 5'-TGATCTCTCATTTATTTATGGCACA, and G18 reverse, 5'-TGGGCTAATTCAGAAGTTGGA, for analysis of glycine 18, and NC1 forward, 5'-CAGTCATGCCTGAGGGTTTT, and NC1 reverse, 5'-GGGAAGGTTTGTTGGTCTGA, for analysis of the NC1 domain of the type X collagen α1 chain. The PCR products were sequenced using a BigDye Terminator Cycle Sequencing FS Ready Reaction Kit and a Genetic Analyzer (ABI Prism 310; Applied Biosystems, Foster City, CA, USA). This study was approved by the ethical committee of the Okayama University Graduate School of Medicine, Dentistry and Pharmaceutical Sciences. Results A substitution of adenine by cytosine at position 1663 of the COL10A1 cDNA (c.1663A>C), causing an amino acid change from threonine at position 555 of the type X collagen α1 chain to proline (p.T555P), was identified in this patient. The c.1663A>C substitution was also found in his mother, but it was not found in his father (Fig. 2-a); it was also not found among 120 alleles of healthy controls (data not shown). This substitution was found in neither the Japanese SNP (JSNP) nor the dbSNP database. Discussion The p.T555P mutation in the COL10A1 gene was found to be deleterious by in silico analysis using PolyPhen-2. We inferred that the p.T555P mutation is a pathogenic mutation. No previous report has described the p.T555P mutation in the COL10A1 gene. The p.T555P mutation is located in the NC1 domain of the type X collagen α1 chain. Among the previously reported mutations located in the NC1 domain, the p.T555P mutation is located nearest to the amino-terminus of the NC1 domain. MS patients present with various clinical severities: some patients require osteotomy for the correction of coxa vara; some patients require no osteotomy (5). Although the genotype-phenotype correlation in MS has not been fully clarified, the p.T555P mutation is presumed to result in a severe phenotype compared with those reported previously, as our patient and his mother both required osteotomy. Mutations in the NC1 domain reportedly disrupt NC1 folding and stability and the subsequent molecular assembly and interaction in the cartilage matrix (3). To elucidate the pathogenicity of the p.T555P mutation, functional analysis is needed. Fig. 2. Results of the COL10A1 gene analysis. a) A substitution of adenine by cytosine at position 1663 of the COL10A1 cDNA (c.1663A>C), resulting in an amino acid change from threonine 555 of the type X collagen α1 chain to proline (p.T555P), was identified in this patient and his mother, but it was not identified in his father. b) Threonine at position 555 in Homo sapiens is conserved in various species. The conserved threonine is highlighted in the shaded box. c) PolyPhen-2 analysis of the p.T555P mutation.
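The relationship between the cDNA change and the protein change can be checked with simple codon arithmetic: codon 555 occupies coding positions 3x555-2 = 1663 through 1665, so c.1663A>C alters the first base of that codon, turning a threonine codon (ACN) into a proline codon (CCN). A minimal sketch using Biopython; the third base of the codon is chosen arbitrarily here, since it does not affect the result:

```python
from Bio.Seq import Seq  # Biopython

cdna_pos = 1663                      # first position of the affected codon
codon_number = (cdna_pos + 2) // 3   # -> 555
assert codon_number == 555

# Any ACN codon encodes Thr; the A>C change at the first base gives CCN (Pro).
# The third base is arbitrary for illustration.
ref_codon = Seq("ACA")
mut_codon = Seq("C" + str(ref_codon)[1:])
print(f"codon {codon_number}: {ref_codon} -> {mut_codon}, "
      f"{ref_codon.translate()} -> {mut_codon.translate()}")  # T -> P
```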
2016-05-04T20:20:58.661Z
2015-01-01T00:00:00.000
{ "year": 2015, "sha1": "14dc5fed9dca34b2644a0948a605be2480e44bfa", "oa_license": "CCBYNCND", "oa_url": "https://www.jstage.jst.go.jp/article/cpe/24/1/24_9997/_pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "14dc5fed9dca34b2644a0948a605be2480e44bfa", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
54741104
pes2o/s2orc
v3-fos-license
CORROSION OF STEEL REINFORCED CONCRETE IN THE TROPICAL COASTAL ATMOSPHERE OF HAVANA CITY, CUBA The influence of the chloride deposition rate on concrete using an atmospheric corrosion approach is rarely studied in the literature. Seven exposure sites were selected in Havana City, Cuba, for the exposure of reinforced concrete samples. Two significantly different atmospheric corrosivity levels with respect to corrosion of steel reinforced concrete were observed after two years of exposure, depending on the atmospheric chloride deposition and the w/c ratio of the concrete. Changes in corrosion current are related to changes in chloride penetration and atmospheric chloride deposition. The influence of sulphur compound deposition could also be a parameter to consider in the atmospheric corrosion of steel reinforced concrete. INTRODUCTION Atmospheric corrosion aggressiveness towards metals ranges from C1 to >C5 according to the ISO 9223 standard in tropical coastal atmospheres. 1,2 The influence of atmospheric parameters, chloride deposition, and other contaminants on metals in a tropical climate has been extensively studied; [3][4][5][6][7][8][9][10] however, in the case of reinforced concrete, atmospheric chloride deposition is not directly related to corrosion of steel reinforced concrete, because the rate of chloride penetration through the concrete cover to reach the steel depends on the concrete's properties, particularly its capillarity and porosity. The development of the pore structure of hydrating Portland cement systems is fundamental to the behavior of concrete exposed to a variety of aggressive environments. It influences the mass transport of ions into the material and their interaction with concrete constituents, as well as the diffusion characteristics of the concrete. 11 Gradients of moisture content, hydrostatic pressure, stress, temperature, and concentration of aggressive chemicals, together with the diameter and distribution of permeable pores in the concrete, disturb the equilibrium of fluids in porous materials. Fluid transfer therefore occurs to restore the equilibrium. This transfer process is generally described in terms of adsorption, diffusion, absorption, and permeability. In addition, the physical state of water in the concrete pores also affects these processes. Atmospheric corrosion aggressiveness towards metals is established by weight loss or environmental parameters. 1 Regarding steel reinforced concrete, the predominant type of corrosion is localized corrosion, usually called "pitting corrosion". This is not the case for metals exposed to the atmosphere, where generalized corrosion is very frequent, except in passive metals such as aluminum and others. Under this approach, the evaluation of atmospheric corrosivity with regard to steel reinforced concrete should be different. In the case of coatings, time is an important factor in visually determining the deterioration caused by atmospheric aggression; for example, the time taken to change color and the appearance of corrosion products, blisters, and other defects at the surface are very important. From the corrosion point of view, the concrete cover can be considered a protective composite coating deposited onto the steel. This protective coating is characterized by a relatively high thickness and high porosity.
In the case of paints and varnishes (coatings that are thinner and less porous than concrete), recommendations are made to use paints as protective coatings depending on the aggressiveness of the environment. 12 In the case of paints, the aggressiveness of the environment is usually defined by the durability of the coating and not by the metal loss. A similar situation could be considered in the case of a concrete cover, but the durability of a concrete cover depends mainly on the corrosion of the steel reinforcement. At the same time, the adsorption of corrosive agents and water at a steel reinforced concrete surface depends on the characteristics of the concrete cover. Concrete properties change over a relatively wide range due to the material's heterogeneity, which is a key factor determining chloride ion penetration. This underlies the difficulty of establishing a relative prediction of reinforced concrete durability in coastal and marine zones based on atmospheric salinity deposition. The same situation occurs with metals and coatings, for there is a wide range of coatings and metal alloys. Under such circumstances, standard reference samples could be used as patterns for atmospheric corrosivity, in the same way as is done for metals. Significant corrosion of steel reinforced concrete starts when a chloride threshold is reached. After this point, the corrosive damage to the structure increases rapidly. 13 Time is therefore the most important parameter in determining the influence of the environment on reaching the critical chloride content, or chloride threshold, at a steel reinforced concrete surface in coastal and marine zones. This period of time depends not only on the chloride deposition in the atmosphere and other atmospheric parameters, but also on the properties of the concrete, particularly its capillarity, cover thickness, etc. Under such circumstances corrosion starts and propagates, and, depending on the environment and the properties of the concrete cover, it is relatively fast. When generalized corrosion occurs, the corrosive attack takes place across almost all of the metal surface, whereas when localized corrosion occurs, other areas remain free of significant corrosion. Thus, the corrosion current could be an index of the influence of chloride content on the corrosion of steel reinforced concrete, although it should be applied with the type of corrosion in mind. Bastidas-Arteaga et al. described the influence of proximity to the sea on the probability of corrosion initiation. 14 These authors used a stochastic model, and the impact of distance to the sea was observed by analyzing results spanning a 30-year period. They observed that the probability of corrosion initiation is higher in a tropical environment than in other types of environments. Tropical environments are characterized by high temperature and humidity, which reduce the time to onset of a corrosion process. The influence of distance to the seashore on the corrosion of steel reinforced concrete in the Yucatan Peninsula was extensively studied by Castro. 15 The influence of the atmospheric environment on different types of concrete was the main subject of the CYTED project DURACON, conducted throughout 11 Ibero-American countries. 16 The initial results showed that in marine atmospheres, the chloride content in the environment should be considered a decisive factor when evaluating the probability of corrosion of steel reinforced concrete during the first years of study. 17
Salt deposition on testing devices is due to the salt particles that impact and remain on the apparatus surfaces during the transport of marine aerosol inland. The wet candle device is frequently used for this purpose, as part of standardized procedures for measuring the amount of chloride salts captured from the atmosphere on a given exposed area of the apparatus. 18 A decreasing tendency in the total amount of chlorides that penetrate concrete structures built in coastal atmospheres has been observed as a function of distance to the shoreline under natural exposure. 19 However, a direct relationship between chlorides from marine aerosol and their interaction with concrete structures has not been established. Chlorides present in the atmosphere, which can potentially be deposited onto concrete surfaces and penetrate into concrete, can be studied using different methods, including the wet candle device and the dry plate device. The results can be correlated with the chlorides accumulated in concrete. Recently, chloride penetration in different types of mortars was reported from a tropical country (Bangladesh). 20 A relatively small atmospheric chloride deposition was determined using the wet candle device in the coastal zone, where maxima oscillated around 60 mg/m²·d. In the case of Brazil, however, Meira proposed that the chloride deposition rate on the wet candle device can be used as an environmental indicator, which could help to increase the expected service life of concrete structures or suggest the minimum thickness of the concrete cover required to attain the necessary service life. 21 In conclusion, the service life can vary by between 30 and 60% if the same concrete is in a marine atmosphere zone with 120 or with 500 mg/m²·d, the latter representing a more than four-fold increase in chloride deposition. The main purpose of the present paper was to provide valuable information for the maintenance, construction, design, and protection of reinforced concrete structures in the coastal region of Havana City, where significant damage to concrete structures has been observed. Exposure sites Seven exposure sites were selected throughout Havana City, at different distances from the northern seashore and surrounded by different buildings (Table 1). Cuba is a long and narrow island located almost parallel to the Equator. Havana City is located on the northern seashore, on the west side of the island. Results obtained in the first and second years of exposure are included in the present paper. The influence of existing buildings undoubtedly produces local changes in the atmospheric chloride deposition rate and in the corrosion of steel reinforced concrete. Temperature and relative humidity at the 7 sites were measured (using data loggers) every 0.5 h during the first year. The measuring period was September 2007-August 2008.
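Deposition-rate devices of this kind reduce to a simple normalization: chloride mass collected, divided by the capture area and the exposure time. A minimal sketch of that bookkeeping follows, including the conversion from the 0.05 M AgNO3 titration described in the Methods below; the titration volume and blank are hypothetical values for illustration, not measurements from this study:

```python
# Chloride deposition rate from a monthly dry-plate (or wet-candle) sample.
# Hypothetical titration numbers, for illustration only.
M_CL = 35.45          # g/mol, chloride
C_AGNO3 = 0.05        # mol/L, titrant concentration (as in the Methods)

v_sample_ml = 18.4    # AgNO3 volume to endpoint (hypothetical)
v_blank_ml = 0.6      # blank correction (hypothetical)
area_m2 = 0.320 * 0.220   # dry cotton fabric, 320 x 220 mm
days = 30             # one month of exposure

# mol AgNO3 consumed = mol Cl- captured (1:1 for Ag+ + Cl- -> AgCl)
mg_cl = (v_sample_ml - v_blank_ml) / 1000 * C_AGNO3 * M_CL * 1000
rate = mg_cl / (area_m2 * days)
print(f"ACDR = {rate:.1f} mg/(m^2*day)")
```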
The reinforced concrete samples were designed mainly to measure the influence of the concrete cover (2 and 4 cm) on the corrosion of steel reinforced concrete under atmospheric conditions. Pairs of steel reinforcement bars were located at each cover depth, one to be used as a working electrode and the other as an auxiliary electrode. The second bar was used for the application of other electrochemical techniques, such as polarization curves and electrochemical impedance spectroscopy, in other studies. The maximum-size coarse aggregate was not located between the auxiliary and working reinforcement bars in the samples, because its size is larger than 1 cm, and good electrolytic contact was established within each pair. The steel reinforcement bars were introduced into the concrete after chemical cleaning using a hydrochloric acid solution (36%) at room temperature under an extraction hood, followed by thorough rinsing. The rebars were immediately dried, in order to remove any possible corrosion products from the surface, and were kept under similar conditions. The aggregate combinations and their properties, the necessary water quantity, and the admixture volume for the preparation of the reinforced concrete samples are shown in Table 2. All samples were cured for 28 days by total immersion in water (curing tank). The origins of the different materials were as follows: Portland cement P-350 produced in a Cuban factory, washed sand sourced from a quarry, and gravel grade A (limestone), 19 mm, also quarried, as well as a superplasticizer admixture to reduce the water content of the concretes by about 20-25%. A group of concrete samples was prepared to determine their physico-chemical properties, such as compressive strength, density, ultrasonic pulse velocity, capillary absorption coefficient, and effective porosity. The slump was determined using the Abrams cone method (Table 3). [22][23][24][25][26] The compressive strength was determined using a 2 000 kN testing machine. The samples prepared for this test had a cylindrical shape (height = 30 cm, diameter = 15 cm) and were also cured for 28 days by total immersion in water. Three values of concrete compressive strength were determined for each w/c ratio. Ultrasonic pulse velocity can be used as a criterion of concrete homogeneity. A TIC PROCEQ model instrument was used to measure the ultrasonic pulse velocity. The transmitter and detector were placed in parallel, giving a direct measure of the ultrasonic pulse velocity. Two concrete samples of the same dimensions (200 x 200 x 200 mm), without reinforcement steel, were evaluated for each w/c ratio. The evaluation was carried out on all the parallel faces, three per sample. The values obtained show that the concrete homogeneity was acceptable in general (Table 3). The capillary absorption coefficient and the effective porosity were determined on the basis of the methodology established by Göran Fagerlund and according to the Cuban standard. Two cylindrical samples, 62 mm in diameter, were extracted from samples with w/c ratios of 0.4, 0.5 and 0.6. These specimens were cut 20 mm thick. Each cylinder was cut into eight pieces, so for each w/c ratio, 16 small cylinders of 20 mm height were obtained. All pieces were dried in an oven at 50 °C for 48 h to constant weight. The samples were covered with paraffin wax on their sides. Subsequently, the pieces were placed in desiccators under relatively low humidity.
The test cylinders were placed on a layer of silica sand about 20 mm thick, where the water level was maintained 5 mm above the lower surface of the cylinders. A suction process took place only at the lower surface. After weighing, all cylinders were placed on the sand layer and were later reweighed at different times according to the testing program. An analytical balance (0.2 mg accuracy) was used. Measurement of pollution The atmospheric chloride deposition rate [ACDR] was obtained using a dry plate device, consisting of a piece of dry cotton fabric, 320 x 220 mm, stretched over a wooden holder, perpendicular to the wind, and exposed under a shed (Figure 1). The piece of dry cotton was changed at the end of each month at each exposure site and stored in a polymer bag (September 2007 to August 2008). The amount of chloride deposited on the dry cotton was determined analytically by potentiometric titration with 0.05 M AgNO 3 solution, and the deposition rate was then calculated. 27 The sulphur compound deposition rate [SOxDR] was obtained using alkaline plates, 150 x 100 mm, also placed perpendicular to the wind (Figure 1). The plates were changed and stored in the same manner as the dry cotton fabrics. The amount of sulphur compounds was determined by incineration of the alkaline plates in a furnace at a temperature of 750 °C. 28 Chloride penetration The chloride concentration was determined using the ASTM standard. 29 Chloride penetration [Cl−] was determined at the surface and at 1 cm depth after 1 year of exposure, and at 1 and 2 cm depth after 2 years of exposure, considering the regions where chloride penetration occurs by capillary absorption, that is, depending on the capillarity coefficient of the concrete. The influence of the atmospheric environment should be most significant in this part of the concrete cover. Chloride at the surface was determined by scraping to a depth of 1 mm using a metal spatula. At 1 and 2 cm depths, 10 g amounts of concrete powder were extracted by drilling, with a drill bit diameter smaller than the coarse aggregate size. Evaluation of corrosion current The corrosion current was measured using a GECOR-8 instrument. This equipment is widely used for corrosion potential and corrosion current measurements in diagnostic work on site. It has a circular sensor containing reference electrodes, in this case copper/copper sulfate, and an auxiliary electrode. A wet cloth was placed between the sensor surface and the concrete cover surface in order to ensure good electrolytic contact in the measurement system. This measurement area remained perpendicular to the wind during the first and second years of exposure. The steel reinforcement bars were placed in all cases with the 2 cm cover bar on the left and the 4 cm cover bar on the right. The 4 cm protruding section of the reinforcement steel was taken as the contact point with the equipment. This zone had adhesive tape as temporary protection against corrosion. An epoxy coating was not applied, to ensure good electrolytic contact. The free area of steel reinforcement for determining the corrosion rate was 62.54 cm². The measurements were performed as on site, i.e., by pressing the sensor onto the surface of the reinforced concrete sample until corrosion current values were obtained (Figure 2). The equipment uses a polarization resistance technique for this purpose.
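Polarization resistance instruments of this kind convert the measured resistance to a corrosion current density via the Stern-Geary relation, i_corr = B / (Rp · A). The sketch below is a minimal illustration, not the GECOR-8's internal algorithm: the Stern-Geary constant B = 26 mV is the value commonly assumed for actively corroding steel in concrete, and the 62.54 cm² area is the free rebar area stated in the text; the Rp reading is hypothetical.

```python
# Stern-Geary conversion from polarization resistance to corrosion
# current density. Illustrative only; B = 26 mV is a common assumption
# for active steel in concrete (B ~ 52 mV is often used for passive steel).
B_MV = 26.0          # Stern-Geary constant, mV
AREA_CM2 = 62.54     # polarized rebar area from the sample geometry, cm^2

def icorr_uA_per_cm2(rp_ohm: float, area_cm2: float = AREA_CM2) -> float:
    """Corrosion current density in uA/cm^2 from Rp in ohms."""
    b_uV = B_MV * 1000.0          # mV -> uV, so B/Rp comes out in uA
    return b_uV / (rp_ohm * area_cm2)

# Hypothetical reading: Rp = 15 kOhm over the polarized area
print(f"i_corr = {icorr_uA_per_cm2(15e3):.3f} uA/cm^2")
```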
Statistical treatment of data Regression equations and correlation coefficients were obtained using the Statgraphics software, version 5.0. It is important to note that, given the complex nature of the atmospheric corrosion phenomenon, it is common to use statistical treatment of data. 30 Atmospheric parameters such as temperature, relative humidity, rainfall, pollutant deposition, and others are not as regular and fixed as they are under laboratory conditions, changing according to atmospheric conditions, climatic season, and other factors. Under these conditions, statistical treatment of data is necessary to elucidate the influence of a complex set of parameters on the atmospheric corrosion of materials. It is important to bear in mind that statistical treatment of data is a highly valuable tool, but it should be applied with the fundamentals of atmospheric corrosion phenomena in mind. Atmospheric corrosivity classification of exposure sites It is important to note that Station 2, a site located 360 m from the shoreline, should report a significant chloride deposition rate (ACDR); however, the direct arrival of marine aerosol at this site is difficult because the shed housing the reinforced concrete samples is located to the rear of a 12-story building situated directly on the shoreline (Table 1). A lower-than-expected corrosivity level for this distance to the shoreline should thus be obtained. Stations 2, 3 and 4 were closer to the shoreline of Havana Bay. In addition, their atmospheric chloride deposition rate classification was S1, higher than at the other sites, except for Station 1. Stations 5, 6 and 7 were relatively far from the shoreline, under urban influence, and should therefore have lower atmospheric corrosivity (presuming a non-significant influence of the urban environment). Under these conditions, instead of a prediction of corrosivity similar to that using the ISO 9223 standard, differences in corrosivity should exist between the site with salinity classification S3, other sites at less than 1 km from the shoreline (including Station 4) with an S1 classification, and the sites situated at more than 1 km from the shoreline (Stations 5, 6 and 7) with an S0 classification. According to the location of the sites, extreme atmospheric corrosivity (>C5) should be expected at the Station 1 site, because it is located less than 20 m from Cuba's north shoreline, 3 followed by Stations 2 and 3 with a lower level of corrosivity. The influence of marine aerosol should be lower at sites located at distances greater than 1 km from the shoreline, except for Station 4 (there is a previous report of a C4 atmospheric corrosivity level for this site), because it is located close to Havana Bay. 2 It can be observed that the average temperature fluctuates between 26.1 and 28.6 °C (Table 1). Although the changes in temperature are small, an increase in average temperature is observed at greater distances from the shoreline. These conditions could favor slightly enhanced chloride penetration in samples near the shoreline, because the condensation of moisture occurs more readily at lower temperatures. Regarding relative humidity, average values are lower for the two sites located far from the shoreline, corresponding to higher average temperatures. Sites located closer to the shoreline showed an average relative humidity of over 80%.
These conditions show that the penetration of chlorides is likely to be enhanced at sites close to the shoreline, because capillary absorption of soluble chlorides should occur more readily. Besides, the chloride deposition rate is higher near the shoreline. Chloride diffusion diminishes as temperature increases, but near the concrete surface chloride penetration occurs mainly by capillary absorption, and therefore the influence of differences in chloride diffusion should not be significant given the small change in air temperature. Atmospheric chloride deposition rate. First year of exposure The determination of the atmospheric chloride deposition rate [ACDR] with a dry cotton device differs from that with the wet candle device in that with the latter method the surface is constantly wet and is more sensitive to chloride aerosol. In the dry plate device, the surface of the cotton fabric absorbs water and contaminants due to its porosity, similar to the concrete surface. The results are, in general, comparable, but the data obtained from the dry plate device are usually lower than those from the wet candle device. 1 Perhaps the dry plate device represents a better approximation for a porous material system. The classification of salinity at the Havana sites was reached based on the atmospheric chloride deposition rate [ACDR] determined using the dry plate device. Changes in the ACDR at the site closest to the shoreline in Havana City are shown in Figure 3. The highest deposition rate is found during the winter period (dry season: November to April). Changes in the ACDR at the other 6 exposure sites are shown in Figure 4. The same behavior is observed in the winter period (dry season). Station 4 shows the highest deposition among the 6 sites. It is located 1500 m from the seashore, but at the same time was placed about 200 m from the inner shore of Havana Bay. The magnitude of the ACDR is extremely high at Station 1 (20 m from the shoreline). These conditions are expected to be associated with a strong atmospheric corrosion rate at this site. 1 With respect to the other 6 sites, Station 4 had a C4 atmospheric corrosivity level, Stations 2 and 3 showed an ACDR classification of S1, while the other 3 sites had S0. Expressions of the form [ACDR] = a·d^(−b) have been proposed to represent the change in ACDR with distance to the shoreline. 19 The ACDR yearly average data determined at the seven sites in Havana were plotted against the distance to the shoreline and fitted to the following exponential relationship: [ACDR] = 10170.7·d^(−1.06) (1), n = 7, r = −0.934, r² = 0.872, P < 0.01, where d is the distance to the shoreline (m). This implies that the same type of expression can be obtained in the case of Havana City, that is, a sharp decrease in chloride deposition with increasing distance from the seashore. It should be considered that this relationship was obtained not in flat terrain, but in territory housing buildings of different sizes. Indeed, the salinity found for Stations 2 and 3 should be higher when there are no obstacles to the distribution of sea aerosol. Changes in the coefficients of the above expression would probably result from environmental salinity spreading through a space without obstacles. Sulphur compound deposition rate. First year of exposure The sulphur compound deposition rate (SOxDR) can be of natural origin or a product of human activity.
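A power law of the form ACDR = a·d^(−b), like equation (1), can be recovered from site data by ordinary least squares on log-transformed values. A minimal sketch follows; the distance/deposition pairs below are placeholders, not the Havana measurements:

```python
import numpy as np

# Hypothetical (distance m, ACDR mg/m^2/day) pairs -- NOT the study data.
d    = np.array([20, 360, 900, 1500, 2500, 4000, 6000], dtype=float)
acdr = np.array([900, 65, 30, 40, 12, 8, 5], dtype=float)

# Fit log(ACDR) = log(a) - b*log(d) by least squares.
slope, intercept = np.polyfit(np.log(d), np.log(acdr), 1)
a, b = np.exp(intercept), -slope

resid = np.log(acdr) - (intercept + slope * np.log(d))
r2 = 1 - resid.var() / np.log(acdr).var()
print(f"ACDR = {a:.1f} * d^(-{b:.2f}), r^2 = {r2:.3f} (log-log)")
```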
It can be seen that the highest sulphur compound deposition rate corresponds to the exposure site located closest to the shoreline (Table 1). The main source of SOxDR in this case is likely marine aerosol. It is also important to note (Figure 5) that, as with the ACDR, the maximum SOxDR was determined for the winter season (November-April). Station 4 had a higher SOxDR than all other Havana exposure sites except Station 1. The deposition rate was also higher in winter; however, it is possible that the sulphur compounds originated not from marine aerosol alone, but also from industrial and urban sources, according to the location of this site. 31 It is important to note that around Havana Bay there is a significant number of industries and other possible sources of contamination. A general pattern of higher SOxDR in winter can be observed at all sites. This implies that sulphur compounds of natural origin are a significant part of the total sulphur compound deposition. A similar exponential relationship to that obtained for chlorides (equation (2); n = 7, r = −0.879, r² = 0.772, P < 0.01) confirms the influence of marine aerosol on the SOxDR. Chloride penetration profiles. First year of exposure Chloride salts are transported into concrete by a combination of capillary absorption and diffusion, with a preponderance of capillary absorption in the superficial layers, whereas wetting and drying cycles play an important role in the inner layers. 13 The chloride penetration (Cl−) determined (at the concrete surface and at a depth of 1 cm), as a function of distance to the shoreline, after one year of exposure is shown in Figure 6 for the 3 water/cement (w/c) ratios. It can be observed that, in general, the chloride concentration is lower at the surface than at a penetration depth of 1 cm. The reinforced concrete samples were evaluated at the end of the wet period (summer), when the rains are more intense and frequent. This situation could cause the superficial concentration to be lower than at a 1 cm penetration depth. The largest difference between the two layers is observed at a w/c ratio of 0.6. The effective porosity of the samples (Table 3) is almost double for samples with w/c = 0.6 compared to the others with lower w/c ratios. The higher capillary absorption coefficient and effective porosity favor the penetration of chloride through the concrete cover. It is interesting to note that a sharp decrease in Cl− is evident between Station 1 and Station 2, while Stations 3 and 4 showed higher chloride penetration (Figure 6). Station 2 was screened from the direct arrival of marine aerosol because the site was located in a parking area to the rear of a 12-story building facing the shoreline and screening the direct arrival of marine aerosol. At the same time, Station 4 was located 1 500 m from the shoreline, but close to Havana Bay, explaining the higher chloride concentration determined. Station 3 was located at a greater distance from the shoreline, but with fewer obstacles to stop the arrival of marine aerosol, only a group of trees; however, there is a difference between the average atmospheric ACDR and the Cl− penetration at Station 2, which shows a higher ACDR and a lower Cl− than Station 3. The main difference between these two exposure sites was that Station 2 was completely screened from the direct arrival of marine aerosol, which was not the case for Station 3, where the trees in front can screen part, but not all, of the direct marine aerosol.
The chloride penetration (Cl-) determined at the concrete surface and at a depth of 1 cm after one year of exposure is shown in Figure 6 as a function of the distance to the shoreline, for the 3 water/cement (w/c) ratios. In general, the chloride concentration is lower at the surface than at 1 cm depth. The reinforced concrete samples were evaluated at the end of the wet period (Summer), when rains are more intense and frequent; this could explain why the superficial concentration is lower than at 1 cm penetration depth. The largest difference between the two layers is observed at a w/c ratio of 0.6. The effective porosity of the samples (Table 3) is almost double for w/c = 0.6 compared with the lower w/c ratios; the higher capillary absorption coefficient and effective porosity favor the penetration of chloride through the concrete cover.

It is interesting to note that a sharp decrease in Cl- is evident between Station 1 and Station 2, while Stations 3 and 4 show higher chloride penetration (Figure 6). Station 2 was screened from the direct arrival of marine aerosol because the site was located in a parking area at the rear of a 12-story building facing the shoreline. Station 4 was located 1500 m from the shoreline, but close to Havana Bay, explaining the higher chloride concentration determined. Station 3 was located farther from the shoreline but with fewer obstacles to the arrival of marine aerosol, only a group of trees. There is nevertheless a discrepancy between the average ACDR and the Cl- penetration at Stations 2 and 3: Station 2 shows a higher ACDR but a lower Cl- penetration than Station 3. The main difference between these two exposure sites is that Station 2 was completely screened from the direct arrival of marine aerosol, whereas at Station 3 the trees in front can screen part, but not all, of it.

Station 2 also reported a lower average relative humidity (81.6%) than Station 3 (83.8%). The higher humidity at Station 3, together with the greater influence of winds coming from the shoreline, could have created more favorable conditions for chloride penetration despite the lower level of chloride deposition. It is also notable that Station 4 (higher average RH than Station 2 and slightly higher atmospheric chloride deposition) reported a higher chloride penetration than Station 2. A possible explanation is that the lower relative humidity created conditions for lower chloride penetration at Station 2 compared with Stations 3 and 4.

A difference in sensitivity for chloride determination could also exist between the surface of the dry plate device and the surface of the reinforced concrete samples. Small chloride particles can be absorbed by the dry plate device, whereas small particles arriving at the concrete surface can be absorbed by dust particles existing on the surface and carried away by the wind. Larger chloride particles absorbed by the dust on the concrete surface, by contrast, remain on the surface because they absorb more water. In this situation, a higher relative humidity leads to a higher chloride penetration; the existence of a higher humidity could thus facilitate chloride penetration.

Using a calculation based on computational fluid dynamics, Cole determined that the deposition ratio at the rear of a building, relative to a wet candle device, is 0.08. 32 Applying this ratio to Station 2 indicates that the chloride deposition, had the blocking building been absent, would have been about 1/0.08 ≈ 12.5 times higher.

A very clear difference in chloride penetration is observed between the 3 w/c ratios tested after one year of exposure. This shows that corrosion of the steel in reinforced concrete starts earlier in samples with a higher w/c ratio and, consequently, higher capillary absorption coefficients and effective porosity. At the site where an extreme corrosion rate of carbon steel is highly probable (Station 1), owing to a very high ACDR, the highest chloride penetration is reported, especially at the highest w/c ratio. The other sites located less than 1 km from the north shoreline (including Station 4, because it was placed near Havana Bay) could be considered a second level of corrosivity. A third level of corrosivity could be attributed to the sites located more than 1 km from the shoreline, perhaps including Station 2 owing to its lower chloride penetration. A longer exposure time will be necessary to confirm the existence of this third corrosivity level.

It is important to note that the chloride threshold, considered to be 0.4% of cement weight, had not been reached in any of the reinforced concrete samples after one year of exposure (based on determinations at a penetration depth of 1 cm).

A multilinear regression analysis was carried out between the average chloride penetration, Cl- (%), at 1 cm depth and the average ACDR determined by the dry plate device together with the w/c ratio. It shows that the w/c ratio and the atmospheric ACDR are two important variables determining chloride penetration at 1 cm depth, although less than 60% of the changes in chloride penetration at this depth are explained by the relationship obtained.
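A minimal sketch of such a two-predictor fit, by ordinary least squares, is given below. The observations are invented for illustration and are not the Havana measurements.

```python
import numpy as np

# Hypothetical observations (NOT the Havana measurements): first-year
# average ACDR, w/c ratio, and Cl- at 1 cm depth (% by sample mass).
acdr = np.array([430.0, 430.0, 430.0, 18.0, 18.0, 18.0, 4.2, 4.2, 4.2])
wc = np.array([0.4, 0.5, 0.6, 0.4, 0.5, 0.6, 0.4, 0.5, 0.6])
cl = np.array([0.12, 0.18, 0.30, 0.04, 0.06, 0.10, 0.02, 0.03, 0.05])

# Design matrix [1, ACDR, w/c] for the model Cl- = b0 + b1*ACDR + b2*(w/c)
X = np.column_stack([np.ones_like(acdr), acdr, wc])
beta, *_ = np.linalg.lstsq(X, cl, rcond=None)

# Coefficient of determination r^2 of the fitted plane
pred = X @ beta
r2 = 1.0 - np.sum((cl - pred) ** 2) / np.sum((cl - cl.mean()) ** 2)

print(f"Cl- = {beta[0]:.3f} + {beta[1]:.2e}*ACDR + {beta[2]:.3f}*(w/c), r^2 = {r2:.2f}")
```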
Analogous regressions were obtained with the capillary absorption coefficient K (kg/(m² s^1/2)) and with the effective porosity ε (%) in place of the w/c ratio. These regression equations show an almost equivalent influence of the w/c ratio, the absorption coefficient, and the effective porosity on chloride penetration at 1 cm depth after 1 year of exposure.

Chloride profile after two years of exposure

After two years of exposure, the chloride penetration at 1 and 2 cm depth was significantly higher at Station 1 (Figure 7). For w/c = 0.6 and w/c = 0.5, the chloride penetration was higher at 1 cm depth than at 2 cm. Chloride penetration at Station 1 differed significantly from all the other sites located farther inland; correspondingly, the corrosion of the steel reinforcement should be significantly higher at Station 1 than at the other sites.

The results of the statistical regressions between Cl- in concrete at 1 and 2 cm depth after 2 years of exposure and the ACDR (first-year average) are shown in Table 4. Except for w/c = 0.5 after one year of exposure, a good statistical fit is obtained, and significant relationships, taking the w/c ratio into account, hold for all the w/c ratios at 1 cm depth after 2 years of exposure. The ACDR for the second year of exposure was not measured, but the ratio of chloride deposition between sites in the second year was probably similar to that of the first year. The changes are better explained when the two years of exposure are compared. These results show the influence of the ACDR on chloride penetration in the samples exposed at the Havana City sites.

Corrosion current (CC)

The regression equations obtained between corrosion current and chloride penetration after 2 years of exposure, at the different w/c ratios and concrete cover thicknesses, are shown in Table 5. A significant regression is obtained at w/c ratios of 0.6 and 0.5. This can be explained by the fact that these two w/c ratios give more porous concrete than w/c = 0.4, so the chloride penetration is higher. For the latter w/c ratio, the corrosion of the steel reinforcement should not yet be significant after two years of exposure, and only one statistically significant regression equation was obtained (4 cm cover). These regression equations show a direct relationship between the measured corrosion current and the chloride penetration in concrete.

The statistical results obtained between the corrosion current determined after 1 and 2 years of exposure and the average ACDR determined during the first year of exposure, at the different w/c ratios, are shown in Table 6. As with chloride penetration, no significant regressions were obtained for w/c = 0.4; however, a good fit was obtained for w/c = 0.6, including the first year of exposure, and an acceptable fit for w/c = 0.5 after 2 years of exposure. This shows that the ACDR is a very important parameter for determining the corrosion current and, consequently, the corrosivity of the atmosphere with regard to steel reinforced concrete.
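Measured corrosion current densities are commonly interpreted against threshold ranges. The ranges in the sketch below are the ones often attributed to Andrade and co-workers for reinforced concrete; they are an assumption here, not criteria stated in this study.

```python
def corrosion_level(ic_ua_cm2: float) -> str:
    """Classify a corrosion current density (uA/cm^2) using ranges commonly
    cited for reinforced concrete (assumed, not from this study):
    < 0.1 negligible, 0.1-0.5 low, 0.5-1.0 moderate, > 1.0 high."""
    if ic_ua_cm2 < 0.1:
        return "negligible (passive)"
    if ic_ua_cm2 < 0.5:
        return "low"
    if ic_ua_cm2 <= 1.0:
        return "moderate"
    return "high"

for ic in (0.05, 0.3, 0.8, 2.0):
    print(f"Ic = {ic:4.2f} uA/cm^2 -> {corrosion_level(ic)}")
```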
Multilinear regression analyses between the corrosion current Ic (μA/cm²), the ACDR (first-year average), and the w/c ratio were carried out for one and two years of exposure. A statistically significant influence of the ACDR and the w/c ratio on the corrosion current was obtained, although the variance explained by the equations (r²) was less than 70%. In general, the changes in corrosion current followed the changes in Cl- and ACDR. This indicates that, for the Havana City coastal sites, the atmospheric corrosivity affecting steel reinforced concrete after two years of exposure depends on the ACDR and the w/c ratio.

Normally, the influence of the SOxDR on the atmospheric corrosion of reinforcing steel at coastal sites is not considered. The results of the statistical regressions between the corrosion current and the ACDR and SOxDR for the different w/c ratios are shown in Table 7.

Table 7. Statistical regression between the corrosion current of reinforced concrete and the average chloride and sulphur compound deposition rates of the first year of exposure, at 2 and 4 cm cover, for 1 and 2 years of exposure at the Havana sites

The inclusion of the SOxDR in the statistical regression increased the statistical fit for the w/c ratio of 0.6, and significant regression equations were obtained when the SOxDR was included for the other w/c ratios. Only the combination of w/c = 0.5, one year of exposure, and 2 cm concrete cover yielded a non-significant regression equation. This suggests that the SOxDR can also influence the corrosion of the steel reinforcement. 33 Coefficient "c" represents the influence of the SOxDR on the corrosion current; except for w/c = 0.6 in the first year, it has a positive sign, confirming that the SOxDR increases the corrosion current. A negative sign for coefficient "b", representing the influence of the ACDR on the corrosion current, was observed in the regressions obtained for w/c = 0.4. This result could be due to the low porosity of these reinforced concrete samples; the influence of the ACDR was perhaps not yet significant at the present exposure time for this w/c ratio.

Reinforced concrete is used extensively in all environments. Although the attack of sulphate ions on hardened concrete is relatively well researched in the literature, scant data are available on the role of sulphate ions in the corrosion of the steel reinforcement, owing to the limited understanding of the role of these ions. However, the exposure of many reinforced concrete structures to sulphate-bearing soils and groundwater has drawn attention to this role, whether the sulphate ions are present alone or concurrently with chlorides. 33

Sulphate ions are produced in the atmosphere either by photo-oxidation of dimethyl sulphide or by oxidation of SO₂. The sulphate ions formed from the photo-oxidation of dimethyl sulphide, which is transferred from the sea to the atmosphere in the presence of sunlight, result from a process referred to as gas-to-particle conversion that occurs at the wind-sea interface. 34 When marine aerosol is transported inland into an urban zone, it incorporates gases from anthropogenic sources, mainly SO₂, which is converted to sulphate ions on reacting with atmospheric ozone. These sulphate ions penetrate into the reinforced concrete and react with the chemical products involved in the cement hydration process (Ca(OH)₂). The most frequent attack by sulphate ions is the reaction that forms plaster (CaSO₄), along with hydrated compounds that form secondary ettringite (C₆AS₃H₃₁), resulting in the expansion and cracking of the hardened concrete. 31
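Schematically, and assuming the standard gypsum/ettringite pathway of cement chemistry (the block below uses the common 32-water stoichiometry of ettringite; it is illustrative and not taken from this study):

```latex
% Assumed standard sulphate-attack reactions in hydrated cement paste:
% gypsum formation from portlandite, then secondary ettringite from C3A.
\begin{align*}
  \mathrm{Ca(OH)_2} + \mathrm{SO_4^{2-}} + 2\,\mathrm{H_2O}
    &\rightarrow \mathrm{CaSO_4\cdot 2H_2O} + 2\,\mathrm{OH^-} \\
  \mathrm{C_3A} + 3\,(\mathrm{CaSO_4\cdot 2H_2O}) + 26\,\mathrm{H_2O}
    &\rightarrow \mathrm{C_6A\bar{S}_3H_{32}} \quad \text{(ettringite)}
\end{align*}
```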
These results show an influence of the SOxDR that should be considered in studies of the atmospheric corrosion of steel reinforced concrete; however, a more in-depth study is required to confirm this finding. It should also be considered that the SOxDR here is mainly of natural origin, being derived from marine aerosol. 31

CONCLUSIONS

Two significantly different corrosivity levels were observed after 2 years of exposure at the Havana sites as a function of the atmospheric chloride deposition. Changes in corrosion current are in accordance with changes in chloride penetration in concrete and in atmospheric chloride deposition. Preliminary results show that the influence of atmospheric sulphur compound deposition should be considered in studies of the atmospheric corrosion of reinforcing steel. Chloride penetration diminishes in areas where the airborne salinity does not reach the concrete surface directly, owing to the presence of obstacles such as buildings and trees.